Jan 29 11:04:08 np0005601226 kernel: Linux version 5.14.0-665.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026
Jan 29 11:04:08 np0005601226 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Jan 29 11:04:08 np0005601226 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64 root=UUID=822f14ea-6e7e-41df-b0d8-fbe282d9ded8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 29 11:04:08 np0005601226 kernel: BIOS-provided physical RAM map:
Jan 29 11:04:08 np0005601226 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 29 11:04:08 np0005601226 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 29 11:04:08 np0005601226 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 29 11:04:08 np0005601226 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Jan 29 11:04:08 np0005601226 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Jan 29 11:04:08 np0005601226 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 29 11:04:08 np0005601226 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 29 11:04:08 np0005601226 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Jan 29 11:04:08 np0005601226 kernel: NX (Execute Disable) protection: active
Jan 29 11:04:08 np0005601226 kernel: APIC: Static calls initialized
Jan 29 11:04:08 np0005601226 kernel: SMBIOS 2.8 present.
Jan 29 11:04:08 np0005601226 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Jan 29 11:04:08 np0005601226 kernel: Hypervisor detected: KVM
Jan 29 11:04:08 np0005601226 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 29 11:04:08 np0005601226 kernel: kvm-clock: using sched offset of 12847091182 cycles
Jan 29 11:04:08 np0005601226 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 29 11:04:08 np0005601226 kernel: tsc: Detected 2799.998 MHz processor
Jan 29 11:04:08 np0005601226 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Jan 29 11:04:08 np0005601226 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 29 11:04:08 np0005601226 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Jan 29 11:04:08 np0005601226 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Jan 29 11:04:08 np0005601226 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Jan 29 11:04:08 np0005601226 kernel: Using GB pages for direct mapping
Jan 29 11:04:08 np0005601226 kernel: RAMDISK: [mem 0x2d410000-0x329fffff]
Jan 29 11:04:08 np0005601226 kernel: ACPI: Early table checksum verification disabled
Jan 29 11:04:08 np0005601226 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 29 11:04:08 np0005601226 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 29 11:04:08 np0005601226 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 29 11:04:08 np0005601226 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 29 11:04:08 np0005601226 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Jan 29 11:04:08 np0005601226 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 29 11:04:08 np0005601226 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 29 11:04:08 np0005601226 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Jan 29 11:04:08 np0005601226 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Jan 29 11:04:08 np0005601226 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Jan 29 11:04:08 np0005601226 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Jan 29 11:04:08 np0005601226 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Jan 29 11:04:08 np0005601226 kernel: No NUMA configuration found
Jan 29 11:04:08 np0005601226 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Jan 29 11:04:08 np0005601226 kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Jan 29 11:04:08 np0005601226 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Jan 29 11:04:08 np0005601226 kernel: Zone ranges:
Jan 29 11:04:08 np0005601226 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jan 29 11:04:08 np0005601226 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Jan 29 11:04:08 np0005601226 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Jan 29 11:04:08 np0005601226 kernel:  Device   empty
Jan 29 11:04:08 np0005601226 kernel: Movable zone start for each node
Jan 29 11:04:08 np0005601226 kernel: Early memory node ranges
Jan 29 11:04:08 np0005601226 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Jan 29 11:04:08 np0005601226 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Jan 29 11:04:08 np0005601226 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Jan 29 11:04:08 np0005601226 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Jan 29 11:04:08 np0005601226 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 29 11:04:08 np0005601226 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 29 11:04:08 np0005601226 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Jan 29 11:04:08 np0005601226 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 29 11:04:08 np0005601226 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 29 11:04:08 np0005601226 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 29 11:04:08 np0005601226 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 29 11:04:08 np0005601226 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 29 11:04:08 np0005601226 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 29 11:04:08 np0005601226 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 29 11:04:08 np0005601226 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 29 11:04:08 np0005601226 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 29 11:04:08 np0005601226 kernel: TSC deadline timer available
Jan 29 11:04:08 np0005601226 kernel: CPU topo: Max. logical packages:   8
Jan 29 11:04:08 np0005601226 kernel: CPU topo: Max. logical dies:       8
Jan 29 11:04:08 np0005601226 kernel: CPU topo: Max. dies per package:   1
Jan 29 11:04:08 np0005601226 kernel: CPU topo: Max. threads per core:   1
Jan 29 11:04:08 np0005601226 kernel: CPU topo: Num. cores per package:     1
Jan 29 11:04:08 np0005601226 kernel: CPU topo: Num. threads per package:   1
Jan 29 11:04:08 np0005601226 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Jan 29 11:04:08 np0005601226 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 29 11:04:08 np0005601226 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Jan 29 11:04:08 np0005601226 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Jan 29 11:04:08 np0005601226 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Jan 29 11:04:08 np0005601226 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Jan 29 11:04:08 np0005601226 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Jan 29 11:04:08 np0005601226 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Jan 29 11:04:08 np0005601226 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Jan 29 11:04:08 np0005601226 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Jan 29 11:04:08 np0005601226 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Jan 29 11:04:08 np0005601226 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jan 29 11:04:08 np0005601226 kernel: Booting paravirtualized kernel on KVM
Jan 29 11:04:08 np0005601226 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 29 11:04:08 np0005601226 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Jan 29 11:04:08 np0005601226 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Jan 29 11:04:08 np0005601226 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 29 11:04:08 np0005601226 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64 root=UUID=822f14ea-6e7e-41df-b0d8-fbe282d9ded8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 29 11:04:08 np0005601226 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64", will be passed to user space.
Jan 29 11:04:08 np0005601226 kernel: random: crng init done
Jan 29 11:04:08 np0005601226 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 29 11:04:08 np0005601226 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 29 11:04:08 np0005601226 kernel: Fallback order for Node 0: 0 
Jan 29 11:04:08 np0005601226 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Jan 29 11:04:08 np0005601226 kernel: Policy zone: Normal
Jan 29 11:04:08 np0005601226 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 11:04:08 np0005601226 kernel: software IO TLB: area num 8.
Jan 29 11:04:08 np0005601226 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Jan 29 11:04:08 np0005601226 kernel: ftrace: allocating 49438 entries in 194 pages
Jan 29 11:04:08 np0005601226 kernel: ftrace: allocated 194 pages with 3 groups
Jan 29 11:04:08 np0005601226 kernel: Dynamic Preempt: voluntary
Jan 29 11:04:08 np0005601226 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 11:04:08 np0005601226 kernel: rcu: 	RCU event tracing is enabled.
Jan 29 11:04:08 np0005601226 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Jan 29 11:04:08 np0005601226 kernel: 	Trampoline variant of Tasks RCU enabled.
Jan 29 11:04:08 np0005601226 kernel: 	Rude variant of Tasks RCU enabled.
Jan 29 11:04:08 np0005601226 kernel: 	Tracing variant of Tasks RCU enabled.
Jan 29 11:04:08 np0005601226 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 11:04:08 np0005601226 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Jan 29 11:04:08 np0005601226 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 29 11:04:08 np0005601226 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 29 11:04:08 np0005601226 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 29 11:04:08 np0005601226 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Jan 29 11:04:08 np0005601226 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 11:04:08 np0005601226 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Jan 29 11:04:08 np0005601226 kernel: Console: colour VGA+ 80x25
Jan 29 11:04:08 np0005601226 kernel: printk: console [ttyS0] enabled
Jan 29 11:04:08 np0005601226 kernel: ACPI: Core revision 20230331
Jan 29 11:04:08 np0005601226 kernel: APIC: Switch to symmetric I/O mode setup
Jan 29 11:04:08 np0005601226 kernel: x2apic enabled
Jan 29 11:04:08 np0005601226 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 29 11:04:08 np0005601226 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 29 11:04:08 np0005601226 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Jan 29 11:04:08 np0005601226 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 29 11:04:08 np0005601226 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 29 11:04:08 np0005601226 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 29 11:04:08 np0005601226 kernel: mitigations: Enabled attack vectors: user_kernel, user_user, guest_host, guest_guest, SMT mitigations: auto
Jan 29 11:04:08 np0005601226 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 29 11:04:08 np0005601226 kernel: Spectre V2 : Mitigation: Retpolines
Jan 29 11:04:08 np0005601226 kernel: RETBleed: Mitigation: untrained return thunk
Jan 29 11:04:08 np0005601226 kernel: Speculative Return Stack Overflow: Mitigation: SMT disabled
Jan 29 11:04:08 np0005601226 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 29 11:04:08 np0005601226 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 29 11:04:08 np0005601226 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 29 11:04:08 np0005601226 kernel: active return thunk: retbleed_return_thunk
Jan 29 11:04:08 np0005601226 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 29 11:04:08 np0005601226 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 29 11:04:08 np0005601226 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 29 11:04:08 np0005601226 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 29 11:04:08 np0005601226 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Jan 29 11:04:08 np0005601226 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 29 11:04:08 np0005601226 kernel: Freeing SMP alternatives memory: 40K
Jan 29 11:04:08 np0005601226 kernel: pid_max: default: 32768 minimum: 301
Jan 29 11:04:08 np0005601226 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Jan 29 11:04:08 np0005601226 kernel: landlock: Up and running.
Jan 29 11:04:08 np0005601226 kernel: Yama: becoming mindful.
Jan 29 11:04:08 np0005601226 kernel: SELinux:  Initializing.
Jan 29 11:04:08 np0005601226 kernel: LSM support for eBPF active
Jan 29 11:04:08 np0005601226 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 29 11:04:08 np0005601226 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 29 11:04:08 np0005601226 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 29 11:04:08 np0005601226 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 29 11:04:08 np0005601226 kernel: ... version:                0
Jan 29 11:04:08 np0005601226 kernel: ... bit width:              48
Jan 29 11:04:08 np0005601226 kernel: ... generic registers:      6
Jan 29 11:04:08 np0005601226 kernel: ... value mask:             0000ffffffffffff
Jan 29 11:04:08 np0005601226 kernel: ... max period:             00007fffffffffff
Jan 29 11:04:08 np0005601226 kernel: ... fixed-purpose events:   0
Jan 29 11:04:08 np0005601226 kernel: ... event mask:             000000000000003f
Jan 29 11:04:08 np0005601226 kernel: signal: max sigframe size: 1776
Jan 29 11:04:08 np0005601226 kernel: rcu: Hierarchical SRCU implementation.
Jan 29 11:04:08 np0005601226 kernel: rcu: 	Max phase no-delay instances is 400.
Jan 29 11:04:08 np0005601226 kernel: smp: Bringing up secondary CPUs ...
Jan 29 11:04:08 np0005601226 kernel: smpboot: x86: Booting SMP configuration:
Jan 29 11:04:08 np0005601226 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Jan 29 11:04:08 np0005601226 kernel: smp: Brought up 1 node, 8 CPUs
Jan 29 11:04:08 np0005601226 kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Jan 29 11:04:08 np0005601226 kernel: node 0 deferred pages initialised in 15ms
Jan 29 11:04:08 np0005601226 kernel: Memory: 7763692K/8388068K available (16384K kernel code, 5801K rwdata, 13928K rodata, 4196K init, 7192K bss, 618400K reserved, 0K cma-reserved)
Jan 29 11:04:08 np0005601226 kernel: devtmpfs: initialized
Jan 29 11:04:08 np0005601226 kernel: x86/mm: Memory block size: 128MB
Jan 29 11:04:08 np0005601226 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 11:04:08 np0005601226 kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Jan 29 11:04:08 np0005601226 kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 11:04:08 np0005601226 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 11:04:08 np0005601226 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Jan 29 11:04:08 np0005601226 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 29 11:04:08 np0005601226 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 29 11:04:08 np0005601226 kernel: audit: initializing netlink subsys (disabled)
Jan 29 11:04:08 np0005601226 kernel: audit: type=2000 audit(1769702647.220:1): state=initialized audit_enabled=0 res=1
Jan 29 11:04:08 np0005601226 kernel: thermal_sys: Registered thermal governor 'fair_share'
Jan 29 11:04:08 np0005601226 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 11:04:08 np0005601226 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 29 11:04:08 np0005601226 kernel: cpuidle: using governor menu
Jan 29 11:04:08 np0005601226 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 11:04:08 np0005601226 kernel: PCI: Using configuration type 1 for base access
Jan 29 11:04:08 np0005601226 kernel: PCI: Using configuration type 1 for extended access
Jan 29 11:04:08 np0005601226 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 29 11:04:08 np0005601226 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 29 11:04:08 np0005601226 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 29 11:04:08 np0005601226 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 11:04:08 np0005601226 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 11:04:08 np0005601226 kernel: Demotion targets for Node 0: null
Jan 29 11:04:08 np0005601226 kernel: cryptd: max_cpu_qlen set to 1000
Jan 29 11:04:08 np0005601226 kernel: ACPI: Added _OSI(Module Device)
Jan 29 11:04:08 np0005601226 kernel: ACPI: Added _OSI(Processor Device)
Jan 29 11:04:08 np0005601226 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 11:04:08 np0005601226 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 29 11:04:08 np0005601226 kernel: ACPI: Interpreter enabled
Jan 29 11:04:08 np0005601226 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Jan 29 11:04:08 np0005601226 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 29 11:04:08 np0005601226 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 29 11:04:08 np0005601226 kernel: PCI: Using E820 reservations for host bridge windows
Jan 29 11:04:08 np0005601226 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 29 11:04:08 np0005601226 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 29 11:04:08 np0005601226 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Jan 29 11:04:08 np0005601226 kernel: acpiphp: Slot [3] registered
Jan 29 11:04:08 np0005601226 kernel: acpiphp: Slot [4] registered
Jan 29 11:04:08 np0005601226 kernel: acpiphp: Slot [5] registered
Jan 29 11:04:08 np0005601226 kernel: acpiphp: Slot [6] registered
Jan 29 11:04:08 np0005601226 kernel: acpiphp: Slot [7] registered
Jan 29 11:04:08 np0005601226 kernel: acpiphp: Slot [8] registered
Jan 29 11:04:08 np0005601226 kernel: acpiphp: Slot [9] registered
Jan 29 11:04:08 np0005601226 kernel: acpiphp: Slot [10] registered
Jan 29 11:04:08 np0005601226 kernel: acpiphp: Slot [11] registered
Jan 29 11:04:08 np0005601226 kernel: acpiphp: Slot [12] registered
Jan 29 11:04:08 np0005601226 kernel: acpiphp: Slot [13] registered
Jan 29 11:04:08 np0005601226 kernel: acpiphp: Slot [14] registered
Jan 29 11:04:08 np0005601226 kernel: acpiphp: Slot [15] registered
Jan 29 11:04:08 np0005601226 kernel: acpiphp: Slot [16] registered
Jan 29 11:04:08 np0005601226 kernel: acpiphp: Slot [17] registered
Jan 29 11:04:08 np0005601226 kernel: acpiphp: Slot [18] registered
Jan 29 11:04:08 np0005601226 kernel: acpiphp: Slot [19] registered
Jan 29 11:04:08 np0005601226 kernel: acpiphp: Slot [20] registered
Jan 29 11:04:08 np0005601226 kernel: acpiphp: Slot [21] registered
Jan 29 11:04:08 np0005601226 kernel: acpiphp: Slot [22] registered
Jan 29 11:04:08 np0005601226 kernel: acpiphp: Slot [23] registered
Jan 29 11:04:08 np0005601226 kernel: acpiphp: Slot [24] registered
Jan 29 11:04:08 np0005601226 kernel: acpiphp: Slot [25] registered
Jan 29 11:04:08 np0005601226 kernel: acpiphp: Slot [26] registered
Jan 29 11:04:08 np0005601226 kernel: acpiphp: Slot [27] registered
Jan 29 11:04:08 np0005601226 kernel: acpiphp: Slot [28] registered
Jan 29 11:04:08 np0005601226 kernel: acpiphp: Slot [29] registered
Jan 29 11:04:08 np0005601226 kernel: acpiphp: Slot [30] registered
Jan 29 11:04:08 np0005601226 kernel: acpiphp: Slot [31] registered
Jan 29 11:04:08 np0005601226 kernel: PCI host bridge to bus 0000:00
Jan 29 11:04:08 np0005601226 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Jan 29 11:04:08 np0005601226 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Jan 29 11:04:08 np0005601226 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 29 11:04:08 np0005601226 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 29 11:04:08 np0005601226 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Jan 29 11:04:08 np0005601226 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 29 11:04:08 np0005601226 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Jan 29 11:04:08 np0005601226 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Jan 29 11:04:08 np0005601226 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Jan 29 11:04:08 np0005601226 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Jan 29 11:04:08 np0005601226 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Jan 29 11:04:08 np0005601226 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Jan 29 11:04:08 np0005601226 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Jan 29 11:04:08 np0005601226 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Jan 29 11:04:08 np0005601226 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Jan 29 11:04:08 np0005601226 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Jan 29 11:04:08 np0005601226 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Jan 29 11:04:08 np0005601226 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Jan 29 11:04:08 np0005601226 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Jan 29 11:04:08 np0005601226 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Jan 29 11:04:08 np0005601226 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Jan 29 11:04:08 np0005601226 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Jan 29 11:04:08 np0005601226 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Jan 29 11:04:08 np0005601226 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Jan 29 11:04:08 np0005601226 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 29 11:04:08 np0005601226 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 29 11:04:08 np0005601226 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Jan 29 11:04:08 np0005601226 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Jan 29 11:04:08 np0005601226 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Jan 29 11:04:08 np0005601226 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Jan 29 11:04:08 np0005601226 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 29 11:04:08 np0005601226 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Jan 29 11:04:08 np0005601226 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Jan 29 11:04:08 np0005601226 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Jan 29 11:04:08 np0005601226 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Jan 29 11:04:08 np0005601226 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Jan 29 11:04:08 np0005601226 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jan 29 11:04:08 np0005601226 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 29 11:04:08 np0005601226 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Jan 29 11:04:08 np0005601226 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Jan 29 11:04:08 np0005601226 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 29 11:04:08 np0005601226 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 29 11:04:08 np0005601226 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 29 11:04:08 np0005601226 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 29 11:04:08 np0005601226 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 29 11:04:08 np0005601226 kernel: iommu: Default domain type: Translated
Jan 29 11:04:08 np0005601226 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 29 11:04:08 np0005601226 kernel: SCSI subsystem initialized
Jan 29 11:04:08 np0005601226 kernel: ACPI: bus type USB registered
Jan 29 11:04:08 np0005601226 kernel: usbcore: registered new interface driver usbfs
Jan 29 11:04:08 np0005601226 kernel: usbcore: registered new interface driver hub
Jan 29 11:04:08 np0005601226 kernel: usbcore: registered new device driver usb
Jan 29 11:04:08 np0005601226 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 29 11:04:08 np0005601226 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Jan 29 11:04:08 np0005601226 kernel: PTP clock support registered
Jan 29 11:04:08 np0005601226 kernel: EDAC MC: Ver: 3.0.0
Jan 29 11:04:08 np0005601226 kernel: NetLabel: Initializing
Jan 29 11:04:08 np0005601226 kernel: NetLabel:  domain hash size = 128
Jan 29 11:04:08 np0005601226 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Jan 29 11:04:08 np0005601226 kernel: NetLabel:  unlabeled traffic allowed by default
Jan 29 11:04:08 np0005601226 kernel: PCI: Using ACPI for IRQ routing
Jan 29 11:04:08 np0005601226 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 29 11:04:08 np0005601226 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 29 11:04:08 np0005601226 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 29 11:04:08 np0005601226 kernel: vgaarb: loaded
Jan 29 11:04:08 np0005601226 kernel: clocksource: Switched to clocksource kvm-clock
Jan 29 11:04:08 np0005601226 kernel: VFS: Disk quotas dquot_6.6.0
Jan 29 11:04:08 np0005601226 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 29 11:04:08 np0005601226 kernel: pnp: PnP ACPI init
Jan 29 11:04:08 np0005601226 kernel: pnp: PnP ACPI: found 5 devices
Jan 29 11:04:08 np0005601226 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 29 11:04:08 np0005601226 kernel: NET: Registered PF_INET protocol family
Jan 29 11:04:08 np0005601226 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 29 11:04:08 np0005601226 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 29 11:04:08 np0005601226 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 29 11:04:08 np0005601226 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 29 11:04:08 np0005601226 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Jan 29 11:04:08 np0005601226 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 29 11:04:08 np0005601226 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Jan 29 11:04:08 np0005601226 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 29 11:04:08 np0005601226 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 29 11:04:08 np0005601226 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 29 11:04:08 np0005601226 kernel: NET: Registered PF_XDP protocol family
Jan 29 11:04:08 np0005601226 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Jan 29 11:04:08 np0005601226 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Jan 29 11:04:08 np0005601226 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 29 11:04:08 np0005601226 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jan 29 11:04:08 np0005601226 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Jan 29 11:04:08 np0005601226 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 29 11:04:08 np0005601226 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 29 11:04:08 np0005601226 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 29 11:04:08 np0005601226 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 22969 usecs
Jan 29 11:04:08 np0005601226 kernel: PCI: CLS 0 bytes, default 64
Jan 29 11:04:08 np0005601226 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 29 11:04:08 np0005601226 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Jan 29 11:04:08 np0005601226 kernel: Trying to unpack rootfs image as initramfs...
Jan 29 11:04:08 np0005601226 kernel: ACPI: bus type thunderbolt registered
Jan 29 11:04:08 np0005601226 kernel: Initialise system trusted keyrings
Jan 29 11:04:08 np0005601226 kernel: Key type blacklist registered
Jan 29 11:04:08 np0005601226 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Jan 29 11:04:08 np0005601226 kernel: zbud: loaded
Jan 29 11:04:08 np0005601226 kernel: integrity: Platform Keyring initialized
Jan 29 11:04:08 np0005601226 kernel: integrity: Machine keyring initialized
Jan 29 11:04:08 np0005601226 kernel: Freeing initrd memory: 88000K
Jan 29 11:04:08 np0005601226 kernel: NET: Registered PF_ALG protocol family
Jan 29 11:04:08 np0005601226 kernel: xor: automatically using best checksumming function   avx       
Jan 29 11:04:08 np0005601226 kernel: Key type asymmetric registered
Jan 29 11:04:08 np0005601226 kernel: Asymmetric key parser 'x509' registered
Jan 29 11:04:08 np0005601226 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Jan 29 11:04:08 np0005601226 kernel: io scheduler mq-deadline registered
Jan 29 11:04:08 np0005601226 kernel: io scheduler kyber registered
Jan 29 11:04:08 np0005601226 kernel: io scheduler bfq registered
Jan 29 11:04:08 np0005601226 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Jan 29 11:04:08 np0005601226 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Jan 29 11:04:08 np0005601226 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Jan 29 11:04:08 np0005601226 kernel: ACPI: button: Power Button [PWRF]
Jan 29 11:04:08 np0005601226 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 29 11:04:08 np0005601226 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 29 11:04:08 np0005601226 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 29 11:04:08 np0005601226 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 29 11:04:08 np0005601226 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 29 11:04:08 np0005601226 kernel: Non-volatile memory driver v1.3
Jan 29 11:04:08 np0005601226 kernel: rdac: device handler registered
Jan 29 11:04:08 np0005601226 kernel: hp_sw: device handler registered
Jan 29 11:04:08 np0005601226 kernel: emc: device handler registered
Jan 29 11:04:08 np0005601226 kernel: alua: device handler registered
Jan 29 11:04:08 np0005601226 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jan 29 11:04:08 np0005601226 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jan 29 11:04:08 np0005601226 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jan 29 11:04:08 np0005601226 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Jan 29 11:04:08 np0005601226 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Jan 29 11:04:08 np0005601226 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 29 11:04:08 np0005601226 kernel: usb usb1: Product: UHCI Host Controller
Jan 29 11:04:08 np0005601226 kernel: usb usb1: Manufacturer: Linux 5.14.0-665.el9.x86_64 uhci_hcd
Jan 29 11:04:08 np0005601226 kernel: usb usb1: SerialNumber: 0000:00:01.2
Jan 29 11:04:08 np0005601226 kernel: hub 1-0:1.0: USB hub found
Jan 29 11:04:08 np0005601226 kernel: hub 1-0:1.0: 2 ports detected
Jan 29 11:04:08 np0005601226 kernel: usbcore: registered new interface driver usbserial_generic
Jan 29 11:04:08 np0005601226 kernel: usbserial: USB Serial support registered for generic
Jan 29 11:04:08 np0005601226 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 29 11:04:08 np0005601226 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 29 11:04:08 np0005601226 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 29 11:04:08 np0005601226 kernel: mousedev: PS/2 mouse device common for all mice
Jan 29 11:04:08 np0005601226 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 29 11:04:08 np0005601226 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Jan 29 11:04:08 np0005601226 kernel: rtc_cmos 00:04: registered as rtc0
Jan 29 11:04:08 np0005601226 kernel: rtc_cmos 00:04: setting system clock to 2026-01-29T16:04:07 UTC (1769702647)
Jan 29 11:04:08 np0005601226 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 29 11:04:08 np0005601226 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 29 11:04:08 np0005601226 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 29 11:04:08 np0005601226 kernel: usbcore: registered new interface driver usbhid
Jan 29 11:04:08 np0005601226 kernel: usbhid: USB HID core driver
Jan 29 11:04:08 np0005601226 kernel: drop_monitor: Initializing network drop monitor service
Jan 29 11:04:08 np0005601226 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Jan 29 11:04:08 np0005601226 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Jan 29 11:04:08 np0005601226 kernel: Initializing XFRM netlink socket
Jan 29 11:04:08 np0005601226 kernel: NET: Registered PF_INET6 protocol family
Jan 29 11:04:08 np0005601226 kernel: Segment Routing with IPv6
Jan 29 11:04:08 np0005601226 kernel: NET: Registered PF_PACKET protocol family
Jan 29 11:04:08 np0005601226 kernel: mpls_gso: MPLS GSO support
Jan 29 11:04:08 np0005601226 kernel: IPI shorthand broadcast: enabled
Jan 29 11:04:08 np0005601226 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 29 11:04:08 np0005601226 kernel: AES CTR mode by8 optimization enabled
Jan 29 11:04:08 np0005601226 kernel: sched_clock: Marking stable (1094001900, 147858705)->(1343769810, -101909205)
Jan 29 11:04:08 np0005601226 kernel: registered taskstats version 1
Jan 29 11:04:08 np0005601226 kernel: Loading compiled-in X.509 certificates
Jan 29 11:04:08 np0005601226 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8d408fd8f954b245ea1a4231fd25ac56c328a9b5'
Jan 29 11:04:08 np0005601226 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Jan 29 11:04:08 np0005601226 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Jan 29 11:04:08 np0005601226 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Jan 29 11:04:08 np0005601226 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Jan 29 11:04:08 np0005601226 kernel: Demotion targets for Node 0: null
Jan 29 11:04:08 np0005601226 kernel: page_owner is disabled
Jan 29 11:04:08 np0005601226 kernel: Key type .fscrypt registered
Jan 29 11:04:08 np0005601226 kernel: Key type fscrypt-provisioning registered
Jan 29 11:04:08 np0005601226 kernel: Key type big_key registered
Jan 29 11:04:08 np0005601226 kernel: Key type encrypted registered
Jan 29 11:04:08 np0005601226 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 29 11:04:08 np0005601226 kernel: Loading compiled-in module X.509 certificates
Jan 29 11:04:08 np0005601226 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8d408fd8f954b245ea1a4231fd25ac56c328a9b5'
Jan 29 11:04:08 np0005601226 kernel: ima: Allocated hash algorithm: sha256
Jan 29 11:04:08 np0005601226 kernel: ima: No architecture policies found
Jan 29 11:04:08 np0005601226 kernel: evm: Initialising EVM extended attributes:
Jan 29 11:04:08 np0005601226 kernel: evm: security.selinux
Jan 29 11:04:08 np0005601226 kernel: evm: security.SMACK64 (disabled)
Jan 29 11:04:08 np0005601226 kernel: evm: security.SMACK64EXEC (disabled)
Jan 29 11:04:08 np0005601226 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Jan 29 11:04:08 np0005601226 kernel: evm: security.SMACK64MMAP (disabled)
Jan 29 11:04:08 np0005601226 kernel: evm: security.apparmor (disabled)
Jan 29 11:04:08 np0005601226 kernel: evm: security.ima
Jan 29 11:04:08 np0005601226 kernel: evm: security.capability
Jan 29 11:04:08 np0005601226 kernel: evm: HMAC attrs: 0x1
Jan 29 11:04:08 np0005601226 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Jan 29 11:04:08 np0005601226 kernel: Running certificate verification RSA selftest
Jan 29 11:04:08 np0005601226 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Jan 29 11:04:08 np0005601226 kernel: Running certificate verification ECDSA selftest
Jan 29 11:04:08 np0005601226 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Jan 29 11:04:08 np0005601226 kernel: clk: Disabling unused clocks
Jan 29 11:04:08 np0005601226 kernel: Freeing unused decrypted memory: 2028K
Jan 29 11:04:08 np0005601226 kernel: Freeing unused kernel image (initmem) memory: 4196K
Jan 29 11:04:08 np0005601226 kernel: Write protecting the kernel read-only data: 30720k
Jan 29 11:04:08 np0005601226 kernel: Freeing unused kernel image (rodata/data gap) memory: 408K
Jan 29 11:04:08 np0005601226 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Jan 29 11:04:08 np0005601226 kernel: Run /init as init process
Jan 29 11:04:08 np0005601226 systemd: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 29 11:04:08 np0005601226 systemd: Detected virtualization kvm.
Jan 29 11:04:08 np0005601226 systemd: Detected architecture x86-64.
Jan 29 11:04:08 np0005601226 systemd: Running in initrd.
Jan 29 11:04:08 np0005601226 systemd: No hostname configured, using default hostname.
Jan 29 11:04:08 np0005601226 systemd: Hostname set to <localhost>.
Jan 29 11:04:08 np0005601226 systemd: Initializing machine ID from VM UUID.
Jan 29 11:04:08 np0005601226 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Jan 29 11:04:08 np0005601226 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Jan 29 11:04:08 np0005601226 kernel: usb 1-1: Product: QEMU USB Tablet
Jan 29 11:04:08 np0005601226 kernel: usb 1-1: Manufacturer: QEMU
Jan 29 11:04:08 np0005601226 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Jan 29 11:04:08 np0005601226 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Jan 29 11:04:08 np0005601226 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Jan 29 11:04:08 np0005601226 systemd: Queued start job for default target Initrd Default Target.
Jan 29 11:04:08 np0005601226 systemd: Started Dispatch Password Requests to Console Directory Watch.
Jan 29 11:04:08 np0005601226 systemd: Reached target Local Encrypted Volumes.
Jan 29 11:04:08 np0005601226 systemd: Reached target Initrd /usr File System.
Jan 29 11:04:08 np0005601226 systemd: Reached target Local File Systems.
Jan 29 11:04:08 np0005601226 systemd: Reached target Path Units.
Jan 29 11:04:08 np0005601226 systemd: Reached target Slice Units.
Jan 29 11:04:08 np0005601226 systemd: Reached target Swaps.
Jan 29 11:04:08 np0005601226 systemd: Reached target Timer Units.
Jan 29 11:04:08 np0005601226 systemd: Listening on D-Bus System Message Bus Socket.
Jan 29 11:04:08 np0005601226 systemd: Listening on Journal Socket (/dev/log).
Jan 29 11:04:08 np0005601226 systemd: Listening on Journal Socket.
Jan 29 11:04:08 np0005601226 systemd: Listening on udev Control Socket.
Jan 29 11:04:08 np0005601226 systemd: Listening on udev Kernel Socket.
Jan 29 11:04:08 np0005601226 systemd: Reached target Socket Units.
Jan 29 11:04:08 np0005601226 systemd: Starting Create List of Static Device Nodes...
Jan 29 11:04:08 np0005601226 systemd: Starting Journal Service...
Jan 29 11:04:08 np0005601226 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 29 11:04:08 np0005601226 systemd: Starting Apply Kernel Variables...
Jan 29 11:04:08 np0005601226 systemd: Starting Create System Users...
Jan 29 11:04:08 np0005601226 systemd: Starting Setup Virtual Console...
Jan 29 11:04:08 np0005601226 systemd: Finished Create List of Static Device Nodes.
Jan 29 11:04:08 np0005601226 systemd: Finished Apply Kernel Variables.
Jan 29 11:04:08 np0005601226 systemd: Finished Create System Users.
Jan 29 11:04:08 np0005601226 systemd-journald[307]: Journal started
Jan 29 11:04:08 np0005601226 systemd-journald[307]: Runtime Journal (/run/log/journal/3d58286e1b14486e8cad0bdb2d2969c4) is 8.0M, max 153.6M, 145.6M free.
Jan 29 11:04:08 np0005601226 systemd-sysusers[312]: Creating group 'users' with GID 100.
Jan 29 11:04:08 np0005601226 systemd-sysusers[312]: Creating group 'dbus' with GID 81.
Jan 29 11:04:08 np0005601226 systemd-sysusers[312]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Jan 29 11:04:08 np0005601226 systemd: Started Journal Service.
Jan 29 11:04:08 np0005601226 systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 29 11:04:08 np0005601226 systemd[1]: Starting Create Volatile Files and Directories...
Jan 29 11:04:08 np0005601226 systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 29 11:04:08 np0005601226 systemd[1]: Finished Setup Virtual Console.
Jan 29 11:04:08 np0005601226 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Jan 29 11:04:08 np0005601226 systemd[1]: Starting dracut cmdline hook...
Jan 29 11:04:08 np0005601226 systemd[1]: Finished Create Volatile Files and Directories.
Jan 29 11:04:08 np0005601226 dracut-cmdline[325]: dracut-9 dracut-057-102.git20250818.el9
Jan 29 11:04:08 np0005601226 dracut-cmdline[325]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64 root=UUID=822f14ea-6e7e-41df-b0d8-fbe282d9ded8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 29 11:04:08 np0005601226 systemd[1]: Finished dracut cmdline hook.
Jan 29 11:04:08 np0005601226 systemd[1]: Starting dracut pre-udev hook...
Jan 29 11:04:08 np0005601226 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 29 11:04:08 np0005601226 kernel: device-mapper: uevent: version 1.0.3
Jan 29 11:04:08 np0005601226 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Jan 29 11:04:08 np0005601226 kernel: RPC: Registered named UNIX socket transport module.
Jan 29 11:04:08 np0005601226 kernel: RPC: Registered udp transport module.
Jan 29 11:04:08 np0005601226 kernel: RPC: Registered tcp transport module.
Jan 29 11:04:08 np0005601226 kernel: RPC: Registered tcp-with-tls transport module.
Jan 29 11:04:08 np0005601226 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 29 11:04:08 np0005601226 rpc.statd[444]: Version 2.5.4 starting
Jan 29 11:04:08 np0005601226 rpc.statd[444]: Initializing NSM state
Jan 29 11:04:08 np0005601226 rpc.idmapd[449]: Setting log level to 0
Jan 29 11:04:08 np0005601226 systemd[1]: Finished dracut pre-udev hook.
Jan 29 11:04:08 np0005601226 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 29 11:04:08 np0005601226 systemd-udevd[462]: Using default interface naming scheme 'rhel-9.0'.
Jan 29 11:04:08 np0005601226 systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 29 11:04:08 np0005601226 systemd[1]: Starting dracut pre-trigger hook...
Jan 29 11:04:08 np0005601226 systemd[1]: Finished dracut pre-trigger hook.
Jan 29 11:04:08 np0005601226 systemd[1]: Starting Coldplug All udev Devices...
Jan 29 11:04:08 np0005601226 systemd[1]: Created slice Slice /system/modprobe.
Jan 29 11:04:08 np0005601226 systemd[1]: Starting Load Kernel Module configfs...
Jan 29 11:04:08 np0005601226 systemd[1]: Finished Coldplug All udev Devices.
Jan 29 11:04:08 np0005601226 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 29 11:04:08 np0005601226 systemd[1]: Finished Load Kernel Module configfs.
Jan 29 11:04:08 np0005601226 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 29 11:04:08 np0005601226 systemd[1]: Reached target Network.
Jan 29 11:04:08 np0005601226 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 29 11:04:08 np0005601226 systemd[1]: Starting dracut initqueue hook...
Jan 29 11:04:08 np0005601226 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Jan 29 11:04:08 np0005601226 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Jan 29 11:04:08 np0005601226 kernel: vda: vda1
Jan 29 11:04:08 np0005601226 systemd-udevd[495]: Network interface NamePolicy= disabled on kernel command line.
Jan 29 11:04:08 np0005601226 kernel: scsi host0: ata_piix
Jan 29 11:04:08 np0005601226 kernel: scsi host1: ata_piix
Jan 29 11:04:08 np0005601226 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Jan 29 11:04:08 np0005601226 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Jan 29 11:04:08 np0005601226 systemd[1]: Found device /dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8.
Jan 29 11:04:08 np0005601226 systemd[1]: Reached target Initrd Root Device.
Jan 29 11:04:08 np0005601226 kernel: ata1: found unknown device (class 0)
Jan 29 11:04:08 np0005601226 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 29 11:04:08 np0005601226 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Jan 29 11:04:09 np0005601226 systemd[1]: Mounting Kernel Configuration File System...
Jan 29 11:04:09 np0005601226 systemd[1]: Mounted Kernel Configuration File System.
Jan 29 11:04:09 np0005601226 systemd[1]: Reached target System Initialization.
Jan 29 11:04:09 np0005601226 systemd[1]: Reached target Basic System.
Jan 29 11:04:09 np0005601226 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Jan 29 11:04:09 np0005601226 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 29 11:04:09 np0005601226 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 29 11:04:09 np0005601226 systemd[1]: Finished dracut initqueue hook.
Jan 29 11:04:09 np0005601226 systemd[1]: Reached target Preparation for Remote File Systems.
Jan 29 11:04:09 np0005601226 systemd[1]: Reached target Remote Encrypted Volumes.
Jan 29 11:04:09 np0005601226 systemd[1]: Reached target Remote File Systems.
Jan 29 11:04:09 np0005601226 systemd[1]: Starting dracut pre-mount hook...
Jan 29 11:04:09 np0005601226 systemd[1]: Finished dracut pre-mount hook.
Jan 29 11:04:09 np0005601226 systemd[1]: Starting File System Check on /dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8...
Jan 29 11:04:09 np0005601226 systemd-fsck[555]: /usr/sbin/fsck.xfs: XFS file system.
Jan 29 11:04:09 np0005601226 systemd[1]: Finished File System Check on /dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8.
Jan 29 11:04:09 np0005601226 systemd[1]: Mounting /sysroot...
Jan 29 11:04:09 np0005601226 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Jan 29 11:04:09 np0005601226 kernel: XFS (vda1): Mounting V5 Filesystem 822f14ea-6e7e-41df-b0d8-fbe282d9ded8
Jan 29 11:04:10 np0005601226 kernel: XFS (vda1): Ending clean mount
Jan 29 11:04:10 np0005601226 systemd[1]: Mounted /sysroot.
Jan 29 11:04:10 np0005601226 systemd[1]: Reached target Initrd Root File System.
Jan 29 11:04:10 np0005601226 systemd[1]: Starting Mountpoints Configured in the Real Root...
Jan 29 11:04:10 np0005601226 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 29 11:04:10 np0005601226 systemd[1]: Finished Mountpoints Configured in the Real Root.
Jan 29 11:04:10 np0005601226 systemd[1]: Reached target Initrd File Systems.
Jan 29 11:04:10 np0005601226 systemd[1]: Reached target Initrd Default Target.
Jan 29 11:04:10 np0005601226 systemd[1]: Starting dracut mount hook...
Jan 29 11:04:10 np0005601226 systemd[1]: Finished dracut mount hook.
Jan 29 11:04:10 np0005601226 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Jan 29 11:04:10 np0005601226 rpc.idmapd[449]: exiting on signal 15
Jan 29 11:04:10 np0005601226 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Jan 29 11:04:10 np0005601226 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Jan 29 11:04:10 np0005601226 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Jan 29 11:04:10 np0005601226 systemd[1]: Stopped target Network.
Jan 29 11:04:10 np0005601226 systemd[1]: Stopped target Remote Encrypted Volumes.
Jan 29 11:04:10 np0005601226 systemd[1]: Stopped target Timer Units.
Jan 29 11:04:10 np0005601226 systemd[1]: dbus.socket: Deactivated successfully.
Jan 29 11:04:10 np0005601226 systemd[1]: Closed D-Bus System Message Bus Socket.
Jan 29 11:04:10 np0005601226 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 29 11:04:10 np0005601226 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Jan 29 11:04:10 np0005601226 systemd[1]: Stopped target Initrd Default Target.
Jan 29 11:04:10 np0005601226 systemd[1]: Stopped target Basic System.
Jan 29 11:04:10 np0005601226 systemd[1]: Stopped target Initrd Root Device.
Jan 29 11:04:10 np0005601226 systemd[1]: Stopped target Initrd /usr File System.
Jan 29 11:04:10 np0005601226 systemd[1]: Stopped target Path Units.
Jan 29 11:04:10 np0005601226 systemd[1]: Stopped target Remote File Systems.
Jan 29 11:04:10 np0005601226 systemd[1]: Stopped target Preparation for Remote File Systems.
Jan 29 11:04:10 np0005601226 systemd[1]: Stopped target Slice Units.
Jan 29 11:04:10 np0005601226 systemd[1]: Stopped target Socket Units.
Jan 29 11:04:10 np0005601226 systemd[1]: Stopped target System Initialization.
Jan 29 11:04:10 np0005601226 systemd[1]: Stopped target Local File Systems.
Jan 29 11:04:10 np0005601226 systemd[1]: Stopped target Swaps.
Jan 29 11:04:10 np0005601226 systemd[1]: dracut-mount.service: Deactivated successfully.
Jan 29 11:04:10 np0005601226 systemd[1]: Stopped dracut mount hook.
Jan 29 11:04:10 np0005601226 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 29 11:04:10 np0005601226 systemd[1]: Stopped dracut pre-mount hook.
Jan 29 11:04:10 np0005601226 systemd[1]: Stopped target Local Encrypted Volumes.
Jan 29 11:04:10 np0005601226 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 29 11:04:10 np0005601226 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Jan 29 11:04:10 np0005601226 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 29 11:04:10 np0005601226 systemd[1]: Stopped dracut initqueue hook.
Jan 29 11:04:10 np0005601226 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 29 11:04:10 np0005601226 systemd[1]: Stopped Apply Kernel Variables.
Jan 29 11:04:10 np0005601226 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 29 11:04:10 np0005601226 systemd[1]: Stopped Create Volatile Files and Directories.
Jan 29 11:04:10 np0005601226 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 29 11:04:10 np0005601226 systemd[1]: Stopped Coldplug All udev Devices.
Jan 29 11:04:10 np0005601226 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 29 11:04:10 np0005601226 systemd[1]: Stopped dracut pre-trigger hook.
Jan 29 11:04:10 np0005601226 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Jan 29 11:04:10 np0005601226 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 11:04:10 np0005601226 systemd[1]: Stopped Setup Virtual Console.
Jan 29 11:04:10 np0005601226 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 29 11:04:10 np0005601226 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 29 11:04:10 np0005601226 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 29 11:04:10 np0005601226 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Jan 29 11:04:10 np0005601226 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 29 11:04:10 np0005601226 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Jan 29 11:04:10 np0005601226 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 29 11:04:10 np0005601226 systemd[1]: Closed udev Control Socket.
Jan 29 11:04:10 np0005601226 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 29 11:04:10 np0005601226 systemd[1]: Closed udev Kernel Socket.
Jan 29 11:04:10 np0005601226 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 29 11:04:10 np0005601226 systemd[1]: Stopped dracut pre-udev hook.
Jan 29 11:04:10 np0005601226 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 29 11:04:10 np0005601226 systemd[1]: Stopped dracut cmdline hook.
Jan 29 11:04:10 np0005601226 systemd[1]: Starting Cleanup udev Database...
Jan 29 11:04:10 np0005601226 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 29 11:04:10 np0005601226 systemd[1]: Stopped Create Static Device Nodes in /dev.
Jan 29 11:04:10 np0005601226 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 29 11:04:10 np0005601226 systemd[1]: Stopped Create List of Static Device Nodes.
Jan 29 11:04:10 np0005601226 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Jan 29 11:04:10 np0005601226 systemd[1]: Stopped Create System Users.
Jan 29 11:04:10 np0005601226 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jan 29 11:04:10 np0005601226 systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Jan 29 11:04:10 np0005601226 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 29 11:04:10 np0005601226 systemd[1]: Finished Cleanup udev Database.
Jan 29 11:04:10 np0005601226 systemd[1]: Reached target Switch Root.
Jan 29 11:04:10 np0005601226 systemd[1]: Starting Switch Root...
Jan 29 11:04:10 np0005601226 systemd[1]: Switching root.
Jan 29 11:04:10 np0005601226 systemd-journald[307]: Journal stopped
Jan 29 11:04:12 np0005601226 systemd-journald: Received SIGTERM from PID 1 (systemd).
Jan 29 11:04:12 np0005601226 kernel: audit: type=1404 audit(1769702650.889:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Jan 29 11:04:12 np0005601226 kernel: SELinux:  policy capability network_peer_controls=1
Jan 29 11:04:12 np0005601226 kernel: SELinux:  policy capability open_perms=1
Jan 29 11:04:12 np0005601226 kernel: SELinux:  policy capability extended_socket_class=1
Jan 29 11:04:12 np0005601226 kernel: SELinux:  policy capability always_check_network=0
Jan 29 11:04:12 np0005601226 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 29 11:04:12 np0005601226 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 29 11:04:12 np0005601226 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 29 11:04:12 np0005601226 kernel: audit: type=1403 audit(1769702651.091:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 29 11:04:12 np0005601226 systemd: Successfully loaded SELinux policy in 207.237ms.
Jan 29 11:04:12 np0005601226 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 38.689ms.
Jan 29 11:04:12 np0005601226 systemd: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 29 11:04:12 np0005601226 systemd: Detected virtualization kvm.
Jan 29 11:04:12 np0005601226 systemd: Detected architecture x86-64.
Jan 29 11:04:12 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 11:04:12 np0005601226 systemd: initrd-switch-root.service: Deactivated successfully.
Jan 29 11:04:12 np0005601226 systemd: Stopped Switch Root.
Jan 29 11:04:12 np0005601226 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 29 11:04:12 np0005601226 systemd: Created slice Slice /system/getty.
Jan 29 11:04:12 np0005601226 systemd: Created slice Slice /system/serial-getty.
Jan 29 11:04:12 np0005601226 systemd: Created slice Slice /system/sshd-keygen.
Jan 29 11:04:12 np0005601226 systemd: Created slice User and Session Slice.
Jan 29 11:04:12 np0005601226 systemd: Started Dispatch Password Requests to Console Directory Watch.
Jan 29 11:04:12 np0005601226 systemd: Started Forward Password Requests to Wall Directory Watch.
Jan 29 11:04:12 np0005601226 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Jan 29 11:04:12 np0005601226 systemd: Reached target Local Encrypted Volumes.
Jan 29 11:04:12 np0005601226 systemd: Stopped target Switch Root.
Jan 29 11:04:12 np0005601226 systemd: Stopped target Initrd File Systems.
Jan 29 11:04:12 np0005601226 systemd: Stopped target Initrd Root File System.
Jan 29 11:04:12 np0005601226 systemd: Reached target Local Integrity Protected Volumes.
Jan 29 11:04:12 np0005601226 systemd: Reached target Path Units.
Jan 29 11:04:12 np0005601226 systemd: Reached target rpc_pipefs.target.
Jan 29 11:04:12 np0005601226 systemd: Reached target Slice Units.
Jan 29 11:04:12 np0005601226 systemd: Reached target Swaps.
Jan 29 11:04:12 np0005601226 systemd: Reached target Local Verity Protected Volumes.
Jan 29 11:04:12 np0005601226 systemd: Listening on RPCbind Server Activation Socket.
Jan 29 11:04:12 np0005601226 systemd: Reached target RPC Port Mapper.
Jan 29 11:04:12 np0005601226 systemd: Listening on Process Core Dump Socket.
Jan 29 11:04:12 np0005601226 systemd: Listening on initctl Compatibility Named Pipe.
Jan 29 11:04:12 np0005601226 systemd: Listening on udev Control Socket.
Jan 29 11:04:12 np0005601226 systemd: Listening on udev Kernel Socket.
Jan 29 11:04:12 np0005601226 systemd: Mounting Huge Pages File System...
Jan 29 11:04:12 np0005601226 systemd: Mounting POSIX Message Queue File System...
Jan 29 11:04:12 np0005601226 systemd: Mounting Kernel Debug File System...
Jan 29 11:04:12 np0005601226 systemd: Mounting Kernel Trace File System...
Jan 29 11:04:12 np0005601226 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 29 11:04:12 np0005601226 systemd: Starting Create List of Static Device Nodes...
Jan 29 11:04:12 np0005601226 systemd: Starting Load Kernel Module configfs...
Jan 29 11:04:12 np0005601226 systemd: Starting Load Kernel Module drm...
Jan 29 11:04:12 np0005601226 systemd: Starting Load Kernel Module efi_pstore...
Jan 29 11:04:12 np0005601226 systemd: Starting Load Kernel Module fuse...
Jan 29 11:04:12 np0005601226 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Jan 29 11:04:12 np0005601226 systemd: systemd-fsck-root.service: Deactivated successfully.
Jan 29 11:04:12 np0005601226 systemd: Stopped File System Check on Root Device.
Jan 29 11:04:12 np0005601226 systemd: Stopped Journal Service.
Jan 29 11:04:12 np0005601226 systemd: Starting Journal Service...
Jan 29 11:04:12 np0005601226 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 29 11:04:12 np0005601226 systemd: Starting Generate network units from Kernel command line...
Jan 29 11:04:12 np0005601226 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 29 11:04:12 np0005601226 systemd: Starting Remount Root and Kernel File Systems...
Jan 29 11:04:12 np0005601226 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 11:04:12 np0005601226 systemd: Starting Apply Kernel Variables...
Jan 29 11:04:12 np0005601226 kernel: fuse: init (API version 7.37)
Jan 29 11:04:12 np0005601226 systemd: Starting Coldplug All udev Devices...
Jan 29 11:04:12 np0005601226 systemd-journald[678]: Journal started
Jan 29 11:04:12 np0005601226 systemd-journald[678]: Runtime Journal (/run/log/journal/bf0bc0bb03de29b24cba1cc9599cf5d0) is 8.0M, max 153.6M, 145.6M free.
Jan 29 11:04:12 np0005601226 systemd[1]: Queued start job for default target Multi-User System.
Jan 29 11:04:12 np0005601226 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 29 11:04:12 np0005601226 systemd: Started Journal Service.
Jan 29 11:04:12 np0005601226 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Jan 29 11:04:12 np0005601226 systemd[1]: Mounted Huge Pages File System.
Jan 29 11:04:12 np0005601226 systemd[1]: Mounted POSIX Message Queue File System.
Jan 29 11:04:12 np0005601226 systemd[1]: Mounted Kernel Debug File System.
Jan 29 11:04:12 np0005601226 systemd[1]: Mounted Kernel Trace File System.
Jan 29 11:04:12 np0005601226 systemd[1]: Finished Create List of Static Device Nodes.
Jan 29 11:04:12 np0005601226 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 29 11:04:12 np0005601226 systemd[1]: Finished Load Kernel Module configfs.
Jan 29 11:04:12 np0005601226 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 11:04:12 np0005601226 systemd[1]: Finished Load Kernel Module efi_pstore.
Jan 29 11:04:12 np0005601226 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 29 11:04:12 np0005601226 systemd[1]: Finished Load Kernel Module fuse.
Jan 29 11:04:12 np0005601226 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Jan 29 11:04:12 np0005601226 systemd[1]: Finished Generate network units from Kernel command line.
Jan 29 11:04:12 np0005601226 systemd[1]: Finished Remount Root and Kernel File Systems.
Jan 29 11:04:12 np0005601226 systemd[1]: Finished Apply Kernel Variables.
Jan 29 11:04:12 np0005601226 kernel: ACPI: bus type drm_connector registered
Jan 29 11:04:12 np0005601226 systemd[1]: Mounting FUSE Control File System...
Jan 29 11:04:12 np0005601226 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 29 11:04:12 np0005601226 systemd[1]: Starting Rebuild Hardware Database...
Jan 29 11:04:12 np0005601226 systemd[1]: Starting Flush Journal to Persistent Storage...
Jan 29 11:04:12 np0005601226 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 11:04:12 np0005601226 systemd[1]: Starting Load/Save OS Random Seed...
Jan 29 11:04:12 np0005601226 systemd[1]: Starting Create System Users...
Jan 29 11:04:12 np0005601226 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 11:04:12 np0005601226 systemd[1]: Finished Load Kernel Module drm.
Jan 29 11:04:12 np0005601226 systemd[1]: Finished Coldplug All udev Devices.
Jan 29 11:04:12 np0005601226 systemd[1]: Mounted FUSE Control File System.
Jan 29 11:04:12 np0005601226 systemd-journald[678]: Runtime Journal (/run/log/journal/bf0bc0bb03de29b24cba1cc9599cf5d0) is 8.0M, max 153.6M, 145.6M free.
Jan 29 11:04:12 np0005601226 systemd-journald[678]: Received client request to flush runtime journal.
Jan 29 11:04:12 np0005601226 systemd[1]: Finished Flush Journal to Persistent Storage.
Jan 29 11:04:12 np0005601226 systemd[1]: Finished Create System Users.
Jan 29 11:04:12 np0005601226 systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 29 11:04:12 np0005601226 systemd[1]: Finished Load/Save OS Random Seed.
Jan 29 11:04:12 np0005601226 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 29 11:04:12 np0005601226 systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 29 11:04:12 np0005601226 systemd[1]: Reached target Preparation for Local File Systems.
Jan 29 11:04:12 np0005601226 systemd[1]: Reached target Local File Systems.
Jan 29 11:04:12 np0005601226 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Jan 29 11:04:12 np0005601226 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Jan 29 11:04:12 np0005601226 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 11:04:12 np0005601226 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Jan 29 11:04:12 np0005601226 systemd[1]: Starting Automatic Boot Loader Update...
Jan 29 11:04:12 np0005601226 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Jan 29 11:04:12 np0005601226 systemd[1]: Starting Create Volatile Files and Directories...
Jan 29 11:04:12 np0005601226 bootctl[697]: Couldn't find EFI system partition, skipping.
Jan 29 11:04:12 np0005601226 systemd[1]: Finished Automatic Boot Loader Update.
Jan 29 11:04:12 np0005601226 systemd[1]: Finished Create Volatile Files and Directories.
Jan 29 11:04:12 np0005601226 systemd[1]: Starting Security Auditing Service...
Jan 29 11:04:12 np0005601226 systemd[1]: Starting RPC Bind...
Jan 29 11:04:12 np0005601226 systemd[1]: Starting Rebuild Journal Catalog...
Jan 29 11:04:12 np0005601226 systemd[1]: Finished Rebuild Journal Catalog.
Jan 29 11:04:12 np0005601226 auditd[703]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Jan 29 11:04:12 np0005601226 auditd[703]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Jan 29 11:04:12 np0005601226 systemd[1]: Started RPC Bind.
Jan 29 11:04:12 np0005601226 augenrules[708]: /sbin/augenrules: No change
Jan 29 11:04:12 np0005601226 augenrules[723]: No rules
Jan 29 11:04:12 np0005601226 augenrules[723]: enabled 1
Jan 29 11:04:12 np0005601226 augenrules[723]: failure 1
Jan 29 11:04:12 np0005601226 augenrules[723]: pid 703
Jan 29 11:04:12 np0005601226 augenrules[723]: rate_limit 0
Jan 29 11:04:12 np0005601226 augenrules[723]: backlog_limit 8192
Jan 29 11:04:12 np0005601226 augenrules[723]: lost 0
Jan 29 11:04:12 np0005601226 augenrules[723]: backlog 4
Jan 29 11:04:12 np0005601226 augenrules[723]: backlog_wait_time 60000
Jan 29 11:04:12 np0005601226 augenrules[723]: backlog_wait_time_actual 0
Jan 29 11:04:12 np0005601226 augenrules[723]: enabled 1
Jan 29 11:04:12 np0005601226 augenrules[723]: failure 1
Jan 29 11:04:12 np0005601226 augenrules[723]: pid 703
Jan 29 11:04:12 np0005601226 augenrules[723]: rate_limit 0
Jan 29 11:04:12 np0005601226 augenrules[723]: backlog_limit 8192
Jan 29 11:04:12 np0005601226 augenrules[723]: lost 0
Jan 29 11:04:12 np0005601226 augenrules[723]: backlog 1
Jan 29 11:04:12 np0005601226 augenrules[723]: backlog_wait_time 60000
Jan 29 11:04:12 np0005601226 augenrules[723]: backlog_wait_time_actual 0
Jan 29 11:04:12 np0005601226 systemd[1]: Started Security Auditing Service.
Jan 29 11:04:12 np0005601226 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Jan 29 11:04:12 np0005601226 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Jan 29 11:04:13 np0005601226 systemd[1]: Finished Rebuild Hardware Database.
Jan 29 11:04:13 np0005601226 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 29 11:04:13 np0005601226 systemd-udevd[731]: Using default interface naming scheme 'rhel-9.0'.
Jan 29 11:04:13 np0005601226 systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 29 11:04:13 np0005601226 systemd[1]: Starting Load Kernel Module configfs...
Jan 29 11:04:13 np0005601226 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Jan 29 11:04:13 np0005601226 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 29 11:04:13 np0005601226 systemd[1]: Finished Load Kernel Module configfs.
Jan 29 11:04:13 np0005601226 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Jan 29 11:04:13 np0005601226 systemd-udevd[735]: Network interface NamePolicy= disabled on kernel command line.
Jan 29 11:04:13 np0005601226 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jan 29 11:04:13 np0005601226 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 29 11:04:13 np0005601226 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 29 11:04:13 np0005601226 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jan 29 11:04:13 np0005601226 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jan 29 11:04:13 np0005601226 kernel: Console: switching to colour dummy device 80x25
Jan 29 11:04:13 np0005601226 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 29 11:04:13 np0005601226 kernel: [drm] features: -context_init
Jan 29 11:04:13 np0005601226 kernel: [drm] number of scanouts: 1
Jan 29 11:04:13 np0005601226 kernel: [drm] number of cap sets: 0
Jan 29 11:04:13 np0005601226 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Jan 29 11:04:13 np0005601226 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 29 11:04:13 np0005601226 kernel: Console: switching to colour frame buffer device 128x48
Jan 29 11:04:13 np0005601226 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 29 11:04:14 np0005601226 kernel: kvm_amd: TSC scaling supported
Jan 29 11:04:14 np0005601226 kernel: kvm_amd: Nested Virtualization enabled
Jan 29 11:04:14 np0005601226 kernel: kvm_amd: Nested Paging enabled
Jan 29 11:04:14 np0005601226 kernel: kvm_amd: LBR virtualization supported
Jan 29 11:04:14 np0005601226 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Jan 29 11:04:14 np0005601226 systemd[1]: Starting Update is Completed...
Jan 29 11:04:14 np0005601226 systemd[1]: Finished Update is Completed.
Jan 29 11:04:14 np0005601226 systemd[1]: Reached target System Initialization.
Jan 29 11:04:14 np0005601226 systemd[1]: Started dnf makecache --timer.
Jan 29 11:04:14 np0005601226 systemd[1]: Started Daily rotation of log files.
Jan 29 11:04:14 np0005601226 systemd[1]: Started Daily Cleanup of Temporary Directories.
Jan 29 11:04:14 np0005601226 systemd[1]: Reached target Timer Units.
Jan 29 11:04:14 np0005601226 systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 29 11:04:14 np0005601226 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Jan 29 11:04:14 np0005601226 systemd[1]: Reached target Socket Units.
Jan 29 11:04:14 np0005601226 systemd[1]: Starting D-Bus System Message Bus...
Jan 29 11:04:14 np0005601226 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 29 11:04:14 np0005601226 systemd[1]: Started D-Bus System Message Bus.
Jan 29 11:04:14 np0005601226 systemd[1]: Reached target Basic System.
Jan 29 11:04:14 np0005601226 dbus-broker-lau[813]: Ready
Jan 29 11:04:14 np0005601226 systemd[1]: Starting NTP client/server...
Jan 29 11:04:14 np0005601226 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Jan 29 11:04:14 np0005601226 systemd[1]: Starting Restore /run/initramfs on shutdown...
Jan 29 11:04:14 np0005601226 systemd[1]: Starting IPv4 firewall with iptables...
Jan 29 11:04:14 np0005601226 systemd[1]: Started irqbalance daemon.
Jan 29 11:04:14 np0005601226 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Jan 29 11:04:14 np0005601226 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 29 11:04:14 np0005601226 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 29 11:04:14 np0005601226 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 29 11:04:14 np0005601226 systemd[1]: Reached target sshd-keygen.target.
Jan 29 11:04:14 np0005601226 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Jan 29 11:04:14 np0005601226 systemd[1]: Reached target User and Group Name Lookups.
Jan 29 11:04:14 np0005601226 systemd[1]: Starting User Login Management...
Jan 29 11:04:14 np0005601226 systemd[1]: Finished Restore /run/initramfs on shutdown.
Jan 29 11:04:14 np0005601226 systemd-logind[823]: New seat seat0.
Jan 29 11:04:14 np0005601226 systemd-logind[823]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 29 11:04:14 np0005601226 systemd-logind[823]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 29 11:04:14 np0005601226 systemd[1]: Started User Login Management.
Jan 29 11:04:14 np0005601226 chronyd[832]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 29 11:04:14 np0005601226 chronyd[832]: Loaded 0 symmetric keys
Jan 29 11:04:14 np0005601226 chronyd[832]: Using right/UTC timezone to obtain leap second data
Jan 29 11:04:14 np0005601226 chronyd[832]: Loaded seccomp filter (level 2)
Jan 29 11:04:14 np0005601226 systemd[1]: Started NTP client/server.
Jan 29 11:04:14 np0005601226 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Jan 29 11:04:14 np0005601226 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Jan 29 11:04:14 np0005601226 iptables.init[818]: iptables: Applying firewall rules: [  OK  ]
Jan 29 11:04:14 np0005601226 systemd[1]: Finished IPv4 firewall with iptables.
Jan 29 11:04:17 np0005601226 cloud-init[841]: Cloud-init v. 24.4-8.el9 running 'init-local' at Thu, 29 Jan 2026 16:04:17 +0000. Up 10.86 seconds.
Jan 29 11:04:17 np0005601226 systemd[1]: run-cloud\x2dinit-tmp-tmplv_ll67l.mount: Deactivated successfully.
Jan 29 11:04:17 np0005601226 systemd[1]: Starting Hostname Service...
Jan 29 11:04:17 np0005601226 systemd[1]: Started Hostname Service.
Jan 29 11:04:17 np0005601226 systemd-hostnamed[856]: Hostname set to <np0005601226.novalocal> (static)
Jan 29 11:04:18 np0005601226 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Jan 29 11:04:18 np0005601226 systemd[1]: Reached target Preparation for Network.
Jan 29 11:04:18 np0005601226 systemd[1]: Starting Network Manager...
Jan 29 11:04:18 np0005601226 NetworkManager[860]: <info>  [1769702658.4896] NetworkManager (version 1.54.3-2.el9) is starting... (boot:9485f3a0-b546-449b-a1de-1a80f8dff8e7)
Jan 29 11:04:18 np0005601226 NetworkManager[860]: <info>  [1769702658.4900] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 29 11:04:18 np0005601226 NetworkManager[860]: <info>  [1769702658.5620] manager[0x55f72951e000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 29 11:04:18 np0005601226 NetworkManager[860]: <info>  [1769702658.5656] hostname: hostname: using hostnamed
Jan 29 11:04:18 np0005601226 NetworkManager[860]: <info>  [1769702658.5657] hostname: static hostname changed from (none) to "np0005601226.novalocal"
Jan 29 11:04:18 np0005601226 NetworkManager[860]: <info>  [1769702658.5661] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 29 11:04:18 np0005601226 NetworkManager[860]: <info>  [1769702658.5832] manager[0x55f72951e000]: rfkill: Wi-Fi hardware radio set enabled
Jan 29 11:04:18 np0005601226 NetworkManager[860]: <info>  [1769702658.5832] manager[0x55f72951e000]: rfkill: WWAN hardware radio set enabled
Jan 29 11:04:18 np0005601226 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Jan 29 11:04:18 np0005601226 NetworkManager[860]: <info>  [1769702658.5927] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 29 11:04:18 np0005601226 NetworkManager[860]: <info>  [1769702658.5928] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 29 11:04:18 np0005601226 NetworkManager[860]: <info>  [1769702658.5928] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 29 11:04:18 np0005601226 NetworkManager[860]: <info>  [1769702658.5928] manager: Networking is enabled by state file
Jan 29 11:04:18 np0005601226 NetworkManager[860]: <info>  [1769702658.5930] settings: Loaded settings plugin: keyfile (internal)
Jan 29 11:04:18 np0005601226 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 29 11:04:18 np0005601226 NetworkManager[860]: <info>  [1769702658.6336] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 29 11:04:18 np0005601226 NetworkManager[860]: <info>  [1769702658.6846] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 29 11:04:18 np0005601226 NetworkManager[860]: <info>  [1769702658.6861] dhcp: init: Using DHCP client 'internal'
Jan 29 11:04:18 np0005601226 NetworkManager[860]: <info>  [1769702658.6867] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 29 11:04:18 np0005601226 NetworkManager[860]: <info>  [1769702658.6882] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 29 11:04:18 np0005601226 NetworkManager[860]: <info>  [1769702658.7127] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 29 11:04:18 np0005601226 NetworkManager[860]: <info>  [1769702658.7256] device (lo): Activation: starting connection 'lo' (fb19d968-2132-4ea2-ac78-a40c265fabbe)
Jan 29 11:04:18 np0005601226 NetworkManager[860]: <info>  [1769702658.7271] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 29 11:04:18 np0005601226 NetworkManager[860]: <info>  [1769702658.7275] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 29 11:04:18 np0005601226 systemd[1]: Started Network Manager.
Jan 29 11:04:18 np0005601226 NetworkManager[860]: <info>  [1769702658.7317] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 29 11:04:18 np0005601226 systemd[1]: Reached target Network.
Jan 29 11:04:18 np0005601226 NetworkManager[860]: <info>  [1769702658.7372] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 29 11:04:18 np0005601226 NetworkManager[860]: <info>  [1769702658.7375] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 29 11:04:18 np0005601226 NetworkManager[860]: <info>  [1769702658.7377] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 29 11:04:18 np0005601226 NetworkManager[860]: <info>  [1769702658.7379] device (eth0): carrier: link connected
Jan 29 11:04:18 np0005601226 NetworkManager[860]: <info>  [1769702658.7390] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 29 11:04:18 np0005601226 NetworkManager[860]: <info>  [1769702658.7396] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 29 11:04:18 np0005601226 NetworkManager[860]: <info>  [1769702658.7403] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 29 11:04:18 np0005601226 NetworkManager[860]: <info>  [1769702658.7407] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 29 11:04:18 np0005601226 NetworkManager[860]: <info>  [1769702658.7409] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 29 11:04:18 np0005601226 NetworkManager[860]: <info>  [1769702658.7411] manager: NetworkManager state is now CONNECTING
Jan 29 11:04:18 np0005601226 NetworkManager[860]: <info>  [1769702658.7413] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 29 11:04:18 np0005601226 NetworkManager[860]: <info>  [1769702658.7418] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 29 11:04:18 np0005601226 NetworkManager[860]: <info>  [1769702658.7421] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 29 11:04:18 np0005601226 NetworkManager[860]: <info>  [1769702658.7472] dhcp4 (eth0): state changed new lease, address=38.129.56.71
Jan 29 11:04:18 np0005601226 NetworkManager[860]: <info>  [1769702658.7477] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 29 11:04:18 np0005601226 NetworkManager[860]: <info>  [1769702658.7491] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 29 11:04:18 np0005601226 NetworkManager[860]: <info>  [1769702658.7522] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 29 11:04:18 np0005601226 NetworkManager[860]: <info>  [1769702658.7526] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 29 11:04:18 np0005601226 NetworkManager[860]: <info>  [1769702658.7531] device (lo): Activation: successful, device activated.
Jan 29 11:04:18 np0005601226 systemd[1]: Starting Network Manager Wait Online...
Jan 29 11:04:18 np0005601226 NetworkManager[860]: <info>  [1769702658.7538] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 29 11:04:18 np0005601226 NetworkManager[860]: <info>  [1769702658.7539] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 29 11:04:18 np0005601226 NetworkManager[860]: <info>  [1769702658.7543] manager: NetworkManager state is now CONNECTED_SITE
Jan 29 11:04:18 np0005601226 NetworkManager[860]: <info>  [1769702658.7546] device (eth0): Activation: successful, device activated.
Jan 29 11:04:18 np0005601226 NetworkManager[860]: <info>  [1769702658.7551] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 29 11:04:18 np0005601226 NetworkManager[860]: <info>  [1769702658.7554] manager: startup complete
Jan 29 11:04:18 np0005601226 systemd[1]: Starting GSSAPI Proxy Daemon...
Jan 29 11:04:18 np0005601226 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 29 11:04:18 np0005601226 systemd[1]: Finished Network Manager Wait Online.
Jan 29 11:04:18 np0005601226 systemd[1]: Starting Cloud-init: Network Stage...
Jan 29 11:04:18 np0005601226 systemd[1]: Started GSSAPI Proxy Daemon.
Jan 29 11:04:18 np0005601226 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 29 11:04:18 np0005601226 systemd[1]: Reached target NFS client services.
Jan 29 11:04:18 np0005601226 systemd[1]: Reached target Preparation for Remote File Systems.
Jan 29 11:04:18 np0005601226 systemd[1]: Reached target Remote File Systems.
Jan 29 11:04:18 np0005601226 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 29 11:04:19 np0005601226 cloud-init[922]: Cloud-init v. 24.4-8.el9 running 'init' at Thu, 29 Jan 2026 16:04:19 +0000. Up 12.60 seconds.
Jan 29 11:04:19 np0005601226 cloud-init[922]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Jan 29 11:04:19 np0005601226 cloud-init[922]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 29 11:04:19 np0005601226 cloud-init[922]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Jan 29 11:04:19 np0005601226 cloud-init[922]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 29 11:04:19 np0005601226 cloud-init[922]: ci-info: |  eth0  | True |         38.129.56.71         | 255.255.255.0 | global | fa:16:3e:a8:fc:a4 |
Jan 29 11:04:19 np0005601226 cloud-init[922]: ci-info: |  eth0  | True | fe80::f816:3eff:fea8:fca4/64 |       .       |  link  | fa:16:3e:a8:fc:a4 |
Jan 29 11:04:19 np0005601226 cloud-init[922]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Jan 29 11:04:19 np0005601226 cloud-init[922]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Jan 29 11:04:19 np0005601226 cloud-init[922]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 29 11:04:19 np0005601226 cloud-init[922]: ci-info: ++++++++++++++++++++++++++++++++Route IPv4 info++++++++++++++++++++++++++++++++
Jan 29 11:04:19 np0005601226 cloud-init[922]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Jan 29 11:04:19 np0005601226 cloud-init[922]: ci-info: | Route |   Destination   |   Gateway   |     Genmask     | Interface | Flags |
Jan 29 11:04:19 np0005601226 cloud-init[922]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Jan 29 11:04:19 np0005601226 cloud-init[922]: ci-info: |   0   |     0.0.0.0     | 38.129.56.1 |     0.0.0.0     |    eth0   |   UG  |
Jan 29 11:04:19 np0005601226 cloud-init[922]: ci-info: |   1   |   38.129.56.0   |   0.0.0.0   |  255.255.255.0  |    eth0   |   U   |
Jan 29 11:04:19 np0005601226 cloud-init[922]: ci-info: |   2   | 169.254.169.254 | 38.129.56.5 | 255.255.255.255 |    eth0   |  UGH  |
Jan 29 11:04:19 np0005601226 cloud-init[922]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Jan 29 11:04:19 np0005601226 cloud-init[922]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Jan 29 11:04:19 np0005601226 cloud-init[922]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 29 11:04:19 np0005601226 cloud-init[922]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Jan 29 11:04:19 np0005601226 cloud-init[922]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 29 11:04:19 np0005601226 cloud-init[922]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Jan 29 11:04:19 np0005601226 cloud-init[922]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Jan 29 11:04:19 np0005601226 cloud-init[922]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 29 11:04:23 np0005601226 chronyd[832]: Selected source 167.160.187.179 (2.centos.pool.ntp.org)
Jan 29 11:04:23 np0005601226 chronyd[832]: System clock TAI offset set to 37 seconds
Jan 29 11:04:25 np0005601226 irqbalance[819]: Cannot change IRQ 25 affinity: Operation not permitted
Jan 29 11:04:25 np0005601226 irqbalance[819]: IRQ 25 affinity is now unmanaged
Jan 29 11:04:25 np0005601226 irqbalance[819]: Cannot change IRQ 31 affinity: Operation not permitted
Jan 29 11:04:25 np0005601226 irqbalance[819]: IRQ 31 affinity is now unmanaged
Jan 29 11:04:25 np0005601226 irqbalance[819]: Cannot change IRQ 28 affinity: Operation not permitted
Jan 29 11:04:25 np0005601226 irqbalance[819]: IRQ 28 affinity is now unmanaged
Jan 29 11:04:25 np0005601226 irqbalance[819]: Cannot change IRQ 32 affinity: Operation not permitted
Jan 29 11:04:25 np0005601226 irqbalance[819]: IRQ 32 affinity is now unmanaged
Jan 29 11:04:25 np0005601226 irqbalance[819]: Cannot change IRQ 30 affinity: Operation not permitted
Jan 29 11:04:25 np0005601226 irqbalance[819]: IRQ 30 affinity is now unmanaged
Jan 29 11:04:25 np0005601226 irqbalance[819]: Cannot change IRQ 29 affinity: Operation not permitted
Jan 29 11:04:25 np0005601226 irqbalance[819]: IRQ 29 affinity is now unmanaged
Jan 29 11:04:29 np0005601226 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 29 11:04:35 np0005601226 cloud-init[922]: Generating public/private rsa key pair.
Jan 29 11:04:35 np0005601226 cloud-init[922]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Jan 29 11:04:35 np0005601226 cloud-init[922]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Jan 29 11:04:35 np0005601226 cloud-init[922]: The key fingerprint is:
Jan 29 11:04:35 np0005601226 cloud-init[922]: SHA256:0iCJVJmfMkAe3t2nURJX+l+ZYjrpCsRTkxa+/fEczV8 root@np0005601226.novalocal
Jan 29 11:04:35 np0005601226 cloud-init[922]: The key's randomart image is:
Jan 29 11:04:35 np0005601226 cloud-init[922]: +---[RSA 3072]----+
Jan 29 11:04:35 np0005601226 cloud-init[922]: | .+..o  ooo..    |
Jan 29 11:04:35 np0005601226 cloud-init[922]: | +.+oo ..+o.     |
Jan 29 11:04:35 np0005601226 cloud-init[922]: |  +.+.o.o*o      |
Jan 29 11:04:35 np0005601226 cloud-init[922]: |    o.+oo++.   .+|
Jan 29 11:04:35 np0005601226 cloud-init[922]: |     o.+S. ..+ +E|
Jan 29 11:04:35 np0005601226 cloud-init[922]: |      ...   =.=.+|
Jan 29 11:04:35 np0005601226 cloud-init[922]: |       .   + ..o.|
Jan 29 11:04:35 np0005601226 cloud-init[922]: |        . . .    |
Jan 29 11:04:35 np0005601226 cloud-init[922]: |         ...     |
Jan 29 11:04:35 np0005601226 cloud-init[922]: +----[SHA256]-----+
Jan 29 11:04:35 np0005601226 cloud-init[922]: Generating public/private ecdsa key pair.
Jan 29 11:04:35 np0005601226 cloud-init[922]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Jan 29 11:04:35 np0005601226 cloud-init[922]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Jan 29 11:04:35 np0005601226 cloud-init[922]: The key fingerprint is:
Jan 29 11:04:35 np0005601226 cloud-init[922]: SHA256:WBPYL3K3lYCSJVGokr0TCQwhuJ7BJAYBbiOwpvt7UFo root@np0005601226.novalocal
Jan 29 11:04:35 np0005601226 cloud-init[922]: The key's randomart image is:
Jan 29 11:04:35 np0005601226 cloud-init[922]: +---[ECDSA 256]---+
Jan 29 11:04:35 np0005601226 cloud-init[922]: |%=    oO+.       |
Jan 29 11:04:35 np0005601226 cloud-init[922]: |*+o   =.o..      |
Jan 29 11:04:35 np0005601226 cloud-init[922]: |B* + o .o. . .   |
Jan 29 11:04:35 np0005601226 cloud-init[922]: |*o+ E .oo.o o    |
Jan 29 11:04:35 np0005601226 cloud-init[922]: |o o= o.oSo o     |
Jan 29 11:04:35 np0005601226 cloud-init[922]: | +o o     .      |
Jan 29 11:04:35 np0005601226 cloud-init[922]: |.  . .           |
Jan 29 11:04:35 np0005601226 cloud-init[922]: | .  .            |
Jan 29 11:04:35 np0005601226 cloud-init[922]: |  oo             |
Jan 29 11:04:35 np0005601226 cloud-init[922]: +----[SHA256]-----+
Jan 29 11:04:35 np0005601226 cloud-init[922]: Generating public/private ed25519 key pair.
Jan 29 11:04:35 np0005601226 cloud-init[922]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Jan 29 11:04:35 np0005601226 cloud-init[922]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Jan 29 11:04:35 np0005601226 cloud-init[922]: The key fingerprint is:
Jan 29 11:04:35 np0005601226 cloud-init[922]: SHA256:L7/9po9/HFNV8WqGRYyK3IM4UhhGVw0JD60m5wrkb4c root@np0005601226.novalocal
Jan 29 11:04:35 np0005601226 cloud-init[922]: The key's randomart image is:
Jan 29 11:04:35 np0005601226 cloud-init[922]: +--[ED25519 256]--+
Jan 29 11:04:35 np0005601226 cloud-init[922]: |   .+o++o+   o..+|
Jan 29 11:04:35 np0005601226 cloud-init[922]: |   ....oo . ... o|
Jan 29 11:04:35 np0005601226 cloud-init[922]: |     . +.+ .  . o|
Jan 29 11:04:35 np0005601226 cloud-init[922]: |  . o * + +  o ..|
Jan 29 11:04:35 np0005601226 cloud-init[922]: | o   * .S  .. + .|
Jan 29 11:04:35 np0005601226 cloud-init[922]: |  o   .  .   o o |
Jan 29 11:04:35 np0005601226 cloud-init[922]: |   o o  . .    .o|
Jan 29 11:04:35 np0005601226 cloud-init[922]: |    E .  o . .. o|
Jan 29 11:04:35 np0005601226 cloud-init[922]: |   . .    o.+*+. |
Jan 29 11:04:35 np0005601226 cloud-init[922]: +----[SHA256]-----+
Jan 29 11:04:35 np0005601226 systemd[1]: Finished Cloud-init: Network Stage.
Jan 29 11:04:35 np0005601226 systemd[1]: Reached target Cloud-config availability.
Jan 29 11:04:35 np0005601226 systemd[1]: Reached target Network is Online.
Jan 29 11:04:35 np0005601226 systemd[1]: Starting Cloud-init: Config Stage...
Jan 29 11:04:35 np0005601226 systemd[1]: Starting Crash recovery kernel arming...
Jan 29 11:04:35 np0005601226 systemd[1]: Starting Notify NFS peers of a restart...
Jan 29 11:04:35 np0005601226 systemd[1]: Starting System Logging Service...
Jan 29 11:04:35 np0005601226 sm-notify[1006]: Version 2.5.4 starting
Jan 29 11:04:35 np0005601226 systemd[1]: Starting OpenSSH server daemon...
Jan 29 11:04:35 np0005601226 systemd[1]: Starting Permit User Sessions...
Jan 29 11:04:35 np0005601226 systemd[1]: Started Notify NFS peers of a restart.
Jan 29 11:04:35 np0005601226 systemd[1]: Started OpenSSH server daemon.
Jan 29 11:04:35 np0005601226 rsyslogd[1007]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1007" x-info="https://www.rsyslog.com"] start
Jan 29 11:04:35 np0005601226 rsyslogd[1007]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Jan 29 11:04:35 np0005601226 systemd[1]: Started System Logging Service.
Jan 29 11:04:35 np0005601226 systemd[1]: Finished Permit User Sessions.
Jan 29 11:04:35 np0005601226 systemd[1]: Started Command Scheduler.
Jan 29 11:04:35 np0005601226 systemd[1]: Started Getty on tty1.
Jan 29 11:04:35 np0005601226 systemd[1]: Started Serial Getty on ttyS0.
Jan 29 11:04:35 np0005601226 systemd[1]: Reached target Login Prompts.
Jan 29 11:04:35 np0005601226 systemd[1]: Reached target Multi-User System.
Jan 29 11:04:35 np0005601226 systemd[1]: Starting Record Runlevel Change in UTMP...
Jan 29 11:04:35 np0005601226 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Jan 29 11:04:35 np0005601226 systemd[1]: Finished Record Runlevel Change in UTMP.
Jan 29 11:04:35 np0005601226 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 29 11:04:35 np0005601226 cloud-init[1069]: Cloud-init v. 24.4-8.el9 running 'modules:config' at Thu, 29 Jan 2026 16:04:35 +0000. Up 29.01 seconds.
Jan 29 11:04:35 np0005601226 kdumpctl[1020]: kdump: No kdump initial ramdisk found.
Jan 29 11:04:35 np0005601226 kdumpctl[1020]: kdump: Rebuilding /boot/initramfs-5.14.0-665.el9.x86_64kdump.img
Jan 29 11:04:35 np0005601226 systemd[1]: Finished Cloud-init: Config Stage.
Jan 29 11:04:35 np0005601226 systemd[1]: Starting Cloud-init: Final Stage...
Jan 29 11:04:35 np0005601226 cloud-init[1234]: Cloud-init v. 24.4-8.el9 running 'modules:final' at Thu, 29 Jan 2026 16:04:35 +0000. Up 29.39 seconds.
Jan 29 11:04:35 np0005601226 cloud-init[1262]: #############################################################
Jan 29 11:04:35 np0005601226 cloud-init[1265]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Jan 29 11:04:35 np0005601226 cloud-init[1269]: 256 SHA256:WBPYL3K3lYCSJVGokr0TCQwhuJ7BJAYBbiOwpvt7UFo root@np0005601226.novalocal (ECDSA)
Jan 29 11:04:35 np0005601226 cloud-init[1273]: 256 SHA256:L7/9po9/HFNV8WqGRYyK3IM4UhhGVw0JD60m5wrkb4c root@np0005601226.novalocal (ED25519)
Jan 29 11:04:35 np0005601226 cloud-init[1277]: 3072 SHA256:0iCJVJmfMkAe3t2nURJX+l+ZYjrpCsRTkxa+/fEczV8 root@np0005601226.novalocal (RSA)
Jan 29 11:04:35 np0005601226 cloud-init[1278]: -----END SSH HOST KEY FINGERPRINTS-----
Jan 29 11:04:35 np0005601226 cloud-init[1279]: #############################################################
Jan 29 11:04:36 np0005601226 cloud-init[1234]: Cloud-init v. 24.4-8.el9 finished at Thu, 29 Jan 2026 16:04:36 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 29.57 seconds
Jan 29 11:04:36 np0005601226 dracut[1281]: dracut-057-102.git20250818.el9
Jan 29 11:04:36 np0005601226 systemd[1]: Finished Cloud-init: Final Stage.
Jan 29 11:04:36 np0005601226 systemd[1]: Reached target Cloud-init target.
Jan 29 11:04:36 np0005601226 dracut[1286]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-665.el9.x86_64kdump.img 5.14.0-665.el9.x86_64
Jan 29 11:04:37 np0005601226 dracut[1286]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Jan 29 11:04:37 np0005601226 dracut[1286]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Jan 29 11:04:37 np0005601226 dracut[1286]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Jan 29 11:04:37 np0005601226 dracut[1286]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 29 11:04:37 np0005601226 dracut[1286]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 29 11:04:37 np0005601226 dracut[1286]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 29 11:04:37 np0005601226 dracut[1286]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 29 11:04:37 np0005601226 dracut[1286]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 29 11:04:37 np0005601226 dracut[1286]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 29 11:04:37 np0005601226 dracut[1286]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 29 11:04:37 np0005601226 dracut[1286]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 29 11:04:37 np0005601226 dracut[1286]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 29 11:04:37 np0005601226 dracut[1286]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 29 11:04:37 np0005601226 dracut[1286]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 29 11:04:37 np0005601226 dracut[1286]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 29 11:04:37 np0005601226 dracut[1286]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 29 11:04:37 np0005601226 dracut[1286]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 29 11:04:37 np0005601226 dracut[1286]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 29 11:04:37 np0005601226 dracut[1286]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 29 11:04:37 np0005601226 dracut[1286]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 29 11:04:37 np0005601226 dracut[1286]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 29 11:04:37 np0005601226 dracut[1286]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 29 11:04:37 np0005601226 dracut[1286]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 29 11:04:37 np0005601226 dracut[1286]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 29 11:04:37 np0005601226 dracut[1286]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 29 11:04:37 np0005601226 dracut[1286]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 29 11:04:37 np0005601226 dracut[1286]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 29 11:04:37 np0005601226 dracut[1286]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 29 11:04:37 np0005601226 dracut[1286]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Jan 29 11:04:37 np0005601226 dracut[1286]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 29 11:04:37 np0005601226 dracut[1286]: memstrack is not available
Jan 29 11:04:37 np0005601226 dracut[1286]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 29 11:04:37 np0005601226 dracut[1286]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 29 11:04:37 np0005601226 dracut[1286]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 29 11:04:37 np0005601226 dracut[1286]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 29 11:04:37 np0005601226 dracut[1286]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 29 11:04:37 np0005601226 dracut[1286]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 29 11:04:37 np0005601226 dracut[1286]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 29 11:04:37 np0005601226 dracut[1286]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 29 11:04:37 np0005601226 dracut[1286]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 29 11:04:37 np0005601226 dracut[1286]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 29 11:04:37 np0005601226 dracut[1286]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 29 11:04:37 np0005601226 dracut[1286]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 29 11:04:37 np0005601226 dracut[1286]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 29 11:04:37 np0005601226 dracut[1286]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 29 11:04:37 np0005601226 dracut[1286]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 29 11:04:37 np0005601226 dracut[1286]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 29 11:04:37 np0005601226 dracut[1286]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 29 11:04:37 np0005601226 dracut[1286]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 29 11:04:37 np0005601226 dracut[1286]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 29 11:04:37 np0005601226 dracut[1286]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 29 11:04:37 np0005601226 dracut[1286]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 29 11:04:37 np0005601226 dracut[1286]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 29 11:04:37 np0005601226 dracut[1286]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 29 11:04:37 np0005601226 dracut[1286]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 29 11:04:37 np0005601226 dracut[1286]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 29 11:04:37 np0005601226 dracut[1286]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 29 11:04:37 np0005601226 dracut[1286]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 29 11:04:37 np0005601226 dracut[1286]: memstrack is not available
Jan 29 11:04:37 np0005601226 dracut[1286]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 29 11:04:38 np0005601226 dracut[1286]: *** Including module: systemd ***
Jan 29 11:04:38 np0005601226 dracut[1286]: *** Including module: fips ***
Jan 29 11:04:38 np0005601226 dracut[1286]: *** Including module: systemd-initrd ***
Jan 29 11:04:38 np0005601226 dracut[1286]: *** Including module: i18n ***
Jan 29 11:04:38 np0005601226 dracut[1286]: *** Including module: drm ***
Jan 29 11:04:39 np0005601226 dracut[1286]: *** Including module: prefixdevname ***
Jan 29 11:04:39 np0005601226 dracut[1286]: *** Including module: kernel-modules ***
Jan 29 11:04:39 np0005601226 kernel: block vda: the capability attribute has been deprecated.
Jan 29 11:04:39 np0005601226 dracut[1286]: *** Including module: kernel-modules-extra ***
Jan 29 11:04:39 np0005601226 dracut[1286]: *** Including module: qemu ***
Jan 29 11:04:39 np0005601226 dracut[1286]: *** Including module: fstab-sys ***
Jan 29 11:04:39 np0005601226 dracut[1286]: *** Including module: rootfs-block ***
Jan 29 11:04:39 np0005601226 dracut[1286]: *** Including module: terminfo ***
Jan 29 11:04:39 np0005601226 dracut[1286]: *** Including module: udev-rules ***
Jan 29 11:04:40 np0005601226 dracut[1286]: Skipping udev rule: 91-permissions.rules
Jan 29 11:04:40 np0005601226 dracut[1286]: Skipping udev rule: 80-drivers-modprobe.rules
Jan 29 11:04:40 np0005601226 dracut[1286]: *** Including module: virtiofs ***
Jan 29 11:04:40 np0005601226 dracut[1286]: *** Including module: dracut-systemd ***
Jan 29 11:04:40 np0005601226 dracut[1286]: *** Including module: usrmount ***
Jan 29 11:04:40 np0005601226 dracut[1286]: *** Including module: base ***
Jan 29 11:04:40 np0005601226 dracut[1286]: *** Including module: fs-lib ***
Jan 29 11:04:40 np0005601226 dracut[1286]: *** Including module: kdumpbase ***
Jan 29 11:04:40 np0005601226 dracut[1286]: *** Including module: microcode_ctl-fw_dir_override ***
Jan 29 11:04:40 np0005601226 dracut[1286]:  microcode_ctl module: mangling fw_dir
Jan 29 11:04:40 np0005601226 dracut[1286]:    microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Jan 29 11:04:40 np0005601226 dracut[1286]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Jan 29 11:04:40 np0005601226 dracut[1286]:    microcode_ctl: configuration "intel" is ignored
Jan 29 11:04:40 np0005601226 dracut[1286]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Jan 29 11:04:40 np0005601226 dracut[1286]:    microcode_ctl: configuration "intel-06-2d-07" is ignored
Jan 29 11:04:40 np0005601226 dracut[1286]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Jan 29 11:04:40 np0005601226 dracut[1286]:    microcode_ctl: configuration "intel-06-4e-03" is ignored
Jan 29 11:04:40 np0005601226 dracut[1286]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Jan 29 11:04:40 np0005601226 dracut[1286]:    microcode_ctl: configuration "intel-06-4f-01" is ignored
Jan 29 11:04:40 np0005601226 dracut[1286]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Jan 29 11:04:40 np0005601226 dracut[1286]:    microcode_ctl: configuration "intel-06-55-04" is ignored
Jan 29 11:04:40 np0005601226 dracut[1286]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Jan 29 11:04:41 np0005601226 dracut[1286]:    microcode_ctl: configuration "intel-06-5e-03" is ignored
Jan 29 11:04:41 np0005601226 dracut[1286]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Jan 29 11:04:41 np0005601226 dracut[1286]:    microcode_ctl: configuration "intel-06-8c-01" is ignored
Jan 29 11:04:41 np0005601226 dracut[1286]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Jan 29 11:04:41 np0005601226 dracut[1286]:    microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Jan 29 11:04:41 np0005601226 dracut[1286]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Jan 29 11:04:41 np0005601226 dracut[1286]:    microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Jan 29 11:04:41 np0005601226 dracut[1286]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Jan 29 11:04:41 np0005601226 dracut[1286]:    microcode_ctl: configuration "intel-06-8f-08" is ignored
Jan 29 11:04:41 np0005601226 dracut[1286]:    microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Jan 29 11:04:41 np0005601226 dracut[1286]: *** Including module: openssl ***
Jan 29 11:04:41 np0005601226 dracut[1286]: *** Including module: shutdown ***
Jan 29 11:04:41 np0005601226 dracut[1286]: *** Including module: squash ***
Jan 29 11:04:41 np0005601226 dracut[1286]: *** Including modules done ***
Jan 29 11:04:41 np0005601226 dracut[1286]: *** Installing kernel module dependencies ***
Jan 29 11:04:42 np0005601226 dracut[1286]: *** Installing kernel module dependencies done ***
Jan 29 11:04:42 np0005601226 dracut[1286]: *** Resolving executable dependencies ***
Jan 29 11:04:47 np0005601226 dracut[1286]: *** Resolving executable dependencies done ***
Jan 29 11:04:47 np0005601226 dracut[1286]: *** Generating early-microcode cpio image ***
Jan 29 11:04:47 np0005601226 dracut[1286]: *** Store current command line parameters ***
Jan 29 11:04:47 np0005601226 dracut[1286]: Stored kernel commandline:
Jan 29 11:04:47 np0005601226 dracut[1286]: No dracut internal kernel commandline stored in the initramfs
Jan 29 11:04:48 np0005601226 dracut[1286]: *** Install squash loader ***
Jan 29 11:04:48 np0005601226 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 29 11:04:49 np0005601226 dracut[1286]: *** Squashing the files inside the initramfs ***
Jan 29 11:04:50 np0005601226 dracut[1286]: *** Squashing the files inside the initramfs done ***
Jan 29 11:04:50 np0005601226 dracut[1286]: *** Creating image file '/boot/initramfs-5.14.0-665.el9.x86_64kdump.img' ***
Jan 29 11:04:50 np0005601226 dracut[1286]: *** Hardlinking files ***
Jan 29 11:04:50 np0005601226 dracut[1286]: *** Hardlinking files done ***
Jan 29 11:04:52 np0005601226 dracut[1286]: *** Creating initramfs image file '/boot/initramfs-5.14.0-665.el9.x86_64kdump.img' done ***
Jan 29 11:04:53 np0005601226 kdumpctl[1020]: kdump: kexec: loaded kdump kernel
Jan 29 11:04:53 np0005601226 kdumpctl[1020]: kdump: Starting kdump: [OK]
Jan 29 11:04:53 np0005601226 systemd[1]: Finished Crash recovery kernel arming.
Jan 29 11:04:53 np0005601226 systemd[1]: Startup finished in 1.378s (kernel) + 3.034s (initrd) + 42.422s (userspace) = 46.836s.
Jan 29 11:05:30 np0005601226 chronyd[832]: Selected source 138.197.164.54 (2.centos.pool.ntp.org)
Jan 29 11:08:07 np0005601226 systemd[1]: Created slice User Slice of UID 1000.
Jan 29 11:08:07 np0005601226 systemd[1]: Starting User Runtime Directory /run/user/1000...
Jan 29 11:08:07 np0005601226 systemd-logind[823]: New session 1 of user zuul.
Jan 29 11:08:07 np0005601226 systemd[1]: Finished User Runtime Directory /run/user/1000.
Jan 29 11:08:07 np0005601226 systemd[1]: Starting User Manager for UID 1000...
Jan 29 11:08:07 np0005601226 systemd[4309]: Queued start job for default target Main User Target.
Jan 29 11:08:07 np0005601226 systemd[4309]: Created slice User Application Slice.
Jan 29 11:08:07 np0005601226 systemd[4309]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 29 11:08:07 np0005601226 systemd[4309]: Started Daily Cleanup of User's Temporary Directories.
Jan 29 11:08:07 np0005601226 systemd[4309]: Reached target Paths.
Jan 29 11:08:07 np0005601226 systemd[4309]: Reached target Timers.
Jan 29 11:08:07 np0005601226 systemd[4309]: Starting D-Bus User Message Bus Socket...
Jan 29 11:08:07 np0005601226 systemd[4309]: Starting Create User's Volatile Files and Directories...
Jan 29 11:08:07 np0005601226 systemd[4309]: Listening on D-Bus User Message Bus Socket.
Jan 29 11:08:07 np0005601226 systemd[4309]: Finished Create User's Volatile Files and Directories.
Jan 29 11:08:07 np0005601226 systemd[4309]: Reached target Sockets.
Jan 29 11:08:07 np0005601226 systemd[4309]: Reached target Basic System.
Jan 29 11:08:07 np0005601226 systemd[4309]: Reached target Main User Target.
Jan 29 11:08:07 np0005601226 systemd[4309]: Startup finished in 112ms.
Jan 29 11:08:07 np0005601226 systemd[1]: Started User Manager for UID 1000.
Jan 29 11:08:07 np0005601226 systemd[1]: Started Session 1 of User zuul.
Jan 29 11:08:07 np0005601226 python3[4391]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 11:08:10 np0005601226 python3[4419]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 11:08:17 np0005601226 python3[4477]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 11:08:18 np0005601226 python3[4517]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Jan 29 11:08:20 np0005601226 python3[4543]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDRhQFmox+XS+vfViDSGKWvMnteoS3bDJfrKWBRHcur2bFctNpo7+knzbFUkisNWhrewD6d5akB8ulG5eg8fzLElIsbQB98jsYEThDX+Mb6CJxrf+CLhZ+clieCfRPAbsIUZ6VpWN3dAnYZSMIp7z66Na5oY1CLwhORplnbCVCvVALDBuDap0kQuCdkzpzfYVNEW+WinkbyJaHgfmRcJedzRUM9HfhLFuDnHlpWp01Dc68LSrFBuMEp8FWbInSE9pGLHKUWCka7Wj5T4UqFSWg7lck//DnflBXi8jhIrTZyshWUNqDd3p4pyYuU+/6puXu2v83thK8yWvQtioeZGV7rJ/oYzb13qmBZiHbsO4XAgi6NMZR4zOiENngJM4KK3PWQ3qea7EwfzzuK/h3KpVusnIjwEjSZtc8po2r+DE3H8a9YfaRt4M8HfKc3jH/o6qC34vK97PnZ1Xamb4mexFgfFkX1x36xLyXSaR1FV2HpniL8RbFp27idGQbYrBo0uYM= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 29 11:08:20 np0005601226 python3[4567]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:08:20 np0005601226 python3[4666]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 29 11:08:21 np0005601226 python3[4737]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769702900.712768-207-68473404572201/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=f1363653c6674caeb6e2c4ad00328f29_id_rsa follow=False checksum=51e6770445994d451d4f27283703ee9c5de843c8 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:08:21 np0005601226 python3[4860]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 29 11:08:22 np0005601226 python3[4931]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769702901.6557093-240-200732097912672/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=f1363653c6674caeb6e2c4ad00328f29_id_rsa.pub follow=False checksum=4381a0323d2596d5e940361311e70d8abcdb4b97 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:08:23 np0005601226 python3[4979]: ansible-ping Invoked with data=pong
Jan 29 11:08:24 np0005601226 python3[5003]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 11:08:26 np0005601226 python3[5061]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Jan 29 11:08:27 np0005601226 python3[5093]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:08:27 np0005601226 python3[5117]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:08:27 np0005601226 python3[5141]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:08:28 np0005601226 python3[5165]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:08:28 np0005601226 python3[5189]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:08:28 np0005601226 python3[5213]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:08:30 np0005601226 python3[5239]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:08:30 np0005601226 python3[5317]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 29 11:08:31 np0005601226 python3[5390]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1769702910.3868635-21-225825285944179/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:08:31 np0005601226 python3[5438]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 29 11:08:32 np0005601226 python3[5462]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 29 11:08:32 np0005601226 python3[5486]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 29 11:08:32 np0005601226 python3[5510]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 29 11:08:32 np0005601226 python3[5534]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 29 11:08:33 np0005601226 python3[5558]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 29 11:08:33 np0005601226 python3[5582]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 29 11:08:33 np0005601226 python3[5606]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 29 11:08:33 np0005601226 python3[5630]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 29 11:08:34 np0005601226 python3[5654]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 29 11:08:34 np0005601226 python3[5678]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 29 11:08:34 np0005601226 python3[5702]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 29 11:08:34 np0005601226 python3[5726]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 29 11:08:35 np0005601226 python3[5750]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 29 11:08:35 np0005601226 python3[5774]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 29 11:08:35 np0005601226 python3[5798]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 29 11:08:36 np0005601226 python3[5822]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 29 11:08:36 np0005601226 python3[5846]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 29 11:08:36 np0005601226 python3[5870]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 29 11:08:36 np0005601226 python3[5894]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 29 11:08:37 np0005601226 python3[5918]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 29 11:08:37 np0005601226 python3[5942]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 29 11:08:37 np0005601226 python3[5966]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 29 11:08:37 np0005601226 python3[5990]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 29 11:08:38 np0005601226 python3[6014]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 29 11:08:38 np0005601226 python3[6038]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 29 11:08:41 np0005601226 python3[6064]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 29 11:08:41 np0005601226 systemd[1]: Starting Time & Date Service...
Jan 29 11:08:41 np0005601226 systemd[1]: Started Time & Date Service.
Jan 29 11:08:41 np0005601226 systemd-timedated[6066]: Changed time zone to 'UTC' (UTC).
Jan 29 11:08:43 np0005601226 python3[6095]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:08:43 np0005601226 python3[6171]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 29 11:08:44 np0005601226 python3[6242]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1769702923.6208146-153-34940815249747/source _original_basename=tmp4z35ewxx follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:08:44 np0005601226 python3[6342]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 29 11:08:45 np0005601226 python3[6413]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769702924.5631468-183-30069348452016/source _original_basename=tmpnsmz158u follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:08:45 np0005601226 irqbalance[819]: Cannot change IRQ 27 affinity: Operation not permitted
Jan 29 11:08:45 np0005601226 irqbalance[819]: IRQ 27 affinity is now unmanaged
Jan 29 11:08:45 np0005601226 python3[6515]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 29 11:08:46 np0005601226 python3[6588]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769702925.7378263-231-9155618078245/source _original_basename=tmp6hz88gx1 follow=False checksum=4b3be67b03a160fc19bcb402371b90837a2dd7fe backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:08:47 np0005601226 python3[6636]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:08:47 np0005601226 python3[6662]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:08:47 np0005601226 python3[6742]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 29 11:08:47 np0005601226 python3[6815]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1769702927.4215958-273-173910780605439/source _original_basename=tmpy9nqdkoe follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:08:48 np0005601226 python3[6866]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163e3b-3c83-4970-6544-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:08:49 np0005601226 python3[6894]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-4970-6544-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Jan 29 11:08:50 np0005601226 python3[6923]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:09:07 np0005601226 python3[6949]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:09:12 np0005601226 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 29 11:09:42 np0005601226 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 29 11:09:42 np0005601226 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Jan 29 11:09:42 np0005601226 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Jan 29 11:09:42 np0005601226 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Jan 29 11:09:42 np0005601226 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Jan 29 11:09:42 np0005601226 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Jan 29 11:09:42 np0005601226 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Jan 29 11:09:42 np0005601226 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Jan 29 11:09:42 np0005601226 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Jan 29 11:09:42 np0005601226 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Jan 29 11:09:42 np0005601226 NetworkManager[860]: <info>  [1769702982.3691] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 29 11:09:42 np0005601226 systemd-udevd[6953]: Network interface NamePolicy= disabled on kernel command line.
Jan 29 11:09:42 np0005601226 NetworkManager[860]: <info>  [1769702982.3808] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 29 11:09:42 np0005601226 NetworkManager[860]: <info>  [1769702982.3830] settings: (eth1): created default wired connection 'Wired connection 1'
Jan 29 11:09:42 np0005601226 NetworkManager[860]: <info>  [1769702982.3834] device (eth1): carrier: link connected
Jan 29 11:09:42 np0005601226 NetworkManager[860]: <info>  [1769702982.3835] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 29 11:09:42 np0005601226 NetworkManager[860]: <info>  [1769702982.3840] policy: auto-activating connection 'Wired connection 1' (68a2636e-95dd-355f-8ced-e2552f46817a)
Jan 29 11:09:42 np0005601226 NetworkManager[860]: <info>  [1769702982.3843] device (eth1): Activation: starting connection 'Wired connection 1' (68a2636e-95dd-355f-8ced-e2552f46817a)
Jan 29 11:09:42 np0005601226 NetworkManager[860]: <info>  [1769702982.3844] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 29 11:09:42 np0005601226 NetworkManager[860]: <info>  [1769702982.3846] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 29 11:09:42 np0005601226 NetworkManager[860]: <info>  [1769702982.3850] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 29 11:09:42 np0005601226 NetworkManager[860]: <info>  [1769702982.3854] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 29 11:09:43 np0005601226 python3[6980]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163e3b-3c83-31cc-05bf-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:09:53 np0005601226 python3[7060]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 29 11:09:53 np0005601226 python3[7133]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769702992.8728585-102-98495527753933/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=33756627ea1b634f05472b1b3f5663f2deb53632 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:09:54 np0005601226 python3[7183]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 29 11:09:54 np0005601226 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 29 11:09:54 np0005601226 systemd[1]: Stopped Network Manager Wait Online.
Jan 29 11:09:54 np0005601226 systemd[1]: Stopping Network Manager Wait Online...
Jan 29 11:09:54 np0005601226 systemd[1]: Stopping Network Manager...
Jan 29 11:09:54 np0005601226 NetworkManager[860]: <info>  [1769702994.2719] caught SIGTERM, shutting down normally.
Jan 29 11:09:54 np0005601226 NetworkManager[860]: <info>  [1769702994.2726] dhcp4 (eth0): canceled DHCP transaction
Jan 29 11:09:54 np0005601226 NetworkManager[860]: <info>  [1769702994.2727] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 29 11:09:54 np0005601226 NetworkManager[860]: <info>  [1769702994.2727] dhcp4 (eth0): state changed no lease
Jan 29 11:09:54 np0005601226 NetworkManager[860]: <info>  [1769702994.2729] manager: NetworkManager state is now CONNECTING
Jan 29 11:09:54 np0005601226 NetworkManager[860]: <info>  [1769702994.2833] dhcp4 (eth1): canceled DHCP transaction
Jan 29 11:09:54 np0005601226 NetworkManager[860]: <info>  [1769702994.2833] dhcp4 (eth1): state changed no lease
Jan 29 11:09:54 np0005601226 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 29 11:09:54 np0005601226 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 29 11:09:54 np0005601226 NetworkManager[860]: <info>  [1769702994.3404] exiting (success)
Jan 29 11:09:54 np0005601226 systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 29 11:09:54 np0005601226 systemd[1]: Stopped Network Manager.
Jan 29 11:09:54 np0005601226 systemd[1]: NetworkManager.service: Consumed 2.550s CPU time, 10.2M memory peak.
Jan 29 11:09:54 np0005601226 systemd[1]: Starting Network Manager...
Jan 29 11:09:54 np0005601226 NetworkManager[7200]: <info>  [1769702994.3927] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:9485f3a0-b546-449b-a1de-1a80f8dff8e7)
Jan 29 11:09:54 np0005601226 NetworkManager[7200]: <info>  [1769702994.3928] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 29 11:09:54 np0005601226 NetworkManager[7200]: <info>  [1769702994.3965] manager[0x55855c3af000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 29 11:09:54 np0005601226 systemd[1]: Starting Hostname Service...
Jan 29 11:09:54 np0005601226 systemd[1]: Started Hostname Service.
Jan 29 11:09:54 np0005601226 NetworkManager[7200]: <info>  [1769702994.4520] hostname: hostname: using hostnamed
Jan 29 11:09:54 np0005601226 NetworkManager[7200]: <info>  [1769702994.4522] hostname: static hostname changed from (none) to "np0005601226.novalocal"
Jan 29 11:09:54 np0005601226 NetworkManager[7200]: <info>  [1769702994.4526] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 29 11:09:54 np0005601226 NetworkManager[7200]: <info>  [1769702994.4530] manager[0x55855c3af000]: rfkill: Wi-Fi hardware radio set enabled
Jan 29 11:09:54 np0005601226 NetworkManager[7200]: <info>  [1769702994.4530] manager[0x55855c3af000]: rfkill: WWAN hardware radio set enabled
Jan 29 11:09:54 np0005601226 NetworkManager[7200]: <info>  [1769702994.4552] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 29 11:09:54 np0005601226 NetworkManager[7200]: <info>  [1769702994.4552] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 29 11:09:54 np0005601226 NetworkManager[7200]: <info>  [1769702994.4553] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 29 11:09:54 np0005601226 NetworkManager[7200]: <info>  [1769702994.4553] manager: Networking is enabled by state file
Jan 29 11:09:54 np0005601226 NetworkManager[7200]: <info>  [1769702994.4555] settings: Loaded settings plugin: keyfile (internal)
Jan 29 11:09:54 np0005601226 NetworkManager[7200]: <info>  [1769702994.4559] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 29 11:09:54 np0005601226 NetworkManager[7200]: <info>  [1769702994.4577] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 29 11:09:54 np0005601226 NetworkManager[7200]: <info>  [1769702994.4584] dhcp: init: Using DHCP client 'internal'
Jan 29 11:09:54 np0005601226 NetworkManager[7200]: <info>  [1769702994.4587] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 29 11:09:54 np0005601226 NetworkManager[7200]: <info>  [1769702994.4590] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 29 11:09:54 np0005601226 NetworkManager[7200]: <info>  [1769702994.4593] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 29 11:09:54 np0005601226 NetworkManager[7200]: <info>  [1769702994.4598] device (lo): Activation: starting connection 'lo' (fb19d968-2132-4ea2-ac78-a40c265fabbe)
Jan 29 11:09:54 np0005601226 NetworkManager[7200]: <info>  [1769702994.4601] device (eth0): carrier: link connected
Jan 29 11:09:54 np0005601226 NetworkManager[7200]: <info>  [1769702994.4604] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 29 11:09:54 np0005601226 NetworkManager[7200]: <info>  [1769702994.4608] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 29 11:09:54 np0005601226 NetworkManager[7200]: <info>  [1769702994.4608] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 29 11:09:54 np0005601226 NetworkManager[7200]: <info>  [1769702994.4612] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 29 11:09:54 np0005601226 NetworkManager[7200]: <info>  [1769702994.4617] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 29 11:09:54 np0005601226 NetworkManager[7200]: <info>  [1769702994.4620] device (eth1): carrier: link connected
Jan 29 11:09:54 np0005601226 NetworkManager[7200]: <info>  [1769702994.4623] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 29 11:09:54 np0005601226 NetworkManager[7200]: <info>  [1769702994.4627] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (68a2636e-95dd-355f-8ced-e2552f46817a) (indicated)
Jan 29 11:09:54 np0005601226 NetworkManager[7200]: <info>  [1769702994.4627] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 29 11:09:54 np0005601226 NetworkManager[7200]: <info>  [1769702994.4630] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 29 11:09:54 np0005601226 NetworkManager[7200]: <info>  [1769702994.4634] device (eth1): Activation: starting connection 'Wired connection 1' (68a2636e-95dd-355f-8ced-e2552f46817a)
Jan 29 11:09:54 np0005601226 NetworkManager[7200]: <info>  [1769702994.4638] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 29 11:09:54 np0005601226 systemd[1]: Started Network Manager.
Jan 29 11:09:54 np0005601226 NetworkManager[7200]: <info>  [1769702994.4641] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 29 11:09:54 np0005601226 NetworkManager[7200]: <info>  [1769702994.4643] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 29 11:09:54 np0005601226 NetworkManager[7200]: <info>  [1769702994.4644] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 29 11:09:54 np0005601226 NetworkManager[7200]: <info>  [1769702994.4645] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 29 11:09:54 np0005601226 NetworkManager[7200]: <info>  [1769702994.4647] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 29 11:09:54 np0005601226 NetworkManager[7200]: <info>  [1769702994.4648] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 29 11:09:54 np0005601226 NetworkManager[7200]: <info>  [1769702994.4650] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 29 11:09:54 np0005601226 NetworkManager[7200]: <info>  [1769702994.4652] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 29 11:09:54 np0005601226 NetworkManager[7200]: <info>  [1769702994.4657] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 29 11:09:54 np0005601226 NetworkManager[7200]: <info>  [1769702994.4659] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 29 11:09:54 np0005601226 NetworkManager[7200]: <info>  [1769702994.4666] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 29 11:09:54 np0005601226 NetworkManager[7200]: <info>  [1769702994.4668] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 29 11:09:54 np0005601226 NetworkManager[7200]: <info>  [1769702994.4684] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 29 11:09:54 np0005601226 NetworkManager[7200]: <info>  [1769702994.4686] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 29 11:09:54 np0005601226 NetworkManager[7200]: <info>  [1769702994.4690] device (lo): Activation: successful, device activated.
Jan 29 11:09:54 np0005601226 NetworkManager[7200]: <info>  [1769702994.4696] dhcp4 (eth0): state changed new lease, address=38.129.56.71
Jan 29 11:09:54 np0005601226 NetworkManager[7200]: <info>  [1769702994.4702] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 29 11:09:54 np0005601226 systemd[1]: Starting Network Manager Wait Online...
Jan 29 11:09:54 np0005601226 NetworkManager[7200]: <info>  [1769702994.6408] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 29 11:09:54 np0005601226 NetworkManager[7200]: <info>  [1769702994.6454] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 29 11:09:54 np0005601226 NetworkManager[7200]: <info>  [1769702994.6455] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 29 11:09:54 np0005601226 NetworkManager[7200]: <info>  [1769702994.6459] manager: NetworkManager state is now CONNECTED_SITE
Jan 29 11:09:54 np0005601226 NetworkManager[7200]: <info>  [1769702994.6461] device (eth0): Activation: successful, device activated.
Jan 29 11:09:54 np0005601226 NetworkManager[7200]: <info>  [1769702994.6466] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 29 11:09:54 np0005601226 python3[7250]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163e3b-3c83-31cc-05bf-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:10:04 np0005601226 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 29 11:10:14 np0005601226 systemd[4309]: Starting Mark boot as successful...
Jan 29 11:10:14 np0005601226 systemd[4309]: Finished Mark boot as successful.
Jan 29 11:10:24 np0005601226 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 29 11:10:39 np0005601226 NetworkManager[7200]: <info>  [1769703039.4426] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 29 11:10:39 np0005601226 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 29 11:10:39 np0005601226 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 29 11:10:39 np0005601226 NetworkManager[7200]: <info>  [1769703039.4685] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 29 11:10:39 np0005601226 NetworkManager[7200]: <info>  [1769703039.4687] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 29 11:10:39 np0005601226 NetworkManager[7200]: <info>  [1769703039.4690] device (eth1): Activation: successful, device activated.
Jan 29 11:10:39 np0005601226 NetworkManager[7200]: <info>  [1769703039.4694] manager: startup complete
Jan 29 11:10:39 np0005601226 NetworkManager[7200]: <info>  [1769703039.4697] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Jan 29 11:10:39 np0005601226 NetworkManager[7200]: <warn>  [1769703039.4701] device (eth1): Activation: failed for connection 'Wired connection 1'
Jan 29 11:10:39 np0005601226 NetworkManager[7200]: <info>  [1769703039.4706] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Jan 29 11:10:39 np0005601226 systemd[1]: Finished Network Manager Wait Online.
Jan 29 11:10:39 np0005601226 NetworkManager[7200]: <info>  [1769703039.4812] dhcp4 (eth1): canceled DHCP transaction
Jan 29 11:10:39 np0005601226 NetworkManager[7200]: <info>  [1769703039.4813] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 29 11:10:39 np0005601226 NetworkManager[7200]: <info>  [1769703039.4813] dhcp4 (eth1): state changed no lease
Jan 29 11:10:39 np0005601226 NetworkManager[7200]: <info>  [1769703039.4825] policy: auto-activating connection 'ci-private-network' (56464cd3-98aa-5bfb-ab19-69dd3436ca20)
Jan 29 11:10:39 np0005601226 NetworkManager[7200]: <info>  [1769703039.4828] device (eth1): Activation: starting connection 'ci-private-network' (56464cd3-98aa-5bfb-ab19-69dd3436ca20)
Jan 29 11:10:39 np0005601226 NetworkManager[7200]: <info>  [1769703039.4829] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 29 11:10:39 np0005601226 NetworkManager[7200]: <info>  [1769703039.4832] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 29 11:10:39 np0005601226 NetworkManager[7200]: <info>  [1769703039.4838] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 29 11:10:39 np0005601226 NetworkManager[7200]: <info>  [1769703039.4845] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 29 11:10:39 np0005601226 NetworkManager[7200]: <info>  [1769703039.7054] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 29 11:10:39 np0005601226 NetworkManager[7200]: <info>  [1769703039.7056] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 29 11:10:39 np0005601226 NetworkManager[7200]: <info>  [1769703039.7064] device (eth1): Activation: successful, device activated.
Jan 29 11:10:49 np0005601226 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 29 11:10:54 np0005601226 systemd-logind[823]: Session 1 logged out. Waiting for processes to exit.
Jan 29 11:10:58 np0005601226 systemd-logind[823]: New session 3 of user zuul.
Jan 29 11:10:58 np0005601226 systemd[1]: Started Session 3 of User zuul.
Jan 29 11:10:58 np0005601226 python3[7381]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 29 11:10:58 np0005601226 python3[7454]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769703058.342175-267-202875021186477/source _original_basename=tmp9v_lp0hr follow=False checksum=94f1c22810a144ef867d1706aa44047822345882 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:11:00 np0005601226 systemd[1]: session-3.scope: Deactivated successfully.
Jan 29 11:11:00 np0005601226 systemd-logind[823]: Session 3 logged out. Waiting for processes to exit.
Jan 29 11:11:00 np0005601226 systemd-logind[823]: Removed session 3.
Jan 29 11:13:14 np0005601226 systemd[4309]: Created slice User Background Tasks Slice.
Jan 29 11:13:14 np0005601226 systemd[4309]: Starting Cleanup of User's Temporary Files and Directories...
Jan 29 11:13:14 np0005601226 systemd[4309]: Finished Cleanup of User's Temporary Files and Directories.
Jan 29 11:16:40 np0005601226 systemd-logind[823]: New session 4 of user zuul.
Jan 29 11:16:40 np0005601226 systemd[1]: Started Session 4 of User zuul.
Jan 29 11:16:41 np0005601226 python3[7512]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-982a-419f-00000000215f-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:16:41 np0005601226 python3[7541]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:16:41 np0005601226 python3[7567]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:16:42 np0005601226 python3[7593]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:16:42 np0005601226 python3[7619]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:16:42 np0005601226 python3[7645]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:16:43 np0005601226 python3[7723]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 29 11:16:43 np0005601226 python3[7796]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769703403.2477746-490-176117073190526/source _original_basename=tmppjdks77e follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:16:44 np0005601226 python3[7846]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 29 11:16:44 np0005601226 systemd[1]: Reloading.
Jan 29 11:16:44 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 11:16:46 np0005601226 python3[7902]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Jan 29 11:16:46 np0005601226 python3[7928]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:16:47 np0005601226 python3[7956]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:16:47 np0005601226 python3[7984]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:16:47 np0005601226 python3[8012]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:16:48 np0005601226 python3[8039]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-982a-419f-000000002166-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:16:48 np0005601226 python3[8069]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 29 11:16:50 np0005601226 systemd[1]: session-4.scope: Deactivated successfully.
Jan 29 11:16:50 np0005601226 systemd[1]: session-4.scope: Consumed 3.464s CPU time.
Jan 29 11:16:50 np0005601226 systemd-logind[823]: Session 4 logged out. Waiting for processes to exit.
Jan 29 11:16:50 np0005601226 systemd-logind[823]: Removed session 4.
Jan 29 11:16:52 np0005601226 systemd-logind[823]: New session 5 of user zuul.
Jan 29 11:16:52 np0005601226 systemd[1]: Started Session 5 of User zuul.
Jan 29 11:16:52 np0005601226 python3[8103]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 29 11:17:03 np0005601226 setsebool[8142]: The virt_use_nfs policy boolean was changed to 1 by root
Jan 29 11:17:03 np0005601226 setsebool[8142]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Jan 29 11:17:14 np0005601226 kernel: SELinux:  Converting 385 SID table entries...
Jan 29 11:17:14 np0005601226 kernel: SELinux:  policy capability network_peer_controls=1
Jan 29 11:17:14 np0005601226 kernel: SELinux:  policy capability open_perms=1
Jan 29 11:17:14 np0005601226 kernel: SELinux:  policy capability extended_socket_class=1
Jan 29 11:17:14 np0005601226 kernel: SELinux:  policy capability always_check_network=0
Jan 29 11:17:14 np0005601226 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 29 11:17:14 np0005601226 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 29 11:17:14 np0005601226 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 29 11:17:24 np0005601226 kernel: SELinux:  Converting 388 SID table entries...
Jan 29 11:17:24 np0005601226 kernel: SELinux:  policy capability network_peer_controls=1
Jan 29 11:17:24 np0005601226 kernel: SELinux:  policy capability open_perms=1
Jan 29 11:17:24 np0005601226 kernel: SELinux:  policy capability extended_socket_class=1
Jan 29 11:17:24 np0005601226 kernel: SELinux:  policy capability always_check_network=0
Jan 29 11:17:24 np0005601226 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 29 11:17:24 np0005601226 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 29 11:17:24 np0005601226 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 29 11:17:43 np0005601226 dbus-broker-launch[814]: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 29 11:17:43 np0005601226 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 29 11:17:43 np0005601226 systemd[1]: Starting man-db-cache-update.service...
Jan 29 11:17:43 np0005601226 systemd[1]: Reloading.
Jan 29 11:17:43 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 11:17:43 np0005601226 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 29 11:17:48 np0005601226 python3[13893]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-eb3b-91f8-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:17:49 np0005601226 kernel: evm: overlay not supported
Jan 29 11:17:49 np0005601226 systemd[4309]: Starting D-Bus User Message Bus...
Jan 29 11:17:49 np0005601226 dbus-broker-launch[14200]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Jan 29 11:17:49 np0005601226 dbus-broker-launch[14200]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Jan 29 11:17:49 np0005601226 systemd[4309]: Started D-Bus User Message Bus.
Jan 29 11:17:49 np0005601226 dbus-broker-lau[14200]: Ready
Jan 29 11:17:49 np0005601226 systemd[4309]: selinux: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 29 11:17:49 np0005601226 systemd[4309]: Created slice Slice /user.
Jan 29 11:17:49 np0005601226 systemd[4309]: podman-14093.scope: unit configures an IP firewall, but not running as root.
Jan 29 11:17:49 np0005601226 systemd[4309]: (This warning is only shown for the first unit using IP firewalling.)
Jan 29 11:17:49 np0005601226 systemd[4309]: Started podman-14093.scope.
Jan 29 11:17:49 np0005601226 systemd[4309]: Started podman-pause-16271cfc.scope.
Jan 29 11:17:49 np0005601226 python3[14746]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]#012location = "38.102.83.153:5001"#012insecure = true path=/etc/containers/registries.conf block=[[registry]]#012location = "38.102.83.153:5001"#012insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:17:49 np0005601226 python3[14746]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Jan 29 11:17:52 np0005601226 systemd[1]: session-5.scope: Deactivated successfully.
Jan 29 11:17:52 np0005601226 systemd[1]: session-5.scope: Consumed 42.259s CPU time.
Jan 29 11:17:52 np0005601226 systemd-logind[823]: Session 5 logged out. Waiting for processes to exit.
Jan 29 11:17:52 np0005601226 systemd-logind[823]: Removed session 5.
Jan 29 11:18:17 np0005601226 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 29 11:18:17 np0005601226 systemd[1]: Finished man-db-cache-update.service.
Jan 29 11:18:17 np0005601226 systemd[1]: man-db-cache-update.service: Consumed 35.517s CPU time.
Jan 29 11:18:17 np0005601226 systemd[1]: run-r4cfda640e7204d4a82c214837eedb768.service: Deactivated successfully.
Jan 29 11:18:19 np0005601226 systemd-logind[823]: New session 6 of user zuul.
Jan 29 11:18:19 np0005601226 systemd[1]: Started Session 6 of User zuul.
Jan 29 11:18:19 np0005601226 python3[29684]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCCTDMBn/tpVz94xUAzWhULelMKTxpkTtWzodZbbNRAgcu2rVeMRUR6prfqVdt9rkkHsO8Q+V5LAN1CRBQuX1SU= zuul@np0005601225.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 29 11:18:20 np0005601226 python3[29710]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCCTDMBn/tpVz94xUAzWhULelMKTxpkTtWzodZbbNRAgcu2rVeMRUR6prfqVdt9rkkHsO8Q+V5LAN1CRBQuX1SU= zuul@np0005601225.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 29 11:18:21 np0005601226 python3[29736]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005601226.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Jan 29 11:18:21 np0005601226 python3[29770]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCCTDMBn/tpVz94xUAzWhULelMKTxpkTtWzodZbbNRAgcu2rVeMRUR6prfqVdt9rkkHsO8Q+V5LAN1CRBQuX1SU= zuul@np0005601225.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 29 11:18:22 np0005601226 python3[29848]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 29 11:18:22 np0005601226 python3[29921]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769703501.9731503-135-156661881961009/source _original_basename=tmp299loum4 follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:18:23 np0005601226 python3[29971]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Jan 29 11:18:23 np0005601226 systemd[1]: Starting Hostname Service...
Jan 29 11:18:23 np0005601226 systemd[1]: Started Hostname Service.
Jan 29 11:18:23 np0005601226 systemd-hostnamed[29975]: Changed pretty hostname to 'compute-0'
Jan 29 11:18:23 np0005601226 systemd-hostnamed[29975]: Hostname set to <compute-0> (static)
Jan 29 11:18:23 np0005601226 NetworkManager[7200]: <info>  [1769703503.5244] hostname: static hostname changed from "np0005601226.novalocal" to "compute-0"
Jan 29 11:18:23 np0005601226 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 29 11:18:23 np0005601226 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 29 11:18:24 np0005601226 systemd[1]: session-6.scope: Deactivated successfully.
Jan 29 11:18:24 np0005601226 systemd[1]: session-6.scope: Consumed 1.962s CPU time.
Jan 29 11:18:24 np0005601226 systemd-logind[823]: Session 6 logged out. Waiting for processes to exit.
Jan 29 11:18:24 np0005601226 systemd-logind[823]: Removed session 6.
Jan 29 11:18:33 np0005601226 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 29 11:18:53 np0005601226 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 29 11:19:14 np0005601226 systemd[1]: Starting Cleanup of Temporary Directories...
Jan 29 11:19:14 np0005601226 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Jan 29 11:19:14 np0005601226 systemd[1]: Finished Cleanup of Temporary Directories.
Jan 29 11:19:14 np0005601226 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Jan 29 11:22:21 np0005601226 systemd-logind[823]: New session 7 of user zuul.
Jan 29 11:22:21 np0005601226 systemd[1]: Started Session 7 of User zuul.
Jan 29 11:22:22 np0005601226 python3[30074]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 11:22:23 np0005601226 python3[30190]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 29 11:22:24 np0005601226 python3[30263]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769703743.447981-33655-85459376364999/source mode=0755 _original_basename=delorean.repo follow=False checksum=0f7c85cc67bf467c48edf98d5acc63e62d808324 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:22:24 np0005601226 python3[30289]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 29 11:22:24 np0005601226 python3[30362]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769703743.447981-33655-85459376364999/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:22:24 np0005601226 python3[30388]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 29 11:22:25 np0005601226 python3[30461]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769703743.447981-33655-85459376364999/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:22:25 np0005601226 python3[30487]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 29 11:22:25 np0005601226 python3[30560]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769703743.447981-33655-85459376364999/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:22:26 np0005601226 python3[30586]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 29 11:22:26 np0005601226 python3[30659]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769703743.447981-33655-85459376364999/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:22:26 np0005601226 python3[30685]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 29 11:22:26 np0005601226 python3[30758]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769703743.447981-33655-85459376364999/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:22:27 np0005601226 python3[30784]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 29 11:22:27 np0005601226 python3[30857]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769703743.447981-33655-85459376364999/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=2583a70b3ee76a9837350b0837bc004a8e52405c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:22:38 np0005601226 python3[30915]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:27:38 np0005601226 systemd[1]: session-7.scope: Deactivated successfully.
Jan 29 11:27:38 np0005601226 systemd[1]: session-7.scope: Consumed 4.224s CPU time.
Jan 29 11:27:38 np0005601226 systemd-logind[823]: Session 7 logged out. Waiting for processes to exit.
Jan 29 11:27:38 np0005601226 systemd-logind[823]: Removed session 7.
Jan 29 11:40:37 np0005601226 systemd-logind[823]: New session 8 of user zuul.
Jan 29 11:40:37 np0005601226 systemd[1]: Started Session 8 of User zuul.
Jan 29 11:40:38 np0005601226 python3.9[31084]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 11:40:40 np0005601226 python3.9[31265]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:40:47 np0005601226 systemd[1]: session-8.scope: Deactivated successfully.
Jan 29 11:40:47 np0005601226 systemd[1]: session-8.scope: Consumed 7.198s CPU time.
Jan 29 11:40:47 np0005601226 systemd-logind[823]: Session 8 logged out. Waiting for processes to exit.
Jan 29 11:40:47 np0005601226 systemd-logind[823]: Removed session 8.
Jan 29 11:41:03 np0005601226 systemd-logind[823]: New session 9 of user zuul.
Jan 29 11:41:03 np0005601226 systemd[1]: Started Session 9 of User zuul.
Jan 29 11:41:03 np0005601226 python3.9[31475]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 29 11:41:04 np0005601226 python3.9[31649]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 11:41:05 np0005601226 python3.9[31801]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:41:06 np0005601226 python3.9[31954]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 11:41:07 np0005601226 python3.9[32106]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:41:08 np0005601226 python3.9[32258]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:41:08 np0005601226 python3.9[32381]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769704867.6539133-68-168378379100710/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:41:09 np0005601226 python3.9[32533]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 11:41:10 np0005601226 python3.9[32689]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 11:41:10 np0005601226 python3.9[32841]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 11:41:11 np0005601226 python3.9[32991]: ansible-ansible.builtin.service_facts Invoked
Jan 29 11:41:15 np0005601226 irqbalance[819]: Cannot change IRQ 26 affinity: Operation not permitted
Jan 29 11:41:15 np0005601226 irqbalance[819]: IRQ 26 affinity is now unmanaged
Jan 29 11:41:18 np0005601226 python3.9[33244]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:41:19 np0005601226 python3.9[33394]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 11:41:20 np0005601226 python3.9[33548]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 11:41:20 np0005601226 python3.9[33706]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 29 11:41:21 np0005601226 python3.9[33790]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 29 11:42:12 np0005601226 systemd[1]: Reloading.
Jan 29 11:42:12 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 11:42:12 np0005601226 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Jan 29 11:42:13 np0005601226 systemd[1]: Reloading.
Jan 29 11:42:13 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 11:42:13 np0005601226 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Jan 29 11:42:13 np0005601226 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Jan 29 11:42:13 np0005601226 systemd[1]: Reloading.
Jan 29 11:42:13 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 11:42:13 np0005601226 systemd[1]: Listening on LVM2 poll daemon socket.
Jan 29 11:42:13 np0005601226 dbus-broker-launch[813]: Noticed file-system modification, trigger reload.
Jan 29 11:42:13 np0005601226 dbus-broker-launch[813]: Noticed file-system modification, trigger reload.
Jan 29 11:42:13 np0005601226 dbus-broker-launch[813]: Noticed file-system modification, trigger reload.
Jan 29 11:43:18 np0005601226 kernel: SELinux:  Converting 2726 SID table entries...
Jan 29 11:43:18 np0005601226 kernel: SELinux:  policy capability network_peer_controls=1
Jan 29 11:43:18 np0005601226 kernel: SELinux:  policy capability open_perms=1
Jan 29 11:43:18 np0005601226 kernel: SELinux:  policy capability extended_socket_class=1
Jan 29 11:43:18 np0005601226 kernel: SELinux:  policy capability always_check_network=0
Jan 29 11:43:18 np0005601226 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 29 11:43:18 np0005601226 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 29 11:43:18 np0005601226 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 29 11:43:18 np0005601226 dbus-broker-launch[814]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Jan 29 11:43:18 np0005601226 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 29 11:43:18 np0005601226 systemd[1]: Starting man-db-cache-update.service...
Jan 29 11:43:18 np0005601226 systemd[1]: Reloading.
Jan 29 11:43:18 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 11:43:18 np0005601226 systemd[1]: Starting dnf makecache...
Jan 29 11:43:18 np0005601226 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 29 11:43:19 np0005601226 dnf[34521]: Failed determining last makecache time.
Jan 29 11:43:19 np0005601226 dnf[34521]: delorean-openstack-barbican-42b4c41831408a8e323 131 kB/s | 3.0 kB     00:00
Jan 29 11:43:19 np0005601226 dnf[34521]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 193 kB/s | 3.0 kB     00:00
Jan 29 11:43:19 np0005601226 dnf[34521]: delorean-openstack-cinder-1c00d6490d88e436f26ef 205 kB/s | 3.0 kB     00:00
Jan 29 11:43:19 np0005601226 dnf[34521]: delorean-python-stevedore-c4acc5639fd2329372142 198 kB/s | 3.0 kB     00:00
Jan 29 11:43:19 np0005601226 dnf[34521]: delorean-python-cloudkitty-tests-tempest-2c80f8 191 kB/s | 3.0 kB     00:00
Jan 29 11:43:19 np0005601226 dnf[34521]: delorean-os-refresh-config-9bfc52b5049be2d8de61 211 kB/s | 3.0 kB     00:00
Jan 29 11:43:19 np0005601226 dnf[34521]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 178 kB/s | 3.0 kB     00:00
Jan 29 11:43:19 np0005601226 dnf[34521]: delorean-python-designate-tests-tempest-347fdbc 190 kB/s | 3.0 kB     00:00
Jan 29 11:43:19 np0005601226 dnf[34521]: delorean-openstack-glance-1fd12c29b339f30fe823e 187 kB/s | 3.0 kB     00:00
Jan 29 11:43:19 np0005601226 dnf[34521]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 186 kB/s | 3.0 kB     00:00
Jan 29 11:43:19 np0005601226 dnf[34521]: delorean-openstack-manila-3c01b7181572c95dac462 181 kB/s | 3.0 kB     00:00
Jan 29 11:43:19 np0005601226 dnf[34521]: delorean-python-whitebox-neutron-tests-tempest- 142 kB/s | 3.0 kB     00:00
Jan 29 11:43:19 np0005601226 dnf[34521]: delorean-openstack-octavia-ba397f07a7331190208c 159 kB/s | 3.0 kB     00:00
Jan 29 11:43:19 np0005601226 dnf[34521]: delorean-openstack-watcher-c014f81a8647287f6dcc 183 kB/s | 3.0 kB     00:00
Jan 29 11:43:19 np0005601226 dnf[34521]: delorean-ansible-config_template-5ccaa22121a7ff 218 kB/s | 3.0 kB     00:00
Jan 29 11:43:19 np0005601226 dnf[34521]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 229 kB/s | 3.0 kB     00:00
Jan 29 11:43:19 np0005601226 dnf[34521]: delorean-openstack-swift-dc98a8463506ac520c469a 197 kB/s | 3.0 kB     00:00
Jan 29 11:43:19 np0005601226 dnf[34521]: delorean-python-tempestconf-8515371b7cceebd4282 155 kB/s | 3.0 kB     00:00
Jan 29 11:43:19 np0005601226 dnf[34521]: delorean-openstack-heat-ui-013accbfd179753bc3f0 108 kB/s | 3.0 kB     00:00
Jan 29 11:43:19 np0005601226 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 29 11:43:19 np0005601226 systemd[1]: Finished man-db-cache-update.service.
Jan 29 11:43:19 np0005601226 systemd[1]: run-r5b2f5b3f9c674f3b885ca9ad1303ae06.service: Deactivated successfully.
Jan 29 11:43:19 np0005601226 dnf[34521]: CentOS Stream 9 - BaseOS                         28 kB/s | 6.4 kB     00:00
Jan 29 11:43:19 np0005601226 dnf[34521]: CentOS Stream 9 - AppStream                      64 kB/s | 6.5 kB     00:00
Jan 29 11:43:19 np0005601226 python3.9[35337]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:43:20 np0005601226 dnf[34521]: CentOS Stream 9 - CRB                            68 kB/s | 6.3 kB     00:00
Jan 29 11:43:20 np0005601226 dnf[34521]: CentOS Stream 9 - Extras packages                70 kB/s | 7.3 kB     00:00
Jan 29 11:43:20 np0005601226 dnf[34521]: dlrn-antelope-testing                           107 kB/s | 3.0 kB     00:00
Jan 29 11:43:20 np0005601226 dnf[34521]: dlrn-antelope-build-deps                        108 kB/s | 3.0 kB     00:00
Jan 29 11:43:20 np0005601226 dnf[34521]: centos9-rabbitmq                                115 kB/s | 3.0 kB     00:00
Jan 29 11:43:20 np0005601226 dnf[34521]: centos9-storage                                 122 kB/s | 3.0 kB     00:00
Jan 29 11:43:20 np0005601226 dnf[34521]: centos9-opstools                                104 kB/s | 3.0 kB     00:00
Jan 29 11:43:20 np0005601226 dnf[34521]: NFV SIG OpenvSwitch                             106 kB/s | 3.0 kB     00:00
Jan 29 11:43:20 np0005601226 dnf[34521]: repo-setup-centos-appstream                     166 kB/s | 4.4 kB     00:00
Jan 29 11:43:20 np0005601226 dnf[34521]: repo-setup-centos-baseos                        143 kB/s | 3.9 kB     00:00
Jan 29 11:43:20 np0005601226 dnf[34521]: repo-setup-centos-highavailability              178 kB/s | 3.9 kB     00:00
Jan 29 11:43:20 np0005601226 dnf[34521]: repo-setup-centos-powertools                    196 kB/s | 4.3 kB     00:00
Jan 29 11:43:20 np0005601226 dnf[34521]: Extra Packages for Enterprise Linux 9 - x86_64  242 kB/s |  30 kB     00:00
Jan 29 11:43:21 np0005601226 dnf[34521]: Metadata cache created.
Jan 29 11:43:21 np0005601226 systemd[1]: dnf-makecache.service: Deactivated successfully.
Jan 29 11:43:21 np0005601226 systemd[1]: Finished dnf makecache.
Jan 29 11:43:21 np0005601226 systemd[1]: dnf-makecache.service: Consumed 1.685s CPU time.
Jan 29 11:43:22 np0005601226 python3.9[35640]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 29 11:43:23 np0005601226 python3.9[35792]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 29 11:43:25 np0005601226 python3.9[35945]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:43:26 np0005601226 python3.9[36097]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 29 11:43:27 np0005601226 python3.9[36249]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 11:43:27 np0005601226 python3.9[36401]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:43:28 np0005601226 python3.9[36524]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769705007.3474119-231-37022858749808/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=f87409c7a2bcf84eee086b0818eff77723c67465 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:43:29 np0005601226 python3.9[36676]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 11:43:32 np0005601226 python3.9[36828]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:43:33 np0005601226 python3.9[36982]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:43:34 np0005601226 python3.9[37134]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 29 11:43:34 np0005601226 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 29 11:43:34 np0005601226 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 29 11:43:34 np0005601226 python3.9[37288]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 29 11:43:35 np0005601226 python3.9[37446]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 29 11:43:36 np0005601226 python3.9[37606]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 29 11:43:36 np0005601226 python3.9[37759]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 29 11:43:37 np0005601226 python3.9[37917]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 29 11:43:38 np0005601226 python3.9[38069]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 29 11:43:40 np0005601226 python3.9[38222]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 11:43:41 np0005601226 python3.9[38374]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:43:41 np0005601226 python3.9[38497]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769705020.684697-350-263360235800787/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 29 11:43:42 np0005601226 python3.9[38649]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 29 11:43:42 np0005601226 systemd[1]: Starting Load Kernel Modules...
Jan 29 11:43:42 np0005601226 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 29 11:43:42 np0005601226 kernel: Bridge firewalling registered
Jan 29 11:43:42 np0005601226 systemd-modules-load[38653]: Inserted module 'br_netfilter'
Jan 29 11:43:42 np0005601226 systemd[1]: Finished Load Kernel Modules.
Jan 29 11:43:43 np0005601226 python3.9[38809]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:43:43 np0005601226 python3.9[38932]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769705022.9444127-373-207157034442943/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 29 11:43:44 np0005601226 python3.9[39084]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 29 11:43:48 np0005601226 dbus-broker-launch[813]: Noticed file-system modification, trigger reload.
Jan 29 11:43:48 np0005601226 dbus-broker-launch[813]: Noticed file-system modification, trigger reload.
Jan 29 11:43:48 np0005601226 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 29 11:43:48 np0005601226 systemd[1]: Starting man-db-cache-update.service...
Jan 29 11:43:48 np0005601226 systemd[1]: Reloading.
Jan 29 11:43:48 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 11:43:48 np0005601226 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 29 11:43:50 np0005601226 python3.9[41558]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 11:43:51 np0005601226 python3.9[42869]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 29 11:43:51 np0005601226 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 29 11:43:51 np0005601226 systemd[1]: Finished man-db-cache-update.service.
Jan 29 11:43:51 np0005601226 systemd[1]: man-db-cache-update.service: Consumed 3.000s CPU time.
Jan 29 11:43:51 np0005601226 systemd[1]: run-r4882b062fd1c423c8622d120a5271a92.service: Deactivated successfully.
Jan 29 11:43:51 np0005601226 python3.9[43136]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 11:43:52 np0005601226 python3.9[43288]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:43:52 np0005601226 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 29 11:43:52 np0005601226 systemd[1]: Starting Authorization Manager...
Jan 29 11:43:52 np0005601226 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 29 11:43:52 np0005601226 polkitd[43505]: Started polkitd version 0.117
Jan 29 11:43:52 np0005601226 systemd[1]: Started Authorization Manager.
Jan 29 11:43:53 np0005601226 python3.9[43675]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 11:43:53 np0005601226 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 29 11:43:53 np0005601226 systemd[1]: tuned.service: Deactivated successfully.
Jan 29 11:43:53 np0005601226 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 29 11:43:53 np0005601226 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 29 11:43:54 np0005601226 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 29 11:43:55 np0005601226 python3.9[43837]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 29 11:43:57 np0005601226 python3.9[43989]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 11:43:57 np0005601226 systemd[1]: Reloading.
Jan 29 11:43:57 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 11:43:58 np0005601226 python3.9[44178]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 11:43:58 np0005601226 systemd[1]: Reloading.
Jan 29 11:43:58 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 11:43:59 np0005601226 python3.9[44367]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:44:00 np0005601226 python3.9[44520]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:44:00 np0005601226 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Jan 29 11:44:01 np0005601226 python3.9[44673]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:44:03 np0005601226 python3.9[44835]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:44:03 np0005601226 python3.9[44988]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 29 11:44:03 np0005601226 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 29 11:44:03 np0005601226 systemd[1]: Stopped Apply Kernel Variables.
Jan 29 11:44:04 np0005601226 systemd[1]: Stopping Apply Kernel Variables...
Jan 29 11:44:04 np0005601226 systemd[1]: Starting Apply Kernel Variables...
Jan 29 11:44:04 np0005601226 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 29 11:44:04 np0005601226 systemd[1]: Finished Apply Kernel Variables.
Jan 29 11:44:04 np0005601226 systemd[1]: session-9.scope: Deactivated successfully.
Jan 29 11:44:04 np0005601226 systemd[1]: session-9.scope: Consumed 2min 1.808s CPU time.
Jan 29 11:44:04 np0005601226 systemd-logind[823]: Session 9 logged out. Waiting for processes to exit.
Jan 29 11:44:04 np0005601226 systemd-logind[823]: Removed session 9.
Jan 29 11:44:09 np0005601226 systemd-logind[823]: New session 10 of user zuul.
Jan 29 11:44:09 np0005601226 systemd[1]: Started Session 10 of User zuul.
Jan 29 11:44:10 np0005601226 python3.9[45173]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 11:44:11 np0005601226 python3.9[45329]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 29 11:44:12 np0005601226 python3.9[45482]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 29 11:44:13 np0005601226 python3.9[45640]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 29 11:44:14 np0005601226 python3.9[45800]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 29 11:44:15 np0005601226 python3.9[45884]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 29 11:44:17 np0005601226 python3.9[46047]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 29 11:44:33 np0005601226 kernel: SELinux:  Converting 2739 SID table entries...
Jan 29 11:44:33 np0005601226 kernel: SELinux:  policy capability network_peer_controls=1
Jan 29 11:44:33 np0005601226 kernel: SELinux:  policy capability open_perms=1
Jan 29 11:44:33 np0005601226 kernel: SELinux:  policy capability extended_socket_class=1
Jan 29 11:44:33 np0005601226 kernel: SELinux:  policy capability always_check_network=0
Jan 29 11:44:33 np0005601226 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 29 11:44:33 np0005601226 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 29 11:44:33 np0005601226 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 29 11:44:34 np0005601226 dbus-broker-launch[814]: avc:  op=load_policy lsm=selinux seqno=7 res=1
Jan 29 11:44:34 np0005601226 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Jan 29 11:44:35 np0005601226 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 29 11:44:35 np0005601226 systemd[1]: Starting man-db-cache-update.service...
Jan 29 11:44:35 np0005601226 systemd[1]: Reloading.
Jan 29 11:44:35 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 11:44:35 np0005601226 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 11:44:35 np0005601226 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 29 11:44:36 np0005601226 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 29 11:44:36 np0005601226 systemd[1]: Finished man-db-cache-update.service.
Jan 29 11:44:36 np0005601226 systemd[1]: run-r72ae7bbeeb96472d9ee562735af6b6d2.service: Deactivated successfully.
Jan 29 11:44:37 np0005601226 python3.9[47145]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 29 11:44:37 np0005601226 systemd[1]: Reloading.
Jan 29 11:44:37 np0005601226 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 11:44:37 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 11:44:37 np0005601226 systemd[1]: Starting Open vSwitch Database Unit...
Jan 29 11:44:37 np0005601226 chown[47186]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Jan 29 11:44:37 np0005601226 ovs-ctl[47191]: /etc/openvswitch/conf.db does not exist ... (warning).
Jan 29 11:44:37 np0005601226 ovs-ctl[47191]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Jan 29 11:44:37 np0005601226 ovs-ctl[47191]: Starting ovsdb-server [  OK  ]
Jan 29 11:44:37 np0005601226 ovs-vsctl[47240]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Jan 29 11:44:37 np0005601226 ovs-vsctl[47260]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"ea6bcc65-2563-4fe6-9039-bca7261f4cf7\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Jan 29 11:44:37 np0005601226 ovs-ctl[47191]: Configuring Open vSwitch system IDs [  OK  ]
Jan 29 11:44:37 np0005601226 ovs-ctl[47191]: Enabling remote OVSDB managers [  OK  ]
Jan 29 11:44:37 np0005601226 systemd[1]: Started Open vSwitch Database Unit.
Jan 29 11:44:37 np0005601226 ovs-vsctl[47266]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 29 11:44:37 np0005601226 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Jan 29 11:44:37 np0005601226 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Jan 29 11:44:37 np0005601226 systemd[1]: Starting Open vSwitch Forwarding Unit...
Jan 29 11:44:37 np0005601226 kernel: openvswitch: Open vSwitch switching datapath
Jan 29 11:44:37 np0005601226 ovs-ctl[47310]: Inserting openvswitch module [  OK  ]
Jan 29 11:44:37 np0005601226 ovs-ctl[47279]: Starting ovs-vswitchd [  OK  ]
Jan 29 11:44:37 np0005601226 ovs-ctl[47279]: Enabling remote OVSDB managers [  OK  ]
Jan 29 11:44:37 np0005601226 ovs-vsctl[47328]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 29 11:44:37 np0005601226 systemd[1]: Started Open vSwitch Forwarding Unit.
Jan 29 11:44:37 np0005601226 systemd[1]: Starting Open vSwitch...
Jan 29 11:44:37 np0005601226 systemd[1]: Finished Open vSwitch.
Jan 29 11:44:38 np0005601226 python3.9[47479]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 11:44:39 np0005601226 python3.9[47631]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 29 11:44:40 np0005601226 kernel: SELinux:  Converting 2753 SID table entries...
Jan 29 11:44:40 np0005601226 kernel: SELinux:  policy capability network_peer_controls=1
Jan 29 11:44:40 np0005601226 kernel: SELinux:  policy capability open_perms=1
Jan 29 11:44:40 np0005601226 kernel: SELinux:  policy capability extended_socket_class=1
Jan 29 11:44:40 np0005601226 kernel: SELinux:  policy capability always_check_network=0
Jan 29 11:44:40 np0005601226 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 29 11:44:40 np0005601226 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 29 11:44:40 np0005601226 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 29 11:44:41 np0005601226 python3.9[47786]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 11:44:41 np0005601226 dbus-broker-launch[814]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Jan 29 11:44:42 np0005601226 python3.9[47944]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 29 11:44:43 np0005601226 python3.9[48097]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:44:45 np0005601226 python3.9[48384]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 29 11:44:45 np0005601226 python3.9[48534]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 11:44:46 np0005601226 python3.9[48688]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 29 11:44:48 np0005601226 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 29 11:44:48 np0005601226 systemd[1]: Starting man-db-cache-update.service...
Jan 29 11:44:48 np0005601226 systemd[1]: Reloading.
Jan 29 11:44:48 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 11:44:48 np0005601226 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 11:44:48 np0005601226 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 29 11:44:48 np0005601226 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 29 11:44:48 np0005601226 systemd[1]: Finished man-db-cache-update.service.
Jan 29 11:44:48 np0005601226 systemd[1]: run-r03d11b1d2209450b8025ac4381ed003a.service: Deactivated successfully.
Jan 29 11:44:49 np0005601226 python3.9[49004]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 29 11:44:49 np0005601226 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 29 11:44:49 np0005601226 systemd[1]: Stopped Network Manager Wait Online.
Jan 29 11:44:49 np0005601226 systemd[1]: Stopping Network Manager Wait Online...
Jan 29 11:44:49 np0005601226 systemd[1]: Stopping Network Manager...
Jan 29 11:44:49 np0005601226 NetworkManager[7200]: <info>  [1769705089.5940] caught SIGTERM, shutting down normally.
Jan 29 11:44:49 np0005601226 NetworkManager[7200]: <info>  [1769705089.5958] dhcp4 (eth0): canceled DHCP transaction
Jan 29 11:44:49 np0005601226 NetworkManager[7200]: <info>  [1769705089.5959] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 29 11:44:49 np0005601226 NetworkManager[7200]: <info>  [1769705089.5959] dhcp4 (eth0): state changed no lease
Jan 29 11:44:49 np0005601226 NetworkManager[7200]: <info>  [1769705089.5965] manager: NetworkManager state is now CONNECTED_SITE
Jan 29 11:44:49 np0005601226 NetworkManager[7200]: <info>  [1769705089.6025] exiting (success)
Jan 29 11:44:49 np0005601226 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 29 11:44:49 np0005601226 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 29 11:44:49 np0005601226 systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 29 11:44:49 np0005601226 systemd[1]: Stopped Network Manager.
Jan 29 11:44:49 np0005601226 systemd[1]: NetworkManager.service: Consumed 16.339s CPU time, 4.3M memory peak, read 0B from disk, written 28.0K to disk.
Jan 29 11:44:49 np0005601226 systemd[1]: Starting Network Manager...
Jan 29 11:44:49 np0005601226 NetworkManager[49020]: <info>  [1769705089.6804] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:9485f3a0-b546-449b-a1de-1a80f8dff8e7)
Jan 29 11:44:49 np0005601226 NetworkManager[49020]: <info>  [1769705089.6806] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 29 11:44:49 np0005601226 NetworkManager[49020]: <info>  [1769705089.6880] manager[0x556aa12cc000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 29 11:44:49 np0005601226 systemd[1]: Starting Hostname Service...
Jan 29 11:44:49 np0005601226 systemd[1]: Started Hostname Service.
Jan 29 11:44:49 np0005601226 NetworkManager[49020]: <info>  [1769705089.7809] hostname: hostname: using hostnamed
Jan 29 11:44:49 np0005601226 NetworkManager[49020]: <info>  [1769705089.7810] hostname: static hostname changed from (none) to "compute-0"
Jan 29 11:44:49 np0005601226 NetworkManager[49020]: <info>  [1769705089.7817] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 29 11:44:49 np0005601226 NetworkManager[49020]: <info>  [1769705089.7822] manager[0x556aa12cc000]: rfkill: Wi-Fi hardware radio set enabled
Jan 29 11:44:49 np0005601226 NetworkManager[49020]: <info>  [1769705089.7823] manager[0x556aa12cc000]: rfkill: WWAN hardware radio set enabled
Jan 29 11:44:49 np0005601226 NetworkManager[49020]: <info>  [1769705089.7854] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-ovs.so)
Jan 29 11:44:49 np0005601226 NetworkManager[49020]: <info>  [1769705089.7868] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 29 11:44:49 np0005601226 NetworkManager[49020]: <info>  [1769705089.7869] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 29 11:44:49 np0005601226 NetworkManager[49020]: <info>  [1769705089.7870] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 29 11:44:49 np0005601226 NetworkManager[49020]: <info>  [1769705089.7870] manager: Networking is enabled by state file
Jan 29 11:44:49 np0005601226 NetworkManager[49020]: <info>  [1769705089.7875] settings: Loaded settings plugin: keyfile (internal)
Jan 29 11:44:49 np0005601226 NetworkManager[49020]: <info>  [1769705089.7880] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 29 11:44:49 np0005601226 NetworkManager[49020]: <info>  [1769705089.7921] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 29 11:44:49 np0005601226 NetworkManager[49020]: <info>  [1769705089.7934] dhcp: init: Using DHCP client 'internal'
Jan 29 11:44:49 np0005601226 NetworkManager[49020]: <info>  [1769705089.7938] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 29 11:44:49 np0005601226 NetworkManager[49020]: <info>  [1769705089.7948] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 29 11:44:49 np0005601226 NetworkManager[49020]: <info>  [1769705089.7955] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 29 11:44:49 np0005601226 NetworkManager[49020]: <info>  [1769705089.7968] device (lo): Activation: starting connection 'lo' (fb19d968-2132-4ea2-ac78-a40c265fabbe)
Jan 29 11:44:49 np0005601226 NetworkManager[49020]: <info>  [1769705089.7977] device (eth0): carrier: link connected
Jan 29 11:44:49 np0005601226 NetworkManager[49020]: <info>  [1769705089.7984] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 29 11:44:49 np0005601226 NetworkManager[49020]: <info>  [1769705089.7991] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 29 11:44:49 np0005601226 NetworkManager[49020]: <info>  [1769705089.7993] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 29 11:44:49 np0005601226 NetworkManager[49020]: <info>  [1769705089.8005] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 29 11:44:49 np0005601226 NetworkManager[49020]: <info>  [1769705089.8016] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 29 11:44:49 np0005601226 NetworkManager[49020]: <info>  [1769705089.8025] device (eth1): carrier: link connected
Jan 29 11:44:49 np0005601226 NetworkManager[49020]: <info>  [1769705089.8031] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 29 11:44:49 np0005601226 NetworkManager[49020]: <info>  [1769705089.8038] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (56464cd3-98aa-5bfb-ab19-69dd3436ca20) (indicated)
Jan 29 11:44:49 np0005601226 NetworkManager[49020]: <info>  [1769705089.8039] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 29 11:44:49 np0005601226 NetworkManager[49020]: <info>  [1769705089.8046] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 29 11:44:49 np0005601226 NetworkManager[49020]: <info>  [1769705089.8057] device (eth1): Activation: starting connection 'ci-private-network' (56464cd3-98aa-5bfb-ab19-69dd3436ca20)
Jan 29 11:44:49 np0005601226 systemd[1]: Started Network Manager.
Jan 29 11:44:49 np0005601226 NetworkManager[49020]: <info>  [1769705089.8067] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 29 11:44:49 np0005601226 NetworkManager[49020]: <info>  [1769705089.8077] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 29 11:44:49 np0005601226 NetworkManager[49020]: <info>  [1769705089.8080] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 29 11:44:49 np0005601226 NetworkManager[49020]: <info>  [1769705089.8084] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 29 11:44:49 np0005601226 NetworkManager[49020]: <info>  [1769705089.8089] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 29 11:44:49 np0005601226 NetworkManager[49020]: <info>  [1769705089.8094] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 29 11:44:49 np0005601226 NetworkManager[49020]: <info>  [1769705089.8099] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 29 11:44:49 np0005601226 NetworkManager[49020]: <info>  [1769705089.8104] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 29 11:44:49 np0005601226 NetworkManager[49020]: <info>  [1769705089.8110] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 29 11:44:49 np0005601226 NetworkManager[49020]: <info>  [1769705089.8120] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 29 11:44:49 np0005601226 NetworkManager[49020]: <info>  [1769705089.8126] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 29 11:44:49 np0005601226 NetworkManager[49020]: <info>  [1769705089.8140] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 29 11:44:49 np0005601226 NetworkManager[49020]: <info>  [1769705089.8167] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 29 11:44:49 np0005601226 NetworkManager[49020]: <info>  [1769705089.8178] dhcp4 (eth0): state changed new lease, address=38.129.56.71
Jan 29 11:44:49 np0005601226 NetworkManager[49020]: <info>  [1769705089.8185] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 29 11:44:49 np0005601226 systemd[1]: Starting Network Manager Wait Online...
Jan 29 11:44:49 np0005601226 NetworkManager[49020]: <info>  [1769705089.8269] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 29 11:44:49 np0005601226 NetworkManager[49020]: <info>  [1769705089.8275] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 29 11:44:49 np0005601226 NetworkManager[49020]: <info>  [1769705089.8281] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 29 11:44:49 np0005601226 NetworkManager[49020]: <info>  [1769705089.8289] device (lo): Activation: successful, device activated.
Jan 29 11:44:49 np0005601226 NetworkManager[49020]: <info>  [1769705089.8297] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 29 11:44:49 np0005601226 NetworkManager[49020]: <info>  [1769705089.8299] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 29 11:44:49 np0005601226 NetworkManager[49020]: <info>  [1769705089.8303] manager: NetworkManager state is now CONNECTED_LOCAL
Jan 29 11:44:49 np0005601226 NetworkManager[49020]: <info>  [1769705089.8306] device (eth1): Activation: successful, device activated.
Jan 29 11:44:49 np0005601226 NetworkManager[49020]: <info>  [1769705089.8318] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 29 11:44:49 np0005601226 NetworkManager[49020]: <info>  [1769705089.8320] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 29 11:44:49 np0005601226 NetworkManager[49020]: <info>  [1769705089.8326] manager: NetworkManager state is now CONNECTED_SITE
Jan 29 11:44:49 np0005601226 NetworkManager[49020]: <info>  [1769705089.8333] device (eth0): Activation: successful, device activated.
Jan 29 11:44:49 np0005601226 NetworkManager[49020]: <info>  [1769705089.8340] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 29 11:44:49 np0005601226 NetworkManager[49020]: <info>  [1769705089.8347] manager: startup complete
Jan 29 11:44:49 np0005601226 systemd[1]: Finished Network Manager Wait Online.
Jan 29 11:44:50 np0005601226 python3.9[49231]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 29 11:44:54 np0005601226 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 29 11:44:54 np0005601226 systemd[1]: Starting man-db-cache-update.service...
Jan 29 11:44:54 np0005601226 systemd[1]: Reloading.
Jan 29 11:44:54 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 11:44:55 np0005601226 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 11:44:55 np0005601226 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 29 11:44:55 np0005601226 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 29 11:44:55 np0005601226 systemd[1]: Finished man-db-cache-update.service.
Jan 29 11:44:55 np0005601226 systemd[1]: run-r6b25c2b519404d90869c2a18554d7f0b.service: Deactivated successfully.
Jan 29 11:44:56 np0005601226 python3.9[49689]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 11:44:57 np0005601226 python3.9[49841]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:44:58 np0005601226 python3.9[49995]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:44:58 np0005601226 python3.9[50147]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:44:59 np0005601226 python3.9[50299]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:44:59 np0005601226 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 29 11:45:00 np0005601226 python3.9[50451]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:45:00 np0005601226 python3.9[50603]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:45:01 np0005601226 python3.9[50726]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769705100.2190435-224-121980927323726/.source _original_basename=.lp3ki773 follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:45:01 np0005601226 python3.9[50878]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:45:02 np0005601226 python3.9[51030]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Jan 29 11:45:03 np0005601226 python3.9[51182]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:45:05 np0005601226 python3.9[51609]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Jan 29 11:45:06 np0005601226 ansible-async_wrapper.py[51784]: Invoked with j424600272663 300 /home/zuul/.ansible/tmp/ansible-tmp-1769705105.6001027-290-144946689414740/AnsiballZ_edpm_os_net_config.py _
Jan 29 11:45:06 np0005601226 ansible-async_wrapper.py[51787]: Starting module and watcher
Jan 29 11:45:06 np0005601226 ansible-async_wrapper.py[51787]: Start watching 51788 (300)
Jan 29 11:45:06 np0005601226 ansible-async_wrapper.py[51788]: Start module (51788)
Jan 29 11:45:06 np0005601226 ansible-async_wrapper.py[51784]: Return async_wrapper task started.
Jan 29 11:45:06 np0005601226 python3.9[51789]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Jan 29 11:45:07 np0005601226 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Jan 29 11:45:07 np0005601226 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Jan 29 11:45:07 np0005601226 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Jan 29 11:45:07 np0005601226 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Jan 29 11:45:07 np0005601226 kernel: cfg80211: failed to load regulatory.db
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.3693] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51790 uid=0 result="success"
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.3709] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51790 uid=0 result="success"
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4172] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4173] audit: op="connection-add" uuid="c7995426-cf9a-489a-a19e-12a93662cbf4" name="br-ex-br" pid=51790 uid=0 result="success"
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4186] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4187] audit: op="connection-add" uuid="fa77d752-83c6-4753-8a2e-9141902fc472" name="br-ex-port" pid=51790 uid=0 result="success"
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4196] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4197] audit: op="connection-add" uuid="c0f4b6a4-91c7-4243-ac07-a54c3d46ae95" name="eth1-port" pid=51790 uid=0 result="success"
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4206] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4206] audit: op="connection-add" uuid="c1092b34-3c0f-4289-8349-a88e1f6c02ae" name="vlan20-port" pid=51790 uid=0 result="success"
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4216] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4217] audit: op="connection-add" uuid="92bdd0e1-2c76-40c1-97ec-b0de38b59df1" name="vlan21-port" pid=51790 uid=0 result="success"
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4225] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4226] audit: op="connection-add" uuid="c4efd1ff-338b-4264-afaa-2af5f620ee39" name="vlan22-port" pid=51790 uid=0 result="success"
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4235] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4235] audit: op="connection-add" uuid="ef74bf03-5481-4576-b1d7-aba807d64683" name="vlan23-port" pid=51790 uid=0 result="success"
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4250] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="802-3-ethernet.mtu,ipv6.addr-gen-mode,ipv6.dhcp-timeout,ipv6.method,ipv4.dhcp-timeout,ipv4.dhcp-client-id,connection.timestamp,connection.autoconnect-priority" pid=51790 uid=0 result="success"
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4265] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4265] audit: op="connection-add" uuid="e830c7ba-b329-4988-8ff0-a37057d2dec0" name="br-ex-if" pid=51790 uid=0 result="success"
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4304] audit: op="connection-update" uuid="56464cd3-98aa-5bfb-ab19-69dd3436ca20" name="ci-private-network" args="ipv6.addr-gen-mode,ipv6.addresses,ipv6.method,ipv6.dns,ipv6.routes,ipv6.routing-rules,ovs-interface.type,ovs-external-ids.data,ipv4.dns,ipv4.addresses,ipv4.method,ipv4.routes,ipv4.never-default,ipv4.routing-rules,connection.master,connection.slave-type,connection.port-type,connection.controller,connection.timestamp" pid=51790 uid=0 result="success"
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4317] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4318] audit: op="connection-add" uuid="64e9b276-8619-4de1-a0ca-4c0daf406480" name="vlan20-if" pid=51790 uid=0 result="success"
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4331] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4332] audit: op="connection-add" uuid="32772f57-d42c-40f1-b50f-78010b4febc1" name="vlan21-if" pid=51790 uid=0 result="success"
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4346] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4348] audit: op="connection-add" uuid="19b81b7f-30f8-4655-907b-2de1e510701a" name="vlan22-if" pid=51790 uid=0 result="success"
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4362] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4363] audit: op="connection-add" uuid="beecaf59-14d2-4d81-89ad-f902311025cf" name="vlan23-if" pid=51790 uid=0 result="success"
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4374] audit: op="connection-delete" uuid="68a2636e-95dd-355f-8ced-e2552f46817a" name="Wired connection 1" pid=51790 uid=0 result="success"
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4383] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <warn>  [1769705108.4385] device (br-ex)[Open vSwitch Bridge]: error setting IPv4 forwarding to '1': Success
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4390] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4392] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (c7995426-cf9a-489a-a19e-12a93662cbf4)
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4393] audit: op="connection-activate" uuid="c7995426-cf9a-489a-a19e-12a93662cbf4" name="br-ex-br" pid=51790 uid=0 result="success"
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4394] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <warn>  [1769705108.4394] device (br-ex)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4397] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4399] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (fa77d752-83c6-4753-8a2e-9141902fc472)
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4400] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <warn>  [1769705108.4401] device (eth1)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4403] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4406] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (c0f4b6a4-91c7-4243-ac07-a54c3d46ae95)
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4407] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <warn>  [1769705108.4407] device (vlan20)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4410] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4413] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (c1092b34-3c0f-4289-8349-a88e1f6c02ae)
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4414] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <warn>  [1769705108.4415] device (vlan21)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4418] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4420] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (92bdd0e1-2c76-40c1-97ec-b0de38b59df1)
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4421] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <warn>  [1769705108.4421] device (vlan22)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4425] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4427] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (c4efd1ff-338b-4264-afaa-2af5f620ee39)
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4428] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <warn>  [1769705108.4428] device (vlan23)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4432] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4434] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (ef74bf03-5481-4576-b1d7-aba807d64683)
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4434] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4436] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4437] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4440] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <warn>  [1769705108.4441] device (br-ex)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4443] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4445] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (e830c7ba-b329-4988-8ff0-a37057d2dec0)
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4445] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4447] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4448] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4449] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4449] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4456] device (eth1): disconnecting for new activation request.
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4457] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4458] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4459] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4460] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4461] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <warn>  [1769705108.4462] device (vlan20)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4464] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4466] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (64e9b276-8619-4de1-a0ca-4c0daf406480)
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4466] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4470] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4471] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4471] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4473] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <warn>  [1769705108.4473] device (vlan21)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4475] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4478] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (32772f57-d42c-40f1-b50f-78010b4febc1)
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4480] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4481] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4482] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4483] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4485] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <warn>  [1769705108.4485] device (vlan22)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4487] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4489] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (19b81b7f-30f8-4655-907b-2de1e510701a)
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4489] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4491] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4492] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4493] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4495] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <warn>  [1769705108.4495] device (vlan23)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4497] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4500] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (beecaf59-14d2-4d81-89ad-f902311025cf)
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4500] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4502] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4503] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4503] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4504] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4513] audit: op="device-reapply" interface="eth0" ifindex=2 args="802-3-ethernet.mtu,ipv6.addr-gen-mode,ipv6.method,ipv4.dhcp-timeout,ipv4.dhcp-client-id,connection.autoconnect-priority" pid=51790 uid=0 result="success"
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4515] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4517] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4518] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4523] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4526] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4529] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4532] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4534] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4538] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 kernel: ovs-system: entered promiscuous mode
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4541] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4544] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4546] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4550] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4554] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4556] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4557] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 kernel: Timeout policy base is empty
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4562] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 systemd-udevd[51796]: Network interface NamePolicy= disabled on kernel command line.
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4565] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4567] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4568] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4572] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4575] dhcp4 (eth0): canceled DHCP transaction
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4575] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4575] dhcp4 (eth0): state changed no lease
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4576] dhcp4 (eth0): activation: beginning transaction (no timeout)
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4584] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Jan 29 11:45:08 np0005601226 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4635] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4638] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51790 uid=0 result="fail" reason="Device is not activated"
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4643] dhcp4 (eth0): state changed new lease, address=38.129.56.71
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4646] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4687] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4699] device (eth1): disconnecting for new activation request.
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4700] audit: op="connection-activate" uuid="56464cd3-98aa-5bfb-ab19-69dd3436ca20" name="ci-private-network" pid=51790 uid=0 result="success"
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4702] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Jan 29 11:45:08 np0005601226 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4745] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51790 uid=0 result="success"
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4746] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4847] device (eth1): Activation: starting connection 'ci-private-network' (56464cd3-98aa-5bfb-ab19-69dd3436ca20)
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4851] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4858] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4861] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4865] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4868] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4872] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4873] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4874] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4875] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4876] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4877] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4888] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4894] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4897] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4899] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4903] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4906] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4908] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4911] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4914] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4918] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4921] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4924] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4928] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4933] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4938] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 kernel: br-ex: entered promiscuous mode
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4980] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4984] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.4989] device (eth1): Activation: successful, device activated.
Jan 29 11:45:08 np0005601226 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.5088] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.5097] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 kernel: vlan22: entered promiscuous mode
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.5146] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.5148] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.5153] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 29 11:45:08 np0005601226 kernel: vlan23: entered promiscuous mode
Jan 29 11:45:08 np0005601226 systemd-udevd[51794]: Network interface NamePolicy= disabled on kernel command line.
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.5244] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Jan 29 11:45:08 np0005601226 kernel: vlan21: entered promiscuous mode
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.5252] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Jan 29 11:45:08 np0005601226 systemd-udevd[51795]: Network interface NamePolicy= disabled on kernel command line.
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.5272] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.5285] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.5297] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.5299] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.5304] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.5313] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.5320] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 kernel: vlan20: entered promiscuous mode
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.5327] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.5374] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.5390] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.5393] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.5410] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.5419] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.5421] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.5427] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.5436] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.5438] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 29 11:45:08 np0005601226 NetworkManager[49020]: <info>  [1769705108.5443] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 29 11:45:09 np0005601226 NetworkManager[49020]: <info>  [1769705109.6609] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51790 uid=0 result="success"
Jan 29 11:45:09 np0005601226 NetworkManager[49020]: <info>  [1769705109.8910] checkpoint[0x556aa12a1950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Jan 29 11:45:09 np0005601226 NetworkManager[49020]: <info>  [1769705109.8913] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51790 uid=0 result="success"
Jan 29 11:45:10 np0005601226 python3.9[52148]: ansible-ansible.legacy.async_status Invoked with jid=j424600272663.51784 mode=status _async_dir=/root/.ansible_async
Jan 29 11:45:10 np0005601226 NetworkManager[49020]: <info>  [1769705110.2086] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51790 uid=0 result="success"
Jan 29 11:45:10 np0005601226 NetworkManager[49020]: <info>  [1769705110.2095] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51790 uid=0 result="success"
Jan 29 11:45:10 np0005601226 NetworkManager[49020]: <info>  [1769705110.3898] audit: op="networking-control" arg="global-dns-configuration" pid=51790 uid=0 result="success"
Jan 29 11:45:10 np0005601226 NetworkManager[49020]: <info>  [1769705110.3931] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Jan 29 11:45:10 np0005601226 NetworkManager[49020]: <info>  [1769705110.3967] audit: op="networking-control" arg="global-dns-configuration" pid=51790 uid=0 result="success"
Jan 29 11:45:10 np0005601226 NetworkManager[49020]: <info>  [1769705110.3982] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51790 uid=0 result="success"
Jan 29 11:45:10 np0005601226 NetworkManager[49020]: <info>  [1769705110.5273] checkpoint[0x556aa12a1a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Jan 29 11:45:10 np0005601226 NetworkManager[49020]: <info>  [1769705110.5277] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51790 uid=0 result="success"
Jan 29 11:45:10 np0005601226 ansible-async_wrapper.py[51788]: Module complete (51788)
Jan 29 11:45:11 np0005601226 ansible-async_wrapper.py[51787]: Done in kid B.
Jan 29 11:45:13 np0005601226 python3.9[52253]: ansible-ansible.legacy.async_status Invoked with jid=j424600272663.51784 mode=status _async_dir=/root/.ansible_async
Jan 29 11:45:14 np0005601226 python3.9[52353]: ansible-ansible.legacy.async_status Invoked with jid=j424600272663.51784 mode=cleanup _async_dir=/root/.ansible_async
Jan 29 11:45:14 np0005601226 python3.9[52505]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:45:15 np0005601226 python3.9[52628]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769705114.2793179-317-242404344337558/.source.returncode _original_basename=.0wvnwwyv follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:45:15 np0005601226 python3.9[52780]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:45:16 np0005601226 python3.9[52903]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769705115.5209994-333-59601037007920/.source.cfg _original_basename=.8x5gbe7c follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:45:17 np0005601226 python3.9[53056]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 29 11:45:17 np0005601226 systemd[1]: Reloading Network Manager...
Jan 29 11:45:17 np0005601226 NetworkManager[49020]: <info>  [1769705117.2702] audit: op="reload" arg="0" pid=53060 uid=0 result="success"
Jan 29 11:45:17 np0005601226 NetworkManager[49020]: <info>  [1769705117.2708] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Jan 29 11:45:17 np0005601226 systemd[1]: Reloaded Network Manager.
Jan 29 11:45:17 np0005601226 systemd[1]: session-10.scope: Deactivated successfully.
Jan 29 11:45:17 np0005601226 systemd[1]: session-10.scope: Consumed 42.169s CPU time.
Jan 29 11:45:17 np0005601226 systemd-logind[823]: Session 10 logged out. Waiting for processes to exit.
Jan 29 11:45:17 np0005601226 systemd-logind[823]: Removed session 10.
Jan 29 11:45:19 np0005601226 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 29 11:45:23 np0005601226 systemd-logind[823]: New session 11 of user zuul.
Jan 29 11:45:23 np0005601226 systemd[1]: Started Session 11 of User zuul.
Jan 29 11:45:24 np0005601226 python3.9[53247]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 11:45:25 np0005601226 python3.9[53401]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 29 11:45:26 np0005601226 python3.9[53594]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:45:26 np0005601226 systemd[1]: session-11.scope: Deactivated successfully.
Jan 29 11:45:26 np0005601226 systemd[1]: session-11.scope: Consumed 2.136s CPU time.
Jan 29 11:45:26 np0005601226 systemd-logind[823]: Session 11 logged out. Waiting for processes to exit.
Jan 29 11:45:26 np0005601226 systemd-logind[823]: Removed session 11.
Jan 29 11:45:27 np0005601226 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 29 11:45:33 np0005601226 systemd-logind[823]: New session 12 of user zuul.
Jan 29 11:45:33 np0005601226 systemd[1]: Started Session 12 of User zuul.
Jan 29 11:45:34 np0005601226 python3.9[53777]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 11:45:35 np0005601226 python3.9[53931]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 11:45:36 np0005601226 python3.9[54087]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 29 11:45:37 np0005601226 python3.9[54172]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 29 11:45:39 np0005601226 python3.9[54325]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 29 11:45:40 np0005601226 python3.9[54521]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:45:40 np0005601226 python3.9[54673]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:45:40 np0005601226 podman[54674]: 2026-01-29 16:45:40.935001273 +0000 UTC m=+0.047755619 system refresh
Jan 29 11:45:41 np0005601226 python3.9[54837]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:45:41 np0005601226 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 29 11:45:42 np0005601226 python3.9[54960]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769705141.1249337-74-94512648090510/.source.json follow=False _original_basename=podman_network_config.j2 checksum=90e059c6404966d81b8705cbe32fe33ca0d6a8fc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:45:43 np0005601226 python3.9[55112]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:45:43 np0005601226 python3.9[55235]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769705142.632476-89-80894914951387/.source.conf follow=False _original_basename=registries.conf.j2 checksum=7871978d2230902319f4568d59e283e443460fff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 29 11:45:44 np0005601226 python3.9[55387]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 29 11:45:44 np0005601226 python3.9[55539]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 29 11:45:45 np0005601226 python3.9[55691]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 29 11:45:46 np0005601226 python3.9[55843]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 29 11:45:47 np0005601226 python3.9[55995]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 29 11:45:49 np0005601226 python3.9[56148]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 11:45:49 np0005601226 python3.9[56302]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 11:45:50 np0005601226 python3.9[56454]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 11:45:51 np0005601226 python3.9[56606]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:45:52 np0005601226 python3.9[56759]: ansible-service_facts Invoked
Jan 29 11:45:52 np0005601226 network[56776]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 29 11:45:52 np0005601226 network[56777]: 'network-scripts' will be removed from distribution in near future.
Jan 29 11:45:52 np0005601226 network[56778]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 29 11:45:56 np0005601226 python3.9[57230]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 29 11:45:58 np0005601226 python3.9[57383]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 29 11:45:59 np0005601226 python3.9[57535]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:46:00 np0005601226 python3.9[57660]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769705159.5142787-233-45546759308083/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:46:01 np0005601226 python3.9[57814]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:46:01 np0005601226 python3.9[57939]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769705160.7972207-248-47165552690186/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:46:03 np0005601226 python3.9[58093]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:46:04 np0005601226 python3.9[58247]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 29 11:46:05 np0005601226 python3.9[58331]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 11:46:06 np0005601226 python3.9[58485]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 29 11:46:07 np0005601226 python3.9[58569]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 29 11:46:07 np0005601226 chronyd[832]: chronyd exiting
Jan 29 11:46:07 np0005601226 systemd[1]: Stopping NTP client/server...
Jan 29 11:46:07 np0005601226 systemd[1]: chronyd.service: Deactivated successfully.
Jan 29 11:46:07 np0005601226 systemd[1]: Stopped NTP client/server.
Jan 29 11:46:07 np0005601226 systemd[1]: Starting NTP client/server...
Jan 29 11:46:07 np0005601226 chronyd[58577]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 29 11:46:07 np0005601226 chronyd[58577]: Frequency -25.958 +/- 0.162 ppm read from /var/lib/chrony/drift
Jan 29 11:46:07 np0005601226 chronyd[58577]: Loaded seccomp filter (level 2)
Jan 29 11:46:07 np0005601226 systemd[1]: Started NTP client/server.
Jan 29 11:46:07 np0005601226 systemd-logind[823]: Session 12 logged out. Waiting for processes to exit.
Jan 29 11:46:07 np0005601226 systemd[1]: session-12.scope: Deactivated successfully.
Jan 29 11:46:07 np0005601226 systemd[1]: session-12.scope: Consumed 22.804s CPU time.
Jan 29 11:46:07 np0005601226 systemd-logind[823]: Removed session 12.
Jan 29 11:46:14 np0005601226 systemd-logind[823]: New session 13 of user zuul.
Jan 29 11:46:14 np0005601226 systemd[1]: Started Session 13 of User zuul.
Jan 29 11:46:15 np0005601226 python3.9[58758]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:46:15 np0005601226 python3.9[58910]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:46:16 np0005601226 python3.9[59033]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769705175.258023-29-6140188418635/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:46:17 np0005601226 systemd[1]: session-13.scope: Deactivated successfully.
Jan 29 11:46:17 np0005601226 systemd[1]: session-13.scope: Consumed 1.397s CPU time.
Jan 29 11:46:17 np0005601226 systemd-logind[823]: Session 13 logged out. Waiting for processes to exit.
Jan 29 11:46:17 np0005601226 systemd-logind[823]: Removed session 13.
Jan 29 11:46:22 np0005601226 systemd-logind[823]: New session 14 of user zuul.
Jan 29 11:46:22 np0005601226 systemd[1]: Started Session 14 of User zuul.
Jan 29 11:46:23 np0005601226 python3.9[59213]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 11:46:25 np0005601226 python3.9[59369]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:46:26 np0005601226 python3.9[59544]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:46:26 np0005601226 python3.9[59667]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1769705185.2145967-36-98320172015957/.source.json _original_basename=.xxaiu6ts follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:46:34 np0005601226 python3.9[59819]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:46:34 np0005601226 python3.9[59942]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769705187.3278613-59-50831007353908/.source _original_basename=.twz4zxce follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:46:35 np0005601226 python3.9[60094]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 29 11:46:35 np0005601226 python3.9[60246]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:46:36 np0005601226 python3.9[60369]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769705195.4754546-83-211322180502228/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 29 11:46:37 np0005601226 python3.9[60521]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:46:37 np0005601226 python3.9[60644]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769705196.6788216-83-29658012161641/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 29 11:46:38 np0005601226 python3.9[60796]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:46:38 np0005601226 python3.9[60948]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:46:39 np0005601226 python3.9[61071]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769705198.4968507-120-154233177535161/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:46:40 np0005601226 python3.9[61223]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:46:40 np0005601226 python3.9[61346]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769705199.642944-135-24150107645967/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:46:41 np0005601226 python3.9[61498]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 11:46:41 np0005601226 systemd[1]: Reloading.
Jan 29 11:46:41 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 11:46:41 np0005601226 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 11:46:42 np0005601226 systemd[1]: Reloading.
Jan 29 11:46:42 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 11:46:42 np0005601226 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 11:46:42 np0005601226 systemd[1]: Starting EDPM Container Shutdown...
Jan 29 11:46:42 np0005601226 systemd[1]: Finished EDPM Container Shutdown.
Jan 29 11:46:42 np0005601226 python3.9[61726]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:46:43 np0005601226 python3.9[61849]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769705202.4650276-158-123199062758875/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:46:44 np0005601226 python3.9[62001]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:46:44 np0005601226 python3.9[62124]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769705203.648483-173-205364489764267/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:46:45 np0005601226 python3.9[62276]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 11:46:45 np0005601226 systemd[1]: Reloading.
Jan 29 11:46:45 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 11:46:45 np0005601226 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 11:46:45 np0005601226 systemd[1]: Reloading.
Jan 29 11:46:45 np0005601226 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 11:46:45 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 11:46:45 np0005601226 systemd[1]: Starting Create netns directory...
Jan 29 11:46:45 np0005601226 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 29 11:46:45 np0005601226 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 29 11:46:45 np0005601226 systemd[1]: Finished Create netns directory.
Jan 29 11:46:47 np0005601226 python3.9[62502]: ansible-ansible.builtin.service_facts Invoked
Jan 29 11:46:47 np0005601226 network[62519]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 29 11:46:47 np0005601226 network[62520]: 'network-scripts' will be removed from distribution in near future.
Jan 29 11:46:47 np0005601226 network[62521]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 29 11:46:50 np0005601226 python3.9[62783]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 11:46:50 np0005601226 systemd[1]: Reloading.
Jan 29 11:46:50 np0005601226 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 11:46:50 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 11:46:50 np0005601226 systemd[1]: Stopping IPv4 firewall with iptables...
Jan 29 11:46:50 np0005601226 iptables.init[62822]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Jan 29 11:46:50 np0005601226 iptables.init[62822]: iptables: Flushing firewall rules: [  OK  ]
Jan 29 11:46:50 np0005601226 systemd[1]: iptables.service: Deactivated successfully.
Jan 29 11:46:50 np0005601226 systemd[1]: Stopped IPv4 firewall with iptables.
Jan 29 11:46:51 np0005601226 python3.9[63018]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 11:46:52 np0005601226 python3.9[63172]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 11:46:52 np0005601226 systemd[1]: Reloading.
Jan 29 11:46:52 np0005601226 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 11:46:52 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 11:46:52 np0005601226 systemd[1]: Starting Netfilter Tables...
Jan 29 11:46:53 np0005601226 systemd[1]: Finished Netfilter Tables.
Jan 29 11:46:53 np0005601226 python3.9[63364]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:46:54 np0005601226 python3.9[63517]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:46:55 np0005601226 python3.9[63642]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769705214.3569667-242-1171269666514/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:46:56 np0005601226 python3.9[63795]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 29 11:46:56 np0005601226 systemd[1]: Reloading OpenSSH server daemon...
Jan 29 11:46:56 np0005601226 systemd[1]: Reloaded OpenSSH server daemon.
Jan 29 11:46:57 np0005601226 python3.9[63951]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:46:57 np0005601226 python3.9[64103]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:46:58 np0005601226 python3.9[64226]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769705217.2163227-273-198756933720294/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:46:59 np0005601226 python3.9[64378]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 29 11:46:59 np0005601226 systemd[1]: Starting Time & Date Service...
Jan 29 11:46:59 np0005601226 systemd[1]: Started Time & Date Service.
Jan 29 11:47:00 np0005601226 python3.9[64534]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:47:00 np0005601226 python3.9[64686]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:47:01 np0005601226 python3.9[64809]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769705220.4454658-308-205648104890402/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:47:02 np0005601226 python3.9[64961]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:47:02 np0005601226 python3.9[65084]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769705221.6693-323-237406162800756/.source.yaml _original_basename=.sbu4p2p8 follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:47:03 np0005601226 python3.9[65236]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:47:03 np0005601226 python3.9[65359]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769705222.9029138-338-211995800491263/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:47:04 np0005601226 python3.9[65511]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:47:05 np0005601226 python3.9[65664]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:47:06 np0005601226 python3[65817]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 29 11:47:06 np0005601226 python3.9[65969]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:47:07 np0005601226 python3.9[66092]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769705226.1862805-377-120320096657799/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:47:08 np0005601226 python3.9[66244]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:47:08 np0005601226 python3.9[66367]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769705227.5144405-392-61055707051349/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:47:09 np0005601226 python3.9[66519]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:47:09 np0005601226 python3.9[66642]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769705228.8350527-407-242901215121205/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:47:10 np0005601226 python3.9[66794]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:47:11 np0005601226 python3.9[66917]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769705230.1153784-422-271477690542491/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:47:12 np0005601226 python3.9[67069]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:47:12 np0005601226 python3.9[67192]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769705231.6641524-437-8853520570428/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:47:13 np0005601226 python3.9[67344]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:47:14 np0005601226 python3.9[67496]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:47:14 np0005601226 python3.9[67655]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:47:15 np0005601226 python3.9[67808]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:47:16 np0005601226 python3.9[67960]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:47:17 np0005601226 python3.9[68112]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 29 11:47:17 np0005601226 python3.9[68265]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 29 11:47:18 np0005601226 systemd[1]: session-14.scope: Deactivated successfully.
Jan 29 11:47:18 np0005601226 systemd[1]: session-14.scope: Consumed 34.070s CPU time.
Jan 29 11:47:18 np0005601226 systemd-logind[823]: Session 14 logged out. Waiting for processes to exit.
Jan 29 11:47:18 np0005601226 systemd-logind[823]: Removed session 14.
Jan 29 11:47:23 np0005601226 systemd-logind[823]: New session 15 of user zuul.
Jan 29 11:47:23 np0005601226 systemd[1]: Started Session 15 of User zuul.
Jan 29 11:47:24 np0005601226 python3.9[68446]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 29 11:47:25 np0005601226 python3.9[68598]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 11:47:26 np0005601226 python3.9[68750]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 11:47:27 np0005601226 python3.9[68902]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCd5zNwTk49BkWGJNPDDV/sc8hC/1zDCe6Dm5iJkZiaTTx9YpkhKdCOUrRj90bot3wB6xIO/H2DSoKOkeo0As62fzH0xHF53uU6JNXvb6euOPWbHiiMCNCjWX81oYAcHSE7UJNEQ8Di2mIFdZ+lWYVfbouhGZTWyrOaad7D3ObU5w0nYF3Svd9NoM+yhNM4TjxbbH653CR5t/oLqngocrbaNwcIsYjSEpqRSHKsB/r7XElll0nOrcsJ+7ZpBcNsu8N3YnkrqBCwWiEJE0cPWTbnwdP3Wy/VTksjGbm2TK6WnQTlO4S36fL5UpagzyDSbcmKBR//t5LKlm+WfzAo6YaZvVpXPjdNnv7I6TMmtAK2Kn3hLtVI01JGwvN4H+Wd1NI9eDwujizBCnN/52nuEaGmPxFCXZeuvWEwweoQrRDzowSQmS4sPw2vTsgxQjeVHBvbqfgOYyHyoImdEsi0xSRY+hKri8iN+bsUbpSpN5Dks+Uuf35l1VvxjLuEdIIBKQ0=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIcaeXdS3luBZy5m5YYRna/udoQoiERyfOY7P4nannEI#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFRUXqzSTh9ejcnCJvsqBSbF8l/qFP5rg9YVnq3dh578B8Ap3mLftPcCgZC4ZF9/O1SPID31RHc0Pa6BgTTSBl0=#012 create=True mode=0644 path=/tmp/ansible.tbkfn6z2 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:47:28 np0005601226 python3.9[69054]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.tbkfn6z2' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:47:29 np0005601226 python3.9[69208]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.tbkfn6z2 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:47:29 np0005601226 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 29 11:47:29 np0005601226 systemd[1]: session-15.scope: Deactivated successfully.
Jan 29 11:47:29 np0005601226 systemd[1]: session-15.scope: Consumed 3.277s CPU time.
Jan 29 11:47:29 np0005601226 systemd-logind[823]: Session 15 logged out. Waiting for processes to exit.
Jan 29 11:47:29 np0005601226 systemd-logind[823]: Removed session 15.
Jan 29 11:47:36 np0005601226 systemd-logind[823]: New session 16 of user zuul.
Jan 29 11:47:36 np0005601226 systemd[1]: Started Session 16 of User zuul.
Jan 29 11:47:37 np0005601226 python3.9[69388]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 11:47:38 np0005601226 python3.9[69544]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 29 11:47:39 np0005601226 python3.9[69698]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 29 11:47:40 np0005601226 python3.9[69851]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:47:41 np0005601226 python3.9[70004]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 11:47:42 np0005601226 python3.9[70158]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:47:42 np0005601226 python3.9[70313]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:47:43 np0005601226 systemd[1]: session-16.scope: Deactivated successfully.
Jan 29 11:47:43 np0005601226 systemd[1]: session-16.scope: Consumed 3.893s CPU time.
Jan 29 11:47:43 np0005601226 systemd-logind[823]: Session 16 logged out. Waiting for processes to exit.
Jan 29 11:47:43 np0005601226 systemd-logind[823]: Removed session 16.
Jan 29 11:47:48 np0005601226 systemd-logind[823]: New session 17 of user zuul.
Jan 29 11:47:48 np0005601226 systemd[1]: Started Session 17 of User zuul.
Jan 29 11:47:49 np0005601226 python3.9[70491]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 11:47:51 np0005601226 python3.9[70647]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 29 11:47:52 np0005601226 python3.9[70731]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 29 11:47:55 np0005601226 python3.9[70883]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:47:56 np0005601226 python3.9[71034]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 29 11:47:57 np0005601226 python3.9[71184]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 11:47:57 np0005601226 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 29 11:47:57 np0005601226 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 29 11:47:58 np0005601226 python3.9[71335]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 11:47:59 np0005601226 systemd[1]: session-17.scope: Deactivated successfully.
Jan 29 11:47:59 np0005601226 systemd[1]: session-17.scope: Consumed 5.889s CPU time.
Jan 29 11:47:59 np0005601226 systemd-logind[823]: Session 17 logged out. Waiting for processes to exit.
Jan 29 11:47:59 np0005601226 systemd-logind[823]: Removed session 17.
Jan 29 11:48:07 np0005601226 systemd-logind[823]: New session 18 of user zuul.
Jan 29 11:48:07 np0005601226 systemd[1]: Started Session 18 of User zuul.
Jan 29 11:48:13 np0005601226 python3[72101]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 11:48:14 np0005601226 python3[72196]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 29 11:48:16 np0005601226 python3[72223]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 29 11:48:16 np0005601226 python3[72249]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:48:16 np0005601226 kernel: loop: module loaded
Jan 29 11:48:16 np0005601226 kernel: loop3: detected capacity change from 0 to 41943040
Jan 29 11:48:17 np0005601226 chronyd[58577]: Selected source 142.4.192.253 (pool.ntp.org)
Jan 29 11:48:17 np0005601226 python3[72284]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:48:17 np0005601226 lvm[72287]: PV /dev/loop3 not used.
Jan 29 11:48:17 np0005601226 lvm[72296]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 11:48:17 np0005601226 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Jan 29 11:48:17 np0005601226 lvm[72298]:  1 logical volume(s) in volume group "ceph_vg0" now active
Jan 29 11:48:17 np0005601226 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Jan 29 11:48:17 np0005601226 python3[72376]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 29 11:48:18 np0005601226 python3[72449]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769705297.6284084-36174-12149530577175/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:48:18 np0005601226 python3[72499]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 11:48:18 np0005601226 systemd[1]: Reloading.
Jan 29 11:48:19 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 11:48:19 np0005601226 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 11:48:19 np0005601226 systemd[1]: Starting Ceph OSD losetup...
Jan 29 11:48:19 np0005601226 bash[72540]: /dev/loop3: [64513]:4329572 (/var/lib/ceph-osd-0.img)
Jan 29 11:48:19 np0005601226 systemd[1]: Finished Ceph OSD losetup.
Jan 29 11:48:19 np0005601226 lvm[72541]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 11:48:19 np0005601226 lvm[72541]: VG ceph_vg0 finished
Jan 29 11:48:19 np0005601226 python3[72567]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 29 11:48:21 np0005601226 python3[72594]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 29 11:48:21 np0005601226 python3[72620]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=20G#012losetup /dev/loop4 /var/lib/ceph-osd-1.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:48:21 np0005601226 kernel: loop4: detected capacity change from 0 to 41943040
Jan 29 11:48:22 np0005601226 python3[72652]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4#012vgcreate ceph_vg1 /dev/loop4#012lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:48:22 np0005601226 lvm[72655]: PV /dev/loop4 not used.
Jan 29 11:48:22 np0005601226 lvm[72657]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 11:48:22 np0005601226 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Jan 29 11:48:22 np0005601226 lvm[72661]:  1 logical volume(s) in volume group "ceph_vg1" now active
Jan 29 11:48:22 np0005601226 lvm[72667]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 11:48:22 np0005601226 lvm[72667]: VG ceph_vg1 finished
Jan 29 11:48:22 np0005601226 systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Jan 29 11:48:23 np0005601226 python3[72745]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 29 11:48:23 np0005601226 python3[72818]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769705303.030498-36201-32636349084389/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:48:24 np0005601226 python3[72868]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 11:48:24 np0005601226 systemd[1]: Reloading.
Jan 29 11:48:24 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 11:48:24 np0005601226 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 11:48:24 np0005601226 systemd[1]: Starting Ceph OSD losetup...
Jan 29 11:48:24 np0005601226 bash[72909]: /dev/loop4: [64513]:4642312 (/var/lib/ceph-osd-1.img)
Jan 29 11:48:24 np0005601226 systemd[1]: Finished Ceph OSD losetup.
Jan 29 11:48:24 np0005601226 lvm[72911]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 11:48:24 np0005601226 lvm[72911]: VG ceph_vg1 finished
Jan 29 11:48:25 np0005601226 python3[72937]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 29 11:48:26 np0005601226 python3[72964]: ansible-ansible.builtin.stat Invoked with path=/dev/loop5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 29 11:48:27 np0005601226 python3[72990]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-2.img bs=1 count=0 seek=20G#012losetup /dev/loop5 /var/lib/ceph-osd-2.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:48:27 np0005601226 kernel: loop5: detected capacity change from 0 to 41943040
Jan 29 11:48:27 np0005601226 python3[73021]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop5#012vgcreate ceph_vg2 /dev/loop5#012lvcreate -n ceph_lv2 -l +100%FREE ceph_vg2#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:48:27 np0005601226 lvm[73025]: PV /dev/loop5 not used.
Jan 29 11:48:27 np0005601226 lvm[73027]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 11:48:27 np0005601226 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg2.
Jan 29 11:48:27 np0005601226 lvm[73030]:  1 logical volume(s) in volume group "ceph_vg2" now active
Jan 29 11:48:27 np0005601226 lvm[73039]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 11:48:27 np0005601226 lvm[73039]: VG ceph_vg2 finished
Jan 29 11:48:27 np0005601226 systemd[1]: lvm-activate-ceph_vg2.service: Deactivated successfully.
Jan 29 11:48:28 np0005601226 python3[73117]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-2.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 29 11:48:28 np0005601226 python3[73190]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769705307.9666185-36228-255899945155498/source dest=/etc/systemd/system/ceph-osd-losetup-2.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=4c5b1bc5693c499ffe2edaa97d63f5df7075d845 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:48:29 np0005601226 python3[73240]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-2.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 11:48:29 np0005601226 systemd[1]: Reloading.
Jan 29 11:48:29 np0005601226 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 11:48:29 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 11:48:29 np0005601226 systemd[1]: Starting Ceph OSD losetup...
Jan 29 11:48:29 np0005601226 bash[73280]: /dev/loop5: [64513]:4660813 (/var/lib/ceph-osd-2.img)
Jan 29 11:48:29 np0005601226 systemd[1]: Finished Ceph OSD losetup.
Jan 29 11:48:29 np0005601226 lvm[73281]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 11:48:29 np0005601226 lvm[73281]: VG ceph_vg2 finished
Jan 29 11:48:31 np0005601226 python3[73305]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 11:48:33 np0005601226 python3[73398]: ansible-ansible.legacy.dnf Invoked with name=['centos-release-ceph-tentacle'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 29 11:48:36 np0005601226 python3[73457]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 29 11:48:41 np0005601226 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 29 11:48:41 np0005601226 systemd[1]: Starting man-db-cache-update.service...
Jan 29 11:48:42 np0005601226 python3[73575]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 29 11:48:42 np0005601226 python3[73603]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:48:42 np0005601226 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 29 11:48:42 np0005601226 systemd[1]: Finished man-db-cache-update.service.
Jan 29 11:48:42 np0005601226 systemd[1]: run-r44a9ce75564448afafb32bac7b718e3e.service: Deactivated successfully.
Jan 29 11:48:42 np0005601226 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 29 11:48:43 np0005601226 python3[73643]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:48:43 np0005601226 python3[73669]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:48:44 np0005601226 python3[73747]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 29 11:48:44 np0005601226 python3[73820]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769705324.3023133-36376-78978518734323/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=bb83c53af4ffd926a3f1eafe26a8be437df6401f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:48:45 np0005601226 python3[73922]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 29 11:48:46 np0005601226 python3[73995]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769705325.39813-36394-189714532823637/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:48:46 np0005601226 python3[74045]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 29 11:48:46 np0005601226 python3[74073]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 29 11:48:47 np0005601226 python3[74101]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 29 11:48:47 np0005601226 python3[74127]: ansible-ansible.builtin.stat Invoked with path=/tmp/cephadm_registry.json follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 29 11:48:48 np0005601226 python3[74153]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid cc5c72e3-31e0-58b9-8731-456117d38f4a --config /home/ceph-admin/assimilate_ceph.conf \--single-host-defaults \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:48:48 np0005601226 systemd-logind[823]: New session 19 of user ceph-admin.
Jan 29 11:48:48 np0005601226 systemd[1]: Created slice User Slice of UID 42477.
Jan 29 11:48:48 np0005601226 systemd[1]: Starting User Runtime Directory /run/user/42477...
Jan 29 11:48:48 np0005601226 systemd[1]: Finished User Runtime Directory /run/user/42477.
Jan 29 11:48:48 np0005601226 systemd[1]: Starting User Manager for UID 42477...
Jan 29 11:48:48 np0005601226 systemd[74161]: Queued start job for default target Main User Target.
Jan 29 11:48:48 np0005601226 systemd[74161]: Created slice User Application Slice.
Jan 29 11:48:48 np0005601226 systemd[74161]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 29 11:48:48 np0005601226 systemd[74161]: Started Daily Cleanup of User's Temporary Directories.
Jan 29 11:48:48 np0005601226 systemd[74161]: Reached target Paths.
Jan 29 11:48:48 np0005601226 systemd[74161]: Reached target Timers.
Jan 29 11:48:48 np0005601226 systemd[74161]: Starting D-Bus User Message Bus Socket...
Jan 29 11:48:48 np0005601226 systemd[74161]: Starting Create User's Volatile Files and Directories...
Jan 29 11:48:48 np0005601226 systemd[74161]: Listening on D-Bus User Message Bus Socket.
Jan 29 11:48:48 np0005601226 systemd[74161]: Reached target Sockets.
Jan 29 11:48:48 np0005601226 systemd[74161]: Finished Create User's Volatile Files and Directories.
Jan 29 11:48:48 np0005601226 systemd[74161]: Reached target Basic System.
Jan 29 11:48:48 np0005601226 systemd[74161]: Reached target Main User Target.
Jan 29 11:48:48 np0005601226 systemd[74161]: Startup finished in 150ms.
Jan 29 11:48:48 np0005601226 systemd[1]: Started User Manager for UID 42477.
Jan 29 11:48:48 np0005601226 systemd[1]: Started Session 19 of User ceph-admin.
Jan 29 11:48:48 np0005601226 systemd-logind[823]: Session 19 logged out. Waiting for processes to exit.
Jan 29 11:48:48 np0005601226 systemd[1]: session-19.scope: Deactivated successfully.
Jan 29 11:48:48 np0005601226 systemd-logind[823]: Removed session 19.
Jan 29 11:48:48 np0005601226 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 29 11:48:48 np0005601226 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 29 11:48:51 np0005601226 systemd[1]: var-lib-containers-storage-overlay-compat4230633707-lower\x2dmapped.mount: Deactivated successfully.
Jan 29 11:48:59 np0005601226 systemd[1]: Stopping User Manager for UID 42477...
Jan 29 11:48:59 np0005601226 systemd[74161]: Activating special unit Exit the Session...
Jan 29 11:48:59 np0005601226 systemd[74161]: Stopped target Main User Target.
Jan 29 11:48:59 np0005601226 systemd[74161]: Stopped target Basic System.
Jan 29 11:48:59 np0005601226 systemd[74161]: Stopped target Paths.
Jan 29 11:48:59 np0005601226 systemd[74161]: Stopped target Sockets.
Jan 29 11:48:59 np0005601226 systemd[74161]: Stopped target Timers.
Jan 29 11:48:59 np0005601226 systemd[74161]: Stopped Mark boot as successful after the user session has run 2 minutes.
Jan 29 11:48:59 np0005601226 systemd[74161]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 29 11:48:59 np0005601226 systemd[74161]: Closed D-Bus User Message Bus Socket.
Jan 29 11:48:59 np0005601226 systemd[74161]: Stopped Create User's Volatile Files and Directories.
Jan 29 11:48:59 np0005601226 systemd[74161]: Removed slice User Application Slice.
Jan 29 11:48:59 np0005601226 systemd[74161]: Reached target Shutdown.
Jan 29 11:48:59 np0005601226 systemd[74161]: Finished Exit the Session.
Jan 29 11:48:59 np0005601226 systemd[74161]: Reached target Exit the Session.
Jan 29 11:48:59 np0005601226 systemd[1]: user@42477.service: Deactivated successfully.
Jan 29 11:48:59 np0005601226 systemd[1]: Stopped User Manager for UID 42477.
Jan 29 11:48:59 np0005601226 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Jan 29 11:48:59 np0005601226 systemd[1]: run-user-42477.mount: Deactivated successfully.
Jan 29 11:48:59 np0005601226 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Jan 29 11:48:59 np0005601226 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Jan 29 11:48:59 np0005601226 systemd[1]: Removed slice User Slice of UID 42477.
Jan 29 11:49:07 np0005601226 podman[74256]: 2026-01-29 16:49:07.431862331 +0000 UTC m=+18.372740178 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:49:07 np0005601226 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 29 11:49:07 np0005601226 podman[74335]: 2026-01-29 16:49:07.522293034 +0000 UTC m=+0.062172819 container create c6ea271e0a4571fdb097909352419fa083a5560b57c76c304dffc74d3c70346c (image=quay.io/ceph/ceph:v20, name=cranky_jemison, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 29 11:49:07 np0005601226 systemd[1]: Created slice Virtual Machine and Container Slice.
Jan 29 11:49:07 np0005601226 systemd[1]: Started libpod-conmon-c6ea271e0a4571fdb097909352419fa083a5560b57c76c304dffc74d3c70346c.scope.
Jan 29 11:49:07 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:49:07 np0005601226 podman[74335]: 2026-01-29 16:49:07.496488476 +0000 UTC m=+0.036368341 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:49:07 np0005601226 podman[74335]: 2026-01-29 16:49:07.633765777 +0000 UTC m=+0.173645592 container init c6ea271e0a4571fdb097909352419fa083a5560b57c76c304dffc74d3c70346c (image=quay.io/ceph/ceph:v20, name=cranky_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 29 11:49:07 np0005601226 podman[74335]: 2026-01-29 16:49:07.644328358 +0000 UTC m=+0.184208183 container start c6ea271e0a4571fdb097909352419fa083a5560b57c76c304dffc74d3c70346c (image=quay.io/ceph/ceph:v20, name=cranky_jemison, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 11:49:07 np0005601226 podman[74335]: 2026-01-29 16:49:07.675819039 +0000 UTC m=+0.215698824 container attach c6ea271e0a4571fdb097909352419fa083a5560b57c76c304dffc74d3c70346c (image=quay.io/ceph/ceph:v20, name=cranky_jemison, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 29 11:49:07 np0005601226 cranky_jemison[74351]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable)
Jan 29 11:49:07 np0005601226 systemd[1]: libpod-c6ea271e0a4571fdb097909352419fa083a5560b57c76c304dffc74d3c70346c.scope: Deactivated successfully.
Jan 29 11:49:07 np0005601226 podman[74335]: 2026-01-29 16:49:07.742272781 +0000 UTC m=+0.282152596 container died c6ea271e0a4571fdb097909352419fa083a5560b57c76c304dffc74d3c70346c (image=quay.io/ceph/ceph:v20, name=cranky_jemison, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 29 11:49:07 np0005601226 systemd[1]: var-lib-containers-storage-overlay-16549c211e39e3fc98deac12c57e4f08423fec51d5179ea60b2e9f00e88e606e-merged.mount: Deactivated successfully.
Jan 29 11:49:07 np0005601226 podman[74335]: 2026-01-29 16:49:07.859278442 +0000 UTC m=+0.399158257 container remove c6ea271e0a4571fdb097909352419fa083a5560b57c76c304dffc74d3c70346c (image=quay.io/ceph/ceph:v20, name=cranky_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 11:49:07 np0005601226 systemd[1]: libpod-conmon-c6ea271e0a4571fdb097909352419fa083a5560b57c76c304dffc74d3c70346c.scope: Deactivated successfully.
Jan 29 11:49:07 np0005601226 podman[74370]: 2026-01-29 16:49:07.919300833 +0000 UTC m=+0.043815920 container create 1055fa897801d0b73eea5501a9c6d0e3522099c43ba0be3c9c8e8235982e4444 (image=quay.io/ceph/ceph:v20, name=confident_kilby, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 29 11:49:07 np0005601226 systemd[1]: Started libpod-conmon-1055fa897801d0b73eea5501a9c6d0e3522099c43ba0be3c9c8e8235982e4444.scope.
Jan 29 11:49:07 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:49:07 np0005601226 podman[74370]: 2026-01-29 16:49:07.989253269 +0000 UTC m=+0.113768376 container init 1055fa897801d0b73eea5501a9c6d0e3522099c43ba0be3c9c8e8235982e4444 (image=quay.io/ceph/ceph:v20, name=confident_kilby, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 29 11:49:07 np0005601226 podman[74370]: 2026-01-29 16:49:07.993641436 +0000 UTC m=+0.118156513 container start 1055fa897801d0b73eea5501a9c6d0e3522099c43ba0be3c9c8e8235982e4444 (image=quay.io/ceph/ceph:v20, name=confident_kilby, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 29 11:49:07 np0005601226 podman[74370]: 2026-01-29 16:49:07.897592183 +0000 UTC m=+0.022107280 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:49:07 np0005601226 confident_kilby[74387]: 167 167
Jan 29 11:49:07 np0005601226 systemd[1]: libpod-1055fa897801d0b73eea5501a9c6d0e3522099c43ba0be3c9c8e8235982e4444.scope: Deactivated successfully.
Jan 29 11:49:07 np0005601226 conmon[74387]: conmon 1055fa897801d0b73eea <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1055fa897801d0b73eea5501a9c6d0e3522099c43ba0be3c9c8e8235982e4444.scope/container/memory.events
Jan 29 11:49:08 np0005601226 podman[74370]: 2026-01-29 16:49:08.006527519 +0000 UTC m=+0.131042616 container attach 1055fa897801d0b73eea5501a9c6d0e3522099c43ba0be3c9c8e8235982e4444 (image=quay.io/ceph/ceph:v20, name=confident_kilby, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 29 11:49:08 np0005601226 podman[74370]: 2026-01-29 16:49:08.006978781 +0000 UTC m=+0.131493858 container died 1055fa897801d0b73eea5501a9c6d0e3522099c43ba0be3c9c8e8235982e4444 (image=quay.io/ceph/ceph:v20, name=confident_kilby, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 29 11:49:08 np0005601226 podman[74370]: 2026-01-29 16:49:08.148574157 +0000 UTC m=+0.273089244 container remove 1055fa897801d0b73eea5501a9c6d0e3522099c43ba0be3c9c8e8235982e4444 (image=quay.io/ceph/ceph:v20, name=confident_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 29 11:49:08 np0005601226 systemd[1]: libpod-conmon-1055fa897801d0b73eea5501a9c6d0e3522099c43ba0be3c9c8e8235982e4444.scope: Deactivated successfully.
Jan 29 11:49:08 np0005601226 podman[74405]: 2026-01-29 16:49:08.234823148 +0000 UTC m=+0.064872291 container create 764e4be2ebe000319a1e6be431d8a5dd815fbcda3c7f978857167de49da4eef0 (image=quay.io/ceph/ceph:v20, name=thirsty_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 11:49:08 np0005601226 systemd[1]: Started libpod-conmon-764e4be2ebe000319a1e6be431d8a5dd815fbcda3c7f978857167de49da4eef0.scope.
Jan 29 11:49:08 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:49:08 np0005601226 podman[74405]: 2026-01-29 16:49:08.195588432 +0000 UTC m=+0.025637605 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:49:08 np0005601226 podman[74405]: 2026-01-29 16:49:08.298487006 +0000 UTC m=+0.128536179 container init 764e4be2ebe000319a1e6be431d8a5dd815fbcda3c7f978857167de49da4eef0 (image=quay.io/ceph/ceph:v20, name=thirsty_allen, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 29 11:49:08 np0005601226 podman[74405]: 2026-01-29 16:49:08.30240129 +0000 UTC m=+0.132450433 container start 764e4be2ebe000319a1e6be431d8a5dd815fbcda3c7f978857167de49da4eef0 (image=quay.io/ceph/ceph:v20, name=thirsty_allen, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 29 11:49:08 np0005601226 podman[74405]: 2026-01-29 16:49:08.309167441 +0000 UTC m=+0.139216584 container attach 764e4be2ebe000319a1e6be431d8a5dd815fbcda3c7f978857167de49da4eef0 (image=quay.io/ceph/ceph:v20, name=thirsty_allen, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 11:49:08 np0005601226 thirsty_allen[74421]: AQCEj3tpwTECFBAA41QqWXF/qiKYGTwkaa0MWw==
Jan 29 11:49:08 np0005601226 systemd[1]: libpod-764e4be2ebe000319a1e6be431d8a5dd815fbcda3c7f978857167de49da4eef0.scope: Deactivated successfully.
Jan 29 11:49:08 np0005601226 podman[74405]: 2026-01-29 16:49:08.338397081 +0000 UTC m=+0.168446224 container died 764e4be2ebe000319a1e6be431d8a5dd815fbcda3c7f978857167de49da4eef0 (image=quay.io/ceph/ceph:v20, name=thirsty_allen, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 29 11:49:08 np0005601226 podman[74405]: 2026-01-29 16:49:08.393083819 +0000 UTC m=+0.223132962 container remove 764e4be2ebe000319a1e6be431d8a5dd815fbcda3c7f978857167de49da4eef0 (image=quay.io/ceph/ceph:v20, name=thirsty_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 11:49:08 np0005601226 systemd[1]: libpod-conmon-764e4be2ebe000319a1e6be431d8a5dd815fbcda3c7f978857167de49da4eef0.scope: Deactivated successfully.
Jan 29 11:49:08 np0005601226 podman[74442]: 2026-01-29 16:49:08.452842834 +0000 UTC m=+0.042931337 container create f9fd4a9760f4c5df4bf565163053fee8977e714f9b88ba4ef7b46f6fd4b836a7 (image=quay.io/ceph/ceph:v20, name=stupefied_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 11:49:08 np0005601226 systemd[1]: Started libpod-conmon-f9fd4a9760f4c5df4bf565163053fee8977e714f9b88ba4ef7b46f6fd4b836a7.scope.
Jan 29 11:49:08 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:49:08 np0005601226 podman[74442]: 2026-01-29 16:49:08.510590004 +0000 UTC m=+0.100678527 container init f9fd4a9760f4c5df4bf565163053fee8977e714f9b88ba4ef7b46f6fd4b836a7 (image=quay.io/ceph/ceph:v20, name=stupefied_easley, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 29 11:49:08 np0005601226 podman[74442]: 2026-01-29 16:49:08.515269748 +0000 UTC m=+0.105358251 container start f9fd4a9760f4c5df4bf565163053fee8977e714f9b88ba4ef7b46f6fd4b836a7 (image=quay.io/ceph/ceph:v20, name=stupefied_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 29 11:49:08 np0005601226 podman[74442]: 2026-01-29 16:49:08.430445336 +0000 UTC m=+0.020533849 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:49:08 np0005601226 podman[74442]: 2026-01-29 16:49:08.528009718 +0000 UTC m=+0.118098271 container attach f9fd4a9760f4c5df4bf565163053fee8977e714f9b88ba4ef7b46f6fd4b836a7 (image=quay.io/ceph/ceph:v20, name=stupefied_easley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 29 11:49:08 np0005601226 stupefied_easley[74458]: AQCEj3tpC9yVIBAAOdLcwh3pmqVtSRZ+5GM3fA==
Jan 29 11:49:08 np0005601226 systemd[1]: libpod-f9fd4a9760f4c5df4bf565163053fee8977e714f9b88ba4ef7b46f6fd4b836a7.scope: Deactivated successfully.
Jan 29 11:49:08 np0005601226 podman[74442]: 2026-01-29 16:49:08.550129898 +0000 UTC m=+0.140218411 container died f9fd4a9760f4c5df4bf565163053fee8977e714f9b88ba4ef7b46f6fd4b836a7 (image=quay.io/ceph/ceph:v20, name=stupefied_easley, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 29 11:49:08 np0005601226 systemd[1]: var-lib-containers-storage-overlay-e6b1b729f4a85899c6471ea0da54ecf59d3f0c38ee55134c00ac986d4f3c0eed-merged.mount: Deactivated successfully.
Jan 29 11:49:08 np0005601226 podman[74442]: 2026-01-29 16:49:08.655883119 +0000 UTC m=+0.245971632 container remove f9fd4a9760f4c5df4bf565163053fee8977e714f9b88ba4ef7b46f6fd4b836a7 (image=quay.io/ceph/ceph:v20, name=stupefied_easley, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:49:08 np0005601226 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 29 11:49:08 np0005601226 systemd[1]: libpod-conmon-f9fd4a9760f4c5df4bf565163053fee8977e714f9b88ba4ef7b46f6fd4b836a7.scope: Deactivated successfully.
Jan 29 11:49:08 np0005601226 podman[74478]: 2026-01-29 16:49:08.737870565 +0000 UTC m=+0.061230593 container create de0cadf164523a8f89bcda5da2f01da39bdadac30932d085cdbd6aa6f1e5b4d7 (image=quay.io/ceph/ceph:v20, name=strange_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 11:49:08 np0005601226 podman[74478]: 2026-01-29 16:49:08.70132141 +0000 UTC m=+0.024681488 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:49:08 np0005601226 systemd[1]: Started libpod-conmon-de0cadf164523a8f89bcda5da2f01da39bdadac30932d085cdbd6aa6f1e5b4d7.scope.
Jan 29 11:49:08 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:49:08 np0005601226 podman[74478]: 2026-01-29 16:49:08.858126413 +0000 UTC m=+0.181486431 container init de0cadf164523a8f89bcda5da2f01da39bdadac30932d085cdbd6aa6f1e5b4d7 (image=quay.io/ceph/ceph:v20, name=strange_cartwright, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 29 11:49:08 np0005601226 podman[74478]: 2026-01-29 16:49:08.864893653 +0000 UTC m=+0.188253691 container start de0cadf164523a8f89bcda5da2f01da39bdadac30932d085cdbd6aa6f1e5b4d7 (image=quay.io/ceph/ceph:v20, name=strange_cartwright, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 29 11:49:08 np0005601226 podman[74478]: 2026-01-29 16:49:08.88988779 +0000 UTC m=+0.213247898 container attach de0cadf164523a8f89bcda5da2f01da39bdadac30932d085cdbd6aa6f1e5b4d7 (image=quay.io/ceph/ceph:v20, name=strange_cartwright, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 29 11:49:08 np0005601226 strange_cartwright[74494]: AQCEj3tpzEyvNRAAUWgPRsF6V/gb3BpVk9QN3g==
Jan 29 11:49:08 np0005601226 systemd[1]: libpod-de0cadf164523a8f89bcda5da2f01da39bdadac30932d085cdbd6aa6f1e5b4d7.scope: Deactivated successfully.
Jan 29 11:49:08 np0005601226 podman[74478]: 2026-01-29 16:49:08.908456535 +0000 UTC m=+0.231816543 container died de0cadf164523a8f89bcda5da2f01da39bdadac30932d085cdbd6aa6f1e5b4d7 (image=quay.io/ceph/ceph:v20, name=strange_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 29 11:49:09 np0005601226 podman[74478]: 2026-01-29 16:49:09.039452009 +0000 UTC m=+0.362812007 container remove de0cadf164523a8f89bcda5da2f01da39bdadac30932d085cdbd6aa6f1e5b4d7 (image=quay.io/ceph/ceph:v20, name=strange_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 29 11:49:09 np0005601226 systemd[1]: libpod-conmon-de0cadf164523a8f89bcda5da2f01da39bdadac30932d085cdbd6aa6f1e5b4d7.scope: Deactivated successfully.
Jan 29 11:49:09 np0005601226 podman[74515]: 2026-01-29 16:49:09.108679126 +0000 UTC m=+0.053234931 container create 00e63cecd2126fbae34fd6f4ffe972df12494d03a1880d36d71c4971667f7cf3 (image=quay.io/ceph/ceph:v20, name=naughty_poincare, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 29 11:49:09 np0005601226 systemd[1]: Started libpod-conmon-00e63cecd2126fbae34fd6f4ffe972df12494d03a1880d36d71c4971667f7cf3.scope.
Jan 29 11:49:09 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:49:09 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6750136c6d7a1037bb68c886573832886d165817e14e63cb7619456e56880718/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:09 np0005601226 podman[74515]: 2026-01-29 16:49:09.075566852 +0000 UTC m=+0.020122697 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:49:09 np0005601226 podman[74515]: 2026-01-29 16:49:09.183985274 +0000 UTC m=+0.128541089 container init 00e63cecd2126fbae34fd6f4ffe972df12494d03a1880d36d71c4971667f7cf3 (image=quay.io/ceph/ceph:v20, name=naughty_poincare, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:49:09 np0005601226 podman[74515]: 2026-01-29 16:49:09.18981325 +0000 UTC m=+0.134369045 container start 00e63cecd2126fbae34fd6f4ffe972df12494d03a1880d36d71c4971667f7cf3 (image=quay.io/ceph/ceph:v20, name=naughty_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 29 11:49:09 np0005601226 podman[74515]: 2026-01-29 16:49:09.207945043 +0000 UTC m=+0.152500838 container attach 00e63cecd2126fbae34fd6f4ffe972df12494d03a1880d36d71c4971667f7cf3 (image=quay.io/ceph/ceph:v20, name=naughty_poincare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 29 11:49:09 np0005601226 naughty_poincare[74531]: /usr/bin/monmaptool: monmap file /tmp/monmap
Jan 29 11:49:09 np0005601226 naughty_poincare[74531]: setting min_mon_release = tentacle
Jan 29 11:49:09 np0005601226 naughty_poincare[74531]: /usr/bin/monmaptool: set fsid to cc5c72e3-31e0-58b9-8731-456117d38f4a
Jan 29 11:49:09 np0005601226 naughty_poincare[74531]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Jan 29 11:49:09 np0005601226 systemd[1]: libpod-00e63cecd2126fbae34fd6f4ffe972df12494d03a1880d36d71c4971667f7cf3.scope: Deactivated successfully.
Jan 29 11:49:09 np0005601226 podman[74515]: 2026-01-29 16:49:09.223080647 +0000 UTC m=+0.167636452 container died 00e63cecd2126fbae34fd6f4ffe972df12494d03a1880d36d71c4971667f7cf3 (image=quay.io/ceph/ceph:v20, name=naughty_poincare, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 11:49:09 np0005601226 podman[74515]: 2026-01-29 16:49:09.343841398 +0000 UTC m=+0.288397193 container remove 00e63cecd2126fbae34fd6f4ffe972df12494d03a1880d36d71c4971667f7cf3 (image=quay.io/ceph/ceph:v20, name=naughty_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 29 11:49:09 np0005601226 systemd[1]: libpod-conmon-00e63cecd2126fbae34fd6f4ffe972df12494d03a1880d36d71c4971667f7cf3.scope: Deactivated successfully.
Jan 29 11:49:09 np0005601226 podman[74551]: 2026-01-29 16:49:09.429458681 +0000 UTC m=+0.064050399 container create 22f04f1dfbbf055bf4950dd2cfd32f18c8926fb84e521c3536445b2b71da654e (image=quay.io/ceph/ceph:v20, name=nostalgic_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 29 11:49:09 np0005601226 systemd[1]: Started libpod-conmon-22f04f1dfbbf055bf4950dd2cfd32f18c8926fb84e521c3536445b2b71da654e.scope.
Jan 29 11:49:09 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:49:09 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d4d6b9ba0093c3cfb952bbcfc3faf522a57add0b24923c91463422391425a48/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:09 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d4d6b9ba0093c3cfb952bbcfc3faf522a57add0b24923c91463422391425a48/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:09 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d4d6b9ba0093c3cfb952bbcfc3faf522a57add0b24923c91463422391425a48/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:09 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d4d6b9ba0093c3cfb952bbcfc3faf522a57add0b24923c91463422391425a48/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:09 np0005601226 podman[74551]: 2026-01-29 16:49:09.394387476 +0000 UTC m=+0.028979244 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:49:09 np0005601226 podman[74551]: 2026-01-29 16:49:09.504465302 +0000 UTC m=+0.139057060 container init 22f04f1dfbbf055bf4950dd2cfd32f18c8926fb84e521c3536445b2b71da654e (image=quay.io/ceph/ceph:v20, name=nostalgic_edison, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 29 11:49:09 np0005601226 podman[74551]: 2026-01-29 16:49:09.509506006 +0000 UTC m=+0.144097734 container start 22f04f1dfbbf055bf4950dd2cfd32f18c8926fb84e521c3536445b2b71da654e (image=quay.io/ceph/ceph:v20, name=nostalgic_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 29 11:49:09 np0005601226 podman[74551]: 2026-01-29 16:49:09.521493256 +0000 UTC m=+0.156084974 container attach 22f04f1dfbbf055bf4950dd2cfd32f18c8926fb84e521c3536445b2b71da654e (image=quay.io/ceph/ceph:v20, name=nostalgic_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 29 11:49:09 np0005601226 systemd[1]: libpod-22f04f1dfbbf055bf4950dd2cfd32f18c8926fb84e521c3536445b2b71da654e.scope: Deactivated successfully.
Jan 29 11:49:09 np0005601226 podman[74551]: 2026-01-29 16:49:09.637166992 +0000 UTC m=+0.271758700 container died 22f04f1dfbbf055bf4950dd2cfd32f18c8926fb84e521c3536445b2b71da654e (image=quay.io/ceph/ceph:v20, name=nostalgic_edison, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:49:09 np0005601226 systemd[1]: var-lib-containers-storage-overlay-1d4d6b9ba0093c3cfb952bbcfc3faf522a57add0b24923c91463422391425a48-merged.mount: Deactivated successfully.
Jan 29 11:49:09 np0005601226 podman[74551]: 2026-01-29 16:49:09.691308305 +0000 UTC m=+0.325900033 container remove 22f04f1dfbbf055bf4950dd2cfd32f18c8926fb84e521c3536445b2b71da654e (image=quay.io/ceph/ceph:v20, name=nostalgic_edison, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030)
Jan 29 11:49:09 np0005601226 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 29 11:49:09 np0005601226 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 29 11:49:09 np0005601226 systemd[1]: libpod-conmon-22f04f1dfbbf055bf4950dd2cfd32f18c8926fb84e521c3536445b2b71da654e.scope: Deactivated successfully.
Jan 29 11:49:09 np0005601226 systemd[1]: Reloading.
Jan 29 11:49:09 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 11:49:09 np0005601226 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 11:49:10 np0005601226 systemd[1]: Reloading.
Jan 29 11:49:10 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 11:49:10 np0005601226 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 11:49:10 np0005601226 systemd[1]: Reached target All Ceph clusters and services.
Jan 29 11:49:10 np0005601226 systemd[1]: Reloading.
Jan 29 11:49:10 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 11:49:10 np0005601226 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 11:49:10 np0005601226 systemd[1]: Reached target Ceph cluster cc5c72e3-31e0-58b9-8731-456117d38f4a.
Jan 29 11:49:10 np0005601226 systemd[1]: Reloading.
Jan 29 11:49:10 np0005601226 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 11:49:10 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 11:49:10 np0005601226 systemd[1]: Reloading.
Jan 29 11:49:10 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 11:49:10 np0005601226 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 11:49:11 np0005601226 systemd[1]: Created slice Slice /system/ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a.
Jan 29 11:49:11 np0005601226 systemd[1]: Reached target System Time Set.
Jan 29 11:49:11 np0005601226 systemd[1]: Reached target System Time Synchronized.
Jan 29 11:49:11 np0005601226 systemd[1]: Starting Ceph mon.compute-0 for cc5c72e3-31e0-58b9-8731-456117d38f4a...
Jan 29 11:49:11 np0005601226 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 29 11:49:11 np0005601226 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 29 11:49:11 np0005601226 podman[74853]: 2026-01-29 16:49:11.31718537 +0000 UTC m=+0.072868544 container create b527039d5a17dd6f9d2c21434fd29da4030fefa91ddaa5e42b212a8d9eb79873 (image=quay.io/ceph/ceph:v20, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 29 11:49:11 np0005601226 podman[74853]: 2026-01-29 16:49:11.272396446 +0000 UTC m=+0.028079650 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:49:11 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5ca4109f9bdf5fdc4d1f1b2563052033998b5c5b76c98a6837a52ae62e3505c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:11 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5ca4109f9bdf5fdc4d1f1b2563052033998b5c5b76c98a6837a52ae62e3505c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:11 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5ca4109f9bdf5fdc4d1f1b2563052033998b5c5b76c98a6837a52ae62e3505c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:11 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5ca4109f9bdf5fdc4d1f1b2563052033998b5c5b76c98a6837a52ae62e3505c/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:11 np0005601226 podman[74853]: 2026-01-29 16:49:11.468570879 +0000 UTC m=+0.224254053 container init b527039d5a17dd6f9d2c21434fd29da4030fefa91ddaa5e42b212a8d9eb79873 (image=quay.io/ceph/ceph:v20, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-mon-compute-0, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:49:11 np0005601226 podman[74853]: 2026-01-29 16:49:11.477321882 +0000 UTC m=+0.233005046 container start b527039d5a17dd6f9d2c21434fd29da4030fefa91ddaa5e42b212a8d9eb79873 (image=quay.io/ceph/ceph:v20, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 29 11:49:11 np0005601226 bash[74853]: b527039d5a17dd6f9d2c21434fd29da4030fefa91ddaa5e42b212a8d9eb79873
Jan 29 11:49:11 np0005601226 systemd[1]: Started Ceph mon.compute-0 for cc5c72e3-31e0-58b9-8731-456117d38f4a.
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: set uid:gid to 167:167 (ceph:ceph)
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mon, pid 2
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: pidfile_write: ignore empty --pid-file
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: load: jerasure load: lrc 
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb: RocksDB version: 7.9.2
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb: Git sha 0
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb: DB SUMMARY
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb: DB Session ID:  YPN1GY9PQICNKQAZUTZS
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb: CURRENT file:  CURRENT
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb: IDENTITY file:  IDENTITY
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                         Options.error_if_exists: 0
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                       Options.create_if_missing: 0
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                         Options.paranoid_checks: 1
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                                     Options.env: 0x563f44e11440
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                                      Options.fs: PosixFileSystem
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                                Options.info_log: 0x563f474b93e0
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                Options.max_file_opening_threads: 16
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                              Options.statistics: (nil)
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                               Options.use_fsync: 0
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                       Options.max_log_file_size: 0
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                         Options.allow_fallocate: 1
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                        Options.use_direct_reads: 0
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:          Options.create_missing_column_families: 0
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                              Options.db_log_dir: 
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                                 Options.wal_dir: 
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                   Options.advise_random_on_open: 1
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                    Options.write_buffer_manager: 0x563f47438140
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                            Options.rate_limiter: (nil)
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                  Options.unordered_write: 0
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                               Options.row_cache: None
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                              Options.wal_filter: None
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:             Options.allow_ingest_behind: 0
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:             Options.two_write_queues: 0
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:             Options.manual_wal_flush: 0
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:             Options.wal_compression: 0
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:             Options.atomic_flush: 0
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                 Options.log_readahead_size: 0
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:             Options.allow_data_in_errors: 0
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:             Options.db_host_id: __hostname__
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:             Options.max_background_jobs: 2
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:             Options.max_background_compactions: -1
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:             Options.max_subcompactions: 1
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:             Options.max_total_wal_size: 0
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                          Options.max_open_files: -1
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                          Options.bytes_per_sync: 0
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:       Options.compaction_readahead_size: 0
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                  Options.max_background_flushes: -1
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb: Compression algorithms supported:
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb: 	kZSTD supported: 0
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb: 	kXpressCompression supported: 0
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb: 	kBZip2Compression supported: 0
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb: 	kLZ4Compression supported: 1
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb: 	kZlibCompression supported: 1
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb: 	kLZ4HCCompression supported: 1
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb: 	kSnappyCompression supported: 1
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:           Options.merge_operator: 
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:        Options.compaction_filter: None
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563f47444700)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x563f474298d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:        Options.write_buffer_size: 33554432
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:  Options.max_write_buffer_number: 2
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:          Options.compression: NoCompression
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:             Options.num_levels: 7
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                           Options.bloom_locality: 0
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                               Options.ttl: 2592000
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                       Options.enable_blob_files: false
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                           Options.min_blob_size: 0
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: affa2982-d59d-4189-b5dd-817a80fada55
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769705351534805, "job": 1, "event": "recovery_started", "wal_files": [4]}
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769705351555871, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769705351, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "affa2982-d59d-4189-b5dd-817a80fada55", "db_session_id": "YPN1GY9PQICNKQAZUTZS", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769705351556088, "job": 1, "event": "recovery_finished"}
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Jan 29 11:49:11 np0005601226 podman[74874]: 2026-01-29 16:49:11.588270481 +0000 UTC m=+0.074229261 container create f85232ccfc3ad3274040ea9ca62b656ff6a1c5692da67d52f18805102b5cd0d3 (image=quay.io/ceph/ceph:v20, name=trusting_lehmann, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x563f47456e00
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb: DB pointer 0x563f475a2000
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.021       0      0       0.0       0.0#012 Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.021       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.021       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.021       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x563f474298d0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 7.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid cc5c72e3-31e0-58b9-8731-456117d38f4a
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: mon.compute-0@-1(???) e0 preinit fsid cc5c72e3-31e0-58b9-8731-456117d38f4a
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: mon.compute-0@0(probing) e0 win_standalone_election
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Jan 29 11:49:11 np0005601226 podman[74874]: 2026-01-29 16:49:11.541839773 +0000 UTC m=+0.027798553 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: mon.compute-0@0(probing) e1 win_standalone_election
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: paxos.0).electionLogic(2) init, last seen epoch 2
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: log_channel(cluster) log [DBG] : monmap epoch 1
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: log_channel(cluster) log [DBG] : fsid cc5c72e3-31e0-58b9-8731-456117d38f4a
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: log_channel(cluster) log [DBG] : last_changed 2026-01-29T16:49:09.219895+0000
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: log_channel(cluster) log [DBG] : created 2026-01-29T16:49:09.219895+0000
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: log_channel(cluster) log [DBG] : min_mon_release 20 (tentacle)
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: log_channel(cluster) log [DBG] : election_strategy: 1
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=tentacle,ceph_version=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),ceph_version_short=20.2.0,ceph_version_when_created=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v20,cpu=AMD EPYC-Rome Processor,created_at=2026-01-29T16:49:09.550487Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026,kernel_version=5.14.0-665.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864300,os=Linux}
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout,17=tentacle ondisk layout}
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: mon.compute-0@0(leader).mds e1 new map
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: mon.compute-0@0(leader).mds e1 print_map#012e1#012btime 2026-01-29T16:49:11:643666+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: -1#012 #012No filesystems configured
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: log_channel(cluster) log [DBG] : fsmap 
Jan 29 11:49:11 np0005601226 systemd[1]: Started libpod-conmon-f85232ccfc3ad3274040ea9ca62b656ff6a1c5692da67d52f18805102b5cd0d3.scope.
Jan 29 11:49:11 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:49:11 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e74a6f2ae37d881f99fc8d6b33fa2539e0670a5f8020974ad74ab99d24342382/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:11 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e74a6f2ae37d881f99fc8d6b33fa2539e0670a5f8020974ad74ab99d24342382/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:11 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e74a6f2ae37d881f99fc8d6b33fa2539e0670a5f8020974ad74ab99d24342382/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Jan 29 11:49:11 np0005601226 podman[74874]: 2026-01-29 16:49:11.726325404 +0000 UTC m=+0.212284184 container init f85232ccfc3ad3274040ea9ca62b656ff6a1c5692da67d52f18805102b5cd0d3 (image=quay.io/ceph/ceph:v20, name=trusting_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: mkfs cc5c72e3-31e0-58b9-8731-456117d38f4a
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Jan 29 11:49:11 np0005601226 podman[74874]: 2026-01-29 16:49:11.733481854 +0000 UTC m=+0.219440594 container start f85232ccfc3ad3274040ea9ca62b656ff6a1c5692da67d52f18805102b5cd0d3 (image=quay.io/ceph/ceph:v20, name=trusting_lehmann, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Jan 29 11:49:11 np0005601226 podman[74874]: 2026-01-29 16:49:11.749667265 +0000 UTC m=+0.235626095 container attach f85232ccfc3ad3274040ea9ca62b656ff6a1c5692da67d52f18805102b5cd0d3 (image=quay.io/ceph/ceph:v20, name=trusting_lehmann, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 29 11:49:11 np0005601226 ceph-mon[74873]: mon.compute-0@0(leader) e1 handle_auth_request failed to assign global_id
Jan 29 11:49:12 np0005601226 ceph-mon[74873]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Jan 29 11:49:12 np0005601226 ceph-mon[74873]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1960201917' entity='client.admin' cmd={"prefix": "status"} : dispatch
Jan 29 11:49:12 np0005601226 trusting_lehmann[74928]:  cluster:
Jan 29 11:49:12 np0005601226 trusting_lehmann[74928]:    id:     cc5c72e3-31e0-58b9-8731-456117d38f4a
Jan 29 11:49:12 np0005601226 trusting_lehmann[74928]:    health: HEALTH_OK
Jan 29 11:49:12 np0005601226 trusting_lehmann[74928]: 
Jan 29 11:49:12 np0005601226 trusting_lehmann[74928]:  services:
Jan 29 11:49:12 np0005601226 trusting_lehmann[74928]:    mon: 1 daemons, quorum compute-0 (age 0.484184s) [leader: compute-0]
Jan 29 11:49:12 np0005601226 trusting_lehmann[74928]:    mgr: no daemons active
Jan 29 11:49:12 np0005601226 trusting_lehmann[74928]:    osd: 0 osds: 0 up, 0 in
Jan 29 11:49:12 np0005601226 trusting_lehmann[74928]: 
Jan 29 11:49:12 np0005601226 trusting_lehmann[74928]:  data:
Jan 29 11:49:12 np0005601226 trusting_lehmann[74928]:    pools:   0 pools, 0 pgs
Jan 29 11:49:12 np0005601226 trusting_lehmann[74928]:    objects: 0 objects, 0 B
Jan 29 11:49:12 np0005601226 trusting_lehmann[74928]:    usage:   0 B used, 0 B / 0 B avail
Jan 29 11:49:12 np0005601226 trusting_lehmann[74928]:    pgs:     
Jan 29 11:49:12 np0005601226 trusting_lehmann[74928]: 
Jan 29 11:49:12 np0005601226 systemd[1]: libpod-f85232ccfc3ad3274040ea9ca62b656ff6a1c5692da67d52f18805102b5cd0d3.scope: Deactivated successfully.
Jan 29 11:49:12 np0005601226 podman[74874]: 2026-01-29 16:49:12.142428862 +0000 UTC m=+0.628387622 container died f85232ccfc3ad3274040ea9ca62b656ff6a1c5692da67d52f18805102b5cd0d3 (image=quay.io/ceph/ceph:v20, name=trusting_lehmann, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 29 11:49:12 np0005601226 systemd[1]: var-lib-containers-storage-overlay-e74a6f2ae37d881f99fc8d6b33fa2539e0670a5f8020974ad74ab99d24342382-merged.mount: Deactivated successfully.
Jan 29 11:49:12 np0005601226 podman[74874]: 2026-01-29 16:49:12.241083783 +0000 UTC m=+0.727042523 container remove f85232ccfc3ad3274040ea9ca62b656ff6a1c5692da67d52f18805102b5cd0d3 (image=quay.io/ceph/ceph:v20, name=trusting_lehmann, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 11:49:12 np0005601226 systemd[1]: libpod-conmon-f85232ccfc3ad3274040ea9ca62b656ff6a1c5692da67d52f18805102b5cd0d3.scope: Deactivated successfully.
Jan 29 11:49:12 np0005601226 podman[74967]: 2026-01-29 16:49:12.375755694 +0000 UTC m=+0.114294719 container create 629786293fdffbb69b3bf93b9ef1c4f9c06e5edeb55d9e956b386027d10fae3c (image=quay.io/ceph/ceph:v20, name=vigilant_montalcini, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 11:49:12 np0005601226 podman[74967]: 2026-01-29 16:49:12.283662999 +0000 UTC m=+0.022202024 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:49:12 np0005601226 systemd[1]: Started libpod-conmon-629786293fdffbb69b3bf93b9ef1c4f9c06e5edeb55d9e956b386027d10fae3c.scope.
Jan 29 11:49:12 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:49:12 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78351e2fa1840a10be93e128362d1d0678c21716e991f00424021414eb13b10f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:12 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78351e2fa1840a10be93e128362d1d0678c21716e991f00424021414eb13b10f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:12 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78351e2fa1840a10be93e128362d1d0678c21716e991f00424021414eb13b10f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:12 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78351e2fa1840a10be93e128362d1d0678c21716e991f00424021414eb13b10f/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:12 np0005601226 podman[74967]: 2026-01-29 16:49:12.537585681 +0000 UTC m=+0.276124806 container init 629786293fdffbb69b3bf93b9ef1c4f9c06e5edeb55d9e956b386027d10fae3c (image=quay.io/ceph/ceph:v20, name=vigilant_montalcini, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 29 11:49:12 np0005601226 podman[74967]: 2026-01-29 16:49:12.54538646 +0000 UTC m=+0.283925525 container start 629786293fdffbb69b3bf93b9ef1c4f9c06e5edeb55d9e956b386027d10fae3c (image=quay.io/ceph/ceph:v20, name=vigilant_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 11:49:12 np0005601226 podman[74967]: 2026-01-29 16:49:12.549680534 +0000 UTC m=+0.288219599 container attach 629786293fdffbb69b3bf93b9ef1c4f9c06e5edeb55d9e956b386027d10fae3c (image=quay.io/ceph/ceph:v20, name=vigilant_montalcini, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:49:12 np0005601226 ceph-mon[74873]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Jan 29 11:49:12 np0005601226 ceph-mon[74873]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/169733517' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 29 11:49:12 np0005601226 ceph-mon[74873]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/169733517' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 29 11:49:12 np0005601226 vigilant_montalcini[74983]: 
Jan 29 11:49:12 np0005601226 vigilant_montalcini[74983]: [global]
Jan 29 11:49:12 np0005601226 vigilant_montalcini[74983]: #011fsid = cc5c72e3-31e0-58b9-8731-456117d38f4a
Jan 29 11:49:12 np0005601226 vigilant_montalcini[74983]: #011mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Jan 29 11:49:12 np0005601226 vigilant_montalcini[74983]: #011osd_crush_chooseleaf_type = 0
Jan 29 11:49:12 np0005601226 ceph-mon[74873]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 29 11:49:12 np0005601226 ceph-mon[74873]: from='client.? 192.168.122.100:0/169733517' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 29 11:49:12 np0005601226 systemd[1]: libpod-629786293fdffbb69b3bf93b9ef1c4f9c06e5edeb55d9e956b386027d10fae3c.scope: Deactivated successfully.
Jan 29 11:49:12 np0005601226 podman[74967]: 2026-01-29 16:49:12.868185849 +0000 UTC m=+0.606724914 container died 629786293fdffbb69b3bf93b9ef1c4f9c06e5edeb55d9e956b386027d10fae3c (image=quay.io/ceph/ceph:v20, name=vigilant_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 29 11:49:12 np0005601226 systemd[1]: var-lib-containers-storage-overlay-78351e2fa1840a10be93e128362d1d0678c21716e991f00424021414eb13b10f-merged.mount: Deactivated successfully.
Jan 29 11:49:12 np0005601226 podman[74967]: 2026-01-29 16:49:12.928867807 +0000 UTC m=+0.667406832 container remove 629786293fdffbb69b3bf93b9ef1c4f9c06e5edeb55d9e956b386027d10fae3c (image=quay.io/ceph/ceph:v20, name=vigilant_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True)
Jan 29 11:49:12 np0005601226 systemd[1]: libpod-conmon-629786293fdffbb69b3bf93b9ef1c4f9c06e5edeb55d9e956b386027d10fae3c.scope: Deactivated successfully.
Jan 29 11:49:13 np0005601226 podman[75022]: 2026-01-29 16:49:12.988162199 +0000 UTC m=+0.032661811 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:49:13 np0005601226 podman[75022]: 2026-01-29 16:49:13.266471233 +0000 UTC m=+0.310970825 container create a26cfd4dec44fab7a924c3fa2b73cb809c8592e34984288d6d7f2ac44789bb87 (image=quay.io/ceph/ceph:v20, name=loving_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 11:49:13 np0005601226 systemd[1]: Started libpod-conmon-a26cfd4dec44fab7a924c3fa2b73cb809c8592e34984288d6d7f2ac44789bb87.scope.
Jan 29 11:49:13 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:49:13 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2edac9a14bbfc38d102879ae17f0e1e7666e07874c40c28a6290bea6bb049b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:13 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2edac9a14bbfc38d102879ae17f0e1e7666e07874c40c28a6290bea6bb049b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:13 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2edac9a14bbfc38d102879ae17f0e1e7666e07874c40c28a6290bea6bb049b6/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:13 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2edac9a14bbfc38d102879ae17f0e1e7666e07874c40c28a6290bea6bb049b6/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:13 np0005601226 podman[75022]: 2026-01-29 16:49:13.423595883 +0000 UTC m=+0.468095526 container init a26cfd4dec44fab7a924c3fa2b73cb809c8592e34984288d6d7f2ac44789bb87 (image=quay.io/ceph/ceph:v20, name=loving_blackwell, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 11:49:13 np0005601226 podman[75022]: 2026-01-29 16:49:13.43098674 +0000 UTC m=+0.475486352 container start a26cfd4dec44fab7a924c3fa2b73cb809c8592e34984288d6d7f2ac44789bb87 (image=quay.io/ceph/ceph:v20, name=loving_blackwell, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 11:49:13 np0005601226 podman[75022]: 2026-01-29 16:49:13.475147219 +0000 UTC m=+0.519646881 container attach a26cfd4dec44fab7a924c3fa2b73cb809c8592e34984288d6d7f2ac44789bb87 (image=quay.io/ceph/ceph:v20, name=loving_blackwell, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 29 11:49:13 np0005601226 ceph-mon[74873]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 11:49:13 np0005601226 ceph-mon[74873]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3632576672' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 11:49:13 np0005601226 systemd[1]: libpod-a26cfd4dec44fab7a924c3fa2b73cb809c8592e34984288d6d7f2ac44789bb87.scope: Deactivated successfully.
Jan 29 11:49:13 np0005601226 podman[75022]: 2026-01-29 16:49:13.639924693 +0000 UTC m=+0.684424275 container died a26cfd4dec44fab7a924c3fa2b73cb809c8592e34984288d6d7f2ac44789bb87 (image=quay.io/ceph/ceph:v20, name=loving_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 29 11:49:13 np0005601226 systemd[1]: var-lib-containers-storage-overlay-c2edac9a14bbfc38d102879ae17f0e1e7666e07874c40c28a6290bea6bb049b6-merged.mount: Deactivated successfully.
Jan 29 11:49:13 np0005601226 ceph-mon[74873]: from='client.? 192.168.122.100:0/169733517' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 29 11:49:13 np0005601226 podman[75022]: 2026-01-29 16:49:13.906689929 +0000 UTC m=+0.951189491 container remove a26cfd4dec44fab7a924c3fa2b73cb809c8592e34984288d6d7f2ac44789bb87 (image=quay.io/ceph/ceph:v20, name=loving_blackwell, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 11:49:13 np0005601226 systemd[1]: libpod-conmon-a26cfd4dec44fab7a924c3fa2b73cb809c8592e34984288d6d7f2ac44789bb87.scope: Deactivated successfully.
Jan 29 11:49:13 np0005601226 systemd[1]: Stopping Ceph mon.compute-0 for cc5c72e3-31e0-58b9-8731-456117d38f4a...
Jan 29 11:49:14 np0005601226 ceph-mon[74873]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Jan 29 11:49:14 np0005601226 ceph-mon[74873]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Jan 29 11:49:14 np0005601226 ceph-mon[74873]: mon.compute-0@0(leader) e1 shutdown
Jan 29 11:49:14 np0005601226 ceph-mon[74873]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 29 11:49:14 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-mon-compute-0[74869]: 2026-01-29T16:49:14.132+0000 7f55c5745640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Jan 29 11:49:14 np0005601226 ceph-mon[74873]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 29 11:49:14 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-mon-compute-0[74869]: 2026-01-29T16:49:14.132+0000 7f55c5745640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Jan 29 11:49:14 np0005601226 podman[75109]: 2026-01-29 16:49:14.184072277 +0000 UTC m=+0.112161753 container died b527039d5a17dd6f9d2c21434fd29da4030fefa91ddaa5e42b212a8d9eb79873 (image=quay.io/ceph/ceph:v20, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-mon-compute-0, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True)
Jan 29 11:49:14 np0005601226 systemd[1]: var-lib-containers-storage-overlay-a5ca4109f9bdf5fdc4d1f1b2563052033998b5c5b76c98a6837a52ae62e3505c-merged.mount: Deactivated successfully.
Jan 29 11:49:14 np0005601226 podman[75109]: 2026-01-29 16:49:14.263828514 +0000 UTC m=+0.191918010 container remove b527039d5a17dd6f9d2c21434fd29da4030fefa91ddaa5e42b212a8d9eb79873 (image=quay.io/ceph/ceph:v20, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-mon-compute-0, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 11:49:14 np0005601226 bash[75109]: ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-mon-compute-0
Jan 29 11:49:14 np0005601226 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 29 11:49:14 np0005601226 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 29 11:49:14 np0005601226 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 29 11:49:14 np0005601226 systemd[1]: ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a@mon.compute-0.service: Deactivated successfully.
Jan 29 11:49:14 np0005601226 systemd[1]: Stopped Ceph mon.compute-0 for cc5c72e3-31e0-58b9-8731-456117d38f4a.
Jan 29 11:49:14 np0005601226 systemd[1]: ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a@mon.compute-0.service: Consumed 1.087s CPU time.
Jan 29 11:49:14 np0005601226 systemd[1]: Starting Ceph mon.compute-0 for cc5c72e3-31e0-58b9-8731-456117d38f4a...
Jan 29 11:49:14 np0005601226 podman[75213]: 2026-01-29 16:49:14.671946639 +0000 UTC m=+0.058628174 container create 79fb58d438a044b744a5a22031968fb38c458a504a88d1d9ce361ec7f4a99a0b (image=quay.io/ceph/ceph:v20, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-mon-compute-0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 29 11:49:14 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9035058073a3e838cabe70fb863be1778fbe299cbfb659409c66eb1b64394a58/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:14 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9035058073a3e838cabe70fb863be1778fbe299cbfb659409c66eb1b64394a58/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:14 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9035058073a3e838cabe70fb863be1778fbe299cbfb659409c66eb1b64394a58/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:14 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9035058073a3e838cabe70fb863be1778fbe299cbfb659409c66eb1b64394a58/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:14 np0005601226 podman[75213]: 2026-01-29 16:49:14.638628761 +0000 UTC m=+0.025310336 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:49:14 np0005601226 podman[75213]: 2026-01-29 16:49:14.756503435 +0000 UTC m=+0.143185060 container init 79fb58d438a044b744a5a22031968fb38c458a504a88d1d9ce361ec7f4a99a0b (image=quay.io/ceph/ceph:v20, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-mon-compute-0, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 11:49:14 np0005601226 podman[75213]: 2026-01-29 16:49:14.76192986 +0000 UTC m=+0.148611435 container start 79fb58d438a044b744a5a22031968fb38c458a504a88d1d9ce361ec7f4a99a0b (image=quay.io/ceph/ceph:v20, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-mon-compute-0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 11:49:14 np0005601226 bash[75213]: 79fb58d438a044b744a5a22031968fb38c458a504a88d1d9ce361ec7f4a99a0b
Jan 29 11:49:14 np0005601226 systemd[1]: Started Ceph mon.compute-0 for cc5c72e3-31e0-58b9-8731-456117d38f4a.
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: set uid:gid to 167:167 (ceph:ceph)
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mon, pid 2
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: pidfile_write: ignore empty --pid-file
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: load: jerasure load: lrc 
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb: RocksDB version: 7.9.2
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb: Git sha 0
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb: DB SUMMARY
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb: DB Session ID:  3LVBT2JQJ5HZ0LRVKGW6
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb: CURRENT file:  CURRENT
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb: IDENTITY file:  IDENTITY
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 61637 ; 
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                         Options.error_if_exists: 0
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                       Options.create_if_missing: 0
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                         Options.paranoid_checks: 1
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                                     Options.env: 0x55d2b1fb3440
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                                      Options.fs: PosixFileSystem
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                                Options.info_log: 0x55d2b3239e80
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                Options.max_file_opening_threads: 16
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                              Options.statistics: (nil)
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                               Options.use_fsync: 0
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                       Options.max_log_file_size: 0
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                         Options.allow_fallocate: 1
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                        Options.use_direct_reads: 0
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:          Options.create_missing_column_families: 0
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                              Options.db_log_dir: 
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                                 Options.wal_dir: 
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                   Options.advise_random_on_open: 1
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                    Options.write_buffer_manager: 0x55d2b3284140
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                            Options.rate_limiter: (nil)
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                  Options.unordered_write: 0
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                               Options.row_cache: None
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                              Options.wal_filter: None
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:             Options.allow_ingest_behind: 0
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:             Options.two_write_queues: 0
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:             Options.manual_wal_flush: 0
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:             Options.wal_compression: 0
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:             Options.atomic_flush: 0
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                 Options.log_readahead_size: 0
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:             Options.allow_data_in_errors: 0
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:             Options.db_host_id: __hostname__
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:             Options.max_background_jobs: 2
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:             Options.max_background_compactions: -1
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:             Options.max_subcompactions: 1
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:             Options.max_total_wal_size: 0
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                          Options.max_open_files: -1
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                          Options.bytes_per_sync: 0
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:       Options.compaction_readahead_size: 0
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                  Options.max_background_flushes: -1
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb: Compression algorithms supported:
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb: 	kZSTD supported: 0
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb: 	kXpressCompression supported: 0
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb: 	kBZip2Compression supported: 0
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb: 	kLZ4Compression supported: 1
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb: 	kZlibCompression supported: 1
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb: 	kLZ4HCCompression supported: 1
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb: 	kSnappyCompression supported: 1
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:           Options.merge_operator: 
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:        Options.compaction_filter: None
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d2b3290a00)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d2b32758d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:        Options.write_buffer_size: 33554432
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:  Options.max_write_buffer_number: 2
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:          Options.compression: NoCompression
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:             Options.num_levels: 7
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                           Options.bloom_locality: 0
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                               Options.ttl: 2592000
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                       Options.enable_blob_files: false
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                           Options.min_blob_size: 0
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: affa2982-d59d-4189-b5dd-817a80fada55
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769705354817582, "job": 1, "event": "recovery_started", "wal_files": [9]}
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769705354829034, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 61254, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 150, "table_properties": {"data_size": 59713, "index_size": 183, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 325, "raw_key_size": 3459, "raw_average_key_size": 30, "raw_value_size": 56992, "raw_average_value_size": 504, "num_data_blocks": 9, "num_entries": 113, "num_filter_entries": 113, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769705354, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "affa2982-d59d-4189-b5dd-817a80fada55", "db_session_id": "3LVBT2JQJ5HZ0LRVKGW6", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769705354829185, "job": 1, "event": "recovery_finished"}
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55d2b32a2e00
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb: DB pointer 0x55d2b33ec000
Jan 29 11:49:14 np0005601226 podman[75234]: 2026-01-29 16:49:14.856991116 +0000 UTC m=+0.065006746 container create 34c30ae664685db79b880d8e7479ad5e2dfccccba5f3588a599128af0ae29eb4 (image=quay.io/ceph/ceph:v20, name=upbeat_heisenberg, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0   61.72 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      5.3      0.01              0.00         1    0.011       0      0       0.0       0.0#012 Sum      2/0   61.72 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      5.3      0.01              0.00         1    0.011       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      5.3      0.01              0.00         1    0.011       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      5.3      0.01              0.00         1    0.011       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 1.41 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 1.41 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55d2b32758d0#2 capacity: 512.00 MB usage: 1.81 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 8.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(2,0.48 KB,9.23872e-05%) IndexBlock(2,0.38 KB,7.15256e-05%) Misc(2,0.95 KB,0.000181794%)#012#012** File Read Latency Histogram By Level [default] **
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid cc5c72e3-31e0-58b9-8731-456117d38f4a
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: mon.compute-0@-1(???) e1 preinit fsid cc5c72e3-31e0-58b9-8731-456117d38f4a
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: mon.compute-0@-1(???).mds e1 new map
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: mon.compute-0@-1(???).mds e1 print_map#012e1#012btime 2026-01-29T16:49:11:643666+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: -1#012 #012No filesystems configured
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: mon.compute-0@0(probing) e1 win_standalone_election
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : monmap epoch 1
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : fsid cc5c72e3-31e0-58b9-8731-456117d38f4a
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : last_changed 2026-01-29T16:49:09.219895+0000
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : created 2026-01-29T16:49:09.219895+0000
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : min_mon_release 20 (tentacle)
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : election_strategy: 1
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : fsmap 
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Jan 29 11:49:14 np0005601226 systemd[1]: Started libpod-conmon-34c30ae664685db79b880d8e7479ad5e2dfccccba5f3588a599128af0ae29eb4.scope.
Jan 29 11:49:14 np0005601226 podman[75234]: 2026-01-29 16:49:14.822240118 +0000 UTC m=+0.030255798 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:49:14 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:49:14 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa6c444fb686e66e9c204d8dd52b432fe26c5ae61f61be4b1096b318dc6f6d8b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:14 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa6c444fb686e66e9c204d8dd52b432fe26c5ae61f61be4b1096b318dc6f6d8b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:14 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa6c444fb686e66e9c204d8dd52b432fe26c5ae61f61be4b1096b318dc6f6d8b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:14 np0005601226 ceph-mon[75233]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 29 11:49:14 np0005601226 podman[75234]: 2026-01-29 16:49:14.979587395 +0000 UTC m=+0.187603045 container init 34c30ae664685db79b880d8e7479ad5e2dfccccba5f3588a599128af0ae29eb4 (image=quay.io/ceph/ceph:v20, name=upbeat_heisenberg, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 11:49:14 np0005601226 podman[75234]: 2026-01-29 16:49:14.99027399 +0000 UTC m=+0.198289660 container start 34c30ae664685db79b880d8e7479ad5e2dfccccba5f3588a599128af0ae29eb4 (image=quay.io/ceph/ceph:v20, name=upbeat_heisenberg, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 11:49:14 np0005601226 podman[75234]: 2026-01-29 16:49:14.998740116 +0000 UTC m=+0.206755796 container attach 34c30ae664685db79b880d8e7479ad5e2dfccccba5f3588a599128af0ae29eb4 (image=quay.io/ceph/ceph:v20, name=upbeat_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:49:15 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0)
Jan 29 11:49:15 np0005601226 systemd[1]: libpod-34c30ae664685db79b880d8e7479ad5e2dfccccba5f3588a599128af0ae29eb4.scope: Deactivated successfully.
Jan 29 11:49:15 np0005601226 podman[75234]: 2026-01-29 16:49:15.23023568 +0000 UTC m=+0.438251330 container died 34c30ae664685db79b880d8e7479ad5e2dfccccba5f3588a599128af0ae29eb4 (image=quay.io/ceph/ceph:v20, name=upbeat_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 29 11:49:15 np0005601226 systemd[1]: var-lib-containers-storage-overlay-fa6c444fb686e66e9c204d8dd52b432fe26c5ae61f61be4b1096b318dc6f6d8b-merged.mount: Deactivated successfully.
Jan 29 11:49:15 np0005601226 podman[75234]: 2026-01-29 16:49:15.550980125 +0000 UTC m=+0.758995755 container remove 34c30ae664685db79b880d8e7479ad5e2dfccccba5f3588a599128af0ae29eb4 (image=quay.io/ceph/ceph:v20, name=upbeat_heisenberg, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 29 11:49:15 np0005601226 systemd[1]: libpod-conmon-34c30ae664685db79b880d8e7479ad5e2dfccccba5f3588a599128af0ae29eb4.scope: Deactivated successfully.
Jan 29 11:49:15 np0005601226 podman[75328]: 2026-01-29 16:49:15.596939791 +0000 UTC m=+0.025410919 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:49:15 np0005601226 podman[75328]: 2026-01-29 16:49:15.704464439 +0000 UTC m=+0.132935557 container create b3ac126c79f0ff84c3e95c4be2dea4edab643033048c67e22034388fb2e98743 (image=quay.io/ceph/ceph:v20, name=serene_spence, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 29 11:49:15 np0005601226 systemd[1]: Started libpod-conmon-b3ac126c79f0ff84c3e95c4be2dea4edab643033048c67e22034388fb2e98743.scope.
Jan 29 11:49:15 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:49:15 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31584eae436ea5e581c567f2c3374c14131f0cf16e9eafc1bc1ed191bd53aa59/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:15 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31584eae436ea5e581c567f2c3374c14131f0cf16e9eafc1bc1ed191bd53aa59/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:15 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31584eae436ea5e581c567f2c3374c14131f0cf16e9eafc1bc1ed191bd53aa59/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:15 np0005601226 podman[75328]: 2026-01-29 16:49:15.970932996 +0000 UTC m=+0.399404124 container init b3ac126c79f0ff84c3e95c4be2dea4edab643033048c67e22034388fb2e98743 (image=quay.io/ceph/ceph:v20, name=serene_spence, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 29 11:49:15 np0005601226 podman[75328]: 2026-01-29 16:49:15.97930368 +0000 UTC m=+0.407774788 container start b3ac126c79f0ff84c3e95c4be2dea4edab643033048c67e22034388fb2e98743 (image=quay.io/ceph/ceph:v20, name=serene_spence, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 11:49:16 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0)
Jan 29 11:49:16 np0005601226 systemd[1]: libpod-b3ac126c79f0ff84c3e95c4be2dea4edab643033048c67e22034388fb2e98743.scope: Deactivated successfully.
Jan 29 11:49:16 np0005601226 podman[75328]: 2026-01-29 16:49:16.190900403 +0000 UTC m=+0.619371601 container attach b3ac126c79f0ff84c3e95c4be2dea4edab643033048c67e22034388fb2e98743 (image=quay.io/ceph/ceph:v20, name=serene_spence, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:49:16 np0005601226 podman[75328]: 2026-01-29 16:49:16.191559071 +0000 UTC m=+0.620030249 container died b3ac126c79f0ff84c3e95c4be2dea4edab643033048c67e22034388fb2e98743 (image=quay.io/ceph/ceph:v20, name=serene_spence, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 29 11:49:16 np0005601226 systemd[1]: var-lib-containers-storage-overlay-31584eae436ea5e581c567f2c3374c14131f0cf16e9eafc1bc1ed191bd53aa59-merged.mount: Deactivated successfully.
Jan 29 11:49:16 np0005601226 podman[75328]: 2026-01-29 16:49:16.914091023 +0000 UTC m=+1.342562131 container remove b3ac126c79f0ff84c3e95c4be2dea4edab643033048c67e22034388fb2e98743 (image=quay.io/ceph/ceph:v20, name=serene_spence, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 11:49:16 np0005601226 systemd[1]: libpod-conmon-b3ac126c79f0ff84c3e95c4be2dea4edab643033048c67e22034388fb2e98743.scope: Deactivated successfully.
Jan 29 11:49:17 np0005601226 systemd[1]: Reloading.
Jan 29 11:49:17 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 11:49:17 np0005601226 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 11:49:17 np0005601226 systemd[1]: Reloading.
Jan 29 11:49:17 np0005601226 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 11:49:17 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 11:49:17 np0005601226 systemd[1]: Starting Ceph mgr.compute-0.zvopdr for cc5c72e3-31e0-58b9-8731-456117d38f4a...
Jan 29 11:49:18 np0005601226 podman[75508]: 2026-01-29 16:49:17.976691484 +0000 UTC m=+0.029196769 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:49:18 np0005601226 podman[75508]: 2026-01-29 16:49:18.12949504 +0000 UTC m=+0.182000335 container create 931753d3ff18feabfb1dd48ad1f249e4a934e902db3f6d900f5c1c6c0fcfed9c (image=quay.io/ceph/ceph:v20, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-mgr-compute-0-zvopdr, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 29 11:49:18 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8144594bfdcc07d81e1b331a79b6f11b516754490ee1fd426f9392f9e5a1df3c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:18 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8144594bfdcc07d81e1b331a79b6f11b516754490ee1fd426f9392f9e5a1df3c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:18 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8144594bfdcc07d81e1b331a79b6f11b516754490ee1fd426f9392f9e5a1df3c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:18 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8144594bfdcc07d81e1b331a79b6f11b516754490ee1fd426f9392f9e5a1df3c/merged/var/lib/ceph/mgr/ceph-compute-0.zvopdr supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:18 np0005601226 podman[75508]: 2026-01-29 16:49:18.240916171 +0000 UTC m=+0.293421446 container init 931753d3ff18feabfb1dd48ad1f249e4a934e902db3f6d900f5c1c6c0fcfed9c (image=quay.io/ceph/ceph:v20, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-mgr-compute-0-zvopdr, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 29 11:49:18 np0005601226 podman[75508]: 2026-01-29 16:49:18.245049041 +0000 UTC m=+0.297554316 container start 931753d3ff18feabfb1dd48ad1f249e4a934e902db3f6d900f5c1c6c0fcfed9c (image=quay.io/ceph/ceph:v20, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-mgr-compute-0-zvopdr, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 29 11:49:18 np0005601226 ceph-mgr[75527]: set uid:gid to 167:167 (ceph:ceph)
Jan 29 11:49:18 np0005601226 ceph-mgr[75527]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Jan 29 11:49:18 np0005601226 ceph-mgr[75527]: pidfile_write: ignore empty --pid-file
Jan 29 11:49:18 np0005601226 ceph-mgr[75527]: mgr[py] Loading python module 'alerts'
Jan 29 11:49:18 np0005601226 bash[75508]: 931753d3ff18feabfb1dd48ad1f249e4a934e902db3f6d900f5c1c6c0fcfed9c
Jan 29 11:49:18 np0005601226 systemd[1]: Started Ceph mgr.compute-0.zvopdr for cc5c72e3-31e0-58b9-8731-456117d38f4a.
Jan 29 11:49:18 np0005601226 ceph-mgr[75527]: mgr[py] Loading python module 'balancer'
Jan 29 11:49:18 np0005601226 ceph-mgr[75527]: mgr[py] Loading python module 'cephadm'
Jan 29 11:49:18 np0005601226 podman[75548]: 2026-01-29 16:49:18.571098208 +0000 UTC m=+0.043813349 container create 4e9b4700c7d1f6fbf6e923d586989ef89037b4c6ed64ce33e9f6f3b7e46f430c (image=quay.io/ceph/ceph:v20, name=suspicious_kowalevski, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle)
Jan 29 11:49:18 np0005601226 systemd[1]: Started libpod-conmon-4e9b4700c7d1f6fbf6e923d586989ef89037b4c6ed64ce33e9f6f3b7e46f430c.scope.
Jan 29 11:49:18 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:49:18 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a099ebced5320db11c9bf29cc4b4aeea44233babeab5af003681eb6b1ebfa86/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:18 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a099ebced5320db11c9bf29cc4b4aeea44233babeab5af003681eb6b1ebfa86/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:18 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a099ebced5320db11c9bf29cc4b4aeea44233babeab5af003681eb6b1ebfa86/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:18 np0005601226 podman[75548]: 2026-01-29 16:49:18.548790053 +0000 UTC m=+0.021505174 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:49:18 np0005601226 podman[75548]: 2026-01-29 16:49:18.66903183 +0000 UTC m=+0.141746951 container init 4e9b4700c7d1f6fbf6e923d586989ef89037b4c6ed64ce33e9f6f3b7e46f430c (image=quay.io/ceph/ceph:v20, name=suspicious_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 29 11:49:18 np0005601226 podman[75548]: 2026-01-29 16:49:18.676853879 +0000 UTC m=+0.149568980 container start 4e9b4700c7d1f6fbf6e923d586989ef89037b4c6ed64ce33e9f6f3b7e46f430c (image=quay.io/ceph/ceph:v20, name=suspicious_kowalevski, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:49:18 np0005601226 podman[75548]: 2026-01-29 16:49:18.694321525 +0000 UTC m=+0.167036646 container attach 4e9b4700c7d1f6fbf6e923d586989ef89037b4c6ed64ce33e9f6f3b7e46f430c (image=quay.io/ceph/ceph:v20, name=suspicious_kowalevski, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 11:49:18 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 29 11:49:18 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2035896255' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 29 11:49:18 np0005601226 suspicious_kowalevski[75564]: 
Jan 29 11:49:18 np0005601226 suspicious_kowalevski[75564]: {
Jan 29 11:49:18 np0005601226 suspicious_kowalevski[75564]:    "fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 11:49:18 np0005601226 suspicious_kowalevski[75564]:    "health": {
Jan 29 11:49:18 np0005601226 suspicious_kowalevski[75564]:        "status": "HEALTH_OK",
Jan 29 11:49:18 np0005601226 suspicious_kowalevski[75564]:        "checks": {},
Jan 29 11:49:18 np0005601226 suspicious_kowalevski[75564]:        "mutes": []
Jan 29 11:49:18 np0005601226 suspicious_kowalevski[75564]:    },
Jan 29 11:49:18 np0005601226 suspicious_kowalevski[75564]:    "election_epoch": 5,
Jan 29 11:49:18 np0005601226 suspicious_kowalevski[75564]:    "quorum": [
Jan 29 11:49:18 np0005601226 suspicious_kowalevski[75564]:        0
Jan 29 11:49:18 np0005601226 suspicious_kowalevski[75564]:    ],
Jan 29 11:49:18 np0005601226 suspicious_kowalevski[75564]:    "quorum_names": [
Jan 29 11:49:18 np0005601226 suspicious_kowalevski[75564]:        "compute-0"
Jan 29 11:49:18 np0005601226 suspicious_kowalevski[75564]:    ],
Jan 29 11:49:18 np0005601226 suspicious_kowalevski[75564]:    "quorum_age": 4,
Jan 29 11:49:18 np0005601226 suspicious_kowalevski[75564]:    "monmap": {
Jan 29 11:49:18 np0005601226 suspicious_kowalevski[75564]:        "epoch": 1,
Jan 29 11:49:18 np0005601226 suspicious_kowalevski[75564]:        "min_mon_release_name": "tentacle",
Jan 29 11:49:18 np0005601226 suspicious_kowalevski[75564]:        "num_mons": 1
Jan 29 11:49:18 np0005601226 suspicious_kowalevski[75564]:    },
Jan 29 11:49:18 np0005601226 suspicious_kowalevski[75564]:    "osdmap": {
Jan 29 11:49:18 np0005601226 suspicious_kowalevski[75564]:        "epoch": 1,
Jan 29 11:49:18 np0005601226 suspicious_kowalevski[75564]:        "num_osds": 0,
Jan 29 11:49:18 np0005601226 suspicious_kowalevski[75564]:        "num_up_osds": 0,
Jan 29 11:49:18 np0005601226 suspicious_kowalevski[75564]:        "osd_up_since": 0,
Jan 29 11:49:18 np0005601226 suspicious_kowalevski[75564]:        "num_in_osds": 0,
Jan 29 11:49:18 np0005601226 suspicious_kowalevski[75564]:        "osd_in_since": 0,
Jan 29 11:49:18 np0005601226 suspicious_kowalevski[75564]:        "num_remapped_pgs": 0
Jan 29 11:49:18 np0005601226 suspicious_kowalevski[75564]:    },
Jan 29 11:49:18 np0005601226 suspicious_kowalevski[75564]:    "pgmap": {
Jan 29 11:49:18 np0005601226 suspicious_kowalevski[75564]:        "pgs_by_state": [],
Jan 29 11:49:18 np0005601226 suspicious_kowalevski[75564]:        "num_pgs": 0,
Jan 29 11:49:18 np0005601226 suspicious_kowalevski[75564]:        "num_pools": 0,
Jan 29 11:49:18 np0005601226 suspicious_kowalevski[75564]:        "num_objects": 0,
Jan 29 11:49:18 np0005601226 suspicious_kowalevski[75564]:        "data_bytes": 0,
Jan 29 11:49:18 np0005601226 suspicious_kowalevski[75564]:        "bytes_used": 0,
Jan 29 11:49:18 np0005601226 suspicious_kowalevski[75564]:        "bytes_avail": 0,
Jan 29 11:49:18 np0005601226 suspicious_kowalevski[75564]:        "bytes_total": 0
Jan 29 11:49:18 np0005601226 suspicious_kowalevski[75564]:    },
Jan 29 11:49:18 np0005601226 suspicious_kowalevski[75564]:    "fsmap": {
Jan 29 11:49:18 np0005601226 suspicious_kowalevski[75564]:        "epoch": 1,
Jan 29 11:49:18 np0005601226 suspicious_kowalevski[75564]:        "btime": "2026-01-29T16:49:11.643666+0000",
Jan 29 11:49:18 np0005601226 suspicious_kowalevski[75564]:        "by_rank": [],
Jan 29 11:49:18 np0005601226 suspicious_kowalevski[75564]:        "up:standby": 0
Jan 29 11:49:18 np0005601226 suspicious_kowalevski[75564]:    },
Jan 29 11:49:18 np0005601226 suspicious_kowalevski[75564]:    "mgrmap": {
Jan 29 11:49:18 np0005601226 suspicious_kowalevski[75564]:        "available": false,
Jan 29 11:49:18 np0005601226 suspicious_kowalevski[75564]:        "num_standbys": 0,
Jan 29 11:49:18 np0005601226 suspicious_kowalevski[75564]:        "modules": [
Jan 29 11:49:18 np0005601226 suspicious_kowalevski[75564]:            "iostat",
Jan 29 11:49:18 np0005601226 suspicious_kowalevski[75564]:            "nfs"
Jan 29 11:49:18 np0005601226 suspicious_kowalevski[75564]:        ],
Jan 29 11:49:18 np0005601226 suspicious_kowalevski[75564]:        "services": {}
Jan 29 11:49:18 np0005601226 suspicious_kowalevski[75564]:    },
Jan 29 11:49:18 np0005601226 suspicious_kowalevski[75564]:    "servicemap": {
Jan 29 11:49:18 np0005601226 suspicious_kowalevski[75564]:        "epoch": 1,
Jan 29 11:49:18 np0005601226 suspicious_kowalevski[75564]:        "modified": "2026-01-29T16:49:11.647926+0000",
Jan 29 11:49:18 np0005601226 suspicious_kowalevski[75564]:        "services": {}
Jan 29 11:49:18 np0005601226 suspicious_kowalevski[75564]:    },
Jan 29 11:49:18 np0005601226 suspicious_kowalevski[75564]:    "progress_events": {}
Jan 29 11:49:18 np0005601226 suspicious_kowalevski[75564]: }
Jan 29 11:49:18 np0005601226 systemd[1]: libpod-4e9b4700c7d1f6fbf6e923d586989ef89037b4c6ed64ce33e9f6f3b7e46f430c.scope: Deactivated successfully.
Jan 29 11:49:18 np0005601226 podman[75590]: 2026-01-29 16:49:18.942664249 +0000 UTC m=+0.023720744 container died 4e9b4700c7d1f6fbf6e923d586989ef89037b4c6ed64ce33e9f6f3b7e46f430c (image=quay.io/ceph/ceph:v20, name=suspicious_kowalevski, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 29 11:49:19 np0005601226 systemd[1]: var-lib-containers-storage-overlay-6a099ebced5320db11c9bf29cc4b4aeea44233babeab5af003681eb6b1ebfa86-merged.mount: Deactivated successfully.
Jan 29 11:49:19 np0005601226 ceph-mgr[75527]: mgr[py] Loading python module 'crash'
Jan 29 11:49:19 np0005601226 ceph-mgr[75527]: mgr[py] Loading python module 'dashboard'
Jan 29 11:49:19 np0005601226 podman[75590]: 2026-01-29 16:49:19.560617821 +0000 UTC m=+0.641674326 container remove 4e9b4700c7d1f6fbf6e923d586989ef89037b4c6ed64ce33e9f6f3b7e46f430c (image=quay.io/ceph/ceph:v20, name=suspicious_kowalevski, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:49:19 np0005601226 systemd[1]: libpod-conmon-4e9b4700c7d1f6fbf6e923d586989ef89037b4c6ed64ce33e9f6f3b7e46f430c.scope: Deactivated successfully.
Jan 29 11:49:20 np0005601226 ceph-mgr[75527]: mgr[py] Loading python module 'devicehealth'
Jan 29 11:49:20 np0005601226 ceph-mgr[75527]: mgr[py] Loading python module 'diskprediction_local'
Jan 29 11:49:20 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-mgr-compute-0-zvopdr[75523]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 29 11:49:20 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-mgr-compute-0-zvopdr[75523]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 29 11:49:20 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-mgr-compute-0-zvopdr[75523]:  from numpy import show_config as show_numpy_config
Jan 29 11:49:20 np0005601226 ceph-mgr[75527]: mgr[py] Loading python module 'influx'
Jan 29 11:49:20 np0005601226 ceph-mgr[75527]: mgr[py] Loading python module 'insights'
Jan 29 11:49:20 np0005601226 ceph-mgr[75527]: mgr[py] Loading python module 'iostat'
Jan 29 11:49:20 np0005601226 ceph-mgr[75527]: mgr[py] Loading python module 'k8sevents'
Jan 29 11:49:20 np0005601226 ceph-mgr[75527]: mgr[py] Loading python module 'localpool'
Jan 29 11:49:21 np0005601226 ceph-mgr[75527]: mgr[py] Loading python module 'mds_autoscaler'
Jan 29 11:49:21 np0005601226 ceph-mgr[75527]: mgr[py] Loading python module 'mirroring'
Jan 29 11:49:21 np0005601226 ceph-mgr[75527]: mgr[py] Loading python module 'nfs'
Jan 29 11:49:21 np0005601226 ceph-mgr[75527]: mgr[py] Loading python module 'orchestrator'
Jan 29 11:49:21 np0005601226 podman[75616]: 2026-01-29 16:49:21.622293497 +0000 UTC m=+0.029943117 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:49:21 np0005601226 podman[75616]: 2026-01-29 16:49:21.74128228 +0000 UTC m=+0.148931920 container create 8f5446e5b1662b50f725322e3a305787381e18a9c59a1effbcbeb8e1f7bb26b2 (image=quay.io/ceph/ceph:v20, name=elegant_dewdney, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:49:21 np0005601226 systemd[1]: Started libpod-conmon-8f5446e5b1662b50f725322e3a305787381e18a9c59a1effbcbeb8e1f7bb26b2.scope.
Jan 29 11:49:21 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:49:21 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c228bbaded5f60885d60ee2dee84455ef63189189f775f1c903f4acbbda4651f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:21 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c228bbaded5f60885d60ee2dee84455ef63189189f775f1c903f4acbbda4651f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:21 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c228bbaded5f60885d60ee2dee84455ef63189189f775f1c903f4acbbda4651f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:21 np0005601226 ceph-mgr[75527]: mgr[py] Loading python module 'osd_perf_query'
Jan 29 11:49:21 np0005601226 podman[75616]: 2026-01-29 16:49:21.847343354 +0000 UTC m=+0.254992954 container init 8f5446e5b1662b50f725322e3a305787381e18a9c59a1effbcbeb8e1f7bb26b2 (image=quay.io/ceph/ceph:v20, name=elegant_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 29 11:49:21 np0005601226 podman[75616]: 2026-01-29 16:49:21.851409129 +0000 UTC m=+0.259058729 container start 8f5446e5b1662b50f725322e3a305787381e18a9c59a1effbcbeb8e1f7bb26b2 (image=quay.io/ceph/ceph:v20, name=elegant_dewdney, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 11:49:21 np0005601226 podman[75616]: 2026-01-29 16:49:21.866401814 +0000 UTC m=+0.274051404 container attach 8f5446e5b1662b50f725322e3a305787381e18a9c59a1effbcbeb8e1f7bb26b2 (image=quay.io/ceph/ceph:v20, name=elegant_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 29 11:49:21 np0005601226 ceph-mgr[75527]: mgr[py] Loading python module 'osd_support'
Jan 29 11:49:21 np0005601226 ceph-mgr[75527]: mgr[py] Loading python module 'pg_autoscaler'
Jan 29 11:49:22 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 29 11:49:22 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1222010224' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 29 11:49:22 np0005601226 elegant_dewdney[75630]: 
Jan 29 11:49:22 np0005601226 elegant_dewdney[75630]: {
Jan 29 11:49:22 np0005601226 elegant_dewdney[75630]:    "fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 11:49:22 np0005601226 elegant_dewdney[75630]:    "health": {
Jan 29 11:49:22 np0005601226 elegant_dewdney[75630]:        "status": "HEALTH_OK",
Jan 29 11:49:22 np0005601226 elegant_dewdney[75630]:        "checks": {},
Jan 29 11:49:22 np0005601226 elegant_dewdney[75630]:        "mutes": []
Jan 29 11:49:22 np0005601226 elegant_dewdney[75630]:    },
Jan 29 11:49:22 np0005601226 elegant_dewdney[75630]:    "election_epoch": 5,
Jan 29 11:49:22 np0005601226 elegant_dewdney[75630]:    "quorum": [
Jan 29 11:49:22 np0005601226 elegant_dewdney[75630]:        0
Jan 29 11:49:22 np0005601226 elegant_dewdney[75630]:    ],
Jan 29 11:49:22 np0005601226 elegant_dewdney[75630]:    "quorum_names": [
Jan 29 11:49:22 np0005601226 elegant_dewdney[75630]:        "compute-0"
Jan 29 11:49:22 np0005601226 elegant_dewdney[75630]:    ],
Jan 29 11:49:22 np0005601226 elegant_dewdney[75630]:    "quorum_age": 7,
Jan 29 11:49:22 np0005601226 elegant_dewdney[75630]:    "monmap": {
Jan 29 11:49:22 np0005601226 elegant_dewdney[75630]:        "epoch": 1,
Jan 29 11:49:22 np0005601226 elegant_dewdney[75630]:        "min_mon_release_name": "tentacle",
Jan 29 11:49:22 np0005601226 elegant_dewdney[75630]:        "num_mons": 1
Jan 29 11:49:22 np0005601226 elegant_dewdney[75630]:    },
Jan 29 11:49:22 np0005601226 elegant_dewdney[75630]:    "osdmap": {
Jan 29 11:49:22 np0005601226 elegant_dewdney[75630]:        "epoch": 1,
Jan 29 11:49:22 np0005601226 elegant_dewdney[75630]:        "num_osds": 0,
Jan 29 11:49:22 np0005601226 elegant_dewdney[75630]:        "num_up_osds": 0,
Jan 29 11:49:22 np0005601226 elegant_dewdney[75630]:        "osd_up_since": 0,
Jan 29 11:49:22 np0005601226 elegant_dewdney[75630]:        "num_in_osds": 0,
Jan 29 11:49:22 np0005601226 elegant_dewdney[75630]:        "osd_in_since": 0,
Jan 29 11:49:22 np0005601226 elegant_dewdney[75630]:        "num_remapped_pgs": 0
Jan 29 11:49:22 np0005601226 elegant_dewdney[75630]:    },
Jan 29 11:49:22 np0005601226 elegant_dewdney[75630]:    "pgmap": {
Jan 29 11:49:22 np0005601226 elegant_dewdney[75630]:        "pgs_by_state": [],
Jan 29 11:49:22 np0005601226 elegant_dewdney[75630]:        "num_pgs": 0,
Jan 29 11:49:22 np0005601226 elegant_dewdney[75630]:        "num_pools": 0,
Jan 29 11:49:22 np0005601226 elegant_dewdney[75630]:        "num_objects": 0,
Jan 29 11:49:22 np0005601226 elegant_dewdney[75630]:        "data_bytes": 0,
Jan 29 11:49:22 np0005601226 elegant_dewdney[75630]:        "bytes_used": 0,
Jan 29 11:49:22 np0005601226 elegant_dewdney[75630]:        "bytes_avail": 0,
Jan 29 11:49:22 np0005601226 elegant_dewdney[75630]:        "bytes_total": 0
Jan 29 11:49:22 np0005601226 elegant_dewdney[75630]:    },
Jan 29 11:49:22 np0005601226 elegant_dewdney[75630]:    "fsmap": {
Jan 29 11:49:22 np0005601226 elegant_dewdney[75630]:        "epoch": 1,
Jan 29 11:49:22 np0005601226 elegant_dewdney[75630]:        "btime": "2026-01-29T16:49:11.643666+0000",
Jan 29 11:49:22 np0005601226 elegant_dewdney[75630]:        "by_rank": [],
Jan 29 11:49:22 np0005601226 elegant_dewdney[75630]:        "up:standby": 0
Jan 29 11:49:22 np0005601226 elegant_dewdney[75630]:    },
Jan 29 11:49:22 np0005601226 elegant_dewdney[75630]:    "mgrmap": {
Jan 29 11:49:22 np0005601226 elegant_dewdney[75630]:        "available": false,
Jan 29 11:49:22 np0005601226 elegant_dewdney[75630]:        "num_standbys": 0,
Jan 29 11:49:22 np0005601226 elegant_dewdney[75630]:        "modules": [
Jan 29 11:49:22 np0005601226 elegant_dewdney[75630]:            "iostat",
Jan 29 11:49:22 np0005601226 elegant_dewdney[75630]:            "nfs"
Jan 29 11:49:22 np0005601226 elegant_dewdney[75630]:        ],
Jan 29 11:49:22 np0005601226 elegant_dewdney[75630]:        "services": {}
Jan 29 11:49:22 np0005601226 elegant_dewdney[75630]:    },
Jan 29 11:49:22 np0005601226 elegant_dewdney[75630]:    "servicemap": {
Jan 29 11:49:22 np0005601226 elegant_dewdney[75630]:        "epoch": 1,
Jan 29 11:49:22 np0005601226 elegant_dewdney[75630]:        "modified": "2026-01-29T16:49:11.647926+0000",
Jan 29 11:49:22 np0005601226 elegant_dewdney[75630]:        "services": {}
Jan 29 11:49:22 np0005601226 elegant_dewdney[75630]:    },
Jan 29 11:49:22 np0005601226 elegant_dewdney[75630]:    "progress_events": {}
Jan 29 11:49:22 np0005601226 elegant_dewdney[75630]: }
Jan 29 11:49:22 np0005601226 systemd[1]: libpod-8f5446e5b1662b50f725322e3a305787381e18a9c59a1effbcbeb8e1f7bb26b2.scope: Deactivated successfully.
Jan 29 11:49:22 np0005601226 podman[75616]: 2026-01-29 16:49:22.030778611 +0000 UTC m=+0.438428241 container died 8f5446e5b1662b50f725322e3a305787381e18a9c59a1effbcbeb8e1f7bb26b2 (image=quay.io/ceph/ceph:v20, name=elegant_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:49:22 np0005601226 ceph-mgr[75527]: mgr[py] Loading python module 'progress'
Jan 29 11:49:22 np0005601226 systemd[1]: var-lib-containers-storage-overlay-c228bbaded5f60885d60ee2dee84455ef63189189f775f1c903f4acbbda4651f-merged.mount: Deactivated successfully.
Jan 29 11:49:22 np0005601226 podman[75616]: 2026-01-29 16:49:22.129761916 +0000 UTC m=+0.537411516 container remove 8f5446e5b1662b50f725322e3a305787381e18a9c59a1effbcbeb8e1f7bb26b2 (image=quay.io/ceph/ceph:v20, name=elegant_dewdney, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 29 11:49:22 np0005601226 ceph-mgr[75527]: mgr[py] Loading python module 'prometheus'
Jan 29 11:49:22 np0005601226 systemd[1]: libpod-conmon-8f5446e5b1662b50f725322e3a305787381e18a9c59a1effbcbeb8e1f7bb26b2.scope: Deactivated successfully.
Jan 29 11:49:22 np0005601226 ceph-mgr[75527]: mgr[py] Loading python module 'rbd_support'
Jan 29 11:49:22 np0005601226 ceph-mgr[75527]: mgr[py] Loading python module 'rgw'
Jan 29 11:49:22 np0005601226 ceph-mgr[75527]: mgr[py] Loading python module 'rook'
Jan 29 11:49:23 np0005601226 ceph-mgr[75527]: mgr[py] Loading python module 'selftest'
Jan 29 11:49:23 np0005601226 ceph-mgr[75527]: mgr[py] Loading python module 'smb'
Jan 29 11:49:23 np0005601226 ceph-mgr[75527]: mgr[py] Loading python module 'snap_schedule'
Jan 29 11:49:23 np0005601226 ceph-mgr[75527]: mgr[py] Loading python module 'stats'
Jan 29 11:49:23 np0005601226 ceph-mgr[75527]: mgr[py] Loading python module 'status'
Jan 29 11:49:24 np0005601226 ceph-mgr[75527]: mgr[py] Loading python module 'telegraf'
Jan 29 11:49:24 np0005601226 ceph-mgr[75527]: mgr[py] Loading python module 'telemetry'
Jan 29 11:49:24 np0005601226 podman[75670]: 2026-01-29 16:49:24.248725574 +0000 UTC m=+0.101046069 container create 666f5df9bff0cc0dec0405acceb957f334ebc2c3a8dee5bbe113a854d963fbae (image=quay.io/ceph/ceph:v20, name=upbeat_wozniak, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 11:49:24 np0005601226 podman[75670]: 2026-01-29 16:49:24.171232315 +0000 UTC m=+0.023552840 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:49:24 np0005601226 ceph-mgr[75527]: mgr[py] Loading python module 'test_orchestrator'
Jan 29 11:49:24 np0005601226 systemd[1]: Started libpod-conmon-666f5df9bff0cc0dec0405acceb957f334ebc2c3a8dee5bbe113a854d963fbae.scope.
Jan 29 11:49:24 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:49:24 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/308588b81df5445ee44798d96b5b5bd09925a26f02127257bb593388a2d75a0e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:24 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/308588b81df5445ee44798d96b5b5bd09925a26f02127257bb593388a2d75a0e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:24 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/308588b81df5445ee44798d96b5b5bd09925a26f02127257bb593388a2d75a0e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:24 np0005601226 podman[75670]: 2026-01-29 16:49:24.354784387 +0000 UTC m=+0.207104942 container init 666f5df9bff0cc0dec0405acceb957f334ebc2c3a8dee5bbe113a854d963fbae (image=quay.io/ceph/ceph:v20, name=upbeat_wozniak, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 11:49:24 np0005601226 podman[75670]: 2026-01-29 16:49:24.362100273 +0000 UTC m=+0.214420758 container start 666f5df9bff0cc0dec0405acceb957f334ebc2c3a8dee5bbe113a854d963fbae (image=quay.io/ceph/ceph:v20, name=upbeat_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 11:49:24 np0005601226 podman[75670]: 2026-01-29 16:49:24.369421649 +0000 UTC m=+0.221742214 container attach 666f5df9bff0cc0dec0405acceb957f334ebc2c3a8dee5bbe113a854d963fbae (image=quay.io/ceph/ceph:v20, name=upbeat_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:49:24 np0005601226 ceph-mgr[75527]: mgr[py] Loading python module 'volumes'
Jan 29 11:49:24 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 29 11:49:24 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/215703287' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 29 11:49:24 np0005601226 upbeat_wozniak[75686]: 
Jan 29 11:49:24 np0005601226 upbeat_wozniak[75686]: {
Jan 29 11:49:24 np0005601226 upbeat_wozniak[75686]:    "fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 11:49:24 np0005601226 upbeat_wozniak[75686]:    "health": {
Jan 29 11:49:24 np0005601226 upbeat_wozniak[75686]:        "status": "HEALTH_OK",
Jan 29 11:49:24 np0005601226 upbeat_wozniak[75686]:        "checks": {},
Jan 29 11:49:24 np0005601226 upbeat_wozniak[75686]:        "mutes": []
Jan 29 11:49:24 np0005601226 upbeat_wozniak[75686]:    },
Jan 29 11:49:24 np0005601226 upbeat_wozniak[75686]:    "election_epoch": 5,
Jan 29 11:49:24 np0005601226 upbeat_wozniak[75686]:    "quorum": [
Jan 29 11:49:24 np0005601226 upbeat_wozniak[75686]:        0
Jan 29 11:49:24 np0005601226 upbeat_wozniak[75686]:    ],
Jan 29 11:49:24 np0005601226 upbeat_wozniak[75686]:    "quorum_names": [
Jan 29 11:49:24 np0005601226 upbeat_wozniak[75686]:        "compute-0"
Jan 29 11:49:24 np0005601226 upbeat_wozniak[75686]:    ],
Jan 29 11:49:24 np0005601226 upbeat_wozniak[75686]:    "quorum_age": 9,
Jan 29 11:49:24 np0005601226 upbeat_wozniak[75686]:    "monmap": {
Jan 29 11:49:24 np0005601226 upbeat_wozniak[75686]:        "epoch": 1,
Jan 29 11:49:24 np0005601226 upbeat_wozniak[75686]:        "min_mon_release_name": "tentacle",
Jan 29 11:49:24 np0005601226 upbeat_wozniak[75686]:        "num_mons": 1
Jan 29 11:49:24 np0005601226 upbeat_wozniak[75686]:    },
Jan 29 11:49:24 np0005601226 upbeat_wozniak[75686]:    "osdmap": {
Jan 29 11:49:24 np0005601226 upbeat_wozniak[75686]:        "epoch": 1,
Jan 29 11:49:24 np0005601226 upbeat_wozniak[75686]:        "num_osds": 0,
Jan 29 11:49:24 np0005601226 upbeat_wozniak[75686]:        "num_up_osds": 0,
Jan 29 11:49:24 np0005601226 upbeat_wozniak[75686]:        "osd_up_since": 0,
Jan 29 11:49:24 np0005601226 upbeat_wozniak[75686]:        "num_in_osds": 0,
Jan 29 11:49:24 np0005601226 upbeat_wozniak[75686]:        "osd_in_since": 0,
Jan 29 11:49:24 np0005601226 upbeat_wozniak[75686]:        "num_remapped_pgs": 0
Jan 29 11:49:24 np0005601226 upbeat_wozniak[75686]:    },
Jan 29 11:49:24 np0005601226 upbeat_wozniak[75686]:    "pgmap": {
Jan 29 11:49:24 np0005601226 upbeat_wozniak[75686]:        "pgs_by_state": [],
Jan 29 11:49:24 np0005601226 upbeat_wozniak[75686]:        "num_pgs": 0,
Jan 29 11:49:24 np0005601226 upbeat_wozniak[75686]:        "num_pools": 0,
Jan 29 11:49:24 np0005601226 upbeat_wozniak[75686]:        "num_objects": 0,
Jan 29 11:49:24 np0005601226 upbeat_wozniak[75686]:        "data_bytes": 0,
Jan 29 11:49:24 np0005601226 upbeat_wozniak[75686]:        "bytes_used": 0,
Jan 29 11:49:24 np0005601226 upbeat_wozniak[75686]:        "bytes_avail": 0,
Jan 29 11:49:24 np0005601226 upbeat_wozniak[75686]:        "bytes_total": 0
Jan 29 11:49:24 np0005601226 upbeat_wozniak[75686]:    },
Jan 29 11:49:24 np0005601226 upbeat_wozniak[75686]:    "fsmap": {
Jan 29 11:49:24 np0005601226 upbeat_wozniak[75686]:        "epoch": 1,
Jan 29 11:49:24 np0005601226 upbeat_wozniak[75686]:        "btime": "2026-01-29T16:49:11.643666+0000",
Jan 29 11:49:24 np0005601226 upbeat_wozniak[75686]:        "by_rank": [],
Jan 29 11:49:24 np0005601226 upbeat_wozniak[75686]:        "up:standby": 0
Jan 29 11:49:24 np0005601226 upbeat_wozniak[75686]:    },
Jan 29 11:49:24 np0005601226 upbeat_wozniak[75686]:    "mgrmap": {
Jan 29 11:49:24 np0005601226 upbeat_wozniak[75686]:        "available": false,
Jan 29 11:49:24 np0005601226 upbeat_wozniak[75686]:        "num_standbys": 0,
Jan 29 11:49:24 np0005601226 upbeat_wozniak[75686]:        "modules": [
Jan 29 11:49:24 np0005601226 upbeat_wozniak[75686]:            "iostat",
Jan 29 11:49:24 np0005601226 upbeat_wozniak[75686]:            "nfs"
Jan 29 11:49:24 np0005601226 upbeat_wozniak[75686]:        ],
Jan 29 11:49:24 np0005601226 upbeat_wozniak[75686]:        "services": {}
Jan 29 11:49:24 np0005601226 upbeat_wozniak[75686]:    },
Jan 29 11:49:24 np0005601226 upbeat_wozniak[75686]:    "servicemap": {
Jan 29 11:49:24 np0005601226 upbeat_wozniak[75686]:        "epoch": 1,
Jan 29 11:49:24 np0005601226 upbeat_wozniak[75686]:        "modified": "2026-01-29T16:49:11.647926+0000",
Jan 29 11:49:24 np0005601226 upbeat_wozniak[75686]:        "services": {}
Jan 29 11:49:24 np0005601226 upbeat_wozniak[75686]:    },
Jan 29 11:49:24 np0005601226 upbeat_wozniak[75686]:    "progress_events": {}
Jan 29 11:49:24 np0005601226 upbeat_wozniak[75686]: }
Jan 29 11:49:24 np0005601226 systemd[1]: libpod-666f5df9bff0cc0dec0405acceb957f334ebc2c3a8dee5bbe113a854d963fbae.scope: Deactivated successfully.
Jan 29 11:49:24 np0005601226 podman[75670]: 2026-01-29 16:49:24.59624571 +0000 UTC m=+0.448566185 container died 666f5df9bff0cc0dec0405acceb957f334ebc2c3a8dee5bbe113a854d963fbae (image=quay.io/ceph/ceph:v20, name=upbeat_wozniak, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 29 11:49:24 np0005601226 ceph-mgr[75527]: ms_deliver_dispatch: unhandled message 0x5613bf4d9860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Jan 29 11:49:24 np0005601226 ceph-mon[75233]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.zvopdr
Jan 29 11:49:24 np0005601226 ceph-mgr[75527]: mgr handle_mgr_map Activating!
Jan 29 11:49:24 np0005601226 systemd[1]: var-lib-containers-storage-overlay-308588b81df5445ee44798d96b5b5bd09925a26f02127257bb593388a2d75a0e-merged.mount: Deactivated successfully.
Jan 29 11:49:24 np0005601226 ceph-mgr[75527]: mgr handle_mgr_map I am now activating
Jan 29 11:49:24 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.zvopdr(active, starting, since 0.0839708s)
Jan 29 11:49:24 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Jan 29 11:49:24 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/991239197' entity='mgr.compute-0.zvopdr' cmd={"prefix": "mds metadata"} : dispatch
Jan 29 11:49:24 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).mds e1 all = 1
Jan 29 11:49:24 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 29 11:49:24 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/991239197' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd metadata"} : dispatch
Jan 29 11:49:24 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Jan 29 11:49:24 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/991239197' entity='mgr.compute-0.zvopdr' cmd={"prefix": "mon metadata"} : dispatch
Jan 29 11:49:24 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 29 11:49:24 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/991239197' entity='mgr.compute-0.zvopdr' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Jan 29 11:49:24 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.zvopdr", "id": "compute-0.zvopdr"} v 0)
Jan 29 11:49:24 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/991239197' entity='mgr.compute-0.zvopdr' cmd={"prefix": "mgr metadata", "who": "compute-0.zvopdr", "id": "compute-0.zvopdr"} : dispatch
Jan 29 11:49:24 np0005601226 ceph-mgr[75527]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 29 11:49:24 np0005601226 ceph-mgr[75527]: mgr load Constructed class from module: balancer
Jan 29 11:49:24 np0005601226 ceph-mgr[75527]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 29 11:49:24 np0005601226 ceph-mgr[75527]: mgr load Constructed class from module: crash
Jan 29 11:49:24 np0005601226 ceph-mgr[75527]: [balancer INFO root] Starting
Jan 29 11:49:24 np0005601226 ceph-mgr[75527]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 29 11:49:24 np0005601226 ceph-mgr[75527]: mgr load Constructed class from module: devicehealth
Jan 29 11:49:24 np0005601226 ceph-mon[75233]: log_channel(cluster) log [INF] : Manager daemon compute-0.zvopdr is now available
Jan 29 11:49:24 np0005601226 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2026-01-29_16:49:24
Jan 29 11:49:24 np0005601226 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 11:49:24 np0005601226 ceph-mgr[75527]: [balancer INFO root] do_upmap
Jan 29 11:49:24 np0005601226 ceph-mgr[75527]: [balancer INFO root] No pools available
Jan 29 11:49:24 np0005601226 ceph-mgr[75527]: [devicehealth INFO root] Starting
Jan 29 11:49:24 np0005601226 ceph-mgr[75527]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 29 11:49:24 np0005601226 ceph-mgr[75527]: mgr load Constructed class from module: iostat
Jan 29 11:49:24 np0005601226 ceph-mgr[75527]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 29 11:49:24 np0005601226 ceph-mgr[75527]: mgr load Constructed class from module: nfs
Jan 29 11:49:24 np0005601226 ceph-mgr[75527]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 29 11:49:24 np0005601226 ceph-mgr[75527]: mgr load Constructed class from module: orchestrator
Jan 29 11:49:24 np0005601226 ceph-mgr[75527]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 29 11:49:24 np0005601226 ceph-mgr[75527]: mgr load Constructed class from module: pg_autoscaler
Jan 29 11:49:24 np0005601226 ceph-mgr[75527]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 29 11:49:24 np0005601226 ceph-mgr[75527]: mgr load Constructed class from module: progress
Jan 29 11:49:24 np0005601226 ceph-mgr[75527]: [progress INFO root] Loading...
Jan 29 11:49:24 np0005601226 ceph-mgr[75527]: [progress INFO root] No stored events to load
Jan 29 11:49:24 np0005601226 ceph-mgr[75527]: [progress INFO root] Loaded [] historic events
Jan 29 11:49:24 np0005601226 ceph-mgr[75527]: [progress INFO root] Loaded OSDMap, ready.
Jan 29 11:49:24 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 11:49:24 np0005601226 ceph-mgr[75527]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 29 11:49:24 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] recovery thread starting
Jan 29 11:49:24 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] starting setup
Jan 29 11:49:24 np0005601226 ceph-mgr[75527]: mgr load Constructed class from module: rbd_support
Jan 29 11:49:24 np0005601226 ceph-mgr[75527]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 29 11:49:24 np0005601226 ceph-mgr[75527]: mgr load Constructed class from module: status
Jan 29 11:49:24 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zvopdr/mirror_snapshot_schedule"} v 0)
Jan 29 11:49:24 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/991239197' entity='mgr.compute-0.zvopdr' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zvopdr/mirror_snapshot_schedule"} : dispatch
Jan 29 11:49:24 np0005601226 ceph-mgr[75527]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 29 11:49:24 np0005601226 ceph-mgr[75527]: mgr load Constructed class from module: telemetry
Jan 29 11:49:24 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 11:49:24 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Jan 29 11:49:24 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] PerfHandler: starting
Jan 29 11:49:24 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0)
Jan 29 11:49:24 np0005601226 ceph-mgr[75527]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 29 11:49:24 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] TaskHandler: starting
Jan 29 11:49:24 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zvopdr/trash_purge_schedule"} v 0)
Jan 29 11:49:24 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/991239197' entity='mgr.compute-0.zvopdr' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zvopdr/trash_purge_schedule"} : dispatch
Jan 29 11:49:24 np0005601226 podman[75670]: 2026-01-29 16:49:24.920754964 +0000 UTC m=+0.773075459 container remove 666f5df9bff0cc0dec0405acceb957f334ebc2c3a8dee5bbe113a854d963fbae (image=quay.io/ceph/ceph:v20, name=upbeat_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 11:49:24 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/991239197' entity='mgr.compute-0.zvopdr' 
Jan 29 11:49:24 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 11:49:24 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Jan 29 11:49:24 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] setup complete
Jan 29 11:49:24 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0)
Jan 29 11:49:24 np0005601226 ceph-mgr[75527]: mgr load Constructed class from module: volumes
Jan 29 11:49:24 np0005601226 systemd[1]: libpod-conmon-666f5df9bff0cc0dec0405acceb957f334ebc2c3a8dee5bbe113a854d963fbae.scope: Deactivated successfully.
Jan 29 11:49:24 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/991239197' entity='mgr.compute-0.zvopdr' 
Jan 29 11:49:24 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0)
Jan 29 11:49:24 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/991239197' entity='mgr.compute-0.zvopdr' 
Jan 29 11:49:25 np0005601226 ceph-mon[75233]: Activating manager daemon compute-0.zvopdr
Jan 29 11:49:25 np0005601226 ceph-mon[75233]: Manager daemon compute-0.zvopdr is now available
Jan 29 11:49:25 np0005601226 ceph-mon[75233]: from='mgr.14102 192.168.122.100:0/991239197' entity='mgr.compute-0.zvopdr' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zvopdr/mirror_snapshot_schedule"} : dispatch
Jan 29 11:49:25 np0005601226 ceph-mon[75233]: from='mgr.14102 192.168.122.100:0/991239197' entity='mgr.compute-0.zvopdr' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zvopdr/trash_purge_schedule"} : dispatch
Jan 29 11:49:25 np0005601226 ceph-mon[75233]: from='mgr.14102 192.168.122.100:0/991239197' entity='mgr.compute-0.zvopdr' 
Jan 29 11:49:25 np0005601226 ceph-mon[75233]: from='mgr.14102 192.168.122.100:0/991239197' entity='mgr.compute-0.zvopdr' 
Jan 29 11:49:25 np0005601226 ceph-mon[75233]: from='mgr.14102 192.168.122.100:0/991239197' entity='mgr.compute-0.zvopdr' 
Jan 29 11:49:26 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.zvopdr(active, since 1.5791s)
Jan 29 11:49:26 np0005601226 ceph-mgr[75527]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 29 11:49:26 np0005601226 ceph-mgr[75527]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 29 11:49:26 np0005601226 podman[75802]: 2026-01-29 16:49:26.995883696 +0000 UTC m=+0.056218231 container create 5dedcb9d3b102b4b94bfbd35b280b5db0b010c789df98e1f16253a96da12f815 (image=quay.io/ceph/ceph:v20, name=nostalgic_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:49:27 np0005601226 systemd[1]: Started libpod-conmon-5dedcb9d3b102b4b94bfbd35b280b5db0b010c789df98e1f16253a96da12f815.scope.
Jan 29 11:49:27 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:49:27 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/333088ed710c0c798745c3f47e578a22aa2f8411c11f7dd42e3a87eeeb817622/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:27 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/333088ed710c0c798745c3f47e578a22aa2f8411c11f7dd42e3a87eeeb817622/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:27 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/333088ed710c0c798745c3f47e578a22aa2f8411c11f7dd42e3a87eeeb817622/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:27 np0005601226 podman[75802]: 2026-01-29 16:49:26.971851462 +0000 UTC m=+0.032186017 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:49:27 np0005601226 podman[75802]: 2026-01-29 16:49:27.206038841 +0000 UTC m=+0.266373426 container init 5dedcb9d3b102b4b94bfbd35b280b5db0b010c789df98e1f16253a96da12f815 (image=quay.io/ceph/ceph:v20, name=nostalgic_hopper, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:49:27 np0005601226 podman[75802]: 2026-01-29 16:49:27.212392828 +0000 UTC m=+0.272727393 container start 5dedcb9d3b102b4b94bfbd35b280b5db0b010c789df98e1f16253a96da12f815 (image=quay.io/ceph/ceph:v20, name=nostalgic_hopper, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 29 11:49:27 np0005601226 podman[75802]: 2026-01-29 16:49:27.661732247 +0000 UTC m=+0.722066812 container attach 5dedcb9d3b102b4b94bfbd35b280b5db0b010c789df98e1f16253a96da12f815 (image=quay.io/ceph/ceph:v20, name=nostalgic_hopper, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:49:27 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 29 11:49:27 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2167795108' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 29 11:49:27 np0005601226 nostalgic_hopper[75818]: 
Jan 29 11:49:27 np0005601226 nostalgic_hopper[75818]: {
Jan 29 11:49:27 np0005601226 nostalgic_hopper[75818]:    "fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 11:49:27 np0005601226 nostalgic_hopper[75818]:    "health": {
Jan 29 11:49:27 np0005601226 nostalgic_hopper[75818]:        "status": "HEALTH_OK",
Jan 29 11:49:27 np0005601226 nostalgic_hopper[75818]:        "checks": {},
Jan 29 11:49:27 np0005601226 nostalgic_hopper[75818]:        "mutes": []
Jan 29 11:49:27 np0005601226 nostalgic_hopper[75818]:    },
Jan 29 11:49:27 np0005601226 nostalgic_hopper[75818]:    "election_epoch": 5,
Jan 29 11:49:27 np0005601226 nostalgic_hopper[75818]:    "quorum": [
Jan 29 11:49:27 np0005601226 nostalgic_hopper[75818]:        0
Jan 29 11:49:27 np0005601226 nostalgic_hopper[75818]:    ],
Jan 29 11:49:27 np0005601226 nostalgic_hopper[75818]:    "quorum_names": [
Jan 29 11:49:27 np0005601226 nostalgic_hopper[75818]:        "compute-0"
Jan 29 11:49:27 np0005601226 nostalgic_hopper[75818]:    ],
Jan 29 11:49:27 np0005601226 nostalgic_hopper[75818]:    "quorum_age": 12,
Jan 29 11:49:27 np0005601226 nostalgic_hopper[75818]:    "monmap": {
Jan 29 11:49:27 np0005601226 nostalgic_hopper[75818]:        "epoch": 1,
Jan 29 11:49:27 np0005601226 nostalgic_hopper[75818]:        "min_mon_release_name": "tentacle",
Jan 29 11:49:27 np0005601226 nostalgic_hopper[75818]:        "num_mons": 1
Jan 29 11:49:27 np0005601226 nostalgic_hopper[75818]:    },
Jan 29 11:49:27 np0005601226 nostalgic_hopper[75818]:    "osdmap": {
Jan 29 11:49:27 np0005601226 nostalgic_hopper[75818]:        "epoch": 1,
Jan 29 11:49:27 np0005601226 nostalgic_hopper[75818]:        "num_osds": 0,
Jan 29 11:49:27 np0005601226 nostalgic_hopper[75818]:        "num_up_osds": 0,
Jan 29 11:49:27 np0005601226 nostalgic_hopper[75818]:        "osd_up_since": 0,
Jan 29 11:49:27 np0005601226 nostalgic_hopper[75818]:        "num_in_osds": 0,
Jan 29 11:49:27 np0005601226 nostalgic_hopper[75818]:        "osd_in_since": 0,
Jan 29 11:49:27 np0005601226 nostalgic_hopper[75818]:        "num_remapped_pgs": 0
Jan 29 11:49:27 np0005601226 nostalgic_hopper[75818]:    },
Jan 29 11:49:27 np0005601226 nostalgic_hopper[75818]:    "pgmap": {
Jan 29 11:49:27 np0005601226 nostalgic_hopper[75818]:        "pgs_by_state": [],
Jan 29 11:49:27 np0005601226 nostalgic_hopper[75818]:        "num_pgs": 0,
Jan 29 11:49:27 np0005601226 nostalgic_hopper[75818]:        "num_pools": 0,
Jan 29 11:49:27 np0005601226 nostalgic_hopper[75818]:        "num_objects": 0,
Jan 29 11:49:27 np0005601226 nostalgic_hopper[75818]:        "data_bytes": 0,
Jan 29 11:49:27 np0005601226 nostalgic_hopper[75818]:        "bytes_used": 0,
Jan 29 11:49:27 np0005601226 nostalgic_hopper[75818]:        "bytes_avail": 0,
Jan 29 11:49:27 np0005601226 nostalgic_hopper[75818]:        "bytes_total": 0
Jan 29 11:49:27 np0005601226 nostalgic_hopper[75818]:    },
Jan 29 11:49:27 np0005601226 nostalgic_hopper[75818]:    "fsmap": {
Jan 29 11:49:27 np0005601226 nostalgic_hopper[75818]:        "epoch": 1,
Jan 29 11:49:27 np0005601226 nostalgic_hopper[75818]:        "btime": "2026-01-29T16:49:11.643666+0000",
Jan 29 11:49:27 np0005601226 nostalgic_hopper[75818]:        "by_rank": [],
Jan 29 11:49:27 np0005601226 nostalgic_hopper[75818]:        "up:standby": 0
Jan 29 11:49:27 np0005601226 nostalgic_hopper[75818]:    },
Jan 29 11:49:27 np0005601226 nostalgic_hopper[75818]:    "mgrmap": {
Jan 29 11:49:27 np0005601226 nostalgic_hopper[75818]:        "available": true,
Jan 29 11:49:27 np0005601226 nostalgic_hopper[75818]:        "num_standbys": 0,
Jan 29 11:49:27 np0005601226 nostalgic_hopper[75818]:        "modules": [
Jan 29 11:49:27 np0005601226 nostalgic_hopper[75818]:            "iostat",
Jan 29 11:49:27 np0005601226 nostalgic_hopper[75818]:            "nfs"
Jan 29 11:49:27 np0005601226 nostalgic_hopper[75818]:        ],
Jan 29 11:49:27 np0005601226 nostalgic_hopper[75818]:        "services": {}
Jan 29 11:49:27 np0005601226 nostalgic_hopper[75818]:    },
Jan 29 11:49:27 np0005601226 nostalgic_hopper[75818]:    "servicemap": {
Jan 29 11:49:27 np0005601226 nostalgic_hopper[75818]:        "epoch": 1,
Jan 29 11:49:27 np0005601226 nostalgic_hopper[75818]:        "modified": "2026-01-29T16:49:11.647926+0000",
Jan 29 11:49:27 np0005601226 nostalgic_hopper[75818]:        "services": {}
Jan 29 11:49:27 np0005601226 nostalgic_hopper[75818]:    },
Jan 29 11:49:27 np0005601226 nostalgic_hopper[75818]:    "progress_events": {}
Jan 29 11:49:27 np0005601226 nostalgic_hopper[75818]: }
Jan 29 11:49:27 np0005601226 systemd[1]: libpod-5dedcb9d3b102b4b94bfbd35b280b5db0b010c789df98e1f16253a96da12f815.scope: Deactivated successfully.
Jan 29 11:49:27 np0005601226 podman[75802]: 2026-01-29 16:49:27.755433877 +0000 UTC m=+0.815768412 container died 5dedcb9d3b102b4b94bfbd35b280b5db0b010c789df98e1f16253a96da12f815 (image=quay.io/ceph/ceph:v20, name=nostalgic_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:49:27 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.zvopdr(active, since 3s)
Jan 29 11:49:28 np0005601226 systemd[1]: var-lib-containers-storage-overlay-333088ed710c0c798745c3f47e578a22aa2f8411c11f7dd42e3a87eeeb817622-merged.mount: Deactivated successfully.
Jan 29 11:49:28 np0005601226 ceph-mgr[75527]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 29 11:49:28 np0005601226 ceph-mgr[75527]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 29 11:49:29 np0005601226 podman[75802]: 2026-01-29 16:49:29.162131878 +0000 UTC m=+2.222466463 container remove 5dedcb9d3b102b4b94bfbd35b280b5db0b010c789df98e1f16253a96da12f815 (image=quay.io/ceph/ceph:v20, name=nostalgic_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 29 11:49:29 np0005601226 systemd[1]: libpod-conmon-5dedcb9d3b102b4b94bfbd35b280b5db0b010c789df98e1f16253a96da12f815.scope: Deactivated successfully.
Jan 29 11:49:29 np0005601226 podman[75856]: 2026-01-29 16:49:29.207146272 +0000 UTC m=+0.023491228 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:49:29 np0005601226 podman[75856]: 2026-01-29 16:49:29.401377263 +0000 UTC m=+0.217722189 container create bece035da6d069df5b6127ca7169527652ec56710c4e257f9ec2693282972f57 (image=quay.io/ceph/ceph:v20, name=nice_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 11:49:29 np0005601226 systemd[1]: Started libpod-conmon-bece035da6d069df5b6127ca7169527652ec56710c4e257f9ec2693282972f57.scope.
Jan 29 11:49:29 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:49:29 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/413e4864e0fb35c4ea227f3f5b416c8f540b7f0ff6758692f27377009b86e875/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:29 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/413e4864e0fb35c4ea227f3f5b416c8f540b7f0ff6758692f27377009b86e875/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:29 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/413e4864e0fb35c4ea227f3f5b416c8f540b7f0ff6758692f27377009b86e875/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:29 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/413e4864e0fb35c4ea227f3f5b416c8f540b7f0ff6758692f27377009b86e875/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:29 np0005601226 podman[75856]: 2026-01-29 16:49:29.944588937 +0000 UTC m=+0.760933913 container init bece035da6d069df5b6127ca7169527652ec56710c4e257f9ec2693282972f57 (image=quay.io/ceph/ceph:v20, name=nice_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:49:29 np0005601226 podman[75856]: 2026-01-29 16:49:29.952287705 +0000 UTC m=+0.768632661 container start bece035da6d069df5b6127ca7169527652ec56710c4e257f9ec2693282972f57 (image=quay.io/ceph/ceph:v20, name=nice_kalam, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 29 11:49:29 np0005601226 podman[75856]: 2026-01-29 16:49:29.985664259 +0000 UTC m=+0.802009215 container attach bece035da6d069df5b6127ca7169527652ec56710c4e257f9ec2693282972f57 (image=quay.io/ceph/ceph:v20, name=nice_kalam, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 11:49:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Jan 29 11:49:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3949070037' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 29 11:49:30 np0005601226 nice_kalam[75872]: 
Jan 29 11:49:30 np0005601226 nice_kalam[75872]: [global]
Jan 29 11:49:30 np0005601226 nice_kalam[75872]: #011fsid = cc5c72e3-31e0-58b9-8731-456117d38f4a
Jan 29 11:49:30 np0005601226 nice_kalam[75872]: #011mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Jan 29 11:49:30 np0005601226 nice_kalam[75872]: #011osd_crush_chooseleaf_type = 0
Jan 29 11:49:30 np0005601226 systemd[1]: libpod-bece035da6d069df5b6127ca7169527652ec56710c4e257f9ec2693282972f57.scope: Deactivated successfully.
Jan 29 11:49:30 np0005601226 podman[75856]: 2026-01-29 16:49:30.362625307 +0000 UTC m=+1.178970253 container died bece035da6d069df5b6127ca7169527652ec56710c4e257f9ec2693282972f57 (image=quay.io/ceph/ceph:v20, name=nice_kalam, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:49:30 np0005601226 systemd[1]: var-lib-containers-storage-overlay-413e4864e0fb35c4ea227f3f5b416c8f540b7f0ff6758692f27377009b86e875-merged.mount: Deactivated successfully.
Jan 29 11:49:30 np0005601226 podman[75856]: 2026-01-29 16:49:30.406109182 +0000 UTC m=+1.222454098 container remove bece035da6d069df5b6127ca7169527652ec56710c4e257f9ec2693282972f57 (image=quay.io/ceph/ceph:v20, name=nice_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:49:30 np0005601226 systemd[1]: libpod-conmon-bece035da6d069df5b6127ca7169527652ec56710c4e257f9ec2693282972f57.scope: Deactivated successfully.
Jan 29 11:49:30 np0005601226 podman[75912]: 2026-01-29 16:49:30.468342408 +0000 UTC m=+0.042220767 container create 8b90f77dd846dfa1168dcad12a8ff1a1c6b0140e506d1beeb5ddcf9db9cfcbad (image=quay.io/ceph/ceph:v20, name=naughty_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 29 11:49:30 np0005601226 systemd[1]: Started libpod-conmon-8b90f77dd846dfa1168dcad12a8ff1a1c6b0140e506d1beeb5ddcf9db9cfcbad.scope.
Jan 29 11:49:30 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:49:30 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cae01f72659e962e64633ad15bb3056223ad09c9a37ee3f1d5b4cc063c394fc/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:30 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cae01f72659e962e64633ad15bb3056223ad09c9a37ee3f1d5b4cc063c394fc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:30 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cae01f72659e962e64633ad15bb3056223ad09c9a37ee3f1d5b4cc063c394fc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:30 np0005601226 podman[75912]: 2026-01-29 16:49:30.544570618 +0000 UTC m=+0.118449007 container init 8b90f77dd846dfa1168dcad12a8ff1a1c6b0140e506d1beeb5ddcf9db9cfcbad (image=quay.io/ceph/ceph:v20, name=naughty_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:49:30 np0005601226 podman[75912]: 2026-01-29 16:49:30.45060714 +0000 UTC m=+0.024485559 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:49:30 np0005601226 podman[75912]: 2026-01-29 16:49:30.548953254 +0000 UTC m=+0.122831593 container start 8b90f77dd846dfa1168dcad12a8ff1a1c6b0140e506d1beeb5ddcf9db9cfcbad (image=quay.io/ceph/ceph:v20, name=naughty_yonath, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 29 11:49:30 np0005601226 podman[75912]: 2026-01-29 16:49:30.553458913 +0000 UTC m=+0.127337272 container attach 8b90f77dd846dfa1168dcad12a8ff1a1c6b0140e506d1beeb5ddcf9db9cfcbad (image=quay.io/ceph/ceph:v20, name=naughty_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 11:49:30 np0005601226 ceph-mon[75233]: from='client.? 192.168.122.100:0/3949070037' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 29 11:49:30 np0005601226 ceph-mgr[75527]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 29 11:49:30 np0005601226 ceph-mgr[75527]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 29 11:49:31 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0)
Jan 29 11:49:31 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4241872414' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "cephadm"} : dispatch
Jan 29 11:49:31 np0005601226 ceph-mon[75233]: from='client.? 192.168.122.100:0/4241872414' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "cephadm"} : dispatch
Jan 29 11:49:32 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4241872414' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Jan 29 11:49:32 np0005601226 ceph-mgr[75527]: mgr handle_mgr_map respawning because set of enabled modules changed!
Jan 29 11:49:32 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.zvopdr(active, since 7s)
Jan 29 11:49:32 np0005601226 systemd[1]: libpod-8b90f77dd846dfa1168dcad12a8ff1a1c6b0140e506d1beeb5ddcf9db9cfcbad.scope: Deactivated successfully.
Jan 29 11:49:32 np0005601226 podman[75912]: 2026-01-29 16:49:32.082697937 +0000 UTC m=+1.656576316 container died 8b90f77dd846dfa1168dcad12a8ff1a1c6b0140e506d1beeb5ddcf9db9cfcbad (image=quay.io/ceph/ceph:v20, name=naughty_yonath, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 11:49:32 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-mgr-compute-0-zvopdr[75523]: ignoring --setuser ceph since I am not root
Jan 29 11:49:32 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-mgr-compute-0-zvopdr[75523]: ignoring --setgroup ceph since I am not root
Jan 29 11:49:32 np0005601226 ceph-mgr[75527]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Jan 29 11:49:32 np0005601226 ceph-mgr[75527]: pidfile_write: ignore empty --pid-file
Jan 29 11:49:32 np0005601226 ceph-mgr[75527]: mgr[py] Loading python module 'alerts'
Jan 29 11:49:32 np0005601226 ceph-mgr[75527]: mgr[py] Loading python module 'balancer'
Jan 29 11:49:32 np0005601226 ceph-mgr[75527]: mgr[py] Loading python module 'cephadm'
Jan 29 11:49:32 np0005601226 systemd[1]: var-lib-containers-storage-overlay-3cae01f72659e962e64633ad15bb3056223ad09c9a37ee3f1d5b4cc063c394fc-merged.mount: Deactivated successfully.
Jan 29 11:49:32 np0005601226 podman[75912]: 2026-01-29 16:49:32.445913181 +0000 UTC m=+2.019791560 container remove 8b90f77dd846dfa1168dcad12a8ff1a1c6b0140e506d1beeb5ddcf9db9cfcbad (image=quay.io/ceph/ceph:v20, name=naughty_yonath, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True)
Jan 29 11:49:32 np0005601226 systemd[1]: libpod-conmon-8b90f77dd846dfa1168dcad12a8ff1a1c6b0140e506d1beeb5ddcf9db9cfcbad.scope: Deactivated successfully.
Jan 29 11:49:32 np0005601226 podman[75988]: 2026-01-29 16:49:32.495410592 +0000 UTC m=+0.031105723 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:49:32 np0005601226 podman[75988]: 2026-01-29 16:49:32.75732894 +0000 UTC m=+0.293024091 container create 04e00b23ab0cc9855e4dd5e3fc0419c3aafa9f835fd72f006f123bee1200144d (image=quay.io/ceph/ceph:v20, name=musing_haibt, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 11:49:32 np0005601226 ceph-mgr[75527]: mgr[py] Loading python module 'crash'
Jan 29 11:49:33 np0005601226 systemd[1]: Started libpod-conmon-04e00b23ab0cc9855e4dd5e3fc0419c3aafa9f835fd72f006f123bee1200144d.scope.
Jan 29 11:49:33 np0005601226 ceph-mgr[75527]: mgr[py] Loading python module 'dashboard'
Jan 29 11:49:33 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:49:33 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b5dc3d4744d9dcbe5410bd9debfaa1be0fa8b5b5f45afbeb8d469f385236ef2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:33 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b5dc3d4744d9dcbe5410bd9debfaa1be0fa8b5b5f45afbeb8d469f385236ef2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:33 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b5dc3d4744d9dcbe5410bd9debfaa1be0fa8b5b5f45afbeb8d469f385236ef2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:33 np0005601226 ceph-mon[75233]: from='client.? 192.168.122.100:0/4241872414' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Jan 29 11:49:33 np0005601226 podman[75988]: 2026-01-29 16:49:33.09815294 +0000 UTC m=+0.633848091 container init 04e00b23ab0cc9855e4dd5e3fc0419c3aafa9f835fd72f006f123bee1200144d (image=quay.io/ceph/ceph:v20, name=musing_haibt, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 11:49:33 np0005601226 podman[75988]: 2026-01-29 16:49:33.103160414 +0000 UTC m=+0.638855525 container start 04e00b23ab0cc9855e4dd5e3fc0419c3aafa9f835fd72f006f123bee1200144d (image=quay.io/ceph/ceph:v20, name=musing_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 29 11:49:33 np0005601226 podman[75988]: 2026-01-29 16:49:33.11303793 +0000 UTC m=+0.648733061 container attach 04e00b23ab0cc9855e4dd5e3fc0419c3aafa9f835fd72f006f123bee1200144d (image=quay.io/ceph/ceph:v20, name=musing_haibt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 11:49:33 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Jan 29 11:49:33 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3959096791' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Jan 29 11:49:33 np0005601226 musing_haibt[76015]: {
Jan 29 11:49:33 np0005601226 musing_haibt[76015]:    "epoch": 5,
Jan 29 11:49:33 np0005601226 musing_haibt[76015]:    "available": true,
Jan 29 11:49:33 np0005601226 musing_haibt[76015]:    "active_name": "compute-0.zvopdr",
Jan 29 11:49:33 np0005601226 musing_haibt[76015]:    "num_standby": 0
Jan 29 11:49:33 np0005601226 musing_haibt[76015]: }
Jan 29 11:49:33 np0005601226 systemd[1]: libpod-04e00b23ab0cc9855e4dd5e3fc0419c3aafa9f835fd72f006f123bee1200144d.scope: Deactivated successfully.
Jan 29 11:49:33 np0005601226 podman[75988]: 2026-01-29 16:49:33.554271657 +0000 UTC m=+1.089966758 container died 04e00b23ab0cc9855e4dd5e3fc0419c3aafa9f835fd72f006f123bee1200144d (image=quay.io/ceph/ceph:v20, name=musing_haibt, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:49:33 np0005601226 ceph-mgr[75527]: mgr[py] Loading python module 'devicehealth'
Jan 29 11:49:33 np0005601226 systemd[1]: var-lib-containers-storage-overlay-9b5dc3d4744d9dcbe5410bd9debfaa1be0fa8b5b5f45afbeb8d469f385236ef2-merged.mount: Deactivated successfully.
Jan 29 11:49:33 np0005601226 podman[75988]: 2026-01-29 16:49:33.778404354 +0000 UTC m=+1.314099455 container remove 04e00b23ab0cc9855e4dd5e3fc0419c3aafa9f835fd72f006f123bee1200144d (image=quay.io/ceph/ceph:v20, name=musing_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 29 11:49:33 np0005601226 ceph-mgr[75527]: mgr[py] Loading python module 'diskprediction_local'
Jan 29 11:49:33 np0005601226 systemd[1]: libpod-conmon-04e00b23ab0cc9855e4dd5e3fc0419c3aafa9f835fd72f006f123bee1200144d.scope: Deactivated successfully.
Jan 29 11:49:33 np0005601226 podman[76053]: 2026-01-29 16:49:33.849824555 +0000 UTC m=+0.055709904 container create 9f0741367ed434d009cf0a3de055e861952786cb0eeafdc53c8fc67067ead483 (image=quay.io/ceph/ceph:v20, name=focused_lewin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 11:49:33 np0005601226 systemd[1]: Started libpod-conmon-9f0741367ed434d009cf0a3de055e861952786cb0eeafdc53c8fc67067ead483.scope.
Jan 29 11:49:33 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:49:33 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f695a554d5a09ce6ccdceb92a9436a602675d9139be6bce3986b0433529a260/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:33 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f695a554d5a09ce6ccdceb92a9436a602675d9139be6bce3986b0433529a260/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:33 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f695a554d5a09ce6ccdceb92a9436a602675d9139be6bce3986b0433529a260/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:33 np0005601226 podman[76053]: 2026-01-29 16:49:33.8205406 +0000 UTC m=+0.026425959 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:49:33 np0005601226 podman[76053]: 2026-01-29 16:49:33.93074023 +0000 UTC m=+0.136625569 container init 9f0741367ed434d009cf0a3de055e861952786cb0eeafdc53c8fc67067ead483 (image=quay.io/ceph/ceph:v20, name=focused_lewin, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 11:49:33 np0005601226 podman[76053]: 2026-01-29 16:49:33.938402958 +0000 UTC m=+0.144288297 container start 9f0741367ed434d009cf0a3de055e861952786cb0eeafdc53c8fc67067ead483 (image=quay.io/ceph/ceph:v20, name=focused_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True)
Jan 29 11:49:33 np0005601226 podman[76053]: 2026-01-29 16:49:33.943511245 +0000 UTC m=+0.149396664 container attach 9f0741367ed434d009cf0a3de055e861952786cb0eeafdc53c8fc67067ead483 (image=quay.io/ceph/ceph:v20, name=focused_lewin, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 29 11:49:33 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-mgr-compute-0-zvopdr[75523]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 29 11:49:33 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-mgr-compute-0-zvopdr[75523]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 29 11:49:33 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-mgr-compute-0-zvopdr[75523]:  from numpy import show_config as show_numpy_config
Jan 29 11:49:33 np0005601226 ceph-mgr[75527]: mgr[py] Loading python module 'influx'
Jan 29 11:49:34 np0005601226 ceph-mgr[75527]: mgr[py] Loading python module 'insights'
Jan 29 11:49:34 np0005601226 ceph-mgr[75527]: mgr[py] Loading python module 'iostat'
Jan 29 11:49:34 np0005601226 ceph-mgr[75527]: mgr[py] Loading python module 'k8sevents'
Jan 29 11:49:34 np0005601226 ceph-mgr[75527]: mgr[py] Loading python module 'localpool'
Jan 29 11:49:34 np0005601226 ceph-mgr[75527]: mgr[py] Loading python module 'mds_autoscaler'
Jan 29 11:49:34 np0005601226 ceph-mgr[75527]: mgr[py] Loading python module 'mirroring'
Jan 29 11:49:34 np0005601226 ceph-mgr[75527]: mgr[py] Loading python module 'nfs'
Jan 29 11:49:35 np0005601226 ceph-mgr[75527]: mgr[py] Loading python module 'orchestrator'
Jan 29 11:49:35 np0005601226 ceph-mgr[75527]: mgr[py] Loading python module 'osd_perf_query'
Jan 29 11:49:35 np0005601226 ceph-mgr[75527]: mgr[py] Loading python module 'osd_support'
Jan 29 11:49:35 np0005601226 ceph-mgr[75527]: mgr[py] Loading python module 'pg_autoscaler'
Jan 29 11:49:35 np0005601226 ceph-mgr[75527]: mgr[py] Loading python module 'progress'
Jan 29 11:49:35 np0005601226 ceph-mgr[75527]: mgr[py] Loading python module 'prometheus'
Jan 29 11:49:36 np0005601226 ceph-mgr[75527]: mgr[py] Loading python module 'rbd_support'
Jan 29 11:49:36 np0005601226 ceph-mgr[75527]: mgr[py] Loading python module 'rgw'
Jan 29 11:49:36 np0005601226 ceph-mgr[75527]: mgr[py] Loading python module 'rook'
Jan 29 11:49:36 np0005601226 ceph-mgr[75527]: mgr[py] Loading python module 'selftest'
Jan 29 11:49:36 np0005601226 ceph-mgr[75527]: mgr[py] Loading python module 'smb'
Jan 29 11:49:37 np0005601226 ceph-mgr[75527]: mgr[py] Loading python module 'snap_schedule'
Jan 29 11:49:37 np0005601226 ceph-mgr[75527]: mgr[py] Loading python module 'stats'
Jan 29 11:49:37 np0005601226 ceph-mgr[75527]: mgr[py] Loading python module 'status'
Jan 29 11:49:37 np0005601226 ceph-mgr[75527]: mgr[py] Loading python module 'telegraf'
Jan 29 11:49:37 np0005601226 ceph-mgr[75527]: mgr[py] Loading python module 'telemetry'
Jan 29 11:49:37 np0005601226 ceph-mgr[75527]: mgr[py] Loading python module 'test_orchestrator'
Jan 29 11:49:37 np0005601226 ceph-mgr[75527]: mgr[py] Loading python module 'volumes'
Jan 29 11:49:38 np0005601226 ceph-mon[75233]: log_channel(cluster) log [INF] : Active manager daemon compute-0.zvopdr restarted
Jan 29 11:49:38 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Jan 29 11:49:38 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 29 11:49:38 np0005601226 ceph-mon[75233]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.zvopdr
Jan 29 11:49:38 np0005601226 ceph-mgr[75527]: ms_deliver_dispatch: unhandled message 0x55b0d4d80000 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Jan 29 11:49:38 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.2 inc ratio 0.4 full ratio 0.4
Jan 29 11:49:38 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Jan 29 11:49:38 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Jan 29 11:49:38 np0005601226 ceph-mgr[75527]: mgr handle_mgr_map Activating!
Jan 29 11:49:38 np0005601226 ceph-mgr[75527]: mgr handle_mgr_map I am now activating
Jan 29 11:49:38 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Jan 29 11:49:38 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.zvopdr(active, starting, since 0.143826s)
Jan 29 11:49:38 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 29 11:49:38 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Jan 29 11:49:38 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.zvopdr", "id": "compute-0.zvopdr"} v 0)
Jan 29 11:49:38 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "mgr metadata", "who": "compute-0.zvopdr", "id": "compute-0.zvopdr"} : dispatch
Jan 29 11:49:38 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Jan 29 11:49:38 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "mds metadata"} : dispatch
Jan 29 11:49:38 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).mds e1 all = 1
Jan 29 11:49:38 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 29 11:49:38 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd metadata"} : dispatch
Jan 29 11:49:38 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Jan 29 11:49:38 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "mon metadata"} : dispatch
Jan 29 11:49:38 np0005601226 ceph-mgr[75527]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 29 11:49:38 np0005601226 ceph-mgr[75527]: mgr load Constructed class from module: balancer
Jan 29 11:49:38 np0005601226 ceph-mgr[75527]: [balancer INFO root] Starting
Jan 29 11:49:38 np0005601226 ceph-mgr[75527]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 29 11:49:38 np0005601226 ceph-mon[75233]: log_channel(cluster) log [INF] : Manager daemon compute-0.zvopdr is now available
Jan 29 11:49:38 np0005601226 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2026-01-29_16:49:38
Jan 29 11:49:38 np0005601226 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 11:49:38 np0005601226 ceph-mgr[75527]: [balancer INFO root] do_upmap
Jan 29 11:49:38 np0005601226 ceph-mgr[75527]: [balancer INFO root] No pools available
Jan 29 11:49:38 np0005601226 ceph-mon[75233]: Active manager daemon compute-0.zvopdr restarted
Jan 29 11:49:38 np0005601226 ceph-mon[75233]: Activating manager daemon compute-0.zvopdr
Jan 29 11:49:38 np0005601226 ceph-mon[75233]: Manager daemon compute-0.zvopdr is now available
Jan 29 11:49:39 np0005601226 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Jan 29 11:49:39 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.zvopdr(active, since 1.14961s)
Jan 29 11:49:39 np0005601226 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Jan 29 11:49:39 np0005601226 focused_lewin[76070]: {
Jan 29 11:49:39 np0005601226 focused_lewin[76070]:    "mgrmap_epoch": 7,
Jan 29 11:49:39 np0005601226 focused_lewin[76070]:    "initialized": true
Jan 29 11:49:39 np0005601226 focused_lewin[76070]: }
Jan 29 11:49:39 np0005601226 systemd[1]: libpod-9f0741367ed434d009cf0a3de055e861952786cb0eeafdc53c8fc67067ead483.scope: Deactivated successfully.
Jan 29 11:49:39 np0005601226 podman[76053]: 2026-01-29 16:49:39.356764282 +0000 UTC m=+5.562649621 container died 9f0741367ed434d009cf0a3de055e861952786cb0eeafdc53c8fc67067ead483 (image=quay.io/ceph/ceph:v20, name=focused_lewin, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 11:49:39 np0005601226 systemd[1]: var-lib-containers-storage-overlay-8f695a554d5a09ce6ccdceb92a9436a602675d9139be6bce3986b0433529a260-merged.mount: Deactivated successfully.
Jan 29 11:49:39 np0005601226 podman[76053]: 2026-01-29 16:49:39.392479497 +0000 UTC m=+5.598364836 container remove 9f0741367ed434d009cf0a3de055e861952786cb0eeafdc53c8fc67067ead483 (image=quay.io/ceph/ceph:v20, name=focused_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 29 11:49:39 np0005601226 systemd[1]: libpod-conmon-9f0741367ed434d009cf0a3de055e861952786cb0eeafdc53c8fc67067ead483.scope: Deactivated successfully.
Jan 29 11:49:39 np0005601226 podman[76140]: 2026-01-29 16:49:39.469775099 +0000 UTC m=+0.055281101 container create 72d666a55566891d956423f0c503f960fb5384361cfeeb203f0ac089db89a221 (image=quay.io/ceph/ceph:v20, name=youthful_hypatia, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 29 11:49:39 np0005601226 systemd[1]: Started libpod-conmon-72d666a55566891d956423f0c503f960fb5384361cfeeb203f0ac089db89a221.scope.
Jan 29 11:49:39 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:49:39 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6bdbf2d35c069491e2d0203b1175bf786bafdd105410e633156de69ec5d7753/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:39 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6bdbf2d35c069491e2d0203b1175bf786bafdd105410e633156de69ec5d7753/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:39 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6bdbf2d35c069491e2d0203b1175bf786bafdd105410e633156de69ec5d7753/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:39 np0005601226 podman[76140]: 2026-01-29 16:49:39.528166927 +0000 UTC m=+0.113672949 container init 72d666a55566891d956423f0c503f960fb5384361cfeeb203f0ac089db89a221 (image=quay.io/ceph/ceph:v20, name=youthful_hypatia, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 29 11:49:39 np0005601226 podman[76140]: 2026-01-29 16:49:39.532328926 +0000 UTC m=+0.117834928 container start 72d666a55566891d956423f0c503f960fb5384361cfeeb203f0ac089db89a221 (image=quay.io/ceph/ceph:v20, name=youthful_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Jan 29 11:49:39 np0005601226 podman[76140]: 2026-01-29 16:49:39.544625707 +0000 UTC m=+0.130131709 container attach 72d666a55566891d956423f0c503f960fb5384361cfeeb203f0ac089db89a221 (image=quay.io/ceph/ceph:v20, name=youthful_hypatia, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 29 11:49:39 np0005601226 podman[76140]: 2026-01-29 16:49:39.451211825 +0000 UTC m=+0.036717857 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:49:39 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019906299 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:49:39 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "orchestrator"} v 0)
Jan 29 11:49:39 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3884057858' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "orchestrator"} : dispatch
Jan 29 11:49:40 np0005601226 ceph-mon[75233]: from='client.? 192.168.122.100:0/3884057858' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "orchestrator"} : dispatch
Jan 29 11:49:40 np0005601226 ceph-mgr[75527]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 29 11:49:40 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3884057858' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "orchestrator"}]': finished
Jan 29 11:49:40 np0005601226 youthful_hypatia[76157]: module 'orchestrator' is already enabled (always-on)
Jan 29 11:49:40 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.zvopdr(active, since 2s)
Jan 29 11:49:40 np0005601226 podman[76140]: 2026-01-29 16:49:40.344741672 +0000 UTC m=+0.930247684 container died 72d666a55566891d956423f0c503f960fb5384361cfeeb203f0ac089db89a221 (image=quay.io/ceph/ceph:v20, name=youthful_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 11:49:40 np0005601226 systemd[1]: libpod-72d666a55566891d956423f0c503f960fb5384361cfeeb203f0ac089db89a221.scope: Deactivated successfully.
Jan 29 11:49:40 np0005601226 systemd[1]: var-lib-containers-storage-overlay-d6bdbf2d35c069491e2d0203b1175bf786bafdd105410e633156de69ec5d7753-merged.mount: Deactivated successfully.
Jan 29 11:49:40 np0005601226 podman[76140]: 2026-01-29 16:49:40.377060513 +0000 UTC m=+0.962566515 container remove 72d666a55566891d956423f0c503f960fb5384361cfeeb203f0ac089db89a221 (image=quay.io/ceph/ceph:v20, name=youthful_hypatia, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 11:49:40 np0005601226 systemd[1]: libpod-conmon-72d666a55566891d956423f0c503f960fb5384361cfeeb203f0ac089db89a221.scope: Deactivated successfully.
Jan 29 11:49:40 np0005601226 podman[76196]: 2026-01-29 16:49:40.442674984 +0000 UTC m=+0.049502333 container create 23052805cb7732e970aafc666a10c047a99a08f74dd49a84254bc6114b60c1c7 (image=quay.io/ceph/ceph:v20, name=zen_elion, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 29 11:49:40 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.cephadm_root_ca_cert}] v 0)
Jan 29 11:49:40 np0005601226 systemd[1]: Started libpod-conmon-23052805cb7732e970aafc666a10c047a99a08f74dd49a84254bc6114b60c1c7.scope.
Jan 29 11:49:40 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:49:40 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.cephadm_root_ca_key}] v 0)
Jan 29 11:49:40 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:49:40 np0005601226 ceph-mgr[75527]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Jan 29 11:49:40 np0005601226 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Jan 29 11:49:40 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0)
Jan 29 11:49:40 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:49:40 np0005601226 podman[76196]: 2026-01-29 16:49:40.416605357 +0000 UTC m=+0.023432786 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:49:40 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0)
Jan 29 11:49:40 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:49:40 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:49:40 np0005601226 ceph-mgr[75527]: mgr load Constructed class from module: cephadm
Jan 29 11:49:40 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49618fb96bfd4ac07b1428c9b1d691225466284cc66218b9fc74c71a42e9bf2d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:40 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49618fb96bfd4ac07b1428c9b1d691225466284cc66218b9fc74c71a42e9bf2d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:40 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49618fb96bfd4ac07b1428c9b1d691225466284cc66218b9fc74c71a42e9bf2d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:40 np0005601226 ceph-mgr[75527]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 29 11:49:40 np0005601226 ceph-mgr[75527]: mgr load Constructed class from module: crash
Jan 29 11:49:40 np0005601226 ceph-mgr[75527]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 29 11:49:40 np0005601226 ceph-mgr[75527]: mgr load Constructed class from module: devicehealth
Jan 29 11:49:40 np0005601226 ceph-mgr[75527]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 29 11:49:40 np0005601226 ceph-mgr[75527]: mgr load Constructed class from module: iostat
Jan 29 11:49:40 np0005601226 ceph-mgr[75527]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 29 11:49:40 np0005601226 ceph-mgr[75527]: mgr load Constructed class from module: nfs
Jan 29 11:49:40 np0005601226 ceph-mgr[75527]: [devicehealth INFO root] Starting
Jan 29 11:49:40 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 29 11:49:40 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 29 11:49:40 np0005601226 ceph-mgr[75527]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 29 11:49:40 np0005601226 ceph-mgr[75527]: mgr load Constructed class from module: orchestrator
Jan 29 11:49:40 np0005601226 ceph-mgr[75527]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 29 11:49:40 np0005601226 ceph-mgr[75527]: mgr load Constructed class from module: pg_autoscaler
Jan 29 11:49:40 np0005601226 ceph-mgr[75527]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 29 11:49:40 np0005601226 ceph-mgr[75527]: mgr load Constructed class from module: progress
Jan 29 11:49:40 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 29 11:49:40 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 29 11:49:40 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 11:49:40 np0005601226 ceph-mgr[75527]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 29 11:49:40 np0005601226 ceph-mgr[75527]: [progress INFO root] Loading...
Jan 29 11:49:40 np0005601226 ceph-mgr[75527]: [progress INFO root] No stored events to load
Jan 29 11:49:40 np0005601226 ceph-mgr[75527]: [progress INFO root] Loaded [] historic events
Jan 29 11:49:40 np0005601226 ceph-mgr[75527]: [progress INFO root] Loaded OSDMap, ready.
Jan 29 11:49:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] recovery thread starting
Jan 29 11:49:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] starting setup
Jan 29 11:49:40 np0005601226 ceph-mgr[75527]: mgr load Constructed class from module: rbd_support
Jan 29 11:49:40 np0005601226 ceph-mgr[75527]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 29 11:49:40 np0005601226 ceph-mgr[75527]: mgr load Constructed class from module: status
Jan 29 11:49:40 np0005601226 ceph-mgr[75527]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 29 11:49:40 np0005601226 ceph-mgr[75527]: mgr load Constructed class from module: telemetry
Jan 29 11:49:40 np0005601226 ceph-mgr[75527]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 29 11:49:40 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zvopdr/mirror_snapshot_schedule"} v 0)
Jan 29 11:49:40 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zvopdr/mirror_snapshot_schedule"} : dispatch
Jan 29 11:49:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 11:49:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Jan 29 11:49:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] PerfHandler: starting
Jan 29 11:49:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] TaskHandler: starting
Jan 29 11:49:40 np0005601226 podman[76196]: 2026-01-29 16:49:40.547754516 +0000 UTC m=+0.154581835 container init 23052805cb7732e970aafc666a10c047a99a08f74dd49a84254bc6114b60c1c7 (image=quay.io/ceph/ceph:v20, name=zen_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:49:40 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zvopdr/trash_purge_schedule"} v 0)
Jan 29 11:49:40 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zvopdr/trash_purge_schedule"} : dispatch
Jan 29 11:49:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 11:49:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Jan 29 11:49:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] setup complete
Jan 29 11:49:40 np0005601226 podman[76196]: 2026-01-29 16:49:40.553537705 +0000 UTC m=+0.160365024 container start 23052805cb7732e970aafc666a10c047a99a08f74dd49a84254bc6114b60c1c7 (image=quay.io/ceph/ceph:v20, name=zen_elion, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 11:49:40 np0005601226 podman[76196]: 2026-01-29 16:49:40.556485926 +0000 UTC m=+0.163313435 container attach 23052805cb7732e970aafc666a10c047a99a08f74dd49a84254bc6114b60c1c7 (image=quay.io/ceph/ceph:v20, name=zen_elion, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 11:49:40 np0005601226 ceph-mgr[75527]: mgr load Constructed class from module: volumes
Jan 29 11:49:40 np0005601226 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 11:49:40 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0)
Jan 29 11:49:40 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:49:40 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 29 11:49:40 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 29 11:49:40 np0005601226 systemd[1]: libpod-23052805cb7732e970aafc666a10c047a99a08f74dd49a84254bc6114b60c1c7.scope: Deactivated successfully.
Jan 29 11:49:40 np0005601226 podman[76196]: 2026-01-29 16:49:40.994161724 +0000 UTC m=+0.600989043 container died 23052805cb7732e970aafc666a10c047a99a08f74dd49a84254bc6114b60c1c7 (image=quay.io/ceph/ceph:v20, name=zen_elion, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 11:49:41 np0005601226 systemd[1]: var-lib-containers-storage-overlay-49618fb96bfd4ac07b1428c9b1d691225466284cc66218b9fc74c71a42e9bf2d-merged.mount: Deactivated successfully.
Jan 29 11:49:41 np0005601226 podman[76196]: 2026-01-29 16:49:41.041376716 +0000 UTC m=+0.648204035 container remove 23052805cb7732e970aafc666a10c047a99a08f74dd49a84254bc6114b60c1c7 (image=quay.io/ceph/ceph:v20, name=zen_elion, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 29 11:49:41 np0005601226 systemd[1]: libpod-conmon-23052805cb7732e970aafc666a10c047a99a08f74dd49a84254bc6114b60c1c7.scope: Deactivated successfully.
Jan 29 11:49:41 np0005601226 podman[76328]: 2026-01-29 16:49:41.097105661 +0000 UTC m=+0.038989668 container create 2499413e439856b9b9c7dd7e5a613d6989035b7cfbe192a50b0a8603959f5551 (image=quay.io/ceph/ceph:v20, name=nostalgic_diffie, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 11:49:41 np0005601226 systemd[1]: Started libpod-conmon-2499413e439856b9b9c7dd7e5a613d6989035b7cfbe192a50b0a8603959f5551.scope.
Jan 29 11:49:41 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:49:41 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f1486fbc2b0d5e5a8b837a28c05256276b656baa5e3a891cdd223fe60b4c36a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:41 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f1486fbc2b0d5e5a8b837a28c05256276b656baa5e3a891cdd223fe60b4c36a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:41 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f1486fbc2b0d5e5a8b837a28c05256276b656baa5e3a891cdd223fe60b4c36a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:41 np0005601226 podman[76328]: 2026-01-29 16:49:41.077083131 +0000 UTC m=+0.018967148 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:49:41 np0005601226 podman[76328]: 2026-01-29 16:49:41.179966846 +0000 UTC m=+0.121850863 container init 2499413e439856b9b9c7dd7e5a613d6989035b7cfbe192a50b0a8603959f5551 (image=quay.io/ceph/ceph:v20, name=nostalgic_diffie, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 11:49:41 np0005601226 podman[76328]: 2026-01-29 16:49:41.18528406 +0000 UTC m=+0.127168057 container start 2499413e439856b9b9c7dd7e5a613d6989035b7cfbe192a50b0a8603959f5551 (image=quay.io/ceph/ceph:v20, name=nostalgic_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 29 11:49:41 np0005601226 podman[76328]: 2026-01-29 16:49:41.194574577 +0000 UTC m=+0.136458614 container attach 2499413e439856b9b9c7dd7e5a613d6989035b7cfbe192a50b0a8603959f5551 (image=quay.io/ceph/ceph:v20, name=nostalgic_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 29 11:49:41 np0005601226 ceph-mon[75233]: from='client.? 192.168.122.100:0/3884057858' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "orchestrator"}]': finished
Jan 29 11:49:41 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:49:41 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:49:41 np0005601226 ceph-mon[75233]: Found migration_current of "None". Setting to last migration.
Jan 29 11:49:41 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:49:41 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:49:41 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zvopdr/mirror_snapshot_schedule"} : dispatch
Jan 29 11:49:41 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zvopdr/trash_purge_schedule"} : dispatch
Jan 29 11:49:41 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:49:41 np0005601226 ceph-mgr[75527]: [cephadm INFO cherrypy.error] [29/Jan/2026:16:49:41] ENGINE Bus STARTING
Jan 29 11:49:41 np0005601226 ceph-mgr[75527]: log_channel(cephadm) log [INF] : [29/Jan/2026:16:49:41] ENGINE Bus STARTING
Jan 29 11:49:41 np0005601226 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 11:49:41 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0)
Jan 29 11:49:41 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:49:41 np0005601226 ceph-mgr[75527]: [cephadm INFO cherrypy.error] [29/Jan/2026:16:49:41] ENGINE Serving on https://192.168.122.100:7150
Jan 29 11:49:41 np0005601226 ceph-mgr[75527]: log_channel(cephadm) log [INF] : [29/Jan/2026:16:49:41] ENGINE Serving on https://192.168.122.100:7150
Jan 29 11:49:41 np0005601226 ceph-mgr[75527]: [cephadm INFO root] Set ssh ssh_user
Jan 29 11:49:41 np0005601226 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Jan 29 11:49:41 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0)
Jan 29 11:49:41 np0005601226 ceph-mgr[75527]: [cephadm INFO cherrypy.error] [29/Jan/2026:16:49:41] ENGINE Client ('192.168.122.100', 59498) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 29 11:49:41 np0005601226 ceph-mgr[75527]: log_channel(cephadm) log [INF] : [29/Jan/2026:16:49:41] ENGINE Client ('192.168.122.100', 59498) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 29 11:49:41 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:49:41 np0005601226 ceph-mgr[75527]: [cephadm INFO root] Set ssh ssh_config
Jan 29 11:49:41 np0005601226 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Jan 29 11:49:41 np0005601226 ceph-mgr[75527]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Jan 29 11:49:41 np0005601226 ceph-mgr[75527]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Jan 29 11:49:41 np0005601226 nostalgic_diffie[76345]: ssh user set to ceph-admin. sudo will be used
Jan 29 11:49:41 np0005601226 systemd[1]: libpod-2499413e439856b9b9c7dd7e5a613d6989035b7cfbe192a50b0a8603959f5551.scope: Deactivated successfully.
Jan 29 11:49:41 np0005601226 podman[76328]: 2026-01-29 16:49:41.663235694 +0000 UTC m=+0.605119701 container died 2499413e439856b9b9c7dd7e5a613d6989035b7cfbe192a50b0a8603959f5551 (image=quay.io/ceph/ceph:v20, name=nostalgic_diffie, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:49:41 np0005601226 systemd[1]: var-lib-containers-storage-overlay-1f1486fbc2b0d5e5a8b837a28c05256276b656baa5e3a891cdd223fe60b4c36a-merged.mount: Deactivated successfully.
Jan 29 11:49:41 np0005601226 podman[76328]: 2026-01-29 16:49:41.718271657 +0000 UTC m=+0.660155654 container remove 2499413e439856b9b9c7dd7e5a613d6989035b7cfbe192a50b0a8603959f5551 (image=quay.io/ceph/ceph:v20, name=nostalgic_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 11:49:41 np0005601226 ceph-mgr[75527]: [cephadm INFO cherrypy.error] [29/Jan/2026:16:49:41] ENGINE Serving on http://192.168.122.100:8765
Jan 29 11:49:41 np0005601226 ceph-mgr[75527]: log_channel(cephadm) log [INF] : [29/Jan/2026:16:49:41] ENGINE Serving on http://192.168.122.100:8765
Jan 29 11:49:41 np0005601226 systemd[1]: libpod-conmon-2499413e439856b9b9c7dd7e5a613d6989035b7cfbe192a50b0a8603959f5551.scope: Deactivated successfully.
Jan 29 11:49:41 np0005601226 ceph-mgr[75527]: [cephadm INFO cherrypy.error] [29/Jan/2026:16:49:41] ENGINE Bus STARTED
Jan 29 11:49:41 np0005601226 ceph-mgr[75527]: log_channel(cephadm) log [INF] : [29/Jan/2026:16:49:41] ENGINE Bus STARTED
Jan 29 11:49:41 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 29 11:49:41 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 29 11:49:41 np0005601226 podman[76407]: 2026-01-29 16:49:41.780497154 +0000 UTC m=+0.046029166 container create dfd79e2c94f20d5e9eb4ec639a4cbe8ae180480bb17492d0649c3be30c5f7ddd (image=quay.io/ceph/ceph:v20, name=magical_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 29 11:49:41 np0005601226 systemd[1]: Started libpod-conmon-dfd79e2c94f20d5e9eb4ec639a4cbe8ae180480bb17492d0649c3be30c5f7ddd.scope.
Jan 29 11:49:41 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:49:41 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e48e4fc0f02d2c7ce4373dc4d71fb7255aa2d8ecce0bba73247a6cc1c426b2f1/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:41 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e48e4fc0f02d2c7ce4373dc4d71fb7255aa2d8ecce0bba73247a6cc1c426b2f1/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:41 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e48e4fc0f02d2c7ce4373dc4d71fb7255aa2d8ecce0bba73247a6cc1c426b2f1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:41 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e48e4fc0f02d2c7ce4373dc4d71fb7255aa2d8ecce0bba73247a6cc1c426b2f1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:41 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e48e4fc0f02d2c7ce4373dc4d71fb7255aa2d8ecce0bba73247a6cc1c426b2f1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:41 np0005601226 podman[76407]: 2026-01-29 16:49:41.758275896 +0000 UTC m=+0.023807888 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:49:41 np0005601226 podman[76407]: 2026-01-29 16:49:41.856661591 +0000 UTC m=+0.122193583 container init dfd79e2c94f20d5e9eb4ec639a4cbe8ae180480bb17492d0649c3be30c5f7ddd (image=quay.io/ceph/ceph:v20, name=magical_benz, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 11:49:41 np0005601226 podman[76407]: 2026-01-29 16:49:41.861970966 +0000 UTC m=+0.127502928 container start dfd79e2c94f20d5e9eb4ec639a4cbe8ae180480bb17492d0649c3be30c5f7ddd (image=quay.io/ceph/ceph:v20, name=magical_benz, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 11:49:41 np0005601226 podman[76407]: 2026-01-29 16:49:41.869374634 +0000 UTC m=+0.134906626 container attach dfd79e2c94f20d5e9eb4ec639a4cbe8ae180480bb17492d0649c3be30c5f7ddd (image=quay.io/ceph/ceph:v20, name=magical_benz, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 11:49:42 np0005601226 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 11:49:42 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0)
Jan 29 11:49:42 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:49:42 np0005601226 ceph-mgr[75527]: [cephadm INFO root] Set ssh ssh_identity_key
Jan 29 11:49:42 np0005601226 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Jan 29 11:49:42 np0005601226 ceph-mgr[75527]: [cephadm INFO root] Set ssh private key
Jan 29 11:49:42 np0005601226 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Set ssh private key
Jan 29 11:49:42 np0005601226 systemd[1]: libpod-dfd79e2c94f20d5e9eb4ec639a4cbe8ae180480bb17492d0649c3be30c5f7ddd.scope: Deactivated successfully.
Jan 29 11:49:42 np0005601226 podman[76407]: 2026-01-29 16:49:42.280078247 +0000 UTC m=+0.545610209 container died dfd79e2c94f20d5e9eb4ec639a4cbe8ae180480bb17492d0649c3be30c5f7ddd (image=quay.io/ceph/ceph:v20, name=magical_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 11:49:42 np0005601226 systemd[1]: var-lib-containers-storage-overlay-e48e4fc0f02d2c7ce4373dc4d71fb7255aa2d8ecce0bba73247a6cc1c426b2f1-merged.mount: Deactivated successfully.
Jan 29 11:49:42 np0005601226 podman[76407]: 2026-01-29 16:49:42.311254942 +0000 UTC m=+0.576786904 container remove dfd79e2c94f20d5e9eb4ec639a4cbe8ae180480bb17492d0649c3be30c5f7ddd (image=quay.io/ceph/ceph:v20, name=magical_benz, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 29 11:49:42 np0005601226 ceph-mgr[75527]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 29 11:49:42 np0005601226 systemd[1]: libpod-conmon-dfd79e2c94f20d5e9eb4ec639a4cbe8ae180480bb17492d0649c3be30c5f7ddd.scope: Deactivated successfully.
Jan 29 11:49:42 np0005601226 podman[76461]: 2026-01-29 16:49:42.370926979 +0000 UTC m=+0.041581138 container create f1d47d6aeb0e1385e78b37c47428cc34828c41a11ad63f22fefd6867290f0026 (image=quay.io/ceph/ceph:v20, name=bold_ramanujan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3)
Jan 29 11:49:42 np0005601226 systemd[1]: Started libpod-conmon-f1d47d6aeb0e1385e78b37c47428cc34828c41a11ad63f22fefd6867290f0026.scope.
Jan 29 11:49:42 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:49:42 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f447a8e7f163577199a37ec84f67913aac6f63fe392884115e29181af8c51bc/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:42 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f447a8e7f163577199a37ec84f67913aac6f63fe392884115e29181af8c51bc/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:42 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f447a8e7f163577199a37ec84f67913aac6f63fe392884115e29181af8c51bc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:42 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f447a8e7f163577199a37ec84f67913aac6f63fe392884115e29181af8c51bc/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:42 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f447a8e7f163577199a37ec84f67913aac6f63fe392884115e29181af8c51bc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:42 np0005601226 podman[76461]: 2026-01-29 16:49:42.353452578 +0000 UTC m=+0.024106717 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:49:42 np0005601226 podman[76461]: 2026-01-29 16:49:42.525337948 +0000 UTC m=+0.195992077 container init f1d47d6aeb0e1385e78b37c47428cc34828c41a11ad63f22fefd6867290f0026 (image=quay.io/ceph/ceph:v20, name=bold_ramanujan, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 29 11:49:42 np0005601226 ceph-mgr[75527]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 29 11:49:42 np0005601226 podman[76461]: 2026-01-29 16:49:42.530824168 +0000 UTC m=+0.201478317 container start f1d47d6aeb0e1385e78b37c47428cc34828c41a11ad63f22fefd6867290f0026 (image=quay.io/ceph/ceph:v20, name=bold_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 11:49:42 np0005601226 podman[76461]: 2026-01-29 16:49:42.599481914 +0000 UTC m=+0.270136083 container attach f1d47d6aeb0e1385e78b37c47428cc34828c41a11ad63f22fefd6867290f0026 (image=quay.io/ceph/ceph:v20, name=bold_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 29 11:49:42 np0005601226 ceph-mon[75233]: [29/Jan/2026:16:49:41] ENGINE Bus STARTING
Jan 29 11:49:42 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:49:42 np0005601226 ceph-mon[75233]: [29/Jan/2026:16:49:41] ENGINE Serving on https://192.168.122.100:7150
Jan 29 11:49:42 np0005601226 ceph-mon[75233]: Set ssh ssh_user
Jan 29 11:49:42 np0005601226 ceph-mon[75233]: [29/Jan/2026:16:49:41] ENGINE Client ('192.168.122.100', 59498) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 29 11:49:42 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:49:42 np0005601226 ceph-mon[75233]: Set ssh ssh_config
Jan 29 11:49:42 np0005601226 ceph-mon[75233]: ssh user set to ceph-admin. sudo will be used
Jan 29 11:49:42 np0005601226 ceph-mon[75233]: [29/Jan/2026:16:49:41] ENGINE Serving on http://192.168.122.100:8765
Jan 29 11:49:42 np0005601226 ceph-mon[75233]: [29/Jan/2026:16:49:41] ENGINE Bus STARTED
Jan 29 11:49:42 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:49:42 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.zvopdr(active, since 4s)
Jan 29 11:49:42 np0005601226 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 11:49:42 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0)
Jan 29 11:49:43 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:49:43 np0005601226 ceph-mgr[75527]: [cephadm INFO root] Set ssh ssh_identity_pub
Jan 29 11:49:43 np0005601226 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Jan 29 11:49:43 np0005601226 systemd[1]: libpod-f1d47d6aeb0e1385e78b37c47428cc34828c41a11ad63f22fefd6867290f0026.scope: Deactivated successfully.
Jan 29 11:49:43 np0005601226 podman[76461]: 2026-01-29 16:49:43.060440922 +0000 UTC m=+0.731095041 container died f1d47d6aeb0e1385e78b37c47428cc34828c41a11ad63f22fefd6867290f0026 (image=quay.io/ceph/ceph:v20, name=bold_ramanujan, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 29 11:49:43 np0005601226 systemd[1]: var-lib-containers-storage-overlay-3f447a8e7f163577199a37ec84f67913aac6f63fe392884115e29181af8c51bc-merged.mount: Deactivated successfully.
Jan 29 11:49:43 np0005601226 podman[76461]: 2026-01-29 16:49:43.113297428 +0000 UTC m=+0.783951547 container remove f1d47d6aeb0e1385e78b37c47428cc34828c41a11ad63f22fefd6867290f0026 (image=quay.io/ceph/ceph:v20, name=bold_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 29 11:49:43 np0005601226 systemd[1]: libpod-conmon-f1d47d6aeb0e1385e78b37c47428cc34828c41a11ad63f22fefd6867290f0026.scope: Deactivated successfully.
Jan 29 11:49:43 np0005601226 podman[76518]: 2026-01-29 16:49:43.177625368 +0000 UTC m=+0.051526865 container create 31b835e94b04e2187607344329220403791c09b1376a0901ab627175867d8ba8 (image=quay.io/ceph/ceph:v20, name=sad_banzai, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 29 11:49:43 np0005601226 systemd[1]: Started libpod-conmon-31b835e94b04e2187607344329220403791c09b1376a0901ab627175867d8ba8.scope.
Jan 29 11:49:43 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:49:43 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ca8438c5fa2c3e9904f1becc023a5415b8e685891519379d1a9dcf6665e32b9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:43 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ca8438c5fa2c3e9904f1becc023a5415b8e685891519379d1a9dcf6665e32b9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:43 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ca8438c5fa2c3e9904f1becc023a5415b8e685891519379d1a9dcf6665e32b9/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:43 np0005601226 podman[76518]: 2026-01-29 16:49:43.143366148 +0000 UTC m=+0.017267655 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:49:43 np0005601226 podman[76518]: 2026-01-29 16:49:43.251103873 +0000 UTC m=+0.125005370 container init 31b835e94b04e2187607344329220403791c09b1376a0901ab627175867d8ba8 (image=quay.io/ceph/ceph:v20, name=sad_banzai, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 29 11:49:43 np0005601226 podman[76518]: 2026-01-29 16:49:43.254911551 +0000 UTC m=+0.128813048 container start 31b835e94b04e2187607344329220403791c09b1376a0901ab627175867d8ba8 (image=quay.io/ceph/ceph:v20, name=sad_banzai, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 29 11:49:43 np0005601226 podman[76518]: 2026-01-29 16:49:43.259924716 +0000 UTC m=+0.133826223 container attach 31b835e94b04e2187607344329220403791c09b1376a0901ab627175867d8ba8 (image=quay.io/ceph/ceph:v20, name=sad_banzai, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 11:49:43 np0005601226 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 11:49:43 np0005601226 sad_banzai[76535]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQClJ/wOIJfxmCEJ6jbBVbTkUDsKGJGye4bseF2Z5+CiIIp7kbwJ3Ta4/2Qwaea9HJXCGKAXYjf+24ojpeN8mi0e6Z6yrSoORk7W14HeC9/+1VM8PEvaSEDs9r0gzN98y5LACPfccJEOvetj+AJvwXsaVbnz9tpPBxL17X83fiMKgA7b7Pyar4V861rky7oSg6GiXdXqj8S8IKAw+xZQemTRNiTJgEIpOgHydqqEjELFWAmevz3LWFqW9p3jWKCRx1GU1puCiwgVh/7tI5mrrCpJbJeaKqNAZRa1jJRVcP5DG0s7Wrb5PDROXYRS77lxe38YMm2CSk4H6IwKcVEwUC9GzqRrLhnmE5TzUVUNpIkxUqoaPfVOhmnGXCwksEsQiJX5wWTOYcUGJWTsUZNR4IcCl/KwWhbJtrswWxvnWw+r3tMIcv8JCRSOXWCi+1UR+aYIFCLkCutFZri2bJ8I+d4mTSSXkKWILM4z7bUkUNDI6C/0AmbrdQyUXnh1T0denB8= zuul@controller
Jan 29 11:49:43 np0005601226 systemd[1]: libpod-31b835e94b04e2187607344329220403791c09b1376a0901ab627175867d8ba8.scope: Deactivated successfully.
Jan 29 11:49:43 np0005601226 podman[76518]: 2026-01-29 16:49:43.640670681 +0000 UTC m=+0.514572248 container died 31b835e94b04e2187607344329220403791c09b1376a0901ab627175867d8ba8 (image=quay.io/ceph/ceph:v20, name=sad_banzai, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 29 11:49:43 np0005601226 systemd[1]: var-lib-containers-storage-overlay-2ca8438c5fa2c3e9904f1becc023a5415b8e685891519379d1a9dcf6665e32b9-merged.mount: Deactivated successfully.
Jan 29 11:49:43 np0005601226 podman[76518]: 2026-01-29 16:49:43.680838335 +0000 UTC m=+0.554739842 container remove 31b835e94b04e2187607344329220403791c09b1376a0901ab627175867d8ba8 (image=quay.io/ceph/ceph:v20, name=sad_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 11:49:43 np0005601226 systemd[1]: libpod-conmon-31b835e94b04e2187607344329220403791c09b1376a0901ab627175867d8ba8.scope: Deactivated successfully.
Jan 29 11:49:43 np0005601226 podman[76575]: 2026-01-29 16:49:43.7456134 +0000 UTC m=+0.046000285 container create f59fc1e20902a85379affadb5e4da41145253c6cb66b741ab82bad2e51e48f03 (image=quay.io/ceph/ceph:v20, name=affectionate_gould, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 29 11:49:43 np0005601226 systemd[1]: Started libpod-conmon-f59fc1e20902a85379affadb5e4da41145253c6cb66b741ab82bad2e51e48f03.scope.
Jan 29 11:49:43 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:49:43 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/660b21afe8b17359c1b665908b63b2c247ae2a2d9c2b3b77e011a946be1b83fc/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:43 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/660b21afe8b17359c1b665908b63b2c247ae2a2d9c2b3b77e011a946be1b83fc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:43 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/660b21afe8b17359c1b665908b63b2c247ae2a2d9c2b3b77e011a946be1b83fc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:43 np0005601226 podman[76575]: 2026-01-29 16:49:43.808692552 +0000 UTC m=+0.109079477 container init f59fc1e20902a85379affadb5e4da41145253c6cb66b741ab82bad2e51e48f03 (image=quay.io/ceph/ceph:v20, name=affectionate_gould, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 29 11:49:43 np0005601226 podman[76575]: 2026-01-29 16:49:43.814006187 +0000 UTC m=+0.114393082 container start f59fc1e20902a85379affadb5e4da41145253c6cb66b741ab82bad2e51e48f03 (image=quay.io/ceph/ceph:v20, name=affectionate_gould, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:49:43 np0005601226 podman[76575]: 2026-01-29 16:49:43.72333755 +0000 UTC m=+0.023724455 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:49:43 np0005601226 podman[76575]: 2026-01-29 16:49:43.820476627 +0000 UTC m=+0.120863532 container attach f59fc1e20902a85379affadb5e4da41145253c6cb66b741ab82bad2e51e48f03 (image=quay.io/ceph/ceph:v20, name=affectionate_gould, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 29 11:49:43 np0005601226 ceph-mon[75233]: Set ssh ssh_identity_key
Jan 29 11:49:43 np0005601226 ceph-mon[75233]: Set ssh private key
Jan 29 11:49:43 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:49:43 np0005601226 ceph-mon[75233]: Set ssh ssh_identity_pub
Jan 29 11:49:44 np0005601226 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 11:49:44 np0005601226 ceph-mgr[75527]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 29 11:49:44 np0005601226 systemd[1]: Created slice User Slice of UID 42477.
Jan 29 11:49:44 np0005601226 systemd[1]: Starting User Runtime Directory /run/user/42477...
Jan 29 11:49:44 np0005601226 systemd-logind[823]: New session 21 of user ceph-admin.
Jan 29 11:49:44 np0005601226 systemd[1]: Finished User Runtime Directory /run/user/42477.
Jan 29 11:49:44 np0005601226 systemd[1]: Starting User Manager for UID 42477...
Jan 29 11:49:44 np0005601226 ceph-mgr[75527]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 29 11:49:44 np0005601226 systemd[76621]: Queued start job for default target Main User Target.
Jan 29 11:49:44 np0005601226 systemd[76621]: Created slice User Application Slice.
Jan 29 11:49:44 np0005601226 systemd[76621]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 29 11:49:44 np0005601226 systemd[76621]: Started Daily Cleanup of User's Temporary Directories.
Jan 29 11:49:44 np0005601226 systemd[76621]: Reached target Paths.
Jan 29 11:49:44 np0005601226 systemd[76621]: Reached target Timers.
Jan 29 11:49:44 np0005601226 systemd[76621]: Starting D-Bus User Message Bus Socket...
Jan 29 11:49:44 np0005601226 systemd[76621]: Starting Create User's Volatile Files and Directories...
Jan 29 11:49:44 np0005601226 systemd[76621]: Finished Create User's Volatile Files and Directories.
Jan 29 11:49:44 np0005601226 systemd[76621]: Listening on D-Bus User Message Bus Socket.
Jan 29 11:49:44 np0005601226 systemd[76621]: Reached target Sockets.
Jan 29 11:49:44 np0005601226 systemd[76621]: Reached target Basic System.
Jan 29 11:49:44 np0005601226 systemd[76621]: Reached target Main User Target.
Jan 29 11:49:44 np0005601226 systemd[76621]: Startup finished in 116ms.
Jan 29 11:49:44 np0005601226 systemd[1]: Started User Manager for UID 42477.
Jan 29 11:49:44 np0005601226 systemd[1]: Started Session 21 of User ceph-admin.
Jan 29 11:49:44 np0005601226 systemd-logind[823]: New session 23 of user ceph-admin.
Jan 29 11:49:44 np0005601226 systemd[1]: Started Session 23 of User ceph-admin.
Jan 29 11:49:44 np0005601226 systemd-logind[823]: New session 24 of user ceph-admin.
Jan 29 11:49:44 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020052677 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:49:44 np0005601226 systemd[1]: Started Session 24 of User ceph-admin.
Jan 29 11:49:45 np0005601226 systemd-logind[823]: New session 25 of user ceph-admin.
Jan 29 11:49:45 np0005601226 systemd[1]: Started Session 25 of User ceph-admin.
Jan 29 11:49:45 np0005601226 ceph-mgr[75527]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Jan 29 11:49:45 np0005601226 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Jan 29 11:49:45 np0005601226 systemd-logind[823]: New session 26 of user ceph-admin.
Jan 29 11:49:45 np0005601226 systemd[1]: Started Session 26 of User ceph-admin.
Jan 29 11:49:45 np0005601226 systemd-logind[823]: New session 27 of user ceph-admin.
Jan 29 11:49:45 np0005601226 systemd[1]: Started Session 27 of User ceph-admin.
Jan 29 11:49:46 np0005601226 systemd-logind[823]: New session 28 of user ceph-admin.
Jan 29 11:49:46 np0005601226 systemd[1]: Started Session 28 of User ceph-admin.
Jan 29 11:49:46 np0005601226 ceph-mgr[75527]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 29 11:49:46 np0005601226 ceph-mgr[75527]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 29 11:49:46 np0005601226 systemd-logind[823]: New session 29 of user ceph-admin.
Jan 29 11:49:46 np0005601226 systemd[1]: Started Session 29 of User ceph-admin.
Jan 29 11:49:46 np0005601226 ceph-mon[75233]: Deploying cephadm binary to compute-0
Jan 29 11:49:47 np0005601226 systemd-logind[823]: New session 30 of user ceph-admin.
Jan 29 11:49:47 np0005601226 systemd[1]: Started Session 30 of User ceph-admin.
Jan 29 11:49:47 np0005601226 systemd-logind[823]: New session 31 of user ceph-admin.
Jan 29 11:49:47 np0005601226 systemd[1]: Started Session 31 of User ceph-admin.
Jan 29 11:49:48 np0005601226 ceph-mgr[75527]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 29 11:49:48 np0005601226 ceph-mgr[75527]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 29 11:49:48 np0005601226 systemd-logind[823]: New session 32 of user ceph-admin.
Jan 29 11:49:48 np0005601226 systemd[1]: Started Session 32 of User ceph-admin.
Jan 29 11:49:49 np0005601226 systemd-logind[823]: New session 33 of user ceph-admin.
Jan 29 11:49:49 np0005601226 systemd[1]: Started Session 33 of User ceph-admin.
Jan 29 11:49:49 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 29 11:49:49 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:49:49 np0005601226 ceph-mgr[75527]: [cephadm INFO root] Added host compute-0
Jan 29 11:49:49 np0005601226 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Added host compute-0
Jan 29 11:49:49 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 29 11:49:49 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 29 11:49:49 np0005601226 affectionate_gould[76591]: Added host 'compute-0' with addr '192.168.122.100'
Jan 29 11:49:49 np0005601226 systemd[1]: libpod-f59fc1e20902a85379affadb5e4da41145253c6cb66b741ab82bad2e51e48f03.scope: Deactivated successfully.
Jan 29 11:49:49 np0005601226 podman[76575]: 2026-01-29 16:49:49.524526084 +0000 UTC m=+5.824912969 container died f59fc1e20902a85379affadb5e4da41145253c6cb66b741ab82bad2e51e48f03 (image=quay.io/ceph/ceph:v20, name=affectionate_gould, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 11:49:49 np0005601226 systemd[1]: var-lib-containers-storage-overlay-660b21afe8b17359c1b665908b63b2c247ae2a2d9c2b3b77e011a946be1b83fc-merged.mount: Deactivated successfully.
Jan 29 11:49:49 np0005601226 podman[76575]: 2026-01-29 16:49:49.718715545 +0000 UTC m=+6.019102430 container remove f59fc1e20902a85379affadb5e4da41145253c6cb66b741ab82bad2e51e48f03 (image=quay.io/ceph/ceph:v20, name=affectionate_gould, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 29 11:49:49 np0005601226 systemd[1]: libpod-conmon-f59fc1e20902a85379affadb5e4da41145253c6cb66b741ab82bad2e51e48f03.scope: Deactivated successfully.
Jan 29 11:49:49 np0005601226 podman[77036]: 2026-01-29 16:49:49.775467712 +0000 UTC m=+0.039224595 container create 4f0b194c8424393ce63fcc4a16b44c89aff4496955b0470c10647e83d4c9150d (image=quay.io/ceph/ceph:v20, name=recursing_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 29 11:49:49 np0005601226 systemd[1]: Started libpod-conmon-4f0b194c8424393ce63fcc4a16b44c89aff4496955b0470c10647e83d4c9150d.scope.
Jan 29 11:49:49 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:49:49 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4a029a05471381855301b39cda2d6e0670cd44ee498620a0bd89ab794970168/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:49 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4a029a05471381855301b39cda2d6e0670cd44ee498620a0bd89ab794970168/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:49 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4a029a05471381855301b39cda2d6e0670cd44ee498620a0bd89ab794970168/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:49 np0005601226 podman[77036]: 2026-01-29 16:49:49.75343174 +0000 UTC m=+0.017188643 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:49:49 np0005601226 podman[77036]: 2026-01-29 16:49:49.868639685 +0000 UTC m=+0.132396618 container init 4f0b194c8424393ce63fcc4a16b44c89aff4496955b0470c10647e83d4c9150d (image=quay.io/ceph/ceph:v20, name=recursing_lewin, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 29 11:49:49 np0005601226 podman[77036]: 2026-01-29 16:49:49.874520927 +0000 UTC m=+0.138277850 container start 4f0b194c8424393ce63fcc4a16b44c89aff4496955b0470c10647e83d4c9150d (image=quay.io/ceph/ceph:v20, name=recursing_lewin, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 29 11:49:49 np0005601226 podman[77036]: 2026-01-29 16:49:49.886576571 +0000 UTC m=+0.150333454 container attach 4f0b194c8424393ce63fcc4a16b44c89aff4496955b0470c10647e83d4c9150d (image=quay.io/ceph/ceph:v20, name=recursing_lewin, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 29 11:49:49 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054703 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:49:50 np0005601226 ceph-mgr[75527]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 29 11:49:50 np0005601226 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 11:49:50 np0005601226 ceph-mgr[75527]: [cephadm INFO root] Saving service mon spec with placement count:5
Jan 29 11:49:50 np0005601226 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Jan 29 11:49:50 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 29 11:49:50 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:49:50 np0005601226 recursing_lewin[77070]: Scheduled mon update...
Jan 29 11:49:50 np0005601226 systemd[1]: libpod-4f0b194c8424393ce63fcc4a16b44c89aff4496955b0470c10647e83d4c9150d.scope: Deactivated successfully.
Jan 29 11:49:50 np0005601226 podman[77036]: 2026-01-29 16:49:50.367007651 +0000 UTC m=+0.630764544 container died 4f0b194c8424393ce63fcc4a16b44c89aff4496955b0470c10647e83d4c9150d (image=quay.io/ceph/ceph:v20, name=recursing_lewin, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 11:49:50 np0005601226 systemd[1]: var-lib-containers-storage-overlay-f4a029a05471381855301b39cda2d6e0670cd44ee498620a0bd89ab794970168-merged.mount: Deactivated successfully.
Jan 29 11:49:50 np0005601226 podman[77063]: 2026-01-29 16:49:50.46547424 +0000 UTC m=+0.652420035 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:49:50 np0005601226 podman[77036]: 2026-01-29 16:49:50.488186973 +0000 UTC m=+0.751943856 container remove 4f0b194c8424393ce63fcc4a16b44c89aff4496955b0470c10647e83d4c9150d (image=quay.io/ceph/ceph:v20, name=recursing_lewin, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:49:50 np0005601226 systemd[1]: libpod-conmon-4f0b194c8424393ce63fcc4a16b44c89aff4496955b0470c10647e83d4c9150d.scope: Deactivated successfully.
Jan 29 11:49:50 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:49:50 np0005601226 ceph-mon[75233]: Added host compute-0
Jan 29 11:49:50 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:49:50 np0005601226 ceph-mgr[75527]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 29 11:49:50 np0005601226 podman[77127]: 2026-01-29 16:49:50.568178589 +0000 UTC m=+0.054301592 container create 56403195f25b2f8a861ae1ebe392a064b9f59890e402b78459b0bf931d38445a (image=quay.io/ceph/ceph:v20, name=wonderful_poincare, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 11:49:50 np0005601226 podman[77129]: 2026-01-29 16:49:50.581618394 +0000 UTC m=+0.063839416 container create e2aeded34be8f39350638d1988db95a8f176b41023975dfeb30602ba311b1944 (image=quay.io/ceph/ceph:v20, name=kind_jennings, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 29 11:49:50 np0005601226 systemd[1]: Started libpod-conmon-e2aeded34be8f39350638d1988db95a8f176b41023975dfeb30602ba311b1944.scope.
Jan 29 11:49:50 np0005601226 systemd[1]: Started libpod-conmon-56403195f25b2f8a861ae1ebe392a064b9f59890e402b78459b0bf931d38445a.scope.
Jan 29 11:49:50 np0005601226 podman[77127]: 2026-01-29 16:49:50.540182932 +0000 UTC m=+0.026305955 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:49:50 np0005601226 podman[77129]: 2026-01-29 16:49:50.544730783 +0000 UTC m=+0.026951885 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:49:50 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:49:50 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:49:50 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8ff1f7fe927d280f4150a763d627dd66dde8e8facdceebca9baa4219566d980/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:50 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8ff1f7fe927d280f4150a763d627dd66dde8e8facdceebca9baa4219566d980/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:50 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8ff1f7fe927d280f4150a763d627dd66dde8e8facdceebca9baa4219566d980/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:50 np0005601226 podman[77127]: 2026-01-29 16:49:50.674362875 +0000 UTC m=+0.160485898 container init 56403195f25b2f8a861ae1ebe392a064b9f59890e402b78459b0bf931d38445a (image=quay.io/ceph/ceph:v20, name=wonderful_poincare, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 29 11:49:50 np0005601226 podman[77127]: 2026-01-29 16:49:50.680091993 +0000 UTC m=+0.166214996 container start 56403195f25b2f8a861ae1ebe392a064b9f59890e402b78459b0bf931d38445a (image=quay.io/ceph/ceph:v20, name=wonderful_poincare, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 29 11:49:50 np0005601226 podman[77129]: 2026-01-29 16:49:50.68453397 +0000 UTC m=+0.166755012 container init e2aeded34be8f39350638d1988db95a8f176b41023975dfeb30602ba311b1944 (image=quay.io/ceph/ceph:v20, name=kind_jennings, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 29 11:49:50 np0005601226 podman[77129]: 2026-01-29 16:49:50.687904824 +0000 UTC m=+0.170125836 container start e2aeded34be8f39350638d1988db95a8f176b41023975dfeb30602ba311b1944 (image=quay.io/ceph/ceph:v20, name=kind_jennings, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 11:49:50 np0005601226 podman[77129]: 2026-01-29 16:49:50.700692871 +0000 UTC m=+0.182913893 container attach e2aeded34be8f39350638d1988db95a8f176b41023975dfeb30602ba311b1944 (image=quay.io/ceph/ceph:v20, name=kind_jennings, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:49:50 np0005601226 podman[77127]: 2026-01-29 16:49:50.719020948 +0000 UTC m=+0.205144061 container attach 56403195f25b2f8a861ae1ebe392a064b9f59890e402b78459b0bf931d38445a (image=quay.io/ceph/ceph:v20, name=wonderful_poincare, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 11:49:50 np0005601226 kind_jennings[77159]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable)
Jan 29 11:49:50 np0005601226 systemd[1]: libpod-e2aeded34be8f39350638d1988db95a8f176b41023975dfeb30602ba311b1944.scope: Deactivated successfully.
Jan 29 11:49:50 np0005601226 podman[77129]: 2026-01-29 16:49:50.786241258 +0000 UTC m=+0.268462290 container died e2aeded34be8f39350638d1988db95a8f176b41023975dfeb30602ba311b1944 (image=quay.io/ceph/ceph:v20, name=kind_jennings, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 29 11:49:50 np0005601226 systemd[1]: var-lib-containers-storage-overlay-0f0bb0775f0220957868462326214f0846f2192d58f35edfda94450c5a43ce11-merged.mount: Deactivated successfully.
Jan 29 11:49:50 np0005601226 podman[77129]: 2026-01-29 16:49:50.881078233 +0000 UTC m=+0.363299245 container remove e2aeded34be8f39350638d1988db95a8f176b41023975dfeb30602ba311b1944 (image=quay.io/ceph/ceph:v20, name=kind_jennings, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 29 11:49:50 np0005601226 systemd[1]: libpod-conmon-e2aeded34be8f39350638d1988db95a8f176b41023975dfeb30602ba311b1944.scope: Deactivated successfully.
Jan 29 11:49:50 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0)
Jan 29 11:49:50 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:49:51 np0005601226 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 11:49:51 np0005601226 ceph-mgr[75527]: [cephadm INFO root] Saving service mgr spec with placement count:2
Jan 29 11:49:51 np0005601226 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Jan 29 11:49:51 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 29 11:49:51 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:49:51 np0005601226 wonderful_poincare[77161]: Scheduled mgr update...
Jan 29 11:49:51 np0005601226 systemd[1]: libpod-56403195f25b2f8a861ae1ebe392a064b9f59890e402b78459b0bf931d38445a.scope: Deactivated successfully.
Jan 29 11:49:51 np0005601226 conmon[77161]: conmon 56403195f25b2f8a861a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-56403195f25b2f8a861ae1ebe392a064b9f59890e402b78459b0bf931d38445a.scope/container/memory.events
Jan 29 11:49:51 np0005601226 podman[77127]: 2026-01-29 16:49:51.162723031 +0000 UTC m=+0.648846024 container died 56403195f25b2f8a861ae1ebe392a064b9f59890e402b78459b0bf931d38445a (image=quay.io/ceph/ceph:v20, name=wonderful_poincare, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 29 11:49:51 np0005601226 systemd[1]: var-lib-containers-storage-overlay-b8ff1f7fe927d280f4150a763d627dd66dde8e8facdceebca9baa4219566d980-merged.mount: Deactivated successfully.
Jan 29 11:49:51 np0005601226 podman[77127]: 2026-01-29 16:49:51.423148402 +0000 UTC m=+0.909271395 container remove 56403195f25b2f8a861ae1ebe392a064b9f59890e402b78459b0bf931d38445a (image=quay.io/ceph/ceph:v20, name=wonderful_poincare, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 29 11:49:51 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 11:49:51 np0005601226 podman[77280]: 2026-01-29 16:49:51.552777245 +0000 UTC m=+0.112940968 container create cfefcc92093691b87063c831eadc72af1e8fe59c57387b321192824f6d62ec9a (image=quay.io/ceph/ceph:v20, name=goofy_maxwell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 29 11:49:51 np0005601226 podman[77280]: 2026-01-29 16:49:51.465510864 +0000 UTC m=+0.025674587 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:49:51 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:49:51 np0005601226 systemd[1]: Started libpod-conmon-cfefcc92093691b87063c831eadc72af1e8fe59c57387b321192824f6d62ec9a.scope.
Jan 29 11:49:51 np0005601226 systemd[1]: libpod-conmon-56403195f25b2f8a861ae1ebe392a064b9f59890e402b78459b0bf931d38445a.scope: Deactivated successfully.
Jan 29 11:49:51 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:49:51 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20319db92d64ab5413c86c05dd00ffb0e4f16328a470249a71a273c34743fee5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:51 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20319db92d64ab5413c86c05dd00ffb0e4f16328a470249a71a273c34743fee5/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:51 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20319db92d64ab5413c86c05dd00ffb0e4f16328a470249a71a273c34743fee5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:51 np0005601226 podman[77280]: 2026-01-29 16:49:51.739230146 +0000 UTC m=+0.299393889 container init cfefcc92093691b87063c831eadc72af1e8fe59c57387b321192824f6d62ec9a (image=quay.io/ceph/ceph:v20, name=goofy_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 29 11:49:51 np0005601226 podman[77280]: 2026-01-29 16:49:51.747547223 +0000 UTC m=+0.307710936 container start cfefcc92093691b87063c831eadc72af1e8fe59c57387b321192824f6d62ec9a (image=quay.io/ceph/ceph:v20, name=goofy_maxwell, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True)
Jan 29 11:49:51 np0005601226 podman[77280]: 2026-01-29 16:49:51.763462526 +0000 UTC m=+0.323626289 container attach cfefcc92093691b87063c831eadc72af1e8fe59c57387b321192824f6d62ec9a (image=quay.io/ceph/ceph:v20, name=goofy_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:49:52 np0005601226 ceph-mon[75233]: Saving service mon spec with placement count:5
Jan 29 11:49:52 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:49:52 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:49:52 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:49:52 np0005601226 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 11:49:52 np0005601226 ceph-mgr[75527]: [cephadm INFO root] Saving service crash spec with placement *
Jan 29 11:49:52 np0005601226 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Jan 29 11:49:52 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 29 11:49:52 np0005601226 podman[77419]: 2026-01-29 16:49:52.208529982 +0000 UTC m=+0.092087361 container exec 79fb58d438a044b744a5a22031968fb38c458a504a88d1d9ce361ec7f4a99a0b (image=quay.io/ceph/ceph:v20, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 29 11:49:52 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:49:52 np0005601226 goofy_maxwell[77324]: Scheduled crash update...
Jan 29 11:49:52 np0005601226 systemd[1]: libpod-cfefcc92093691b87063c831eadc72af1e8fe59c57387b321192824f6d62ec9a.scope: Deactivated successfully.
Jan 29 11:49:52 np0005601226 podman[77280]: 2026-01-29 16:49:52.23269589 +0000 UTC m=+0.792859603 container died cfefcc92093691b87063c831eadc72af1e8fe59c57387b321192824f6d62ec9a (image=quay.io/ceph/ceph:v20, name=goofy_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 29 11:49:52 np0005601226 systemd[1]: var-lib-containers-storage-overlay-20319db92d64ab5413c86c05dd00ffb0e4f16328a470249a71a273c34743fee5-merged.mount: Deactivated successfully.
Jan 29 11:49:52 np0005601226 podman[77280]: 2026-01-29 16:49:52.292725029 +0000 UTC m=+0.852888742 container remove cfefcc92093691b87063c831eadc72af1e8fe59c57387b321192824f6d62ec9a (image=quay.io/ceph/ceph:v20, name=goofy_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 29 11:49:52 np0005601226 systemd[1]: libpod-conmon-cfefcc92093691b87063c831eadc72af1e8fe59c57387b321192824f6d62ec9a.scope: Deactivated successfully.
Jan 29 11:49:52 np0005601226 ceph-mgr[75527]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 29 11:49:52 np0005601226 podman[77419]: 2026-01-29 16:49:52.335683438 +0000 UTC m=+0.219240817 container exec_died 79fb58d438a044b744a5a22031968fb38c458a504a88d1d9ce361ec7f4a99a0b (image=quay.io/ceph/ceph:v20, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 11:49:52 np0005601226 podman[77454]: 2026-01-29 16:49:52.373191619 +0000 UTC m=+0.057476000 container create 2a7333f4d9e7b8122b72edd06e3aa12b9398db2c866f4559a1cdd736d4e0ced2 (image=quay.io/ceph/ceph:v20, name=bold_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 11:49:52 np0005601226 systemd[1]: Started libpod-conmon-2a7333f4d9e7b8122b72edd06e3aa12b9398db2c866f4559a1cdd736d4e0ced2.scope.
Jan 29 11:49:52 np0005601226 podman[77454]: 2026-01-29 16:49:52.339217238 +0000 UTC m=+0.023501669 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:49:52 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:49:52 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca7259d88eaa8dda2a89737c6cef6055a153b7d59b14a264cec2007094563079/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:52 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca7259d88eaa8dda2a89737c6cef6055a153b7d59b14a264cec2007094563079/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:52 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca7259d88eaa8dda2a89737c6cef6055a153b7d59b14a264cec2007094563079/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:52 np0005601226 podman[77454]: 2026-01-29 16:49:52.462027418 +0000 UTC m=+0.146311819 container init 2a7333f4d9e7b8122b72edd06e3aa12b9398db2c866f4559a1cdd736d4e0ced2 (image=quay.io/ceph/ceph:v20, name=bold_elbakyan, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:49:52 np0005601226 podman[77454]: 2026-01-29 16:49:52.470400387 +0000 UTC m=+0.154684788 container start 2a7333f4d9e7b8122b72edd06e3aa12b9398db2c866f4559a1cdd736d4e0ced2 (image=quay.io/ceph/ceph:v20, name=bold_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 29 11:49:52 np0005601226 podman[77454]: 2026-01-29 16:49:52.473549576 +0000 UTC m=+0.157833967 container attach 2a7333f4d9e7b8122b72edd06e3aa12b9398db2c866f4559a1cdd736d4e0ced2 (image=quay.io/ceph/ceph:v20, name=bold_elbakyan, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 29 11:49:52 np0005601226 ceph-mgr[75527]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 29 11:49:52 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 11:49:52 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:49:52 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0)
Jan 29 11:49:52 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1703427650' entity='client.admin' 
Jan 29 11:49:52 np0005601226 systemd[1]: libpod-2a7333f4d9e7b8122b72edd06e3aa12b9398db2c866f4559a1cdd736d4e0ced2.scope: Deactivated successfully.
Jan 29 11:49:52 np0005601226 podman[77601]: 2026-01-29 16:49:52.949936281 +0000 UTC m=+0.025890523 container died 2a7333f4d9e7b8122b72edd06e3aa12b9398db2c866f4559a1cdd736d4e0ced2 (image=quay.io/ceph/ceph:v20, name=bold_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 11:49:52 np0005601226 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 77628 (sysctl)
Jan 29 11:49:52 np0005601226 systemd[1]: var-lib-containers-storage-overlay-ca7259d88eaa8dda2a89737c6cef6055a153b7d59b14a264cec2007094563079-merged.mount: Deactivated successfully.
Jan 29 11:49:52 np0005601226 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Jan 29 11:49:52 np0005601226 podman[77601]: 2026-01-29 16:49:52.99641893 +0000 UTC m=+0.072373162 container remove 2a7333f4d9e7b8122b72edd06e3aa12b9398db2c866f4559a1cdd736d4e0ced2 (image=quay.io/ceph/ceph:v20, name=bold_elbakyan, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True)
Jan 29 11:49:53 np0005601226 systemd[1]: libpod-conmon-2a7333f4d9e7b8122b72edd06e3aa12b9398db2c866f4559a1cdd736d4e0ced2.scope: Deactivated successfully.
Jan 29 11:49:53 np0005601226 ceph-mon[75233]: Saving service mgr spec with placement count:2
Jan 29 11:49:53 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:49:53 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:49:53 np0005601226 ceph-mon[75233]: from='client.? 192.168.122.100:0/1703427650' entity='client.admin' 
Jan 29 11:49:53 np0005601226 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Jan 29 11:49:53 np0005601226 podman[77632]: 2026-01-29 16:49:53.067447918 +0000 UTC m=+0.049493042 container create 175b5201fd5726e7ac06e8876ead957de6d05455f3d610754408d242142dc325 (image=quay.io/ceph/ceph:v20, name=distracted_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 29 11:49:53 np0005601226 systemd[1]: Started libpod-conmon-175b5201fd5726e7ac06e8876ead957de6d05455f3d610754408d242142dc325.scope.
Jan 29 11:49:53 np0005601226 podman[77632]: 2026-01-29 16:49:53.041314119 +0000 UTC m=+0.023359303 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:49:53 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:49:53 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/394bc49307e3aaf7f180ba5bf2a75cedd00fa830170a3872f137267c187f8678/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:53 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/394bc49307e3aaf7f180ba5bf2a75cedd00fa830170a3872f137267c187f8678/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:53 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/394bc49307e3aaf7f180ba5bf2a75cedd00fa830170a3872f137267c187f8678/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:53 np0005601226 podman[77632]: 2026-01-29 16:49:53.178457624 +0000 UTC m=+0.160502748 container init 175b5201fd5726e7ac06e8876ead957de6d05455f3d610754408d242142dc325 (image=quay.io/ceph/ceph:v20, name=distracted_margulis, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 29 11:49:53 np0005601226 podman[77632]: 2026-01-29 16:49:53.189121245 +0000 UTC m=+0.171166379 container start 175b5201fd5726e7ac06e8876ead957de6d05455f3d610754408d242142dc325 (image=quay.io/ceph/ceph:v20, name=distracted_margulis, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 11:49:53 np0005601226 podman[77632]: 2026-01-29 16:49:53.193544151 +0000 UTC m=+0.175589265 container attach 175b5201fd5726e7ac06e8876ead957de6d05455f3d610754408d242142dc325 (image=quay.io/ceph/ceph:v20, name=distracted_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 11:49:53 np0005601226 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 11:49:53 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0)
Jan 29 11:49:53 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:49:53 np0005601226 systemd[1]: libpod-175b5201fd5726e7ac06e8876ead957de6d05455f3d610754408d242142dc325.scope: Deactivated successfully.
Jan 29 11:49:53 np0005601226 conmon[77654]: conmon 175b5201fd5726e7ac06 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-175b5201fd5726e7ac06e8876ead957de6d05455f3d610754408d242142dc325.scope/container/memory.events
Jan 29 11:49:53 np0005601226 podman[77632]: 2026-01-29 16:49:53.604702048 +0000 UTC m=+0.586747162 container died 175b5201fd5726e7ac06e8876ead957de6d05455f3d610754408d242142dc325 (image=quay.io/ceph/ceph:v20, name=distracted_margulis, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:49:53 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 11:49:53 np0005601226 systemd[1]: var-lib-containers-storage-overlay-394bc49307e3aaf7f180ba5bf2a75cedd00fa830170a3872f137267c187f8678-merged.mount: Deactivated successfully.
Jan 29 11:49:53 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:49:53 np0005601226 podman[77632]: 2026-01-29 16:49:53.663591471 +0000 UTC m=+0.645636585 container remove 175b5201fd5726e7ac06e8876ead957de6d05455f3d610754408d242142dc325 (image=quay.io/ceph/ceph:v20, name=distracted_margulis, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:49:53 np0005601226 systemd[1]: libpod-conmon-175b5201fd5726e7ac06e8876ead957de6d05455f3d610754408d242142dc325.scope: Deactivated successfully.
Jan 29 11:49:53 np0005601226 podman[77791]: 2026-01-29 16:49:53.711290927 +0000 UTC m=+0.034801058 container create d16ee93abcf4579063e0b828165a58c26fd338d63ade5c1e3a0e215fdcf83d84 (image=quay.io/ceph/ceph:v20, name=nostalgic_golick, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 29 11:49:53 np0005601226 systemd[1]: Started libpod-conmon-d16ee93abcf4579063e0b828165a58c26fd338d63ade5c1e3a0e215fdcf83d84.scope.
Jan 29 11:49:53 np0005601226 podman[77791]: 2026-01-29 16:49:53.694182197 +0000 UTC m=+0.017692348 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:49:53 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:49:53 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a54b84b5c9932d4de6f6be99eae292fc43216a4eb94e5fc8f0d33d4d18eb8f5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:53 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a54b84b5c9932d4de6f6be99eae292fc43216a4eb94e5fc8f0d33d4d18eb8f5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:53 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a54b84b5c9932d4de6f6be99eae292fc43216a4eb94e5fc8f0d33d4d18eb8f5/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:53 np0005601226 podman[77791]: 2026-01-29 16:49:53.811459777 +0000 UTC m=+0.134969938 container init d16ee93abcf4579063e0b828165a58c26fd338d63ade5c1e3a0e215fdcf83d84 (image=quay.io/ceph/ceph:v20, name=nostalgic_golick, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 29 11:49:53 np0005601226 podman[77791]: 2026-01-29 16:49:53.816183634 +0000 UTC m=+0.139693765 container start d16ee93abcf4579063e0b828165a58c26fd338d63ade5c1e3a0e215fdcf83d84 (image=quay.io/ceph/ceph:v20, name=nostalgic_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 29 11:49:53 np0005601226 podman[77791]: 2026-01-29 16:49:53.822252712 +0000 UTC m=+0.145762933 container attach d16ee93abcf4579063e0b828165a58c26fd338d63ade5c1e3a0e215fdcf83d84 (image=quay.io/ceph/ceph:v20, name=nostalgic_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 11:49:54 np0005601226 ceph-mon[75233]: Saving service crash spec with placement *
Jan 29 11:49:54 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:49:54 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:49:54 np0005601226 podman[77874]: 2026-01-29 16:49:54.020587811 +0000 UTC m=+0.033739155 container create baef76b6e1bf8e9434c9dcaed824f6b6eb0c883d4915795e2b7d00162f8d70dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_morse, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 29 11:49:54 np0005601226 systemd[1]: Started libpod-conmon-baef76b6e1bf8e9434c9dcaed824f6b6eb0c883d4915795e2b7d00162f8d70dd.scope.
Jan 29 11:49:54 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:49:54 np0005601226 podman[77874]: 2026-01-29 16:49:54.084578151 +0000 UTC m=+0.097729495 container init baef76b6e1bf8e9434c9dcaed824f6b6eb0c883d4915795e2b7d00162f8d70dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_morse, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:49:54 np0005601226 podman[77874]: 2026-01-29 16:49:54.089757442 +0000 UTC m=+0.102908796 container start baef76b6e1bf8e9434c9dcaed824f6b6eb0c883d4915795e2b7d00162f8d70dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Jan 29 11:49:54 np0005601226 elegant_morse[77890]: 167 167
Jan 29 11:49:54 np0005601226 systemd[1]: libpod-baef76b6e1bf8e9434c9dcaed824f6b6eb0c883d4915795e2b7d00162f8d70dd.scope: Deactivated successfully.
Jan 29 11:49:54 np0005601226 podman[77874]: 2026-01-29 16:49:54.095392216 +0000 UTC m=+0.108543560 container attach baef76b6e1bf8e9434c9dcaed824f6b6eb0c883d4915795e2b7d00162f8d70dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_morse, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 11:49:54 np0005601226 podman[77874]: 2026-01-29 16:49:54.095702256 +0000 UTC m=+0.108853590 container died baef76b6e1bf8e9434c9dcaed824f6b6eb0c883d4915795e2b7d00162f8d70dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_morse, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 29 11:49:54 np0005601226 podman[77874]: 2026-01-29 16:49:54.004239045 +0000 UTC m=+0.017390409 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:49:54 np0005601226 systemd[1]: var-lib-containers-storage-overlay-48fcb30f0ce311972005fa84758fba1e9141bf489f4011b4f06fbf26ffa0d26d-merged.mount: Deactivated successfully.
Jan 29 11:49:54 np0005601226 podman[77874]: 2026-01-29 16:49:54.149407409 +0000 UTC m=+0.162558753 container remove baef76b6e1bf8e9434c9dcaed824f6b6eb0c883d4915795e2b7d00162f8d70dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_morse, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 29 11:49:54 np0005601226 systemd[1]: libpod-conmon-baef76b6e1bf8e9434c9dcaed824f6b6eb0c883d4915795e2b7d00162f8d70dd.scope: Deactivated successfully.
Jan 29 11:49:54 np0005601226 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 11:49:54 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 29 11:49:54 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:49:54 np0005601226 ceph-mgr[75527]: [cephadm INFO root] Added label _admin to host compute-0
Jan 29 11:49:54 np0005601226 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Jan 29 11:49:54 np0005601226 nostalgic_golick[77837]: Added label _admin to host compute-0
Jan 29 11:49:54 np0005601226 systemd[1]: libpod-d16ee93abcf4579063e0b828165a58c26fd338d63ade5c1e3a0e215fdcf83d84.scope: Deactivated successfully.
Jan 29 11:49:54 np0005601226 podman[77791]: 2026-01-29 16:49:54.21537472 +0000 UTC m=+0.538884851 container died d16ee93abcf4579063e0b828165a58c26fd338d63ade5c1e3a0e215fdcf83d84 (image=quay.io/ceph/ceph:v20, name=nostalgic_golick, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:49:54 np0005601226 systemd[1]: var-lib-containers-storage-overlay-9a54b84b5c9932d4de6f6be99eae292fc43216a4eb94e5fc8f0d33d4d18eb8f5-merged.mount: Deactivated successfully.
Jan 29 11:49:54 np0005601226 podman[77791]: 2026-01-29 16:49:54.255439 +0000 UTC m=+0.578949131 container remove d16ee93abcf4579063e0b828165a58c26fd338d63ade5c1e3a0e215fdcf83d84 (image=quay.io/ceph/ceph:v20, name=nostalgic_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 11:49:54 np0005601226 systemd[1]: libpod-conmon-d16ee93abcf4579063e0b828165a58c26fd338d63ade5c1e3a0e215fdcf83d84.scope: Deactivated successfully.
Jan 29 11:49:54 np0005601226 ceph-mgr[75527]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 29 11:49:54 np0005601226 podman[77923]: 2026-01-29 16:49:54.329028868 +0000 UTC m=+0.055294443 container create 51cae77b36f3c5261e7d601ea0be545af814e85db6965d6a4df69dfda4ab5cdd (image=quay.io/ceph/ceph:v20, name=busy_galileo, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 29 11:49:54 np0005601226 systemd[1]: Started libpod-conmon-51cae77b36f3c5261e7d601ea0be545af814e85db6965d6a4df69dfda4ab5cdd.scope.
Jan 29 11:49:54 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:49:54 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31e0b1f018a84a10226e36c6f34a0bbcde598bca4b86bc4079decc0542c2f5cd/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:54 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31e0b1f018a84a10226e36c6f34a0bbcde598bca4b86bc4079decc0542c2f5cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:54 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31e0b1f018a84a10226e36c6f34a0bbcde598bca4b86bc4079decc0542c2f5cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:54 np0005601226 podman[77923]: 2026-01-29 16:49:54.390487891 +0000 UTC m=+0.116753466 container init 51cae77b36f3c5261e7d601ea0be545af814e85db6965d6a4df69dfda4ab5cdd (image=quay.io/ceph/ceph:v20, name=busy_galileo, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 11:49:54 np0005601226 podman[77923]: 2026-01-29 16:49:54.394022809 +0000 UTC m=+0.120288374 container start 51cae77b36f3c5261e7d601ea0be545af814e85db6965d6a4df69dfda4ab5cdd (image=quay.io/ceph/ceph:v20, name=busy_galileo, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 11:49:54 np0005601226 podman[77923]: 2026-01-29 16:49:54.301859617 +0000 UTC m=+0.028125292 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:49:54 np0005601226 podman[77923]: 2026-01-29 16:49:54.397418985 +0000 UTC m=+0.123684560 container attach 51cae77b36f3c5261e7d601ea0be545af814e85db6965d6a4df69dfda4ab5cdd (image=quay.io/ceph/ceph:v20, name=busy_galileo, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 29 11:49:54 np0005601226 ceph-mgr[75527]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 29 11:49:54 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0)
Jan 29 11:49:54 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/50411072' entity='client.admin' 
Jan 29 11:49:54 np0005601226 busy_galileo[77940]: set mgr/dashboard/cluster/status
Jan 29 11:49:54 np0005601226 systemd[1]: libpod-51cae77b36f3c5261e7d601ea0be545af814e85db6965d6a4df69dfda4ab5cdd.scope: Deactivated successfully.
Jan 29 11:49:54 np0005601226 podman[77923]: 2026-01-29 16:49:54.938788372 +0000 UTC m=+0.665053977 container died 51cae77b36f3c5261e7d601ea0be545af814e85db6965d6a4df69dfda4ab5cdd (image=quay.io/ceph/ceph:v20, name=busy_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 29 11:49:54 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:49:54 np0005601226 systemd[1]: var-lib-containers-storage-overlay-31e0b1f018a84a10226e36c6f34a0bbcde598bca4b86bc4079decc0542c2f5cd-merged.mount: Deactivated successfully.
Jan 29 11:49:54 np0005601226 podman[77923]: 2026-01-29 16:49:54.985483017 +0000 UTC m=+0.711748632 container remove 51cae77b36f3c5261e7d601ea0be545af814e85db6965d6a4df69dfda4ab5cdd (image=quay.io/ceph/ceph:v20, name=busy_galileo, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:49:55 np0005601226 systemd[1]: libpod-conmon-51cae77b36f3c5261e7d601ea0be545af814e85db6965d6a4df69dfda4ab5cdd.scope: Deactivated successfully.
Jan 29 11:49:55 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:49:55 np0005601226 ceph-mon[75233]: from='client.? 192.168.122.100:0/50411072' entity='client.admin' 
Jan 29 11:49:55 np0005601226 systemd[1]: Reloading.
Jan 29 11:49:55 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 11:49:55 np0005601226 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 11:49:55 np0005601226 podman[78025]: 2026-01-29 16:49:55.436659643 +0000 UTC m=+0.041036611 container create e37cedd0d392d33b6aa96f37791801a9e5dcc81a855500d970b5239a5aef9ae0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 11:49:55 np0005601226 systemd[1]: Started libpod-conmon-e37cedd0d392d33b6aa96f37791801a9e5dcc81a855500d970b5239a5aef9ae0.scope.
Jan 29 11:49:55 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:49:55 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74f14b673688afc6aa6a47f9f5bd5c0346838069b10c92ffaa0b86fc78c00377/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:55 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74f14b673688afc6aa6a47f9f5bd5c0346838069b10c92ffaa0b86fc78c00377/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:55 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74f14b673688afc6aa6a47f9f5bd5c0346838069b10c92ffaa0b86fc78c00377/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:55 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74f14b673688afc6aa6a47f9f5bd5c0346838069b10c92ffaa0b86fc78c00377/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:55 np0005601226 podman[78025]: 2026-01-29 16:49:55.419048367 +0000 UTC m=+0.023425355 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:49:55 np0005601226 podman[78025]: 2026-01-29 16:49:55.515436911 +0000 UTC m=+0.119813899 container init e37cedd0d392d33b6aa96f37791801a9e5dcc81a855500d970b5239a5aef9ae0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_banzai, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 29 11:49:55 np0005601226 podman[78025]: 2026-01-29 16:49:55.523412678 +0000 UTC m=+0.127789666 container start e37cedd0d392d33b6aa96f37791801a9e5dcc81a855500d970b5239a5aef9ae0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_banzai, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2)
Jan 29 11:49:55 np0005601226 podman[78025]: 2026-01-29 16:49:55.526849604 +0000 UTC m=+0.131226592 container attach e37cedd0d392d33b6aa96f37791801a9e5dcc81a855500d970b5239a5aef9ae0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True)
Jan 29 11:49:55 np0005601226 python3[78073]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid cc5c72e3-31e0-58b9-8731-456117d38f4a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:49:55 np0005601226 podman[78322]: 2026-01-29 16:49:55.984864181 +0000 UTC m=+0.053158286 container create a09d86df106693d54853ba942456909542f04f4995b0218902e0102b1d6a4eba (image=quay.io/ceph/ceph:v20, name=sweet_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:49:56 np0005601226 quizzical_banzai[78041]: [
Jan 29 11:49:56 np0005601226 quizzical_banzai[78041]:    {
Jan 29 11:49:56 np0005601226 quizzical_banzai[78041]:        "available": false,
Jan 29 11:49:56 np0005601226 quizzical_banzai[78041]:        "being_replaced": false,
Jan 29 11:49:56 np0005601226 quizzical_banzai[78041]:        "ceph_device_lvm": false,
Jan 29 11:49:56 np0005601226 quizzical_banzai[78041]:        "device_id": "QEMU_DVD-ROM_QM00001",
Jan 29 11:49:56 np0005601226 quizzical_banzai[78041]:        "lsm_data": {},
Jan 29 11:49:56 np0005601226 quizzical_banzai[78041]:        "lvs": [],
Jan 29 11:49:56 np0005601226 quizzical_banzai[78041]:        "path": "/dev/sr0",
Jan 29 11:49:56 np0005601226 quizzical_banzai[78041]:        "rejected_reasons": [
Jan 29 11:49:56 np0005601226 quizzical_banzai[78041]:            "Has a FileSystem",
Jan 29 11:49:56 np0005601226 quizzical_banzai[78041]:            "Insufficient space (<5GB)"
Jan 29 11:49:56 np0005601226 quizzical_banzai[78041]:        ],
Jan 29 11:49:56 np0005601226 quizzical_banzai[78041]:        "sys_api": {
Jan 29 11:49:56 np0005601226 quizzical_banzai[78041]:            "actuators": null,
Jan 29 11:49:56 np0005601226 quizzical_banzai[78041]:            "device_nodes": [
Jan 29 11:49:56 np0005601226 quizzical_banzai[78041]:                "sr0"
Jan 29 11:49:56 np0005601226 quizzical_banzai[78041]:            ],
Jan 29 11:49:56 np0005601226 quizzical_banzai[78041]:            "devname": "sr0",
Jan 29 11:49:56 np0005601226 quizzical_banzai[78041]:            "human_readable_size": "482.00 KB",
Jan 29 11:49:56 np0005601226 quizzical_banzai[78041]:            "id_bus": "ata",
Jan 29 11:49:56 np0005601226 quizzical_banzai[78041]:            "model": "QEMU DVD-ROM",
Jan 29 11:49:56 np0005601226 quizzical_banzai[78041]:            "nr_requests": "2",
Jan 29 11:49:56 np0005601226 quizzical_banzai[78041]:            "parent": "/dev/sr0",
Jan 29 11:49:56 np0005601226 quizzical_banzai[78041]:            "partitions": {},
Jan 29 11:49:56 np0005601226 quizzical_banzai[78041]:            "path": "/dev/sr0",
Jan 29 11:49:56 np0005601226 quizzical_banzai[78041]:            "removable": "1",
Jan 29 11:49:56 np0005601226 quizzical_banzai[78041]:            "rev": "2.5+",
Jan 29 11:49:56 np0005601226 quizzical_banzai[78041]:            "ro": "0",
Jan 29 11:49:56 np0005601226 quizzical_banzai[78041]:            "rotational": "1",
Jan 29 11:49:56 np0005601226 quizzical_banzai[78041]:            "sas_address": "",
Jan 29 11:49:56 np0005601226 quizzical_banzai[78041]:            "sas_device_handle": "",
Jan 29 11:49:56 np0005601226 quizzical_banzai[78041]:            "scheduler_mode": "mq-deadline",
Jan 29 11:49:56 np0005601226 quizzical_banzai[78041]:            "sectors": 0,
Jan 29 11:49:56 np0005601226 quizzical_banzai[78041]:            "sectorsize": "2048",
Jan 29 11:49:56 np0005601226 quizzical_banzai[78041]:            "size": 493568.0,
Jan 29 11:49:56 np0005601226 quizzical_banzai[78041]:            "support_discard": "2048",
Jan 29 11:49:56 np0005601226 quizzical_banzai[78041]:            "type": "disk",
Jan 29 11:49:56 np0005601226 quizzical_banzai[78041]:            "vendor": "QEMU"
Jan 29 11:49:56 np0005601226 quizzical_banzai[78041]:        }
Jan 29 11:49:56 np0005601226 quizzical_banzai[78041]:    }
Jan 29 11:49:56 np0005601226 quizzical_banzai[78041]: ]
Jan 29 11:49:56 np0005601226 ceph-mon[75233]: Added label _admin to host compute-0
Jan 29 11:49:56 np0005601226 systemd[1]: Started libpod-conmon-a09d86df106693d54853ba942456909542f04f4995b0218902e0102b1d6a4eba.scope.
Jan 29 11:49:56 np0005601226 podman[78025]: 2026-01-29 16:49:56.043186815 +0000 UTC m=+0.647563783 container died e37cedd0d392d33b6aa96f37791801a9e5dcc81a855500d970b5239a5aef9ae0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_banzai, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 29 11:49:56 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:49:56 np0005601226 systemd[1]: libpod-e37cedd0d392d33b6aa96f37791801a9e5dcc81a855500d970b5239a5aef9ae0.scope: Deactivated successfully.
Jan 29 11:49:56 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87f077fee63e523785403216023b05f1f3cf8dbe7acd96c5de30d10c56f0ab5d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:56 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87f077fee63e523785403216023b05f1f3cf8dbe7acd96c5de30d10c56f0ab5d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:56 np0005601226 podman[78322]: 2026-01-29 16:49:55.965066138 +0000 UTC m=+0.033360343 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:49:56 np0005601226 podman[78322]: 2026-01-29 16:49:56.070375237 +0000 UTC m=+0.138669362 container init a09d86df106693d54853ba942456909542f04f4995b0218902e0102b1d6a4eba (image=quay.io/ceph/ceph:v20, name=sweet_diffie, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 29 11:49:56 np0005601226 systemd[1]: var-lib-containers-storage-overlay-74f14b673688afc6aa6a47f9f5bd5c0346838069b10c92ffaa0b86fc78c00377-merged.mount: Deactivated successfully.
Jan 29 11:49:56 np0005601226 podman[78322]: 2026-01-29 16:49:56.079796038 +0000 UTC m=+0.148090133 container start a09d86df106693d54853ba942456909542f04f4995b0218902e0102b1d6a4eba (image=quay.io/ceph/ceph:v20, name=sweet_diffie, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:49:56 np0005601226 podman[78322]: 2026-01-29 16:49:56.083112062 +0000 UTC m=+0.151406187 container attach a09d86df106693d54853ba942456909542f04f4995b0218902e0102b1d6a4eba (image=quay.io/ceph/ceph:v20, name=sweet_diffie, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 11:49:56 np0005601226 podman[78025]: 2026-01-29 16:49:56.095097742 +0000 UTC m=+0.699474710 container remove e37cedd0d392d33b6aa96f37791801a9e5dcc81a855500d970b5239a5aef9ae0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_banzai, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 29 11:49:56 np0005601226 systemd[1]: libpod-conmon-e37cedd0d392d33b6aa96f37791801a9e5dcc81a855500d970b5239a5aef9ae0.scope: Deactivated successfully.
Jan 29 11:49:56 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 11:49:56 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:49:56 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 11:49:56 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:49:56 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 11:49:56 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:49:56 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 11:49:56 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:49:56 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 29 11:49:56 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 29 11:49:56 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 11:49:56 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 11:49:56 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 11:49:56 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 11:49:56 np0005601226 ceph-mgr[75527]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Jan 29 11:49:56 np0005601226 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Jan 29 11:49:56 np0005601226 ceph-mgr[75527]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 29 11:49:56 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0)
Jan 29 11:49:56 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3272055169' entity='client.admin' 
Jan 29 11:49:56 np0005601226 systemd[1]: libpod-a09d86df106693d54853ba942456909542f04f4995b0218902e0102b1d6a4eba.scope: Deactivated successfully.
Jan 29 11:49:56 np0005601226 podman[78322]: 2026-01-29 16:49:56.526567096 +0000 UTC m=+0.594861191 container died a09d86df106693d54853ba942456909542f04f4995b0218902e0102b1d6a4eba (image=quay.io/ceph/ceph:v20, name=sweet_diffie, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 11:49:56 np0005601226 ceph-mgr[75527]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 29 11:49:56 np0005601226 systemd[1]: var-lib-containers-storage-overlay-87f077fee63e523785403216023b05f1f3cf8dbe7acd96c5de30d10c56f0ab5d-merged.mount: Deactivated successfully.
Jan 29 11:49:56 np0005601226 podman[78322]: 2026-01-29 16:49:56.562704575 +0000 UTC m=+0.630998680 container remove a09d86df106693d54853ba942456909542f04f4995b0218902e0102b1d6a4eba (image=quay.io/ceph/ceph:v20, name=sweet_diffie, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 29 11:49:56 np0005601226 systemd[1]: libpod-conmon-a09d86df106693d54853ba942456909542f04f4995b0218902e0102b1d6a4eba.scope: Deactivated successfully.
Jan 29 11:49:56 np0005601226 ceph-mgr[75527]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/cc5c72e3-31e0-58b9-8731-456117d38f4a/config/ceph.conf
Jan 29 11:49:56 np0005601226 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/cc5c72e3-31e0-58b9-8731-456117d38f4a/config/ceph.conf
Jan 29 11:49:57 np0005601226 ceph-mgr[75527]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 29 11:49:57 np0005601226 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 29 11:49:57 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:49:57 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:49:57 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:49:57 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:49:57 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 29 11:49:57 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 11:49:57 np0005601226 ceph-mon[75233]: Updating compute-0:/etc/ceph/ceph.conf
Jan 29 11:49:57 np0005601226 ceph-mon[75233]: from='client.? 192.168.122.100:0/3272055169' entity='client.admin' 
Jan 29 11:49:57 np0005601226 ceph-mon[75233]: Updating compute-0:/var/lib/ceph/cc5c72e3-31e0-58b9-8731-456117d38f4a/config/ceph.conf
Jan 29 11:49:57 np0005601226 ceph-mon[75233]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 29 11:49:57 np0005601226 ceph-mgr[75527]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/cc5c72e3-31e0-58b9-8731-456117d38f4a/config/ceph.client.admin.keyring
Jan 29 11:49:57 np0005601226 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/cc5c72e3-31e0-58b9-8731-456117d38f4a/config/ceph.client.admin.keyring
Jan 29 11:49:57 np0005601226 ansible-async_wrapper.py[79626]: Invoked with j886631176802 30 /home/zuul/.ansible/tmp/ansible-tmp-1769705397.0004828-36442-276758607973831/AnsiballZ_command.py _
Jan 29 11:49:57 np0005601226 ansible-async_wrapper.py[79702]: Starting module and watcher
Jan 29 11:49:57 np0005601226 ansible-async_wrapper.py[79702]: Start watching 79703 (30)
Jan 29 11:49:57 np0005601226 ansible-async_wrapper.py[79703]: Start module (79703)
Jan 29 11:49:57 np0005601226 ansible-async_wrapper.py[79626]: Return async_wrapper task started.
Jan 29 11:49:57 np0005601226 python3[79705]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid cc5c72e3-31e0-58b9-8731-456117d38f4a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:49:57 np0005601226 podman[79777]: 2026-01-29 16:49:57.717057476 +0000 UTC m=+0.050411751 container create 905e88918102bf3f5464366cfcd4cd2fc9ec0905ad222a1ec777fa3cd8267a7d (image=quay.io/ceph/ceph:v20, name=vibrant_fermi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 29 11:49:57 np0005601226 systemd[1]: Started libpod-conmon-905e88918102bf3f5464366cfcd4cd2fc9ec0905ad222a1ec777fa3cd8267a7d.scope.
Jan 29 11:49:57 np0005601226 podman[79777]: 2026-01-29 16:49:57.689505723 +0000 UTC m=+0.022859998 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:49:57 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:49:57 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07aafcf1ee6540a5f4e66123291d724bce25ff6ae250f9a9bafc411e3c4c6e1c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:57 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07aafcf1ee6540a5f4e66123291d724bce25ff6ae250f9a9bafc411e3c4c6e1c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:57 np0005601226 podman[79777]: 2026-01-29 16:49:57.817239157 +0000 UTC m=+0.150593432 container init 905e88918102bf3f5464366cfcd4cd2fc9ec0905ad222a1ec777fa3cd8267a7d (image=quay.io/ceph/ceph:v20, name=vibrant_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 11:49:57 np0005601226 podman[79777]: 2026-01-29 16:49:57.824977297 +0000 UTC m=+0.158331612 container start 905e88918102bf3f5464366cfcd4cd2fc9ec0905ad222a1ec777fa3cd8267a7d (image=quay.io/ceph/ceph:v20, name=vibrant_fermi, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 29 11:49:57 np0005601226 podman[79777]: 2026-01-29 16:49:57.831112796 +0000 UTC m=+0.164467101 container attach 905e88918102bf3f5464366cfcd4cd2fc9ec0905ad222a1ec777fa3cd8267a7d (image=quay.io/ceph/ceph:v20, name=vibrant_fermi, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 29 11:49:57 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 11:49:57 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:49:57 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 11:49:57 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:49:57 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 11:49:57 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:49:57 np0005601226 ceph-mgr[75527]: [progress INFO root] update: starting ev e545bf30-f9d2-451e-a7b4-58c60a7761e4 (Updating crash deployment (+1 -> 1))
Jan 29 11:49:57 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Jan 29 11:49:57 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Jan 29 11:49:57 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 29 11:49:57 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 11:49:57 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 11:49:57 np0005601226 ceph-mgr[75527]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Jan 29 11:49:57 np0005601226 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Jan 29 11:49:58 np0005601226 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 29 11:49:58 np0005601226 vibrant_fermi[79844]: 
Jan 29 11:49:58 np0005601226 vibrant_fermi[79844]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 29 11:49:58 np0005601226 systemd[1]: libpod-905e88918102bf3f5464366cfcd4cd2fc9ec0905ad222a1ec777fa3cd8267a7d.scope: Deactivated successfully.
Jan 29 11:49:58 np0005601226 podman[79984]: 2026-01-29 16:49:58.302778566 +0000 UTC m=+0.021366432 container died 905e88918102bf3f5464366cfcd4cd2fc9ec0905ad222a1ec777fa3cd8267a7d (image=quay.io/ceph/ceph:v20, name=vibrant_fermi, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 11:49:58 np0005601226 ceph-mgr[75527]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Jan 29 11:49:58 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 29 11:49:58 np0005601226 ceph-mon[75233]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Jan 29 11:49:58 np0005601226 systemd[1]: var-lib-containers-storage-overlay-07aafcf1ee6540a5f4e66123291d724bce25ff6ae250f9a9bafc411e3c4c6e1c-merged.mount: Deactivated successfully.
Jan 29 11:49:58 np0005601226 podman[79984]: 2026-01-29 16:49:58.361326828 +0000 UTC m=+0.079914674 container remove 905e88918102bf3f5464366cfcd4cd2fc9ec0905ad222a1ec777fa3cd8267a7d (image=quay.io/ceph/ceph:v20, name=vibrant_fermi, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 29 11:49:58 np0005601226 systemd[1]: libpod-conmon-905e88918102bf3f5464366cfcd4cd2fc9ec0905ad222a1ec777fa3cd8267a7d.scope: Deactivated successfully.
Jan 29 11:49:58 np0005601226 ansible-async_wrapper.py[79703]: Module complete (79703)
Jan 29 11:49:58 np0005601226 podman[80026]: 2026-01-29 16:49:58.41597563 +0000 UTC m=+0.037827192 container create b6e6d66f16195f9b45a4ccf31b7080be514143bfb4227b21e2bcc76de2cbb8b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_sanderson, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 29 11:49:58 np0005601226 systemd[1]: Started libpod-conmon-b6e6d66f16195f9b45a4ccf31b7080be514143bfb4227b21e2bcc76de2cbb8b0.scope.
Jan 29 11:49:58 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:49:58 np0005601226 podman[80026]: 2026-01-29 16:49:58.474568354 +0000 UTC m=+0.096419916 container init b6e6d66f16195f9b45a4ccf31b7080be514143bfb4227b21e2bcc76de2cbb8b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_sanderson, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 11:49:58 np0005601226 podman[80026]: 2026-01-29 16:49:58.478255597 +0000 UTC m=+0.100107159 container start b6e6d66f16195f9b45a4ccf31b7080be514143bfb4227b21e2bcc76de2cbb8b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_sanderson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3)
Jan 29 11:49:58 np0005601226 jolly_sanderson[80042]: 167 167
Jan 29 11:49:58 np0005601226 systemd[1]: libpod-b6e6d66f16195f9b45a4ccf31b7080be514143bfb4227b21e2bcc76de2cbb8b0.scope: Deactivated successfully.
Jan 29 11:49:58 np0005601226 podman[80026]: 2026-01-29 16:49:58.482326793 +0000 UTC m=+0.104178355 container attach b6e6d66f16195f9b45a4ccf31b7080be514143bfb4227b21e2bcc76de2cbb8b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_sanderson, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 11:49:58 np0005601226 podman[80026]: 2026-01-29 16:49:58.482762388 +0000 UTC m=+0.104613950 container died b6e6d66f16195f9b45a4ccf31b7080be514143bfb4227b21e2bcc76de2cbb8b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_sanderson, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle)
Jan 29 11:49:58 np0005601226 podman[80026]: 2026-01-29 16:49:58.393726181 +0000 UTC m=+0.015577773 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:49:58 np0005601226 systemd[1]: var-lib-containers-storage-overlay-84b53cddd0f5a0fc84768a43ad70991ca7f307de0c828cd3fb52bf751af015f1-merged.mount: Deactivated successfully.
Jan 29 11:49:58 np0005601226 ceph-mgr[75527]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 29 11:49:58 np0005601226 podman[80026]: 2026-01-29 16:49:58.549655857 +0000 UTC m=+0.171507419 container remove b6e6d66f16195f9b45a4ccf31b7080be514143bfb4227b21e2bcc76de2cbb8b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_sanderson, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 29 11:49:58 np0005601226 systemd[1]: libpod-conmon-b6e6d66f16195f9b45a4ccf31b7080be514143bfb4227b21e2bcc76de2cbb8b0.scope: Deactivated successfully.
Jan 29 11:49:58 np0005601226 systemd[1]: Reloading.
Jan 29 11:49:58 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 11:49:58 np0005601226 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 11:49:58 np0005601226 systemd[1]: Reloading.
Jan 29 11:49:58 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 11:49:58 np0005601226 ceph-mon[75233]: Updating compute-0:/var/lib/ceph/cc5c72e3-31e0-58b9-8731-456117d38f4a/config/ceph.client.admin.keyring
Jan 29 11:49:58 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:49:58 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:49:58 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:49:58 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Jan 29 11:49:58 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 29 11:49:58 np0005601226 ceph-mon[75233]: Deploying daemon crash.compute-0 on compute-0
Jan 29 11:49:58 np0005601226 ceph-mon[75233]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Jan 29 11:49:58 np0005601226 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 11:49:58 np0005601226 python3[80146]: ansible-ansible.legacy.async_status Invoked with jid=j886631176802.79626 mode=status _async_dir=/root/.ansible_async
Jan 29 11:49:59 np0005601226 systemd[1]: Starting Ceph crash.compute-0 for cc5c72e3-31e0-58b9-8731-456117d38f4a...
Jan 29 11:49:59 np0005601226 python3[80233]: ansible-ansible.legacy.async_status Invoked with jid=j886631176802.79626 mode=cleanup _async_dir=/root/.ansible_async
Jan 29 11:49:59 np0005601226 podman[80283]: 2026-01-29 16:49:59.310623662 +0000 UTC m=+0.019357000 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:49:59 np0005601226 podman[80283]: 2026-01-29 16:49:59.45208516 +0000 UTC m=+0.160818478 container create 70a89fc4c7fa303e87fc86ea006efdb4ece04f1584a083e0758d67cafbdf97cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-crash-compute-0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 11:49:59 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b24b9c4d73dbf0911039d403aa1b43294de28d38a37f5f7710a52ce636d85d55/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:59 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b24b9c4d73dbf0911039d403aa1b43294de28d38a37f5f7710a52ce636d85d55/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:59 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b24b9c4d73dbf0911039d403aa1b43294de28d38a37f5f7710a52ce636d85d55/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:59 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b24b9c4d73dbf0911039d403aa1b43294de28d38a37f5f7710a52ce636d85d55/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 11:49:59 np0005601226 podman[80283]: 2026-01-29 16:49:59.614278151 +0000 UTC m=+0.323011479 container init 70a89fc4c7fa303e87fc86ea006efdb4ece04f1584a083e0758d67cafbdf97cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-crash-compute-0, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 11:49:59 np0005601226 podman[80283]: 2026-01-29 16:49:59.617697566 +0000 UTC m=+0.326430884 container start 70a89fc4c7fa303e87fc86ea006efdb4ece04f1584a083e0758d67cafbdf97cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-crash-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 29 11:49:59 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-crash-compute-0[80299]: INFO:ceph-crash:pinging cluster to exercise our key
Jan 29 11:49:59 np0005601226 bash[80283]: 70a89fc4c7fa303e87fc86ea006efdb4ece04f1584a083e0758d67cafbdf97cf
Jan 29 11:49:59 np0005601226 systemd[1]: Started Ceph crash.compute-0 for cc5c72e3-31e0-58b9-8731-456117d38f4a.
Jan 29 11:49:59 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-crash-compute-0[80299]: 2026-01-29T16:49:59.769+0000 7fee28e72640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Jan 29 11:49:59 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-crash-compute-0[80299]: 2026-01-29T16:49:59.769+0000 7fee28e72640 -1 AuthRegistry(0x7fee24052930) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Jan 29 11:49:59 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-crash-compute-0[80299]: 2026-01-29T16:49:59.771+0000 7fee28e72640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Jan 29 11:49:59 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-crash-compute-0[80299]: 2026-01-29T16:49:59.771+0000 7fee28e72640 -1 AuthRegistry(0x7fee28e70fe0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Jan 29 11:49:59 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-crash-compute-0[80299]: 2026-01-29T16:49:59.776+0000 7fee22575640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Jan 29 11:49:59 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-crash-compute-0[80299]: 2026-01-29T16:49:59.776+0000 7fee28e72640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Jan 29 11:49:59 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-crash-compute-0[80299]: [errno 13] RADOS permission denied (error connecting to the cluster)
Jan 29 11:49:59 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-crash-compute-0[80299]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Jan 29 11:49:59 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 11:49:59 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:49:59 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 11:49:59 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:49:59 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 29 11:49:59 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:49:59 np0005601226 ceph-mgr[75527]: [progress INFO root] complete: finished ev e545bf30-f9d2-451e-a7b4-58c60a7761e4 (Updating crash deployment (+1 -> 1))
Jan 29 11:49:59 np0005601226 ceph-mgr[75527]: [progress INFO root] Completed event e545bf30-f9d2-451e-a7b4-58c60a7761e4 (Updating crash deployment (+1 -> 1)) in 2 seconds
Jan 29 11:49:59 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 29 11:49:59 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:49:59 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 29 11:49:59 np0005601226 python3[80329]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 29 11:49:59 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:49:59 np0005601226 ceph-mgr[75527]: [progress INFO root] update: starting ev 648e8d1a-9358-4b0b-98e9-f346bf453f17 (Updating mgr deployment (+1 -> 2))
Jan 29 11:49:59 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.gxbxkv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Jan 29 11:49:59 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.gxbxkv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Jan 29 11:49:59 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.gxbxkv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 29 11:49:59 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 29 11:49:59 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "mgr services"} : dispatch
Jan 29 11:49:59 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 11:49:59 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 11:49:59 np0005601226 ceph-mgr[75527]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-0.gxbxkv on compute-0
Jan 29 11:49:59 np0005601226 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-0.gxbxkv on compute-0
Jan 29 11:49:59 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:49:59 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:49:59 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:49:59 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:49:59 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:49:59 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:49:59 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.gxbxkv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Jan 29 11:49:59 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.gxbxkv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 29 11:50:00 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 29 11:50:00 np0005601226 python3[80435]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid cc5c72e3-31e0-58b9-8731-456117d38f4a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:50:00 np0005601226 podman[80461]: 2026-01-29 16:50:00.329440707 +0000 UTC m=+0.062962850 container create 2b7112df16ace9e48a9571d4d9eaf67fef67b361d855a04ceefdd815d441ef6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_meitner, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True)
Jan 29 11:50:00 np0005601226 podman[80461]: 2026-01-29 16:50:00.292328929 +0000 UTC m=+0.025851182 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:50:00 np0005601226 systemd[1]: Started libpod-conmon-2b7112df16ace9e48a9571d4d9eaf67fef67b361d855a04ceefdd815d441ef6e.scope.
Jan 29 11:50:00 np0005601226 podman[80472]: 2026-01-29 16:50:00.408386601 +0000 UTC m=+0.071447363 container create 82403bab275c7b5a3b164e5cf6b22283bfe59857cbe737887bdf2a958d2f91a7 (image=quay.io/ceph/ceph:v20, name=nifty_swartz, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 11:50:00 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:50:00 np0005601226 podman[80472]: 2026-01-29 16:50:00.374465721 +0000 UTC m=+0.037526503 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:50:00 np0005601226 systemd[1]: Started libpod-conmon-82403bab275c7b5a3b164e5cf6b22283bfe59857cbe737887bdf2a958d2f91a7.scope.
Jan 29 11:50:00 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:50:00 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa1654ed48cbeac8f06ea8102d01a6afc17f8d8d0510ca938eb45bca30bdd135/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:00 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa1654ed48cbeac8f06ea8102d01a6afc17f8d8d0510ca938eb45bca30bdd135/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:00 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa1654ed48cbeac8f06ea8102d01a6afc17f8d8d0510ca938eb45bca30bdd135/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:00 np0005601226 podman[80461]: 2026-01-29 16:50:00.523340739 +0000 UTC m=+0.256862902 container init 2b7112df16ace9e48a9571d4d9eaf67fef67b361d855a04ceefdd815d441ef6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_meitner, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 11:50:00 np0005601226 podman[80461]: 2026-01-29 16:50:00.52953835 +0000 UTC m=+0.263060493 container start 2b7112df16ace9e48a9571d4d9eaf67fef67b361d855a04ceefdd815d441ef6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_meitner, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 29 11:50:00 np0005601226 quirky_meitner[80486]: 167 167
Jan 29 11:50:00 np0005601226 systemd[1]: libpod-2b7112df16ace9e48a9571d4d9eaf67fef67b361d855a04ceefdd815d441ef6e.scope: Deactivated successfully.
Jan 29 11:50:00 np0005601226 ceph-mgr[75527]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 29 11:50:00 np0005601226 ceph-mgr[75527]: [progress INFO root] Writing back 1 completed events
Jan 29 11:50:00 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 29 11:50:00 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:00 np0005601226 podman[80472]: 2026-01-29 16:50:00.650166804 +0000 UTC m=+0.313227566 container init 82403bab275c7b5a3b164e5cf6b22283bfe59857cbe737887bdf2a958d2f91a7 (image=quay.io/ceph/ceph:v20, name=nifty_swartz, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 29 11:50:00 np0005601226 podman[80472]: 2026-01-29 16:50:00.655092977 +0000 UTC m=+0.318153759 container start 82403bab275c7b5a3b164e5cf6b22283bfe59857cbe737887bdf2a958d2f91a7 (image=quay.io/ceph/ceph:v20, name=nifty_swartz, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:50:00 np0005601226 podman[80472]: 2026-01-29 16:50:00.659133282 +0000 UTC m=+0.322194034 container attach 82403bab275c7b5a3b164e5cf6b22283bfe59857cbe737887bdf2a958d2f91a7 (image=quay.io/ceph/ceph:v20, name=nifty_swartz, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 29 11:50:00 np0005601226 podman[80461]: 2026-01-29 16:50:00.66230931 +0000 UTC m=+0.395831453 container attach 2b7112df16ace9e48a9571d4d9eaf67fef67b361d855a04ceefdd815d441ef6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_meitner, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 11:50:00 np0005601226 podman[80461]: 2026-01-29 16:50:00.662618109 +0000 UTC m=+0.396140262 container died 2b7112df16ace9e48a9571d4d9eaf67fef67b361d855a04ceefdd815d441ef6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_meitner, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 11:50:00 np0005601226 systemd[1]: var-lib-containers-storage-overlay-06736ecd0757a541896a98bff808d0ccd9081f6cc6e058c7464ec4c38ad1ac77-merged.mount: Deactivated successfully.
Jan 29 11:50:00 np0005601226 podman[80461]: 2026-01-29 16:50:00.703220176 +0000 UTC m=+0.436742329 container remove 2b7112df16ace9e48a9571d4d9eaf67fef67b361d855a04ceefdd815d441ef6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_meitner, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:50:00 np0005601226 systemd[1]: libpod-conmon-2b7112df16ace9e48a9571d4d9eaf67fef67b361d855a04ceefdd815d441ef6e.scope: Deactivated successfully.
Jan 29 11:50:00 np0005601226 systemd[1]: Reloading.
Jan 29 11:50:00 np0005601226 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 11:50:00 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 11:50:00 np0005601226 ceph-mon[75233]: Deploying daemon mgr.compute-0.gxbxkv on compute-0
Jan 29 11:50:00 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:00 np0005601226 systemd[1]: Reloading.
Jan 29 11:50:01 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 11:50:01 np0005601226 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 11:50:01 np0005601226 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 29 11:50:01 np0005601226 nifty_swartz[80491]: 
Jan 29 11:50:01 np0005601226 nifty_swartz[80491]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 29 11:50:01 np0005601226 podman[80472]: 2026-01-29 16:50:01.091376701 +0000 UTC m=+0.754437463 container died 82403bab275c7b5a3b164e5cf6b22283bfe59857cbe737887bdf2a958d2f91a7 (image=quay.io/ceph/ceph:v20, name=nifty_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:50:01 np0005601226 systemd[1]: libpod-82403bab275c7b5a3b164e5cf6b22283bfe59857cbe737887bdf2a958d2f91a7.scope: Deactivated successfully.
Jan 29 11:50:01 np0005601226 systemd[1]: var-lib-containers-storage-overlay-fa1654ed48cbeac8f06ea8102d01a6afc17f8d8d0510ca938eb45bca30bdd135-merged.mount: Deactivated successfully.
Jan 29 11:50:01 np0005601226 systemd[1]: Starting Ceph mgr.compute-0.gxbxkv for cc5c72e3-31e0-58b9-8731-456117d38f4a...
Jan 29 11:50:01 np0005601226 podman[80472]: 2026-01-29 16:50:01.253710806 +0000 UTC m=+0.916771568 container remove 82403bab275c7b5a3b164e5cf6b22283bfe59857cbe737887bdf2a958d2f91a7 (image=quay.io/ceph/ceph:v20, name=nifty_swartz, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 11:50:01 np0005601226 systemd[1]: libpod-conmon-82403bab275c7b5a3b164e5cf6b22283bfe59857cbe737887bdf2a958d2f91a7.scope: Deactivated successfully.
Jan 29 11:50:01 np0005601226 podman[80668]: 2026-01-29 16:50:01.412688267 +0000 UTC m=+0.040420783 container create 8e86ad9a3ccfa3e5115d5827656703c378ef2139c9ebdd5c663f742640c12428 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-mgr-compute-0-gxbxkv, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 29 11:50:01 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8261e946c53e919ee67157ebe0dfd7744ac73d5905d2fe0ceda944f800f17696/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:01 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8261e946c53e919ee67157ebe0dfd7744ac73d5905d2fe0ceda944f800f17696/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:01 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8261e946c53e919ee67157ebe0dfd7744ac73d5905d2fe0ceda944f800f17696/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:01 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8261e946c53e919ee67157ebe0dfd7744ac73d5905d2fe0ceda944f800f17696/merged/var/lib/ceph/mgr/ceph-compute-0.gxbxkv supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:01 np0005601226 podman[80668]: 2026-01-29 16:50:01.472788557 +0000 UTC m=+0.100521063 container init 8e86ad9a3ccfa3e5115d5827656703c378ef2139c9ebdd5c663f742640c12428 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-mgr-compute-0-gxbxkv, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:50:01 np0005601226 podman[80668]: 2026-01-29 16:50:01.478813303 +0000 UTC m=+0.106545789 container start 8e86ad9a3ccfa3e5115d5827656703c378ef2139c9ebdd5c663f742640c12428 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-mgr-compute-0-gxbxkv, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 29 11:50:01 np0005601226 bash[80668]: 8e86ad9a3ccfa3e5115d5827656703c378ef2139c9ebdd5c663f742640c12428
Jan 29 11:50:01 np0005601226 podman[80668]: 2026-01-29 16:50:01.391040837 +0000 UTC m=+0.018773343 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:50:01 np0005601226 systemd[1]: Started Ceph mgr.compute-0.gxbxkv for cc5c72e3-31e0-58b9-8731-456117d38f4a.
Jan 29 11:50:01 np0005601226 ceph-mgr[80687]: set uid:gid to 167:167 (ceph:ceph)
Jan 29 11:50:01 np0005601226 ceph-mgr[80687]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Jan 29 11:50:01 np0005601226 ceph-mgr[80687]: pidfile_write: ignore empty --pid-file
Jan 29 11:50:01 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 11:50:01 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:01 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 11:50:01 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:01 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 29 11:50:01 np0005601226 ceph-mgr[80687]: mgr[py] Loading python module 'alerts'
Jan 29 11:50:01 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:01 np0005601226 ceph-mgr[75527]: [progress INFO root] complete: finished ev 648e8d1a-9358-4b0b-98e9-f346bf453f17 (Updating mgr deployment (+1 -> 2))
Jan 29 11:50:01 np0005601226 ceph-mgr[75527]: [progress INFO root] Completed event 648e8d1a-9358-4b0b-98e9-f346bf453f17 (Updating mgr deployment (+1 -> 2)) in 2 seconds
Jan 29 11:50:01 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 29 11:50:01 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:01 np0005601226 ceph-mgr[80687]: mgr[py] Loading python module 'balancer'
Jan 29 11:50:01 np0005601226 python3[80746]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid cc5c72e3-31e0-58b9-8731-456117d38f4a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:50:01 np0005601226 ceph-mgr[80687]: mgr[py] Loading python module 'cephadm'
Jan 29 11:50:01 np0005601226 podman[80809]: 2026-01-29 16:50:01.762049611 +0000 UTC m=+0.038502773 container create 1e4e67029a888f5dbdf68ed0c55c8736b9b5c162d502830693c84d1365aa2d61 (image=quay.io/ceph/ceph:v20, name=wonderful_cannon, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 29 11:50:01 np0005601226 systemd[1]: Started libpod-conmon-1e4e67029a888f5dbdf68ed0c55c8736b9b5c162d502830693c84d1365aa2d61.scope.
Jan 29 11:50:01 np0005601226 podman[80809]: 2026-01-29 16:50:01.743796376 +0000 UTC m=+0.020249618 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:50:01 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:50:01 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d734761c3a0fe03c97bd9ef5fc6aa31aefe135ec13a96cd0301444418a19414/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:01 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d734761c3a0fe03c97bd9ef5fc6aa31aefe135ec13a96cd0301444418a19414/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:01 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d734761c3a0fe03c97bd9ef5fc6aa31aefe135ec13a96cd0301444418a19414/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:01 np0005601226 podman[80809]: 2026-01-29 16:50:01.871110167 +0000 UTC m=+0.147563379 container init 1e4e67029a888f5dbdf68ed0c55c8736b9b5c162d502830693c84d1365aa2d61 (image=quay.io/ceph/ceph:v20, name=wonderful_cannon, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 29 11:50:01 np0005601226 podman[80809]: 2026-01-29 16:50:01.8793382 +0000 UTC m=+0.155791362 container start 1e4e67029a888f5dbdf68ed0c55c8736b9b5c162d502830693c84d1365aa2d61 (image=quay.io/ceph/ceph:v20, name=wonderful_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 11:50:01 np0005601226 podman[80809]: 2026-01-29 16:50:01.901406684 +0000 UTC m=+0.177859856 container attach 1e4e67029a888f5dbdf68ed0c55c8736b9b5c162d502830693c84d1365aa2d61 (image=quay.io/ceph/ceph:v20, name=wonderful_cannon, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 29 11:50:01 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:01 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:01 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:01 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:02 np0005601226 podman[80890]: 2026-01-29 16:50:02.140563157 +0000 UTC m=+0.052936960 container exec 79fb58d438a044b744a5a22031968fb38c458a504a88d1d9ce361ec7f4a99a0b (image=quay.io/ceph/ceph:v20, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 29 11:50:02 np0005601226 podman[80890]: 2026-01-29 16:50:02.247692082 +0000 UTC m=+0.160065885 container exec_died 79fb58d438a044b744a5a22031968fb38c458a504a88d1d9ce361ec7f4a99a0b (image=quay.io/ceph/ceph:v20, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-mon-compute-0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 11:50:02 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0)
Jan 29 11:50:02 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/370192984' entity='client.admin' 
Jan 29 11:50:02 np0005601226 systemd[1]: libpod-1e4e67029a888f5dbdf68ed0c55c8736b9b5c162d502830693c84d1365aa2d61.scope: Deactivated successfully.
Jan 29 11:50:02 np0005601226 podman[80809]: 2026-01-29 16:50:02.299873458 +0000 UTC m=+0.576326630 container died 1e4e67029a888f5dbdf68ed0c55c8736b9b5c162d502830693c84d1365aa2d61 (image=quay.io/ceph/ceph:v20, name=wonderful_cannon, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 29 11:50:02 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 29 11:50:02 np0005601226 systemd[1]: var-lib-containers-storage-overlay-6d734761c3a0fe03c97bd9ef5fc6aa31aefe135ec13a96cd0301444418a19414-merged.mount: Deactivated successfully.
Jan 29 11:50:02 np0005601226 podman[80809]: 2026-01-29 16:50:02.343409015 +0000 UTC m=+0.619862187 container remove 1e4e67029a888f5dbdf68ed0c55c8736b9b5c162d502830693c84d1365aa2d61 (image=quay.io/ceph/ceph:v20, name=wonderful_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 11:50:02 np0005601226 systemd[1]: libpod-conmon-1e4e67029a888f5dbdf68ed0c55c8736b9b5c162d502830693c84d1365aa2d61.scope: Deactivated successfully.
Jan 29 11:50:02 np0005601226 ceph-mgr[80687]: mgr[py] Loading python module 'crash'
Jan 29 11:50:02 np0005601226 ansible-async_wrapper.py[79702]: Done in kid B.
Jan 29 11:50:02 np0005601226 ceph-mgr[75527]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 29 11:50:02 np0005601226 ceph-mgr[80687]: mgr[py] Loading python module 'dashboard'
Jan 29 11:50:02 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 11:50:02 np0005601226 python3[81036]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid cc5c72e3-31e0-58b9-8731-456117d38f4a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:50:02 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:02 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 11:50:02 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:02 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 11:50:02 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 11:50:02 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 11:50:02 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 11:50:02 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 11:50:02 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:02 np0005601226 podman[81053]: 2026-01-29 16:50:02.78360738 +0000 UTC m=+0.045087416 container create ac699efb9517167d623d07f485552cb7daa5bd38f60bf28c4acfb07d49e7d34c (image=quay.io/ceph/ceph:v20, name=adoring_mendel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 29 11:50:02 np0005601226 ceph-mgr[75527]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Jan 29 11:50:02 np0005601226 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Jan 29 11:50:02 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Jan 29 11:50:02 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Jan 29 11:50:02 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Jan 29 11:50:02 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Jan 29 11:50:02 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 11:50:02 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 11:50:02 np0005601226 ceph-mgr[75527]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Jan 29 11:50:02 np0005601226 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Jan 29 11:50:02 np0005601226 systemd[1]: Started libpod-conmon-ac699efb9517167d623d07f485552cb7daa5bd38f60bf28c4acfb07d49e7d34c.scope.
Jan 29 11:50:02 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:50:02 np0005601226 podman[81053]: 2026-01-29 16:50:02.759555876 +0000 UTC m=+0.021035942 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:50:02 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca5051a48b782fe5cb3ec1f5c66e608521fba3ebe41b7bea8775fe2e9baa6e59/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:02 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca5051a48b782fe5cb3ec1f5c66e608521fba3ebe41b7bea8775fe2e9baa6e59/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:02 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca5051a48b782fe5cb3ec1f5c66e608521fba3ebe41b7bea8775fe2e9baa6e59/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:02 np0005601226 podman[81053]: 2026-01-29 16:50:02.935363568 +0000 UTC m=+0.196843614 container init ac699efb9517167d623d07f485552cb7daa5bd38f60bf28c4acfb07d49e7d34c (image=quay.io/ceph/ceph:v20, name=adoring_mendel, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 11:50:02 np0005601226 podman[81053]: 2026-01-29 16:50:02.942219581 +0000 UTC m=+0.203699657 container start ac699efb9517167d623d07f485552cb7daa5bd38f60bf28c4acfb07d49e7d34c (image=quay.io/ceph/ceph:v20, name=adoring_mendel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 11:50:02 np0005601226 podman[81053]: 2026-01-29 16:50:02.952607072 +0000 UTC m=+0.214087148 container attach ac699efb9517167d623d07f485552cb7daa5bd38f60bf28c4acfb07d49e7d34c (image=quay.io/ceph/ceph:v20, name=adoring_mendel, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 29 11:50:03 np0005601226 podman[81182]: 2026-01-29 16:50:03.179161274 +0000 UTC m=+0.044634182 container create a2aa1998988be2da7db9c4ae8a5a806bbf7f4c5fd4edacb3698a4b648e94aeb7 (image=quay.io/ceph/ceph:v20, name=clever_ride, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 11:50:03 np0005601226 ceph-mgr[80687]: mgr[py] Loading python module 'devicehealth'
Jan 29 11:50:03 np0005601226 systemd[1]: Started libpod-conmon-a2aa1998988be2da7db9c4ae8a5a806bbf7f4c5fd4edacb3698a4b648e94aeb7.scope.
Jan 29 11:50:03 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:50:03 np0005601226 podman[81182]: 2026-01-29 16:50:03.152098616 +0000 UTC m=+0.017571534 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:50:03 np0005601226 ceph-mgr[80687]: mgr[py] Loading python module 'diskprediction_local'
Jan 29 11:50:03 np0005601226 podman[81182]: 2026-01-29 16:50:03.269631664 +0000 UTC m=+0.135104582 container init a2aa1998988be2da7db9c4ae8a5a806bbf7f4c5fd4edacb3698a4b648e94aeb7 (image=quay.io/ceph/ceph:v20, name=clever_ride, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3)
Jan 29 11:50:03 np0005601226 podman[81182]: 2026-01-29 16:50:03.276545598 +0000 UTC m=+0.142018506 container start a2aa1998988be2da7db9c4ae8a5a806bbf7f4c5fd4edacb3698a4b648e94aeb7 (image=quay.io/ceph/ceph:v20, name=clever_ride, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 29 11:50:03 np0005601226 podman[81182]: 2026-01-29 16:50:03.280356236 +0000 UTC m=+0.145829144 container attach a2aa1998988be2da7db9c4ae8a5a806bbf7f4c5fd4edacb3698a4b648e94aeb7 (image=quay.io/ceph/ceph:v20, name=clever_ride, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 29 11:50:03 np0005601226 clever_ride[81198]: 167 167
Jan 29 11:50:03 np0005601226 systemd[1]: libpod-a2aa1998988be2da7db9c4ae8a5a806bbf7f4c5fd4edacb3698a4b648e94aeb7.scope: Deactivated successfully.
Jan 29 11:50:03 np0005601226 podman[81182]: 2026-01-29 16:50:03.284080702 +0000 UTC m=+0.149553640 container died a2aa1998988be2da7db9c4ae8a5a806bbf7f4c5fd4edacb3698a4b648e94aeb7 (image=quay.io/ceph/ceph:v20, name=clever_ride, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 11:50:03 np0005601226 ceph-mon[75233]: from='client.? 192.168.122.100:0/370192984' entity='client.admin' 
Jan 29 11:50:03 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:03 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:03 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 11:50:03 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:03 np0005601226 ceph-mon[75233]: Reconfiguring mon.compute-0 (unknown last config time)...
Jan 29 11:50:03 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Jan 29 11:50:03 np0005601226 ceph-mon[75233]: Reconfiguring daemon mon.compute-0 on compute-0
Jan 29 11:50:03 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0)
Jan 29 11:50:03 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2947876718' entity='client.admin' 
Jan 29 11:50:03 np0005601226 systemd[1]: var-lib-containers-storage-overlay-0a98ee7cd7ef316d2d12c77164de178316571ecd0998150dfd06763cc5c927b6-merged.mount: Deactivated successfully.
Jan 29 11:50:03 np0005601226 systemd[1]: libpod-ac699efb9517167d623d07f485552cb7daa5bd38f60bf28c4acfb07d49e7d34c.scope: Deactivated successfully.
Jan 29 11:50:03 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-mgr-compute-0-gxbxkv[80683]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 29 11:50:03 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-mgr-compute-0-gxbxkv[80683]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 29 11:50:03 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-mgr-compute-0-gxbxkv[80683]:  from numpy import show_config as show_numpy_config
Jan 29 11:50:03 np0005601226 ceph-mgr[80687]: mgr[py] Loading python module 'influx'
Jan 29 11:50:03 np0005601226 podman[81182]: 2026-01-29 16:50:03.439615256 +0000 UTC m=+0.305088214 container remove a2aa1998988be2da7db9c4ae8a5a806bbf7f4c5fd4edacb3698a4b648e94aeb7 (image=quay.io/ceph/ceph:v20, name=clever_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 11:50:03 np0005601226 podman[81053]: 2026-01-29 16:50:03.440534004 +0000 UTC m=+0.702014080 container died ac699efb9517167d623d07f485552cb7daa5bd38f60bf28c4acfb07d49e7d34c (image=quay.io/ceph/ceph:v20, name=adoring_mendel, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 29 11:50:03 np0005601226 systemd[1]: libpod-conmon-a2aa1998988be2da7db9c4ae8a5a806bbf7f4c5fd4edacb3698a4b648e94aeb7.scope: Deactivated successfully.
Jan 29 11:50:03 np0005601226 systemd[1]: var-lib-containers-storage-overlay-ca5051a48b782fe5cb3ec1f5c66e608521fba3ebe41b7bea8775fe2e9baa6e59-merged.mount: Deactivated successfully.
Jan 29 11:50:03 np0005601226 ceph-mgr[80687]: mgr[py] Loading python module 'insights'
Jan 29 11:50:03 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 11:50:03 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:03 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 11:50:03 np0005601226 podman[81053]: 2026-01-29 16:50:03.558139694 +0000 UTC m=+0.819619780 container remove ac699efb9517167d623d07f485552cb7daa5bd38f60bf28c4acfb07d49e7d34c (image=quay.io/ceph/ceph:v20, name=adoring_mendel, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:50:03 np0005601226 systemd[1]: libpod-conmon-ac699efb9517167d623d07f485552cb7daa5bd38f60bf28c4acfb07d49e7d34c.scope: Deactivated successfully.
Jan 29 11:50:03 np0005601226 ceph-mgr[80687]: mgr[py] Loading python module 'iostat'
Jan 29 11:50:03 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:03 np0005601226 ceph-mgr[75527]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.zvopdr (unknown last config time)...
Jan 29 11:50:03 np0005601226 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.zvopdr (unknown last config time)...
Jan 29 11:50:03 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.zvopdr", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Jan 29 11:50:03 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.zvopdr", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Jan 29 11:50:03 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 29 11:50:03 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "mgr services"} : dispatch
Jan 29 11:50:03 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 11:50:03 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 11:50:03 np0005601226 ceph-mgr[75527]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.zvopdr on compute-0
Jan 29 11:50:03 np0005601226 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.zvopdr on compute-0
Jan 29 11:50:03 np0005601226 ceph-mgr[80687]: mgr[py] Loading python module 'k8sevents'
Jan 29 11:50:03 np0005601226 python3[81305]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid cc5c72e3-31e0-58b9-8731-456117d38f4a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:50:03 np0005601226 podman[81308]: 2026-01-29 16:50:03.996170113 +0000 UTC m=+0.064126206 container create afbd6178fa2a0945851634afb78a2cc8242ca42817b789a05a99c2e59708c142 (image=quay.io/ceph/ceph:v20, name=hungry_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 29 11:50:04 np0005601226 ceph-mgr[80687]: mgr[py] Loading python module 'localpool'
Jan 29 11:50:04 np0005601226 systemd[1]: Started libpod-conmon-afbd6178fa2a0945851634afb78a2cc8242ca42817b789a05a99c2e59708c142.scope.
Jan 29 11:50:04 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:50:04 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b5b134284ddd966cece89c5dd3860e4966e562d6ec4d04236cc41634e92d099/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:04 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b5b134284ddd966cece89c5dd3860e4966e562d6ec4d04236cc41634e92d099/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:04 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b5b134284ddd966cece89c5dd3860e4966e562d6ec4d04236cc41634e92d099/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:04 np0005601226 podman[81308]: 2026-01-29 16:50:03.964031518 +0000 UTC m=+0.031987601 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:50:04 np0005601226 podman[81336]: 2026-01-29 16:50:04.077686266 +0000 UTC m=+0.095458996 container create ec15b2cb928c814b15415f48a501216d1a8bae69a0fdea3256d456d2ff3c2119 (image=quay.io/ceph/ceph:v20, name=nervous_wescoff, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 11:50:04 np0005601226 podman[81336]: 2026-01-29 16:50:04.003974815 +0000 UTC m=+0.021747585 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:50:04 np0005601226 ceph-mgr[80687]: mgr[py] Loading python module 'mds_autoscaler'
Jan 29 11:50:04 np0005601226 systemd[1]: Started libpod-conmon-ec15b2cb928c814b15415f48a501216d1a8bae69a0fdea3256d456d2ff3c2119.scope.
Jan 29 11:50:04 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:50:04 np0005601226 podman[81308]: 2026-01-29 16:50:04.179882979 +0000 UTC m=+0.247839062 container init afbd6178fa2a0945851634afb78a2cc8242ca42817b789a05a99c2e59708c142 (image=quay.io/ceph/ceph:v20, name=hungry_chatterjee, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 11:50:04 np0005601226 podman[81308]: 2026-01-29 16:50:04.186897136 +0000 UTC m=+0.254853219 container start afbd6178fa2a0945851634afb78a2cc8242ca42817b789a05a99c2e59708c142 (image=quay.io/ceph/ceph:v20, name=hungry_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030)
Jan 29 11:50:04 np0005601226 podman[81308]: 2026-01-29 16:50:04.197803104 +0000 UTC m=+0.265759207 container attach afbd6178fa2a0945851634afb78a2cc8242ca42817b789a05a99c2e59708c142 (image=quay.io/ceph/ceph:v20, name=hungry_chatterjee, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 29 11:50:04 np0005601226 podman[81336]: 2026-01-29 16:50:04.235987067 +0000 UTC m=+0.253759827 container init ec15b2cb928c814b15415f48a501216d1a8bae69a0fdea3256d456d2ff3c2119 (image=quay.io/ceph/ceph:v20, name=nervous_wescoff, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:50:04 np0005601226 podman[81336]: 2026-01-29 16:50:04.240478355 +0000 UTC m=+0.258251095 container start ec15b2cb928c814b15415f48a501216d1a8bae69a0fdea3256d456d2ff3c2119 (image=quay.io/ceph/ceph:v20, name=nervous_wescoff, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 29 11:50:04 np0005601226 podman[81336]: 2026-01-29 16:50:04.24388197 +0000 UTC m=+0.261654710 container attach ec15b2cb928c814b15415f48a501216d1a8bae69a0fdea3256d456d2ff3c2119 (image=quay.io/ceph/ceph:v20, name=nervous_wescoff, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:50:04 np0005601226 nervous_wescoff[81358]: 167 167
Jan 29 11:50:04 np0005601226 systemd[1]: libpod-ec15b2cb928c814b15415f48a501216d1a8bae69a0fdea3256d456d2ff3c2119.scope: Deactivated successfully.
Jan 29 11:50:04 np0005601226 podman[81336]: 2026-01-29 16:50:04.246157451 +0000 UTC m=+0.263930201 container died ec15b2cb928c814b15415f48a501216d1a8bae69a0fdea3256d456d2ff3c2119 (image=quay.io/ceph/ceph:v20, name=nervous_wescoff, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 29 11:50:04 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 29 11:50:04 np0005601226 ceph-mgr[80687]: mgr[py] Loading python module 'mirroring'
Jan 29 11:50:04 np0005601226 ceph-mon[75233]: from='client.? 192.168.122.100:0/2947876718' entity='client.admin' 
Jan 29 11:50:04 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:04 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:04 np0005601226 ceph-mon[75233]: Reconfiguring mgr.compute-0.zvopdr (unknown last config time)...
Jan 29 11:50:04 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.zvopdr", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Jan 29 11:50:04 np0005601226 ceph-mon[75233]: Reconfiguring daemon mgr.compute-0.zvopdr on compute-0
Jan 29 11:50:04 np0005601226 systemd[1]: var-lib-containers-storage-overlay-fb94ba5545221dd2bbe2c7c433bb3200193b9e3e12aa47799e155fd4d9fd5f39-merged.mount: Deactivated successfully.
Jan 29 11:50:04 np0005601226 ceph-mgr[80687]: mgr[py] Loading python module 'nfs'
Jan 29 11:50:04 np0005601226 podman[81336]: 2026-01-29 16:50:04.492311701 +0000 UTC m=+0.510084451 container remove ec15b2cb928c814b15415f48a501216d1a8bae69a0fdea3256d456d2ff3c2119 (image=quay.io/ceph/ceph:v20, name=nervous_wescoff, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Jan 29 11:50:04 np0005601226 systemd[1]: libpod-conmon-ec15b2cb928c814b15415f48a501216d1a8bae69a0fdea3256d456d2ff3c2119.scope: Deactivated successfully.
Jan 29 11:50:04 np0005601226 ceph-mgr[75527]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 29 11:50:04 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 11:50:04 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:04 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 11:50:04 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0)
Jan 29 11:50:04 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3651445197' entity='client.admin' cmd={"prefix": "osd set-require-min-compat-client", "version": "mimic"} : dispatch
Jan 29 11:50:04 np0005601226 ceph-mgr[80687]: mgr[py] Loading python module 'orchestrator'
Jan 29 11:50:04 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:04 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:50:04 np0005601226 ceph-mgr[80687]: mgr[py] Loading python module 'osd_perf_query'
Jan 29 11:50:05 np0005601226 ceph-mgr[80687]: mgr[py] Loading python module 'osd_support'
Jan 29 11:50:05 np0005601226 ceph-mgr[80687]: mgr[py] Loading python module 'pg_autoscaler'
Jan 29 11:50:05 np0005601226 ceph-mgr[80687]: mgr[py] Loading python module 'progress'
Jan 29 11:50:05 np0005601226 ceph-mgr[80687]: mgr[py] Loading python module 'prometheus'
Jan 29 11:50:05 np0005601226 podman[81488]: 2026-01-29 16:50:05.292720105 +0000 UTC m=+0.064005651 container exec 79fb58d438a044b744a5a22031968fb38c458a504a88d1d9ce361ec7f4a99a0b (image=quay.io/ceph/ceph:v20, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-mon-compute-0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:50:05 np0005601226 podman[81488]: 2026-01-29 16:50:05.410849732 +0000 UTC m=+0.182135258 container exec_died 79fb58d438a044b744a5a22031968fb38c458a504a88d1d9ce361ec7f4a99a0b (image=quay.io/ceph/ceph:v20, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-mon-compute-0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 29 11:50:05 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Jan 29 11:50:05 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 29 11:50:05 np0005601226 ceph-mgr[80687]: mgr[py] Loading python module 'rbd_support'
Jan 29 11:50:05 np0005601226 ceph-mgr[75527]: [progress INFO root] Writing back 2 completed events
Jan 29 11:50:05 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 29 11:50:05 np0005601226 ceph-mgr[80687]: mgr[py] Loading python module 'rgw'
Jan 29 11:50:05 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3651445197' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Jan 29 11:50:05 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Jan 29 11:50:05 np0005601226 hungry_chatterjee[81353]: set require_min_compat_client to mimic
Jan 29 11:50:05 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Jan 29 11:50:05 np0005601226 systemd[1]: libpod-afbd6178fa2a0945851634afb78a2cc8242ca42817b789a05a99c2e59708c142.scope: Deactivated successfully.
Jan 29 11:50:05 np0005601226 podman[81308]: 2026-01-29 16:50:05.764761136 +0000 UTC m=+1.832717199 container died afbd6178fa2a0945851634afb78a2cc8242ca42817b789a05a99c2e59708c142 (image=quay.io/ceph/ceph:v20, name=hungry_chatterjee, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 11:50:05 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:05 np0005601226 ceph-mon[75233]: from='client.? 192.168.122.100:0/3651445197' entity='client.admin' cmd={"prefix": "osd set-require-min-compat-client", "version": "mimic"} : dispatch
Jan 29 11:50:05 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:05 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:05 np0005601226 systemd[1]: var-lib-containers-storage-overlay-3b5b134284ddd966cece89c5dd3860e4966e562d6ec4d04236cc41634e92d099-merged.mount: Deactivated successfully.
Jan 29 11:50:05 np0005601226 ceph-mgr[80687]: mgr[py] Loading python module 'rook'
Jan 29 11:50:06 np0005601226 podman[81308]: 2026-01-29 16:50:06.021115351 +0000 UTC m=+2.089071444 container remove afbd6178fa2a0945851634afb78a2cc8242ca42817b789a05a99c2e59708c142 (image=quay.io/ceph/ceph:v20, name=hungry_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 29 11:50:06 np0005601226 systemd[1]: libpod-conmon-afbd6178fa2a0945851634afb78a2cc8242ca42817b789a05a99c2e59708c142.scope: Deactivated successfully.
Jan 29 11:50:06 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 11:50:06 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:06 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 11:50:06 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:06 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 11:50:06 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 11:50:06 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 11:50:06 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 11:50:06 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 11:50:06 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:06 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 29 11:50:06 np0005601226 ceph-mgr[75527]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 29 11:50:06 np0005601226 ceph-mgr[80687]: mgr[py] Loading python module 'selftest'
Jan 29 11:50:06 np0005601226 ceph-mgr[80687]: mgr[py] Loading python module 'smb'
Jan 29 11:50:06 np0005601226 python3[81666]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid cc5c72e3-31e0-58b9-8731-456117d38f4a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:50:06 np0005601226 podman[81667]: 2026-01-29 16:50:06.662737052 +0000 UTC m=+0.021452256 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:50:06 np0005601226 podman[81667]: 2026-01-29 16:50:06.764275704 +0000 UTC m=+0.122990858 container create d8ecaf13c54f8f2913c80c87b44544317ce56e0f9758a381baa5477403176ebc (image=quay.io/ceph/ceph:v20, name=nifty_mendel, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 29 11:50:06 np0005601226 ceph-mgr[80687]: mgr[py] Loading python module 'snap_schedule'
Jan 29 11:50:06 np0005601226 ceph-mon[75233]: from='client.? 192.168.122.100:0/3651445197' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Jan 29 11:50:06 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:06 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:06 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:06 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 11:50:06 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:06 np0005601226 systemd[1]: Started libpod-conmon-d8ecaf13c54f8f2913c80c87b44544317ce56e0f9758a381baa5477403176ebc.scope.
Jan 29 11:50:06 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:50:06 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c546c517b5270ef9cbd7a66ab91bec9d05a76c8f5926412c2780d9aa16b5cdd/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:06 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c546c517b5270ef9cbd7a66ab91bec9d05a76c8f5926412c2780d9aa16b5cdd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:06 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c546c517b5270ef9cbd7a66ab91bec9d05a76c8f5926412c2780d9aa16b5cdd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:06 np0005601226 ceph-mgr[80687]: mgr[py] Loading python module 'stats'
Jan 29 11:50:06 np0005601226 podman[81667]: 2026-01-29 16:50:06.982603102 +0000 UTC m=+0.341318346 container init d8ecaf13c54f8f2913c80c87b44544317ce56e0f9758a381baa5477403176ebc (image=quay.io/ceph/ceph:v20, name=nifty_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 11:50:06 np0005601226 podman[81667]: 2026-01-29 16:50:06.993049076 +0000 UTC m=+0.351764260 container start d8ecaf13c54f8f2913c80c87b44544317ce56e0f9758a381baa5477403176ebc (image=quay.io/ceph/ceph:v20, name=nifty_mendel, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 11:50:07 np0005601226 podman[81667]: 2026-01-29 16:50:07.000430044 +0000 UTC m=+0.359145238 container attach d8ecaf13c54f8f2913c80c87b44544317ce56e0f9758a381baa5477403176ebc (image=quay.io/ceph/ceph:v20, name=nifty_mendel, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 11:50:07 np0005601226 ceph-mgr[80687]: mgr[py] Loading python module 'status'
Jan 29 11:50:07 np0005601226 ceph-mgr[80687]: mgr[py] Loading python module 'telegraf'
Jan 29 11:50:07 np0005601226 ceph-mgr[80687]: mgr[py] Loading python module 'telemetry'
Jan 29 11:50:07 np0005601226 ceph-mgr[80687]: mgr[py] Loading python module 'test_orchestrator'
Jan 29 11:50:07 np0005601226 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 11:50:07 np0005601226 ceph-mgr[80687]: mgr[py] Loading python module 'volumes'
Jan 29 11:50:07 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : Standby manager daemon compute-0.gxbxkv started
Jan 29 11:50:07 np0005601226 ceph-mgr[80687]: ms_deliver_dispatch: unhandled message 0x560b3e074000 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Jan 29 11:50:07 np0005601226 ceph-mgr[75527]: mgr.server handle_open ignoring open from mgr.compute-0.gxbxkv 192.168.122.100:0/1046593006; not ready for session (expect reconnect)
Jan 29 11:50:07 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 29 11:50:08 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:08 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 29 11:50:08 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:08 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 29 11:50:08 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:08 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 29 11:50:08 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:08 np0005601226 ceph-mgr[75527]: [cephadm INFO root] Added host compute-0
Jan 29 11:50:08 np0005601226 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Added host compute-0
Jan 29 11:50:08 np0005601226 ceph-mgr[75527]: [cephadm INFO root] Saving service mon spec with placement compute-0
Jan 29 11:50:08 np0005601226 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0
Jan 29 11:50:08 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 29 11:50:08 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 11:50:08 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 11:50:08 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:08 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 11:50:08 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 11:50:08 np0005601226 ceph-mgr[75527]: [cephadm INFO root] Saving service mgr spec with placement compute-0
Jan 29 11:50:08 np0005601226 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0
Jan 29 11:50:08 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 29 11:50:08 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 11:50:08 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:08 np0005601226 ceph-mgr[75527]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Jan 29 11:50:08 np0005601226 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Jan 29 11:50:08 np0005601226 ceph-mgr[75527]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0
Jan 29 11:50:08 np0005601226 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0
Jan 29 11:50:08 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0)
Jan 29 11:50:08 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:08 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 29 11:50:08 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:08 np0005601226 nifty_mendel[81683]: Added host 'compute-0' with addr '192.168.122.100'
Jan 29 11:50:08 np0005601226 nifty_mendel[81683]: Scheduled mon update...
Jan 29 11:50:08 np0005601226 nifty_mendel[81683]: Scheduled mgr update...
Jan 29 11:50:08 np0005601226 nifty_mendel[81683]: Scheduled osd.default_drive_group update...
Jan 29 11:50:08 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:08 np0005601226 ceph-mgr[75527]: [progress INFO root] update: starting ev 04895d56-f51c-4945-8016-b3b7df695cab (Updating mgr deployment (-1 -> 1))
Jan 29 11:50:08 np0005601226 ceph-mgr[75527]: [cephadm INFO cephadm.serve] Removing daemon mgr.compute-0.gxbxkv from compute-0 -- ports [8765]
Jan 29 11:50:08 np0005601226 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Removing daemon mgr.compute-0.gxbxkv from compute-0 -- ports [8765]
Jan 29 11:50:08 np0005601226 systemd[1]: libpod-d8ecaf13c54f8f2913c80c87b44544317ce56e0f9758a381baa5477403176ebc.scope: Deactivated successfully.
Jan 29 11:50:08 np0005601226 podman[81667]: 2026-01-29 16:50:08.284888901 +0000 UTC m=+1.643604055 container died d8ecaf13c54f8f2913c80c87b44544317ce56e0f9758a381baa5477403176ebc (image=quay.io/ceph/ceph:v20, name=nifty_mendel, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 29 11:50:08 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 29 11:50:08 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.zvopdr(active, since 30s), standbys: compute-0.gxbxkv
Jan 29 11:50:08 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.gxbxkv", "id": "compute-0.gxbxkv"} v 0)
Jan 29 11:50:08 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "mgr metadata", "who": "compute-0.gxbxkv", "id": "compute-0.gxbxkv"} : dispatch
Jan 29 11:50:08 np0005601226 ceph-mgr[75527]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 29 11:50:08 np0005601226 systemd[1]: var-lib-containers-storage-overlay-0c546c517b5270ef9cbd7a66ab91bec9d05a76c8f5926412c2780d9aa16b5cdd-merged.mount: Deactivated successfully.
Jan 29 11:50:08 np0005601226 podman[81667]: 2026-01-29 16:50:08.878105724 +0000 UTC m=+2.236820908 container remove d8ecaf13c54f8f2913c80c87b44544317ce56e0f9758a381baa5477403176ebc (image=quay.io/ceph/ceph:v20, name=nifty_mendel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 29 11:50:08 np0005601226 systemd[1]: libpod-conmon-d8ecaf13c54f8f2913c80c87b44544317ce56e0f9758a381baa5477403176ebc.scope: Deactivated successfully.
Jan 29 11:50:08 np0005601226 systemd[1]: Stopping Ceph mgr.compute-0.gxbxkv for cc5c72e3-31e0-58b9-8731-456117d38f4a...
Jan 29 11:50:09 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:09 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:09 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:09 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:09 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:09 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 11:50:09 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:09 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:09 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:09 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:09 np0005601226 python3[81914]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid cc5c72e3-31e0-58b9-8731-456117d38f4a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:50:09 np0005601226 podman[81907]: 2026-01-29 16:50:09.343245881 +0000 UTC m=+0.174358197 container died 8e86ad9a3ccfa3e5115d5827656703c378ef2139c9ebdd5c663f742640c12428 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-mgr-compute-0-gxbxkv, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:50:09 np0005601226 systemd[1]: var-lib-containers-storage-overlay-8261e946c53e919ee67157ebe0dfd7744ac73d5905d2fe0ceda944f800f17696-merged.mount: Deactivated successfully.
Jan 29 11:50:09 np0005601226 podman[81907]: 2026-01-29 16:50:09.72959905 +0000 UTC m=+0.560711376 container remove 8e86ad9a3ccfa3e5115d5827656703c378ef2139c9ebdd5c663f742640c12428 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-mgr-compute-0-gxbxkv, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 29 11:50:09 np0005601226 bash[81907]: ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-mgr-compute-0-gxbxkv
Jan 29 11:50:09 np0005601226 systemd[1]: ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a@mgr.compute-0.gxbxkv.service: Main process exited, code=exited, status=143/n/a
Jan 29 11:50:09 np0005601226 podman[81926]: 2026-01-29 16:50:09.824390804 +0000 UTC m=+0.460141544 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:50:09 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:50:09 np0005601226 podman[81926]: 2026-01-29 16:50:09.970077414 +0000 UTC m=+0.605828094 container create 9d13900ce4488fdc9d724a063e98a9201bd67e1d27e3ce88cfd21e343d85f92f (image=quay.io/ceph/ceph:v20, name=objective_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:50:10 np0005601226 systemd[1]: Started libpod-conmon-9d13900ce4488fdc9d724a063e98a9201bd67e1d27e3ce88cfd21e343d85f92f.scope.
Jan 29 11:50:10 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:50:10 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d95b5cc22c25af7bcbb6ffd110b2db858cc35c0f4748e79237f4af7b3b3ee46/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:10 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d95b5cc22c25af7bcbb6ffd110b2db858cc35c0f4748e79237f4af7b3b3ee46/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:10 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d95b5cc22c25af7bcbb6ffd110b2db858cc35c0f4748e79237f4af7b3b3ee46/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:10 np0005601226 podman[81926]: 2026-01-29 16:50:10.159723404 +0000 UTC m=+0.795474074 container init 9d13900ce4488fdc9d724a063e98a9201bd67e1d27e3ce88cfd21e343d85f92f (image=quay.io/ceph/ceph:v20, name=objective_banzai, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 29 11:50:10 np0005601226 ceph-mon[75233]: Added host compute-0
Jan 29 11:50:10 np0005601226 ceph-mon[75233]: Saving service mon spec with placement compute-0
Jan 29 11:50:10 np0005601226 ceph-mon[75233]: Saving service mgr spec with placement compute-0
Jan 29 11:50:10 np0005601226 ceph-mon[75233]: Marking host: compute-0 for OSDSpec preview refresh.
Jan 29 11:50:10 np0005601226 ceph-mon[75233]: Saving service osd.default_drive_group spec with placement compute-0
Jan 29 11:50:10 np0005601226 ceph-mon[75233]: Removing daemon mgr.compute-0.gxbxkv from compute-0 -- ports [8765]
Jan 29 11:50:10 np0005601226 podman[81926]: 2026-01-29 16:50:10.166222295 +0000 UTC m=+0.801972945 container start 9d13900ce4488fdc9d724a063e98a9201bd67e1d27e3ce88cfd21e343d85f92f (image=quay.io/ceph/ceph:v20, name=objective_banzai, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 29 11:50:10 np0005601226 podman[81926]: 2026-01-29 16:50:10.177531195 +0000 UTC m=+0.813281845 container attach 9d13900ce4488fdc9d724a063e98a9201bd67e1d27e3ce88cfd21e343d85f92f (image=quay.io/ceph/ceph:v20, name=objective_banzai, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 11:50:10 np0005601226 systemd[1]: ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a@mgr.compute-0.gxbxkv.service: Failed with result 'exit-code'.
Jan 29 11:50:10 np0005601226 systemd[1]: Stopped Ceph mgr.compute-0.gxbxkv for cc5c72e3-31e0-58b9-8731-456117d38f4a.
Jan 29 11:50:10 np0005601226 systemd[1]: ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a@mgr.compute-0.gxbxkv.service: Consumed 7.295s CPU time, 466.8M memory peak, read 0B from disk, written 161.5K to disk.
Jan 29 11:50:10 np0005601226 systemd[1]: Reloading.
Jan 29 11:50:10 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 11:50:10 np0005601226 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 11:50:10 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 29 11:50:10 np0005601226 ceph-mgr[75527]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.compute-0.gxbxkv
Jan 29 11:50:10 np0005601226 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Removing key for mgr.compute-0.gxbxkv
Jan 29 11:50:10 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.compute-0.gxbxkv"} v 0)
Jan 29 11:50:10 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth rm", "entity": "mgr.compute-0.gxbxkv"} : dispatch
Jan 29 11:50:10 np0005601226 ceph-mgr[75527]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 29 11:50:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 11:50:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 11:50:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 11:50:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 11:50:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 11:50:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 11:50:10 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.gxbxkv"}]': finished
Jan 29 11:50:10 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 29 11:50:10 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 29 11:50:10 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3729286706' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 29 11:50:10 np0005601226 objective_banzai[81982]: 
Jan 29 11:50:10 np0005601226 objective_banzai[81982]: {"fsid":"cc5c72e3-31e0-58b9-8731-456117d38f4a","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":55,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2026-01-29T16:49:11:643666+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":1,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":1,"modified":"2026-01-29T16:49:11.647926+0000","services":{}},"progress_events":{"04895d56-f51c-4945-8016-b3b7df695cab":{"message":"Updating mgr deployment (-1 -> 1) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Jan 29 11:50:10 np0005601226 systemd[1]: libpod-9d13900ce4488fdc9d724a063e98a9201bd67e1d27e3ce88cfd21e343d85f92f.scope: Deactivated successfully.
Jan 29 11:50:10 np0005601226 podman[81926]: 2026-01-29 16:50:10.751962525 +0000 UTC m=+1.387713215 container died 9d13900ce4488fdc9d724a063e98a9201bd67e1d27e3ce88cfd21e343d85f92f (image=quay.io/ceph/ceph:v20, name=objective_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 29 11:50:10 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:10 np0005601226 ceph-mgr[75527]: [progress INFO root] complete: finished ev 04895d56-f51c-4945-8016-b3b7df695cab (Updating mgr deployment (-1 -> 1))
Jan 29 11:50:10 np0005601226 ceph-mgr[75527]: [progress INFO root] Completed event 04895d56-f51c-4945-8016-b3b7df695cab (Updating mgr deployment (-1 -> 1)) in 3 seconds
Jan 29 11:50:10 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 29 11:50:10 np0005601226 ceph-mgr[75527]: [progress INFO root] Writing back 3 completed events
Jan 29 11:50:10 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 29 11:50:10 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:10 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 11:50:10 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 11:50:10 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 11:50:10 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 11:50:10 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 11:50:10 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 11:50:10 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:10 np0005601226 systemd[1]: var-lib-containers-storage-overlay-8d95b5cc22c25af7bcbb6ffd110b2db858cc35c0f4748e79237f4af7b3b3ee46-merged.mount: Deactivated successfully.
Jan 29 11:50:11 np0005601226 ceph-mon[75233]: Removing key for mgr.compute-0.gxbxkv
Jan 29 11:50:11 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth rm", "entity": "mgr.compute-0.gxbxkv"} : dispatch
Jan 29 11:50:11 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.gxbxkv"}]': finished
Jan 29 11:50:11 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:11 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:11 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 11:50:11 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:11 np0005601226 podman[81926]: 2026-01-29 16:50:11.335291471 +0000 UTC m=+1.971042171 container remove 9d13900ce4488fdc9d724a063e98a9201bd67e1d27e3ce88cfd21e343d85f92f (image=quay.io/ceph/ceph:v20, name=objective_banzai, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 29 11:50:11 np0005601226 podman[82132]: 2026-01-29 16:50:11.388622012 +0000 UTC m=+0.022784746 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:50:11 np0005601226 podman[82132]: 2026-01-29 16:50:11.534120596 +0000 UTC m=+0.168283350 container create d68550de784a129dc879c4b548270e5290f6c9abf659f4989743d4b3c1c1cd18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 29 11:50:11 np0005601226 systemd[1]: libpod-conmon-9d13900ce4488fdc9d724a063e98a9201bd67e1d27e3ce88cfd21e343d85f92f.scope: Deactivated successfully.
Jan 29 11:50:11 np0005601226 systemd[1]: Started libpod-conmon-d68550de784a129dc879c4b548270e5290f6c9abf659f4989743d4b3c1c1cd18.scope.
Jan 29 11:50:11 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:50:11 np0005601226 podman[82132]: 2026-01-29 16:50:11.638169606 +0000 UTC m=+0.272332350 container init d68550de784a129dc879c4b548270e5290f6c9abf659f4989743d4b3c1c1cd18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_stonebraker, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 11:50:11 np0005601226 podman[82132]: 2026-01-29 16:50:11.648396483 +0000 UTC m=+0.282559197 container start d68550de784a129dc879c4b548270e5290f6c9abf659f4989743d4b3c1c1cd18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_stonebraker, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 29 11:50:11 np0005601226 funny_stonebraker[82148]: 167 167
Jan 29 11:50:11 np0005601226 systemd[1]: libpod-d68550de784a129dc879c4b548270e5290f6c9abf659f4989743d4b3c1c1cd18.scope: Deactivated successfully.
Jan 29 11:50:11 np0005601226 podman[82132]: 2026-01-29 16:50:11.663769109 +0000 UTC m=+0.297931853 container attach d68550de784a129dc879c4b548270e5290f6c9abf659f4989743d4b3c1c1cd18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 11:50:11 np0005601226 podman[82132]: 2026-01-29 16:50:11.664867512 +0000 UTC m=+0.299030256 container died d68550de784a129dc879c4b548270e5290f6c9abf659f4989743d4b3c1c1cd18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 29 11:50:11 np0005601226 systemd[1]: var-lib-containers-storage-overlay-3db07e3b5e4ce898e7e481659b26a9340e92def1f6de79829c2af4dbbb55544e-merged.mount: Deactivated successfully.
Jan 29 11:50:11 np0005601226 podman[82132]: 2026-01-29 16:50:11.786107365 +0000 UTC m=+0.420270089 container remove d68550de784a129dc879c4b548270e5290f6c9abf659f4989743d4b3c1c1cd18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_stonebraker, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 29 11:50:11 np0005601226 systemd[1]: libpod-conmon-d68550de784a129dc879c4b548270e5290f6c9abf659f4989743d4b3c1c1cd18.scope: Deactivated successfully.
Jan 29 11:50:11 np0005601226 podman[82174]: 2026-01-29 16:50:11.927371518 +0000 UTC m=+0.060194684 container create c26d02aa3d8d098af9320b3c71b4b6f219e8524711501546b341b5be8d62c23e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_haibt, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 29 11:50:11 np0005601226 systemd[1]: Started libpod-conmon-c26d02aa3d8d098af9320b3c71b4b6f219e8524711501546b341b5be8d62c23e.scope.
Jan 29 11:50:11 np0005601226 podman[82174]: 2026-01-29 16:50:11.893581942 +0000 UTC m=+0.026405198 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:50:11 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:50:12 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ce40552ef1bf8aff618687ff2aa4065677dc6da0dd8a296d1c4a3738dc27d3e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:12 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ce40552ef1bf8aff618687ff2aa4065677dc6da0dd8a296d1c4a3738dc27d3e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:12 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ce40552ef1bf8aff618687ff2aa4065677dc6da0dd8a296d1c4a3738dc27d3e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:12 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ce40552ef1bf8aff618687ff2aa4065677dc6da0dd8a296d1c4a3738dc27d3e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:12 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ce40552ef1bf8aff618687ff2aa4065677dc6da0dd8a296d1c4a3738dc27d3e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:12 np0005601226 podman[82174]: 2026-01-29 16:50:12.019275502 +0000 UTC m=+0.152098768 container init c26d02aa3d8d098af9320b3c71b4b6f219e8524711501546b341b5be8d62c23e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_haibt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 29 11:50:12 np0005601226 podman[82174]: 2026-01-29 16:50:12.028821138 +0000 UTC m=+0.161644304 container start c26d02aa3d8d098af9320b3c71b4b6f219e8524711501546b341b5be8d62c23e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 11:50:12 np0005601226 podman[82174]: 2026-01-29 16:50:12.032692318 +0000 UTC m=+0.165515504 container attach c26d02aa3d8d098af9320b3c71b4b6f219e8524711501546b341b5be8d62c23e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_haibt, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 29 11:50:12 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 29 11:50:12 np0005601226 ceph-mgr[75527]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 29 11:50:12 np0005601226 practical_haibt[82191]: --> passed data devices: 0 physical, 3 LVM
Jan 29 11:50:12 np0005601226 practical_haibt[82191]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 29 11:50:12 np0005601226 practical_haibt[82191]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 29 11:50:12 np0005601226 practical_haibt[82191]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1
Jan 29 11:50:13 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1"} v 0)
Jan 29 11:50:13 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/614051207' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1"} : dispatch
Jan 29 11:50:13 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Jan 29 11:50:13 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 29 11:50:13 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/614051207' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1"}]': finished
Jan 29 11:50:13 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Jan 29 11:50:13 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Jan 29 11:50:13 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 29 11:50:13 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 29 11:50:13 np0005601226 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 29 11:50:13 np0005601226 practical_haibt[82191]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Jan 29 11:50:13 np0005601226 lvm[82283]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 11:50:13 np0005601226 lvm[82283]: VG ceph_vg0 finished
Jan 29 11:50:13 np0005601226 practical_haibt[82191]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Jan 29 11:50:13 np0005601226 practical_haibt[82191]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 29 11:50:13 np0005601226 practical_haibt[82191]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 29 11:50:13 np0005601226 practical_haibt[82191]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Jan 29 11:50:13 np0005601226 ceph-mon[75233]: from='client.? 192.168.122.100:0/614051207' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1"} : dispatch
Jan 29 11:50:13 np0005601226 ceph-mon[75233]: from='client.? 192.168.122.100:0/614051207' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1"}]': finished
Jan 29 11:50:13 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Jan 29 11:50:13 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1281041631' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Jan 29 11:50:13 np0005601226 practical_haibt[82191]: stderr: got monmap epoch 1
Jan 29 11:50:13 np0005601226 practical_haibt[82191]: --> Creating keyring file for osd.0
Jan 29 11:50:13 np0005601226 practical_haibt[82191]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Jan 29 11:50:13 np0005601226 practical_haibt[82191]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Jan 29 11:50:13 np0005601226 practical_haibt[82191]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1 --setuser ceph --setgroup ceph
Jan 29 11:50:14 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 29 11:50:14 np0005601226 ceph-mgr[75527]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 29 11:50:14 np0005601226 ceph-mon[75233]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Jan 29 11:50:14 np0005601226 ceph-mon[75233]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 29 11:50:14 np0005601226 practical_haibt[82191]: stderr: 2026-01-29T16:50:13.984+0000 7f3fa473d8c0 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) No valid bdev label found
Jan 29 11:50:14 np0005601226 practical_haibt[82191]: stderr: 2026-01-29T16:50:14.002+0000 7f3fa473d8c0 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Jan 29 11:50:14 np0005601226 practical_haibt[82191]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Jan 29 11:50:14 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e4 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:50:15 np0005601226 practical_haibt[82191]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 29 11:50:15 np0005601226 practical_haibt[82191]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Jan 29 11:50:15 np0005601226 practical_haibt[82191]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 29 11:50:15 np0005601226 practical_haibt[82191]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Jan 29 11:50:15 np0005601226 practical_haibt[82191]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 29 11:50:15 np0005601226 practical_haibt[82191]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 29 11:50:15 np0005601226 practical_haibt[82191]: --> ceph-volume lvm activate successful for osd ID: 0
Jan 29 11:50:15 np0005601226 practical_haibt[82191]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Jan 29 11:50:15 np0005601226 practical_haibt[82191]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 29 11:50:15 np0005601226 practical_haibt[82191]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 29 11:50:15 np0005601226 practical_haibt[82191]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new b59b9ee3-7bef-4274-a8bf-0f9cce011ae7
Jan 29 11:50:15 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "b59b9ee3-7bef-4274-a8bf-0f9cce011ae7"} v 0)
Jan 29 11:50:15 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1015863880' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "b59b9ee3-7bef-4274-a8bf-0f9cce011ae7"} : dispatch
Jan 29 11:50:15 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Jan 29 11:50:15 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 29 11:50:15 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1015863880' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "b59b9ee3-7bef-4274-a8bf-0f9cce011ae7"}]': finished
Jan 29 11:50:15 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Jan 29 11:50:15 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Jan 29 11:50:15 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 29 11:50:15 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 29 11:50:15 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 29 11:50:15 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 29 11:50:15 np0005601226 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 29 11:50:15 np0005601226 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 29 11:50:15 np0005601226 lvm[83232]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 11:50:15 np0005601226 lvm[83232]: VG ceph_vg1 finished
Jan 29 11:50:15 np0005601226 ceph-mon[75233]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Jan 29 11:50:15 np0005601226 ceph-mon[75233]: Cluster is now healthy
Jan 29 11:50:15 np0005601226 ceph-mon[75233]: from='client.? 192.168.122.100:0/1015863880' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "b59b9ee3-7bef-4274-a8bf-0f9cce011ae7"} : dispatch
Jan 29 11:50:15 np0005601226 ceph-mon[75233]: from='client.? 192.168.122.100:0/1015863880' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "b59b9ee3-7bef-4274-a8bf-0f9cce011ae7"}]': finished
Jan 29 11:50:15 np0005601226 practical_haibt[82191]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Jan 29 11:50:15 np0005601226 practical_haibt[82191]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Jan 29 11:50:15 np0005601226 practical_haibt[82191]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Jan 29 11:50:15 np0005601226 practical_haibt[82191]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Jan 29 11:50:15 np0005601226 practical_haibt[82191]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Jan 29 11:50:16 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Jan 29 11:50:16 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2313613674' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Jan 29 11:50:16 np0005601226 practical_haibt[82191]: stderr: got monmap epoch 1
Jan 29 11:50:16 np0005601226 practical_haibt[82191]: --> Creating keyring file for osd.1
Jan 29 11:50:16 np0005601226 practical_haibt[82191]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Jan 29 11:50:16 np0005601226 practical_haibt[82191]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Jan 29 11:50:16 np0005601226 practical_haibt[82191]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid b59b9ee3-7bef-4274-a8bf-0f9cce011ae7 --setuser ceph --setgroup ceph
Jan 29 11:50:16 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 29 11:50:16 np0005601226 ceph-mgr[75527]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 29 11:50:18 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 29 11:50:18 np0005601226 ceph-mgr[75527]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 29 11:50:18 np0005601226 practical_haibt[82191]: stderr: 2026-01-29T16:50:16.293+0000 7f54056058c0 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) No valid bdev label found
Jan 29 11:50:18 np0005601226 practical_haibt[82191]: stderr: 2026-01-29T16:50:16.324+0000 7f54056058c0 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Jan 29 11:50:18 np0005601226 practical_haibt[82191]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Jan 29 11:50:18 np0005601226 practical_haibt[82191]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 29 11:50:18 np0005601226 practical_haibt[82191]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Jan 29 11:50:18 np0005601226 practical_haibt[82191]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Jan 29 11:50:18 np0005601226 practical_haibt[82191]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Jan 29 11:50:18 np0005601226 practical_haibt[82191]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Jan 29 11:50:18 np0005601226 practical_haibt[82191]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 29 11:50:18 np0005601226 practical_haibt[82191]: --> ceph-volume lvm activate successful for osd ID: 1
Jan 29 11:50:18 np0005601226 practical_haibt[82191]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Jan 29 11:50:18 np0005601226 practical_haibt[82191]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 29 11:50:18 np0005601226 practical_haibt[82191]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 29 11:50:18 np0005601226 practical_haibt[82191]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 791d3808-828d-4f85-a3de-28df49f6a6ef
Jan 29 11:50:19 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "791d3808-828d-4f85-a3de-28df49f6a6ef"} v 0)
Jan 29 11:50:19 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2421798217' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "791d3808-828d-4f85-a3de-28df49f6a6ef"} : dispatch
Jan 29 11:50:19 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Jan 29 11:50:19 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 29 11:50:19 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2421798217' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "791d3808-828d-4f85-a3de-28df49f6a6ef"}]': finished
Jan 29 11:50:19 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e6 e6: 3 total, 0 up, 3 in
Jan 29 11:50:19 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e6: 3 total, 0 up, 3 in
Jan 29 11:50:19 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 29 11:50:19 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 29 11:50:19 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 29 11:50:19 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 29 11:50:19 np0005601226 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 29 11:50:19 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 29 11:50:19 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 29 11:50:19 np0005601226 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 29 11:50:19 np0005601226 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 29 11:50:19 np0005601226 practical_haibt[82191]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Jan 29 11:50:19 np0005601226 practical_haibt[82191]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg2/ceph_lv2
Jan 29 11:50:19 np0005601226 lvm[84177]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 11:50:19 np0005601226 lvm[84177]: VG ceph_vg2 finished
Jan 29 11:50:19 np0005601226 practical_haibt[82191]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Jan 29 11:50:19 np0005601226 practical_haibt[82191]: Running command: /usr/bin/ln -s /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Jan 29 11:50:19 np0005601226 practical_haibt[82191]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Jan 29 11:50:19 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Jan 29 11:50:19 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/573693414' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Jan 29 11:50:19 np0005601226 practical_haibt[82191]: stderr: got monmap epoch 1
Jan 29 11:50:19 np0005601226 practical_haibt[82191]: --> Creating keyring file for osd.2
Jan 29 11:50:19 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:50:19 np0005601226 practical_haibt[82191]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Jan 29 11:50:19 np0005601226 practical_haibt[82191]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Jan 29 11:50:20 np0005601226 practical_haibt[82191]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid 791d3808-828d-4f85-a3de-28df49f6a6ef --setuser ceph --setgroup ceph
Jan 29 11:50:20 np0005601226 ceph-mon[75233]: from='client.? 192.168.122.100:0/2421798217' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "791d3808-828d-4f85-a3de-28df49f6a6ef"} : dispatch
Jan 29 11:50:20 np0005601226 ceph-mon[75233]: from='client.? 192.168.122.100:0/2421798217' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "791d3808-828d-4f85-a3de-28df49f6a6ef"}]': finished
Jan 29 11:50:20 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 29 11:50:20 np0005601226 ceph-mgr[75527]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 29 11:50:21 np0005601226 practical_haibt[82191]: stderr: 2026-01-29T16:50:20.068+0000 7f3878fd48c0 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) No valid bdev label found
Jan 29 11:50:21 np0005601226 practical_haibt[82191]: stderr: 2026-01-29T16:50:20.093+0000 7f3878fd48c0 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Jan 29 11:50:21 np0005601226 practical_haibt[82191]: --> ceph-volume lvm prepare successful for: ceph_vg2/ceph_lv2
Jan 29 11:50:21 np0005601226 practical_haibt[82191]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 29 11:50:21 np0005601226 practical_haibt[82191]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Jan 29 11:50:21 np0005601226 practical_haibt[82191]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Jan 29 11:50:21 np0005601226 practical_haibt[82191]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Jan 29 11:50:21 np0005601226 practical_haibt[82191]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Jan 29 11:50:21 np0005601226 practical_haibt[82191]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 29 11:50:21 np0005601226 practical_haibt[82191]: --> ceph-volume lvm activate successful for osd ID: 2
Jan 29 11:50:21 np0005601226 practical_haibt[82191]: --> ceph-volume lvm create successful for: ceph_vg2/ceph_lv2
Jan 29 11:50:21 np0005601226 systemd[1]: libpod-c26d02aa3d8d098af9320b3c71b4b6f219e8524711501546b341b5be8d62c23e.scope: Deactivated successfully.
Jan 29 11:50:21 np0005601226 systemd[1]: libpod-c26d02aa3d8d098af9320b3c71b4b6f219e8524711501546b341b5be8d62c23e.scope: Consumed 5.786s CPU time.
Jan 29 11:50:21 np0005601226 podman[85094]: 2026-01-29 16:50:21.720141274 +0000 UTC m=+0.028483563 container died c26d02aa3d8d098af9320b3c71b4b6f219e8524711501546b341b5be8d62c23e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_haibt, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 29 11:50:21 np0005601226 systemd[1]: var-lib-containers-storage-overlay-6ce40552ef1bf8aff618687ff2aa4065677dc6da0dd8a296d1c4a3738dc27d3e-merged.mount: Deactivated successfully.
Jan 29 11:50:21 np0005601226 podman[85094]: 2026-01-29 16:50:21.804984589 +0000 UTC m=+0.113326918 container remove c26d02aa3d8d098af9320b3c71b4b6f219e8524711501546b341b5be8d62c23e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_haibt, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 29 11:50:21 np0005601226 systemd[1]: libpod-conmon-c26d02aa3d8d098af9320b3c71b4b6f219e8524711501546b341b5be8d62c23e.scope: Deactivated successfully.
Jan 29 11:50:22 np0005601226 podman[85171]: 2026-01-29 16:50:22.250691036 +0000 UTC m=+0.044582261 container create 31519c1a30d5ecdac8711fe5dbfb19f752a3119a361bf0e56b240b31b8b6793d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_clarke, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 29 11:50:22 np0005601226 systemd[1]: Started libpod-conmon-31519c1a30d5ecdac8711fe5dbfb19f752a3119a361bf0e56b240b31b8b6793d.scope.
Jan 29 11:50:22 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:50:22 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 29 11:50:22 np0005601226 podman[85171]: 2026-01-29 16:50:22.328574896 +0000 UTC m=+0.122466151 container init 31519c1a30d5ecdac8711fe5dbfb19f752a3119a361bf0e56b240b31b8b6793d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_clarke, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 11:50:22 np0005601226 podman[85171]: 2026-01-29 16:50:22.236235938 +0000 UTC m=+0.030127203 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:50:22 np0005601226 podman[85171]: 2026-01-29 16:50:22.338314928 +0000 UTC m=+0.132206183 container start 31519c1a30d5ecdac8711fe5dbfb19f752a3119a361bf0e56b240b31b8b6793d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 29 11:50:22 np0005601226 podman[85171]: 2026-01-29 16:50:22.342071684 +0000 UTC m=+0.135963019 container attach 31519c1a30d5ecdac8711fe5dbfb19f752a3119a361bf0e56b240b31b8b6793d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:50:22 np0005601226 compassionate_clarke[85188]: 167 167
Jan 29 11:50:22 np0005601226 systemd[1]: libpod-31519c1a30d5ecdac8711fe5dbfb19f752a3119a361bf0e56b240b31b8b6793d.scope: Deactivated successfully.
Jan 29 11:50:22 np0005601226 conmon[85188]: conmon 31519c1a30d5ecdac871 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-31519c1a30d5ecdac8711fe5dbfb19f752a3119a361bf0e56b240b31b8b6793d.scope/container/memory.events
Jan 29 11:50:22 np0005601226 podman[85171]: 2026-01-29 16:50:22.34452757 +0000 UTC m=+0.138418835 container died 31519c1a30d5ecdac8711fe5dbfb19f752a3119a361bf0e56b240b31b8b6793d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_clarke, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 11:50:22 np0005601226 systemd[1]: var-lib-containers-storage-overlay-8ae563e9718cfe705762b097928c5e3be7fdd174a2f3372e5ea02a5e454ab729-merged.mount: Deactivated successfully.
Jan 29 11:50:22 np0005601226 podman[85171]: 2026-01-29 16:50:22.391101892 +0000 UTC m=+0.184993157 container remove 31519c1a30d5ecdac8711fe5dbfb19f752a3119a361bf0e56b240b31b8b6793d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_clarke, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 11:50:22 np0005601226 systemd[1]: libpod-conmon-31519c1a30d5ecdac8711fe5dbfb19f752a3119a361bf0e56b240b31b8b6793d.scope: Deactivated successfully.
Jan 29 11:50:22 np0005601226 ceph-mgr[75527]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 29 11:50:22 np0005601226 podman[85214]: 2026-01-29 16:50:22.574694444 +0000 UTC m=+0.051818955 container create 414797c87c52f61bb35fc854c5284c07e60df0f59cca15281eb787e4c95871fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_shirley, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 11:50:22 np0005601226 systemd[1]: Started libpod-conmon-414797c87c52f61bb35fc854c5284c07e60df0f59cca15281eb787e4c95871fe.scope.
Jan 29 11:50:22 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:50:22 np0005601226 podman[85214]: 2026-01-29 16:50:22.554000994 +0000 UTC m=+0.031125505 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:50:22 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f3e302ed0be31d54eb764b4bd154683ba5883bd58ab4e0b72fea54f85ea25b3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:22 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f3e302ed0be31d54eb764b4bd154683ba5883bd58ab4e0b72fea54f85ea25b3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:22 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f3e302ed0be31d54eb764b4bd154683ba5883bd58ab4e0b72fea54f85ea25b3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:22 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f3e302ed0be31d54eb764b4bd154683ba5883bd58ab4e0b72fea54f85ea25b3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:22 np0005601226 podman[85214]: 2026-01-29 16:50:22.67438057 +0000 UTC m=+0.151505081 container init 414797c87c52f61bb35fc854c5284c07e60df0f59cca15281eb787e4c95871fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_shirley, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 11:50:22 np0005601226 podman[85214]: 2026-01-29 16:50:22.682602744 +0000 UTC m=+0.159727225 container start 414797c87c52f61bb35fc854c5284c07e60df0f59cca15281eb787e4c95871fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:50:22 np0005601226 podman[85214]: 2026-01-29 16:50:22.686930048 +0000 UTC m=+0.164054639 container attach 414797c87c52f61bb35fc854c5284c07e60df0f59cca15281eb787e4c95871fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_shirley, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 29 11:50:22 np0005601226 boring_shirley[85230]: {
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:    "0": [
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:        {
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:            "devices": [
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:                "/dev/loop3"
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:            ],
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:            "lv_name": "ceph_lv0",
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:            "lv_size": "21470642176",
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:            "lv_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:            "name": "ceph_lv0",
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:            "tags": {
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:                "ceph.block_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:                "ceph.cephx_lockbox_secret": "",
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:                "ceph.cluster_name": "ceph",
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:                "ceph.crush_device_class": "",
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:                "ceph.encrypted": "0",
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:                "ceph.objectstore": "bluestore",
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:                "ceph.osd_fsid": "0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1",
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:                "ceph.osd_id": "0",
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:                "ceph.type": "block",
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:                "ceph.vdo": "0",
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:                "ceph.with_tpm": "0"
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:            },
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:            "type": "block",
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:            "vg_name": "ceph_vg0"
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:        }
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:    ],
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:    "1": [
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:        {
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:            "devices": [
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:                "/dev/loop4"
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:            ],
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:            "lv_name": "ceph_lv1",
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:            "lv_size": "21470642176",
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b59b9ee3-7bef-4274-a8bf-0f9cce011ae7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:            "lv_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:            "name": "ceph_lv1",
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:            "tags": {
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:                "ceph.block_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:                "ceph.cephx_lockbox_secret": "",
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:                "ceph.cluster_name": "ceph",
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:                "ceph.crush_device_class": "",
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:                "ceph.encrypted": "0",
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:                "ceph.objectstore": "bluestore",
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:                "ceph.osd_fsid": "b59b9ee3-7bef-4274-a8bf-0f9cce011ae7",
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:                "ceph.osd_id": "1",
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:                "ceph.type": "block",
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:                "ceph.vdo": "0",
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:                "ceph.with_tpm": "0"
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:            },
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:            "type": "block",
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:            "vg_name": "ceph_vg1"
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:        }
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:    ],
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:    "2": [
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:        {
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:            "devices": [
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:                "/dev/loop5"
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:            ],
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:            "lv_name": "ceph_lv2",
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:            "lv_size": "21470642176",
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=791d3808-828d-4f85-a3de-28df49f6a6ef,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:            "lv_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:            "name": "ceph_lv2",
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:            "tags": {
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:                "ceph.block_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:                "ceph.cephx_lockbox_secret": "",
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:                "ceph.cluster_name": "ceph",
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:                "ceph.crush_device_class": "",
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:                "ceph.encrypted": "0",
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:                "ceph.objectstore": "bluestore",
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:                "ceph.osd_fsid": "791d3808-828d-4f85-a3de-28df49f6a6ef",
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:                "ceph.osd_id": "2",
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:                "ceph.type": "block",
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:                "ceph.vdo": "0",
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:                "ceph.with_tpm": "0"
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:            },
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:            "type": "block",
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:            "vg_name": "ceph_vg2"
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:        }
Jan 29 11:50:22 np0005601226 boring_shirley[85230]:    ]
Jan 29 11:50:22 np0005601226 boring_shirley[85230]: }
Jan 29 11:50:22 np0005601226 systemd[1]: libpod-414797c87c52f61bb35fc854c5284c07e60df0f59cca15281eb787e4c95871fe.scope: Deactivated successfully.
Jan 29 11:50:22 np0005601226 podman[85214]: 2026-01-29 16:50:22.975941115 +0000 UTC m=+0.453065596 container died 414797c87c52f61bb35fc854c5284c07e60df0f59cca15281eb787e4c95871fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_shirley, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 11:50:23 np0005601226 systemd[1]: var-lib-containers-storage-overlay-2f3e302ed0be31d54eb764b4bd154683ba5883bd58ab4e0b72fea54f85ea25b3-merged.mount: Deactivated successfully.
Jan 29 11:50:23 np0005601226 podman[85214]: 2026-01-29 16:50:23.014126466 +0000 UTC m=+0.491250937 container remove 414797c87c52f61bb35fc854c5284c07e60df0f59cca15281eb787e4c95871fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_shirley, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 11:50:23 np0005601226 systemd[1]: libpod-conmon-414797c87c52f61bb35fc854c5284c07e60df0f59cca15281eb787e4c95871fe.scope: Deactivated successfully.
Jan 29 11:50:23 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Jan 29 11:50:23 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Jan 29 11:50:23 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 11:50:23 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 11:50:23 np0005601226 ceph-mgr[75527]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Jan 29 11:50:23 np0005601226 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Jan 29 11:50:23 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Jan 29 11:50:23 np0005601226 podman[85342]: 2026-01-29 16:50:23.567648429 +0000 UTC m=+0.046017414 container create cab1409a056524b670a51b32420ebed74fd3a7e9368c6b5b242c73ec8d28b838 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_dewdney, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 29 11:50:23 np0005601226 systemd[1]: Started libpod-conmon-cab1409a056524b670a51b32420ebed74fd3a7e9368c6b5b242c73ec8d28b838.scope.
Jan 29 11:50:23 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:50:23 np0005601226 podman[85342]: 2026-01-29 16:50:23.642391803 +0000 UTC m=+0.120760808 container init cab1409a056524b670a51b32420ebed74fd3a7e9368c6b5b242c73ec8d28b838 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 29 11:50:23 np0005601226 podman[85342]: 2026-01-29 16:50:23.548128095 +0000 UTC m=+0.026497120 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:50:23 np0005601226 podman[85342]: 2026-01-29 16:50:23.649883685 +0000 UTC m=+0.128252660 container start cab1409a056524b670a51b32420ebed74fd3a7e9368c6b5b242c73ec8d28b838 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_dewdney, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 29 11:50:23 np0005601226 thirsty_dewdney[85359]: 167 167
Jan 29 11:50:23 np0005601226 systemd[1]: libpod-cab1409a056524b670a51b32420ebed74fd3a7e9368c6b5b242c73ec8d28b838.scope: Deactivated successfully.
Jan 29 11:50:23 np0005601226 podman[85342]: 2026-01-29 16:50:23.655950372 +0000 UTC m=+0.134319377 container attach cab1409a056524b670a51b32420ebed74fd3a7e9368c6b5b242c73ec8d28b838 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_dewdney, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:50:23 np0005601226 podman[85342]: 2026-01-29 16:50:23.656424248 +0000 UTC m=+0.134793233 container died cab1409a056524b670a51b32420ebed74fd3a7e9368c6b5b242c73ec8d28b838 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_dewdney, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 11:50:23 np0005601226 systemd[1]: var-lib-containers-storage-overlay-a4321c9154beba7e1be0cc6e9957ac06756aeb7905f8d86ff8fa2bf46fccbfe6-merged.mount: Deactivated successfully.
Jan 29 11:50:23 np0005601226 podman[85342]: 2026-01-29 16:50:23.716100354 +0000 UTC m=+0.194469339 container remove cab1409a056524b670a51b32420ebed74fd3a7e9368c6b5b242c73ec8d28b838 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_dewdney, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:50:23 np0005601226 systemd[1]: libpod-conmon-cab1409a056524b670a51b32420ebed74fd3a7e9368c6b5b242c73ec8d28b838.scope: Deactivated successfully.
Jan 29 11:50:23 np0005601226 podman[85391]: 2026-01-29 16:50:23.945169715 +0000 UTC m=+0.062873297 container create 339d89c329c6cbebda769ef72f8badd101bd467ee563f0e972fd579415d9842a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-0-activate-test, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 11:50:23 np0005601226 systemd[1]: Started libpod-conmon-339d89c329c6cbebda769ef72f8badd101bd467ee563f0e972fd579415d9842a.scope.
Jan 29 11:50:24 np0005601226 podman[85391]: 2026-01-29 16:50:23.916506288 +0000 UTC m=+0.034209960 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:50:24 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:50:24 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afa0db3211da87d2f2240e59915ad15149093ba40a1a119eef29902075bd03f5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:24 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afa0db3211da87d2f2240e59915ad15149093ba40a1a119eef29902075bd03f5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:24 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afa0db3211da87d2f2240e59915ad15149093ba40a1a119eef29902075bd03f5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:24 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afa0db3211da87d2f2240e59915ad15149093ba40a1a119eef29902075bd03f5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:24 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afa0db3211da87d2f2240e59915ad15149093ba40a1a119eef29902075bd03f5/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:24 np0005601226 podman[85391]: 2026-01-29 16:50:24.04742041 +0000 UTC m=+0.165124032 container init 339d89c329c6cbebda769ef72f8badd101bd467ee563f0e972fd579415d9842a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-0-activate-test, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 29 11:50:24 np0005601226 podman[85391]: 2026-01-29 16:50:24.060945459 +0000 UTC m=+0.178649071 container start 339d89c329c6cbebda769ef72f8badd101bd467ee563f0e972fd579415d9842a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-0-activate-test, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 11:50:24 np0005601226 podman[85391]: 2026-01-29 16:50:24.072101074 +0000 UTC m=+0.189804676 container attach 339d89c329c6cbebda769ef72f8badd101bd467ee563f0e972fd579415d9842a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-0-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 11:50:24 np0005601226 ceph-mon[75233]: Deploying daemon osd.0 on compute-0
Jan 29 11:50:24 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-0-activate-test[85407]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Jan 29 11:50:24 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-0-activate-test[85407]:                            [--no-systemd] [--no-tmpfs]
Jan 29 11:50:24 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-0-activate-test[85407]: ceph-volume activate: error: unrecognized arguments: --bad-option
Jan 29 11:50:24 np0005601226 systemd[1]: libpod-339d89c329c6cbebda769ef72f8badd101bd467ee563f0e972fd579415d9842a.scope: Deactivated successfully.
Jan 29 11:50:24 np0005601226 podman[85391]: 2026-01-29 16:50:24.265531951 +0000 UTC m=+0.383235563 container died 339d89c329c6cbebda769ef72f8badd101bd467ee563f0e972fd579415d9842a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-0-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 11:50:24 np0005601226 systemd[1]: var-lib-containers-storage-overlay-afa0db3211da87d2f2240e59915ad15149093ba40a1a119eef29902075bd03f5-merged.mount: Deactivated successfully.
Jan 29 11:50:24 np0005601226 podman[85391]: 2026-01-29 16:50:24.302085083 +0000 UTC m=+0.419788655 container remove 339d89c329c6cbebda769ef72f8badd101bd467ee563f0e972fd579415d9842a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-0-activate-test, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:50:24 np0005601226 systemd[1]: libpod-conmon-339d89c329c6cbebda769ef72f8badd101bd467ee563f0e972fd579415d9842a.scope: Deactivated successfully.
Jan 29 11:50:24 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 29 11:50:24 np0005601226 systemd[1]: Reloading.
Jan 29 11:50:24 np0005601226 ceph-mgr[75527]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 29 11:50:24 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 11:50:24 np0005601226 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 11:50:24 np0005601226 systemd[1]: Reloading.
Jan 29 11:50:24 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 11:50:24 np0005601226 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 11:50:24 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:50:24 np0005601226 systemd[1]: Starting Ceph osd.0 for cc5c72e3-31e0-58b9-8731-456117d38f4a...
Jan 29 11:50:25 np0005601226 podman[85567]: 2026-01-29 16:50:25.201693948 +0000 UTC m=+0.044210189 container create 84ebc8854e5b56fa7f30c5153bb0abd853d428882c68d4b41a35fbd14dedd1b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-0-activate, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:50:25 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:50:25 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72ab396be3a943077897518da527e0dbaceb4060b13d7d44fee6b1019648b7ed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:25 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72ab396be3a943077897518da527e0dbaceb4060b13d7d44fee6b1019648b7ed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:25 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72ab396be3a943077897518da527e0dbaceb4060b13d7d44fee6b1019648b7ed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:25 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72ab396be3a943077897518da527e0dbaceb4060b13d7d44fee6b1019648b7ed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:25 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72ab396be3a943077897518da527e0dbaceb4060b13d7d44fee6b1019648b7ed/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:25 np0005601226 podman[85567]: 2026-01-29 16:50:25.279342692 +0000 UTC m=+0.121859003 container init 84ebc8854e5b56fa7f30c5153bb0abd853d428882c68d4b41a35fbd14dedd1b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-0-activate, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 29 11:50:25 np0005601226 podman[85567]: 2026-01-29 16:50:25.184706572 +0000 UTC m=+0.027222853 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:50:25 np0005601226 podman[85567]: 2026-01-29 16:50:25.292454507 +0000 UTC m=+0.134970778 container start 84ebc8854e5b56fa7f30c5153bb0abd853d428882c68d4b41a35fbd14dedd1b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-0-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 29 11:50:25 np0005601226 podman[85567]: 2026-01-29 16:50:25.297098881 +0000 UTC m=+0.139615192 container attach 84ebc8854e5b56fa7f30c5153bb0abd853d428882c68d4b41a35fbd14dedd1b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-0-activate, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 29 11:50:25 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-0-activate[85582]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 29 11:50:25 np0005601226 bash[85567]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 29 11:50:25 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-0-activate[85582]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 29 11:50:25 np0005601226 bash[85567]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 29 11:50:26 np0005601226 lvm[85668]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 11:50:26 np0005601226 lvm[85669]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 11:50:26 np0005601226 lvm[85669]: VG ceph_vg1 finished
Jan 29 11:50:26 np0005601226 lvm[85668]: VG ceph_vg0 finished
Jan 29 11:50:26 np0005601226 lvm[85671]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 11:50:26 np0005601226 lvm[85671]: VG ceph_vg2 finished
Jan 29 11:50:26 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-0-activate[85582]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 29 11:50:26 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-0-activate[85582]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 29 11:50:26 np0005601226 bash[85567]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 29 11:50:26 np0005601226 bash[85567]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 29 11:50:26 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-0-activate[85582]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 29 11:50:26 np0005601226 bash[85567]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 29 11:50:26 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-0-activate[85582]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 29 11:50:26 np0005601226 bash[85567]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 29 11:50:26 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-0-activate[85582]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Jan 29 11:50:26 np0005601226 bash[85567]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Jan 29 11:50:26 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 29 11:50:26 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-0-activate[85582]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 29 11:50:26 np0005601226 bash[85567]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 29 11:50:26 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-0-activate[85582]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Jan 29 11:50:26 np0005601226 bash[85567]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Jan 29 11:50:26 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-0-activate[85582]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 29 11:50:26 np0005601226 bash[85567]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 29 11:50:26 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-0-activate[85582]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 29 11:50:26 np0005601226 bash[85567]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 29 11:50:26 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-0-activate[85582]: --> ceph-volume lvm activate successful for osd ID: 0
Jan 29 11:50:26 np0005601226 bash[85567]: --> ceph-volume lvm activate successful for osd ID: 0
Jan 29 11:50:26 np0005601226 systemd[1]: libpod-84ebc8854e5b56fa7f30c5153bb0abd853d428882c68d4b41a35fbd14dedd1b1.scope: Deactivated successfully.
Jan 29 11:50:26 np0005601226 systemd[1]: libpod-84ebc8854e5b56fa7f30c5153bb0abd853d428882c68d4b41a35fbd14dedd1b1.scope: Consumed 1.327s CPU time.
Jan 29 11:50:26 np0005601226 podman[85567]: 2026-01-29 16:50:26.410836376 +0000 UTC m=+1.253352607 container died 84ebc8854e5b56fa7f30c5153bb0abd853d428882c68d4b41a35fbd14dedd1b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-0-activate, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 29 11:50:26 np0005601226 systemd[1]: var-lib-containers-storage-overlay-72ab396be3a943077897518da527e0dbaceb4060b13d7d44fee6b1019648b7ed-merged.mount: Deactivated successfully.
Jan 29 11:50:26 np0005601226 podman[85567]: 2026-01-29 16:50:26.457579304 +0000 UTC m=+1.300095545 container remove 84ebc8854e5b56fa7f30c5153bb0abd853d428882c68d4b41a35fbd14dedd1b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-0-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 29 11:50:26 np0005601226 ceph-mgr[75527]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 29 11:50:26 np0005601226 podman[85839]: 2026-01-29 16:50:26.672430955 +0000 UTC m=+0.043078146 container create a3a212aa0fc1c38253ab94d27fb402d76be8ec14461d36c7690b17aae7716f8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 29 11:50:26 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e23540e633db183f67923d201f2ae9260068fa3a2ef5326a85134836bb40f14/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:26 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e23540e633db183f67923d201f2ae9260068fa3a2ef5326a85134836bb40f14/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:26 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e23540e633db183f67923d201f2ae9260068fa3a2ef5326a85134836bb40f14/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:26 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e23540e633db183f67923d201f2ae9260068fa3a2ef5326a85134836bb40f14/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:26 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e23540e633db183f67923d201f2ae9260068fa3a2ef5326a85134836bb40f14/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:26 np0005601226 podman[85839]: 2026-01-29 16:50:26.737217893 +0000 UTC m=+0.107865154 container init a3a212aa0fc1c38253ab94d27fb402d76be8ec14461d36c7690b17aae7716f8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 29 11:50:26 np0005601226 podman[85839]: 2026-01-29 16:50:26.652903784 +0000 UTC m=+0.023551055 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:50:26 np0005601226 podman[85839]: 2026-01-29 16:50:26.751933488 +0000 UTC m=+0.122580699 container start a3a212aa0fc1c38253ab94d27fb402d76be8ec14461d36c7690b17aae7716f8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 29 11:50:26 np0005601226 bash[85839]: a3a212aa0fc1c38253ab94d27fb402d76be8ec14461d36c7690b17aae7716f8c
Jan 29 11:50:26 np0005601226 systemd[1]: Started Ceph osd.0 for cc5c72e3-31e0-58b9-8731-456117d38f4a.
Jan 29 11:50:26 np0005601226 ceph-osd[85858]: set uid:gid to 167:167 (ceph:ceph)
Jan 29 11:50:26 np0005601226 ceph-osd[85858]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Jan 29 11:50:26 np0005601226 ceph-osd[85858]: pidfile_write: ignore empty --pid-file
Jan 29 11:50:26 np0005601226 ceph-osd[85858]: bdev(0x55d1d23ec000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 29 11:50:26 np0005601226 ceph-osd[85858]: bdev(0x55d1d23ec000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 29 11:50:26 np0005601226 ceph-osd[85858]: bdev(0x55d1d23ec000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 11:50:26 np0005601226 ceph-osd[85858]: bdev(0x55d1d23ec000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 11:50:26 np0005601226 ceph-osd[85858]: bdev(0x55d1d23ec000 /var/lib/ceph/osd/ceph-0/block) close
Jan 29 11:50:26 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 11:50:26 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:26 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 11:50:26 np0005601226 ceph-osd[85858]: bdev(0x55d1d23ec000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 29 11:50:26 np0005601226 ceph-osd[85858]: bdev(0x55d1d23ec000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 29 11:50:26 np0005601226 ceph-osd[85858]: bdev(0x55d1d23ec000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 11:50:26 np0005601226 ceph-osd[85858]: bdev(0x55d1d23ec000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 11:50:26 np0005601226 ceph-osd[85858]: bdev(0x55d1d23ec000 /var/lib/ceph/osd/ceph-0/block) close
Jan 29 11:50:26 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:26 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Jan 29 11:50:26 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Jan 29 11:50:26 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 11:50:26 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 11:50:26 np0005601226 ceph-mgr[75527]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Jan 29 11:50:26 np0005601226 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Jan 29 11:50:26 np0005601226 ceph-osd[85858]: bdev(0x55d1d23ec000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 29 11:50:26 np0005601226 ceph-osd[85858]: bdev(0x55d1d23ec000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 29 11:50:26 np0005601226 ceph-osd[85858]: bdev(0x55d1d23ec000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 11:50:26 np0005601226 ceph-osd[85858]: bdev(0x55d1d23ec000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 11:50:26 np0005601226 ceph-osd[85858]: bdev(0x55d1d23ec000 /var/lib/ceph/osd/ceph-0/block) close
Jan 29 11:50:26 np0005601226 ceph-osd[85858]: bdev(0x55d1d23ec000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 29 11:50:26 np0005601226 ceph-osd[85858]: bdev(0x55d1d23ec000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 29 11:50:26 np0005601226 ceph-osd[85858]: bdev(0x55d1d23ec000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 11:50:26 np0005601226 ceph-osd[85858]: bdev(0x55d1d23ec000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 11:50:26 np0005601226 ceph-osd[85858]: bdev(0x55d1d23ec000 /var/lib/ceph/osd/ceph-0/block) close
Jan 29 11:50:26 np0005601226 ceph-osd[85858]: bdev(0x55d1d23ec000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 29 11:50:26 np0005601226 ceph-osd[85858]: bdev(0x55d1d23ec000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 29 11:50:26 np0005601226 ceph-osd[85858]: bdev(0x55d1d23ec000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 11:50:26 np0005601226 ceph-osd[85858]: bdev(0x55d1d23ec000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 11:50:26 np0005601226 ceph-osd[85858]: bdev(0x55d1d23ec000 /var/lib/ceph/osd/ceph-0/block) close
Jan 29 11:50:26 np0005601226 ceph-osd[85858]: bdev(0x55d1d23ec000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 29 11:50:26 np0005601226 ceph-osd[85858]: bdev(0x55d1d23ec000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 29 11:50:26 np0005601226 ceph-osd[85858]: bdev(0x55d1d23ec000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 11:50:26 np0005601226 ceph-osd[85858]: bdev(0x55d1d23ec000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 11:50:26 np0005601226 ceph-osd[85858]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 29 11:50:26 np0005601226 ceph-osd[85858]: bdev(0x55d1d23ec400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 29 11:50:26 np0005601226 ceph-osd[85858]: bdev(0x55d1d23ec400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 29 11:50:26 np0005601226 ceph-osd[85858]: bdev(0x55d1d23ec400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 11:50:26 np0005601226 ceph-osd[85858]: bdev(0x55d1d23ec400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 11:50:26 np0005601226 ceph-osd[85858]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Jan 29 11:50:26 np0005601226 ceph-osd[85858]: bdev(0x55d1d23ec400 /var/lib/ceph/osd/ceph-0/block) close
Jan 29 11:50:26 np0005601226 ceph-osd[85858]: bdev(0x55d1d23ec000 /var/lib/ceph/osd/ceph-0/block) close
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: load: jerasure load: lrc 
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: bdev(0x55d1d23edc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: bdev(0x55d1d23edc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: bdev(0x55d1d23edc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: bdev(0x55d1d23edc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: bdev(0x55d1d23edc00 /var/lib/ceph/osd/ceph-0/block) close
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: bdev(0x55d1d23edc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: bdev(0x55d1d23edc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: bdev(0x55d1d23edc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: bdev(0x55d1d23edc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: bdev(0x55d1d23edc00 /var/lib/ceph/osd/ceph-0/block) close
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: bdev(0x55d1d23edc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: bdev(0x55d1d23edc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: bdev(0x55d1d23edc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: bdev(0x55d1d23edc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: bdev(0x55d1d23edc00 /var/lib/ceph/osd/ceph-0/block) close
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: bdev(0x55d1d23edc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: bdev(0x55d1d23edc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: bdev(0x55d1d23edc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: bdev(0x55d1d23edc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: bdev(0x55d1d23edc00 /var/lib/ceph/osd/ceph-0/block) close
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: bdev(0x55d1d23edc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: bdev(0x55d1d23edc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: bdev(0x55d1d23edc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: bdev(0x55d1d23edc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: bdev(0x55d1d23edc00 /var/lib/ceph/osd/ceph-0/block) close
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: bdev(0x55d1d23edc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: bdev(0x55d1d23edc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: bdev(0x55d1d23edc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: bdev(0x55d1d23edc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: bdev(0x55d1d308d800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: bdev(0x55d1d308d800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: bdev(0x55d1d308d800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: bdev(0x55d1d308d800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: bluefs mount
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: bluefs mount shared_bdev_used = 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: RocksDB version: 7.9.2
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Git sha 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: DB SUMMARY
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: DB Session ID:  ZNT0FWRTXZ7ZENX2X9US
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: CURRENT file:  CURRENT
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: IDENTITY file:  IDENTITY
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                         Options.error_if_exists: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                       Options.create_if_missing: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                         Options.paranoid_checks: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                                     Options.env: 0x55d1d227dea0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                                Options.info_log: 0x55d1d32d88a0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.max_file_opening_threads: 16
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                              Options.statistics: (nil)
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                               Options.use_fsync: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                       Options.max_log_file_size: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                         Options.allow_fallocate: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                        Options.use_direct_reads: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.create_missing_column_families: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                              Options.db_log_dir: 
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                                 Options.wal_dir: db.wal
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.advise_random_on_open: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                    Options.write_buffer_manager: 0x55d1d317eb40
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                            Options.rate_limiter: (nil)
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.unordered_write: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                               Options.row_cache: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                              Options.wal_filter: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.allow_ingest_behind: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.two_write_queues: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.manual_wal_flush: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.wal_compression: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.atomic_flush: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                 Options.log_readahead_size: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.allow_data_in_errors: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.db_host_id: __hostname__
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.max_background_jobs: 4
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.max_background_compactions: -1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.max_subcompactions: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                          Options.max_open_files: -1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                          Options.bytes_per_sync: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.max_background_flushes: -1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Compression algorithms supported:
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: 	kZSTD supported: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: 	kXpressCompression supported: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: 	kBZip2Compression supported: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: 	kLZ4Compression supported: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: 	kZlibCompression supported: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: 	kLZ4HCCompression supported: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: 	kSnappyCompression supported: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.compaction_filter: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d1d32d8c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d1d22818d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.compression: LZ4
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.num_levels: 7
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                           Options.bloom_locality: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                               Options.ttl: 2592000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                       Options.enable_blob_files: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                           Options.min_blob_size: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:           Options.merge_operator: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.compaction_filter: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d1d32d8c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d1d22818d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.compression: LZ4
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.num_levels: 7
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                           Options.bloom_locality: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                               Options.ttl: 2592000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                       Options.enable_blob_files: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                           Options.min_blob_size: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:           Options.merge_operator: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.compaction_filter: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d1d32d8c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d1d22818d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.compression: LZ4
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.num_levels: 7
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                           Options.bloom_locality: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                               Options.ttl: 2592000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                       Options.enable_blob_files: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                           Options.min_blob_size: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:           Options.merge_operator: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.compaction_filter: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d1d32d8c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d1d22818d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.compression: LZ4
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.num_levels: 7
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                           Options.bloom_locality: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                               Options.ttl: 2592000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                       Options.enable_blob_files: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                           Options.min_blob_size: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:           Options.merge_operator: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.compaction_filter: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d1d32d8c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d1d22818d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.compression: LZ4
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.num_levels: 7
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                           Options.bloom_locality: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                               Options.ttl: 2592000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                       Options.enable_blob_files: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                           Options.min_blob_size: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:           Options.merge_operator: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.compaction_filter: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d1d32d8c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d1d22818d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.compression: LZ4
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.num_levels: 7
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                           Options.bloom_locality: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                               Options.ttl: 2592000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                       Options.enable_blob_files: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                           Options.min_blob_size: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:           Options.merge_operator: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.compaction_filter: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d1d32d8c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55d1d22818d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.compression: LZ4
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.num_levels: 7
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                           Options.bloom_locality: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                               Options.ttl: 2592000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                       Options.enable_blob_files: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                           Options.min_blob_size: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:           Options.merge_operator: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.compaction_filter: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d1d32d8c80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55d1d2281a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.compression: LZ4
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.num_levels: 7
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                           Options.bloom_locality: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                               Options.ttl: 2592000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                       Options.enable_blob_files: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                           Options.min_blob_size: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:           Options.merge_operator: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.compaction_filter: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d1d32d8c80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55d1d2281a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.compression: LZ4
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.num_levels: 7
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                           Options.bloom_locality: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                               Options.ttl: 2592000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                       Options.enable_blob_files: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                           Options.min_blob_size: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:           Options.merge_operator: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.compaction_filter: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d1d32d8c80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d1d2281a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.compression: LZ4
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.num_levels: 7
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                           Options.bloom_locality: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                               Options.ttl: 2592000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                       Options.enable_blob_files: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                           Options.min_blob_size: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: [db/column_family.cc:635]     (skipping printing options)
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: [db/column_family.cc:635]     (skipping printing options)
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 4902724c-5ef6-4a3a-a65f-3682fa221cdb
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769705427213692, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769705427215272, "job": 1, "event": "recovery_finished"}
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: freelist init
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: freelist _read_cfg
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: bluestore(/var/lib/ceph/osd/ceph-0) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: bluefs umount
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: bdev(0x55d1d308d800 /var/lib/ceph/osd/ceph-0/block) close
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: bdev(0x55d1d308d800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: bdev(0x55d1d308d800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: bdev(0x55d1d308d800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: bdev(0x55d1d308d800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: bluefs mount
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: bluefs mount shared_bdev_used = 27262976
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: RocksDB version: 7.9.2
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Git sha 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: DB SUMMARY
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: DB Session ID:  ZNT0FWRTXZ7ZENX2X9UT
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: CURRENT file:  CURRENT
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: IDENTITY file:  IDENTITY
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                         Options.error_if_exists: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                       Options.create_if_missing: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                         Options.paranoid_checks: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                                     Options.env: 0x55d1d227dce0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                                Options.info_log: 0x55d1d32d8960
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.max_file_opening_threads: 16
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                              Options.statistics: (nil)
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                               Options.use_fsync: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                       Options.max_log_file_size: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                         Options.allow_fallocate: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                        Options.use_direct_reads: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.create_missing_column_families: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                              Options.db_log_dir: 
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                                 Options.wal_dir: db.wal
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.advise_random_on_open: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                    Options.write_buffer_manager: 0x55d1d317eb40
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                            Options.rate_limiter: (nil)
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.unordered_write: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                               Options.row_cache: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                              Options.wal_filter: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.allow_ingest_behind: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.two_write_queues: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.manual_wal_flush: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.wal_compression: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.atomic_flush: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                 Options.log_readahead_size: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.allow_data_in_errors: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.db_host_id: __hostname__
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.max_background_jobs: 4
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.max_background_compactions: -1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.max_subcompactions: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                          Options.max_open_files: -1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                          Options.bytes_per_sync: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.max_background_flushes: -1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Compression algorithms supported:
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: 	kZSTD supported: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: 	kXpressCompression supported: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: 	kBZip2Compression supported: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: 	kLZ4Compression supported: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: 	kZlibCompression supported: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: 	kLZ4HCCompression supported: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: 	kSnappyCompression supported: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.compaction_filter: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d1d32d8bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d1d22818d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.compression: LZ4
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.num_levels: 7
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                           Options.bloom_locality: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                               Options.ttl: 2592000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                       Options.enable_blob_files: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                           Options.min_blob_size: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:           Options.merge_operator: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.compaction_filter: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d1d32d8bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d1d22818d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.compression: LZ4
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.num_levels: 7
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                           Options.bloom_locality: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                               Options.ttl: 2592000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                       Options.enable_blob_files: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                           Options.min_blob_size: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:           Options.merge_operator: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.compaction_filter: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d1d32d8bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d1d22818d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.compression: LZ4
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.num_levels: 7
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                           Options.bloom_locality: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                               Options.ttl: 2592000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                       Options.enable_blob_files: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                           Options.min_blob_size: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:           Options.merge_operator: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.compaction_filter: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d1d32d8bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55d1d22818d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.compression: LZ4
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.num_levels: 7
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                           Options.bloom_locality: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                               Options.ttl: 2592000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                       Options.enable_blob_files: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                           Options.min_blob_size: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:           Options.merge_operator: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.compaction_filter: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d1d32d8bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55d1d22818d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.compression: LZ4
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.num_levels: 7
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                           Options.bloom_locality: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                               Options.ttl: 2592000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                       Options.enable_blob_files: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                           Options.min_blob_size: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:           Options.merge_operator: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.compaction_filter: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d1d32d8bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d1d22818d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.compression: LZ4
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.num_levels: 7
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                           Options.bloom_locality: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                               Options.ttl: 2592000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                       Options.enable_blob_files: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                           Options.min_blob_size: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:           Options.merge_operator: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.compaction_filter: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d1d32d8bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d1d22818d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.compression: LZ4
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.num_levels: 7
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                           Options.bloom_locality: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                               Options.ttl: 2592000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                       Options.enable_blob_files: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                           Options.min_blob_size: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:           Options.merge_operator: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.compaction_filter: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d1d32d90c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d1d2281a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.compression: LZ4
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.num_levels: 7
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                           Options.bloom_locality: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                               Options.ttl: 2592000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                       Options.enable_blob_files: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                           Options.min_blob_size: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:           Options.merge_operator: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.compaction_filter: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d1d32d90c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d1d2281a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.compression: LZ4
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.num_levels: 7
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                           Options.bloom_locality: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                               Options.ttl: 2592000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                       Options.enable_blob_files: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                           Options.min_blob_size: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:           Options.merge_operator: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.compaction_filter: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d1d32d90c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d1d2281a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.compression: LZ4
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.num_levels: 7
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                           Options.bloom_locality: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                               Options.ttl: 2592000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                       Options.enable_blob_files: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                           Options.min_blob_size: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 4902724c-5ef6-4a3a-a65f-3682fa221cdb
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769705427270866, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769705427276380, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769705427, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4902724c-5ef6-4a3a-a65f-3682fa221cdb", "db_session_id": "ZNT0FWRTXZ7ZENX2X9UT", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769705427280329, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769705427, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4902724c-5ef6-4a3a-a65f-3682fa221cdb", "db_session_id": "ZNT0FWRTXZ7ZENX2X9UT", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769705427283842, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769705427, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4902724c-5ef6-4a3a-a65f-3682fa221cdb", "db_session_id": "ZNT0FWRTXZ7ZENX2X9UT", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769705427285506, "job": 1, "event": "recovery_finished"}
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55d1d32da000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: DB pointer 0x55d1d3492000
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55d1d22818d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55d1d22818d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55d1d22818d0#2 capacity: 460.80 MB usag
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: _get_class not permitted to load lua
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: _get_class not permitted to load sdk
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: osd.0 0 load_pgs
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: osd.0 0 load_pgs opened 0 pgs
Jan 29 11:50:27 np0005601226 ceph-osd[85858]: osd.0 0 log_to_monitors true
Jan 29 11:50:27 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-0[85854]: 2026-01-29T16:50:27.322+0000 7f6e61c4e8c0 -1 osd.0 0 log_to_monitors true
Jan 29 11:50:27 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0)
Jan 29 11:50:27 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3740124297,v1:192.168.122.100:6803/3740124297]' entity='osd.0' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} : dispatch
Jan 29 11:50:27 np0005601226 podman[86397]: 2026-01-29 16:50:27.371634318 +0000 UTC m=+0.034767442 container create c79c046f3fa98193bcb8f91d09c96e2a0bcfc4e1125187200dc6de1bc06daf2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_bhabha, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 11:50:27 np0005601226 systemd[1]: Started libpod-conmon-c79c046f3fa98193bcb8f91d09c96e2a0bcfc4e1125187200dc6de1bc06daf2a.scope.
Jan 29 11:50:27 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:50:27 np0005601226 podman[86397]: 2026-01-29 16:50:27.446701556 +0000 UTC m=+0.109834680 container init c79c046f3fa98193bcb8f91d09c96e2a0bcfc4e1125187200dc6de1bc06daf2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_bhabha, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 11:50:27 np0005601226 podman[86397]: 2026-01-29 16:50:27.355278177 +0000 UTC m=+0.018411291 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:50:27 np0005601226 podman[86397]: 2026-01-29 16:50:27.45357301 +0000 UTC m=+0.116706124 container start c79c046f3fa98193bcb8f91d09c96e2a0bcfc4e1125187200dc6de1bc06daf2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_bhabha, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 29 11:50:27 np0005601226 podman[86397]: 2026-01-29 16:50:27.458145159 +0000 UTC m=+0.121278293 container attach c79c046f3fa98193bcb8f91d09c96e2a0bcfc4e1125187200dc6de1bc06daf2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_bhabha, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 11:50:27 np0005601226 beautiful_bhabha[86413]: 167 167
Jan 29 11:50:27 np0005601226 systemd[1]: libpod-c79c046f3fa98193bcb8f91d09c96e2a0bcfc4e1125187200dc6de1bc06daf2a.scope: Deactivated successfully.
Jan 29 11:50:27 np0005601226 podman[86397]: 2026-01-29 16:50:27.459900618 +0000 UTC m=+0.123033772 container died c79c046f3fa98193bcb8f91d09c96e2a0bcfc4e1125187200dc6de1bc06daf2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_bhabha, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Jan 29 11:50:27 np0005601226 systemd[1]: var-lib-containers-storage-overlay-c1e01cd344ccdbcaed68641c3c9448d2b6a97b6522bb942cb4fca0e21770a7cc-merged.mount: Deactivated successfully.
Jan 29 11:50:27 np0005601226 podman[86397]: 2026-01-29 16:50:27.503125087 +0000 UTC m=+0.166258231 container remove c79c046f3fa98193bcb8f91d09c96e2a0bcfc4e1125187200dc6de1bc06daf2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 11:50:27 np0005601226 systemd[1]: libpod-conmon-c79c046f3fa98193bcb8f91d09c96e2a0bcfc4e1125187200dc6de1bc06daf2a.scope: Deactivated successfully.
Jan 29 11:50:27 np0005601226 podman[86443]: 2026-01-29 16:50:27.731135749 +0000 UTC m=+0.054978111 container create a9fdbec6e86091efb39a127cda245c18c573e7ec492020db548d84c72a91733a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-1-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True)
Jan 29 11:50:27 np0005601226 systemd[1]: Started libpod-conmon-a9fdbec6e86091efb39a127cda245c18c573e7ec492020db548d84c72a91733a.scope.
Jan 29 11:50:27 np0005601226 podman[86443]: 2026-01-29 16:50:27.708737117 +0000 UTC m=+0.032579509 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:50:27 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:50:27 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14b26b454af5405009d9b8a6b1722cfb80802950d7112477bdaa1a445350425e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:27 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14b26b454af5405009d9b8a6b1722cfb80802950d7112477bdaa1a445350425e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:27 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14b26b454af5405009d9b8a6b1722cfb80802950d7112477bdaa1a445350425e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:27 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14b26b454af5405009d9b8a6b1722cfb80802950d7112477bdaa1a445350425e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:27 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14b26b454af5405009d9b8a6b1722cfb80802950d7112477bdaa1a445350425e/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:27 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:27 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:27 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Jan 29 11:50:27 np0005601226 ceph-mon[75233]: Deploying daemon osd.1 on compute-0
Jan 29 11:50:27 np0005601226 ceph-mon[75233]: from='osd.0 [v2:192.168.122.100:6802/3740124297,v1:192.168.122.100:6803/3740124297]' entity='osd.0' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} : dispatch
Jan 29 11:50:27 np0005601226 podman[86443]: 2026-01-29 16:50:27.843128218 +0000 UTC m=+0.166970610 container init a9fdbec6e86091efb39a127cda245c18c573e7ec492020db548d84c72a91733a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-1-activate-test, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True)
Jan 29 11:50:27 np0005601226 podman[86443]: 2026-01-29 16:50:27.851111784 +0000 UTC m=+0.174954136 container start a9fdbec6e86091efb39a127cda245c18c573e7ec492020db548d84c72a91733a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-1-activate-test, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 11:50:27 np0005601226 podman[86443]: 2026-01-29 16:50:27.855325213 +0000 UTC m=+0.179167635 container attach a9fdbec6e86091efb39a127cda245c18c573e7ec492020db548d84c72a91733a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-1-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 29 11:50:28 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-1-activate-test[86459]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Jan 29 11:50:28 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-1-activate-test[86459]:                            [--no-systemd] [--no-tmpfs]
Jan 29 11:50:28 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-1-activate-test[86459]: ceph-volume activate: error: unrecognized arguments: --bad-option
Jan 29 11:50:28 np0005601226 systemd[1]: libpod-a9fdbec6e86091efb39a127cda245c18c573e7ec492020db548d84c72a91733a.scope: Deactivated successfully.
Jan 29 11:50:28 np0005601226 podman[86443]: 2026-01-29 16:50:28.04455833 +0000 UTC m=+0.368400692 container died a9fdbec6e86091efb39a127cda245c18c573e7ec492020db548d84c72a91733a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-1-activate-test, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 11:50:28 np0005601226 systemd[1]: var-lib-containers-storage-overlay-14b26b454af5405009d9b8a6b1722cfb80802950d7112477bdaa1a445350425e-merged.mount: Deactivated successfully.
Jan 29 11:50:28 np0005601226 podman[86443]: 2026-01-29 16:50:28.094054606 +0000 UTC m=+0.417896928 container remove a9fdbec6e86091efb39a127cda245c18c573e7ec492020db548d84c72a91733a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-1-activate-test, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 11:50:28 np0005601226 systemd[1]: libpod-conmon-a9fdbec6e86091efb39a127cda245c18c573e7ec492020db548d84c72a91733a.scope: Deactivated successfully.
Jan 29 11:50:28 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Jan 29 11:50:28 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 29 11:50:28 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3740124297,v1:192.168.122.100:6803/3740124297]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Jan 29 11:50:28 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e7 e7: 3 total, 0 up, 3 in
Jan 29 11:50:28 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e7: 3 total, 0 up, 3 in
Jan 29 11:50:28 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Jan 29 11:50:28 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3740124297,v1:192.168.122.100:6803/3740124297]' entity='osd.0' cmd={"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 29 11:50:28 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.0' initial_weight 0.02 at location {host=compute-0,root=default}
Jan 29 11:50:28 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 29 11:50:28 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 29 11:50:28 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 29 11:50:28 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 29 11:50:28 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 29 11:50:28 np0005601226 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 29 11:50:28 np0005601226 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 29 11:50:28 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 29 11:50:28 np0005601226 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 29 11:50:28 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 29 11:50:28 np0005601226 systemd[1]: Reloading.
Jan 29 11:50:28 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Jan 29 11:50:28 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Jan 29 11:50:28 np0005601226 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 11:50:28 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 11:50:28 np0005601226 ceph-mgr[75527]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 29 11:50:28 np0005601226 systemd[1]: Reloading.
Jan 29 11:50:28 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 11:50:28 np0005601226 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 11:50:28 np0005601226 systemd[1]: Starting Ceph osd.1 for cc5c72e3-31e0-58b9-8731-456117d38f4a...
Jan 29 11:50:29 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Jan 29 11:50:29 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 29 11:50:29 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3740124297,v1:192.168.122.100:6803/3740124297]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 29 11:50:29 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e8 e8: 3 total, 0 up, 3 in
Jan 29 11:50:29 np0005601226 ceph-osd[85858]: osd.0 0 done with init, starting boot process
Jan 29 11:50:29 np0005601226 ceph-osd[85858]: osd.0 0 start_boot
Jan 29 11:50:29 np0005601226 ceph-osd[85858]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Jan 29 11:50:29 np0005601226 ceph-osd[85858]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Jan 29 11:50:29 np0005601226 ceph-osd[85858]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Jan 29 11:50:29 np0005601226 ceph-osd[85858]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Jan 29 11:50:29 np0005601226 ceph-osd[85858]: osd.0 0  bench count 12288000 bsize 4 KiB
Jan 29 11:50:29 np0005601226 podman[86623]: 2026-01-29 16:50:29.155182298 +0000 UTC m=+0.064402987 container create 72ed2071fc79351c032260ef4a7b9a390a20aee662df2c454fa2ec1f2f399da7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-1-activate, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 29 11:50:29 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e8: 3 total, 0 up, 3 in
Jan 29 11:50:29 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 29 11:50:29 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 29 11:50:29 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 29 11:50:29 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 29 11:50:29 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 29 11:50:29 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 29 11:50:29 np0005601226 ceph-mon[75233]: from='osd.0 [v2:192.168.122.100:6802/3740124297,v1:192.168.122.100:6803/3740124297]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Jan 29 11:50:29 np0005601226 ceph-mon[75233]: from='osd.0 [v2:192.168.122.100:6802/3740124297,v1:192.168.122.100:6803/3740124297]' entity='osd.0' cmd={"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 29 11:50:29 np0005601226 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 29 11:50:29 np0005601226 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 29 11:50:29 np0005601226 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 29 11:50:29 np0005601226 ceph-mgr[75527]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3740124297; not ready for session (expect reconnect)
Jan 29 11:50:29 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 29 11:50:29 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 29 11:50:29 np0005601226 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 29 11:50:29 np0005601226 podman[86623]: 2026-01-29 16:50:29.126352665 +0000 UTC m=+0.035573364 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:50:29 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:50:29 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf1410a64320b386093841dce1fe822843dadd1b22c50b11289050d2657d7455/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:29 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf1410a64320b386093841dce1fe822843dadd1b22c50b11289050d2657d7455/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:29 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf1410a64320b386093841dce1fe822843dadd1b22c50b11289050d2657d7455/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:29 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf1410a64320b386093841dce1fe822843dadd1b22c50b11289050d2657d7455/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:29 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf1410a64320b386093841dce1fe822843dadd1b22c50b11289050d2657d7455/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:29 np0005601226 podman[86623]: 2026-01-29 16:50:29.29880898 +0000 UTC m=+0.208029679 container init 72ed2071fc79351c032260ef4a7b9a390a20aee662df2c454fa2ec1f2f399da7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-1-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 11:50:29 np0005601226 podman[86623]: 2026-01-29 16:50:29.308230025 +0000 UTC m=+0.217450684 container start 72ed2071fc79351c032260ef4a7b9a390a20aee662df2c454fa2ec1f2f399da7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-1-activate, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 11:50:29 np0005601226 podman[86623]: 2026-01-29 16:50:29.329615069 +0000 UTC m=+0.238835738 container attach 72ed2071fc79351c032260ef4a7b9a390a20aee662df2c454fa2ec1f2f399da7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-1-activate, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 11:50:29 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-1-activate[86638]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 29 11:50:29 np0005601226 bash[86623]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 29 11:50:29 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-1-activate[86638]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 29 11:50:29 np0005601226 bash[86623]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 29 11:50:29 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:50:30 np0005601226 lvm[86724]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 11:50:30 np0005601226 lvm[86724]: VG ceph_vg1 finished
Jan 29 11:50:30 np0005601226 lvm[86721]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 11:50:30 np0005601226 lvm[86721]: VG ceph_vg0 finished
Jan 29 11:50:30 np0005601226 lvm[86726]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 11:50:30 np0005601226 lvm[86726]: VG ceph_vg2 finished
Jan 29 11:50:30 np0005601226 ceph-mgr[75527]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3740124297; not ready for session (expect reconnect)
Jan 29 11:50:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 29 11:50:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 29 11:50:30 np0005601226 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 29 11:50:30 np0005601226 ceph-mon[75233]: from='osd.0 [v2:192.168.122.100:6802/3740124297,v1:192.168.122.100:6803/3740124297]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 29 11:50:30 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-1-activate[86638]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 29 11:50:30 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-1-activate[86638]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 29 11:50:30 np0005601226 bash[86623]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 29 11:50:30 np0005601226 bash[86623]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 29 11:50:30 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-1-activate[86638]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 29 11:50:30 np0005601226 bash[86623]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 29 11:50:30 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 29 11:50:30 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-1-activate[86638]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 29 11:50:30 np0005601226 bash[86623]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 29 11:50:30 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-1-activate[86638]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Jan 29 11:50:30 np0005601226 bash[86623]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Jan 29 11:50:30 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-1-activate[86638]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Jan 29 11:50:30 np0005601226 bash[86623]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Jan 29 11:50:30 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-1-activate[86638]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Jan 29 11:50:30 np0005601226 bash[86623]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Jan 29 11:50:30 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-1-activate[86638]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Jan 29 11:50:30 np0005601226 bash[86623]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Jan 29 11:50:30 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-1-activate[86638]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 29 11:50:30 np0005601226 bash[86623]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 29 11:50:30 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-1-activate[86638]: --> ceph-volume lvm activate successful for osd ID: 1
Jan 29 11:50:30 np0005601226 bash[86623]: --> ceph-volume lvm activate successful for osd ID: 1
Jan 29 11:50:30 np0005601226 systemd[1]: libpod-72ed2071fc79351c032260ef4a7b9a390a20aee662df2c454fa2ec1f2f399da7.scope: Deactivated successfully.
Jan 29 11:50:30 np0005601226 systemd[1]: libpod-72ed2071fc79351c032260ef4a7b9a390a20aee662df2c454fa2ec1f2f399da7.scope: Consumed 1.533s CPU time.
Jan 29 11:50:30 np0005601226 ceph-mgr[75527]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 29 11:50:30 np0005601226 podman[86839]: 2026-01-29 16:50:30.586372318 +0000 UTC m=+0.043116447 container died 72ed2071fc79351c032260ef4a7b9a390a20aee662df2c454fa2ec1f2f399da7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-1-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 11:50:30 np0005601226 systemd[1]: var-lib-containers-storage-overlay-cf1410a64320b386093841dce1fe822843dadd1b22c50b11289050d2657d7455-merged.mount: Deactivated successfully.
Jan 29 11:50:30 np0005601226 podman[86839]: 2026-01-29 16:50:30.745297971 +0000 UTC m=+0.202042040 container remove 72ed2071fc79351c032260ef4a7b9a390a20aee662df2c454fa2ec1f2f399da7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-1-activate, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 11:50:31 np0005601226 podman[86898]: 2026-01-29 16:50:31.02249668 +0000 UTC m=+0.075540502 container create 5904e6a7a5f4cd853bd547e633f6914a7d7fddb8a617f251091bcd261f73b994 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-1, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 29 11:50:31 np0005601226 podman[86898]: 2026-01-29 16:50:30.981694139 +0000 UTC m=+0.034737951 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:50:31 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/639626dc268f65944f0d4728c850a21468037e809b73eac00201820f92dd08ce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:31 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/639626dc268f65944f0d4728c850a21468037e809b73eac00201820f92dd08ce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:31 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/639626dc268f65944f0d4728c850a21468037e809b73eac00201820f92dd08ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:31 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/639626dc268f65944f0d4728c850a21468037e809b73eac00201820f92dd08ce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:31 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/639626dc268f65944f0d4728c850a21468037e809b73eac00201820f92dd08ce/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:31 np0005601226 podman[86898]: 2026-01-29 16:50:31.137879674 +0000 UTC m=+0.190923506 container init 5904e6a7a5f4cd853bd547e633f6914a7d7fddb8a617f251091bcd261f73b994 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-1, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 29 11:50:31 np0005601226 podman[86898]: 2026-01-29 16:50:31.142227237 +0000 UTC m=+0.195271039 container start 5904e6a7a5f4cd853bd547e633f6914a7d7fddb8a617f251091bcd261f73b994 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-1, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 11:50:31 np0005601226 bash[86898]: 5904e6a7a5f4cd853bd547e633f6914a7d7fddb8a617f251091bcd261f73b994
Jan 29 11:50:31 np0005601226 systemd[1]: Started Ceph osd.1 for cc5c72e3-31e0-58b9-8731-456117d38f4a.
Jan 29 11:50:31 np0005601226 ceph-mgr[75527]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3740124297; not ready for session (expect reconnect)
Jan 29 11:50:31 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 29 11:50:31 np0005601226 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 29 11:50:31 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: set uid:gid to 167:167 (ceph:ceph)
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: pidfile_write: ignore empty --pid-file
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bdev(0x55f5109c0000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bdev(0x55f5109c0000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bdev(0x55f5109c0000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bdev(0x55f5109c0000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bdev(0x55f5109c0000 /var/lib/ceph/osd/ceph-1/block) close
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bdev(0x55f5109c0000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bdev(0x55f5109c0000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bdev(0x55f5109c0000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bdev(0x55f5109c0000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bdev(0x55f5109c0000 /var/lib/ceph/osd/ceph-1/block) close
Jan 29 11:50:31 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bdev(0x55f5109c0000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bdev(0x55f5109c0000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bdev(0x55f5109c0000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bdev(0x55f5109c0000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bdev(0x55f5109c0000 /var/lib/ceph/osd/ceph-1/block) close
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bdev(0x55f5109c0000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bdev(0x55f5109c0000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bdev(0x55f5109c0000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bdev(0x55f5109c0000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bdev(0x55f5109c0000 /var/lib/ceph/osd/ceph-1/block) close
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bdev(0x55f5109c0000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bdev(0x55f5109c0000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bdev(0x55f5109c0000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 11:50:31 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bdev(0x55f5109c0000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bdev(0x55f5109c0000 /var/lib/ceph/osd/ceph-1/block) close
Jan 29 11:50:31 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bdev(0x55f5109c0000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bdev(0x55f5109c0000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bdev(0x55f5109c0000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bdev(0x55f5109c0000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bdev(0x55f5109c0400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bdev(0x55f5109c0400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bdev(0x55f5109c0400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bdev(0x55f5109c0400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bdev(0x55f5109c0400 /var/lib/ceph/osd/ceph-1/block) close
Jan 29 11:50:31 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:31 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Jan 29 11:50:31 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Jan 29 11:50:31 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 11:50:31 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 11:50:31 np0005601226 ceph-mgr[75527]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-0
Jan 29 11:50:31 np0005601226 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bdev(0x55f5109c0000 /var/lib/ceph/osd/ceph-1/block) close
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: load: jerasure load: lrc 
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bdev(0x55f5109c1c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bdev(0x55f5109c1c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bdev(0x55f5109c1c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bdev(0x55f5109c1c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bdev(0x55f5109c1c00 /var/lib/ceph/osd/ceph-1/block) close
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bdev(0x55f5109c1c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bdev(0x55f5109c1c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bdev(0x55f5109c1c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bdev(0x55f5109c1c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bdev(0x55f5109c1c00 /var/lib/ceph/osd/ceph-1/block) close
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bdev(0x55f5109c1c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bdev(0x55f5109c1c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bdev(0x55f5109c1c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bdev(0x55f5109c1c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bdev(0x55f5109c1c00 /var/lib/ceph/osd/ceph-1/block) close
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bdev(0x55f5109c1c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bdev(0x55f5109c1c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bdev(0x55f5109c1c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bdev(0x55f5109c1c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bdev(0x55f5109c1c00 /var/lib/ceph/osd/ceph-1/block) close
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bdev(0x55f5109c1c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bdev(0x55f5109c1c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bdev(0x55f5109c1c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bdev(0x55f5109c1c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bdev(0x55f5109c1c00 /var/lib/ceph/osd/ceph-1/block) close
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bdev(0x55f5109c1c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bdev(0x55f5109c1c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bdev(0x55f5109c1c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bdev(0x55f5109c1c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bdev(0x55f511657800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bdev(0x55f511657800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bdev(0x55f511657800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bdev(0x55f511657800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bluefs mount
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bluefs mount shared_bdev_used = 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: RocksDB version: 7.9.2
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Git sha 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: DB SUMMARY
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: DB Session ID:  QBTL02ZKS2MNJMHL0RHY
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: CURRENT file:  CURRENT
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: IDENTITY file:  IDENTITY
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                         Options.error_if_exists: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                       Options.create_if_missing: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                         Options.paranoid_checks: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                                     Options.env: 0x55f510851ea0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                                Options.info_log: 0x55f5118a28a0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.max_file_opening_threads: 16
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                              Options.statistics: (nil)
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                               Options.use_fsync: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                       Options.max_log_file_size: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                         Options.allow_fallocate: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                        Options.use_direct_reads: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.create_missing_column_families: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                              Options.db_log_dir: 
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                                 Options.wal_dir: db.wal
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.advise_random_on_open: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                    Options.write_buffer_manager: 0x55f5108b6b40
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                            Options.rate_limiter: (nil)
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.unordered_write: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                               Options.row_cache: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                              Options.wal_filter: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.allow_ingest_behind: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.two_write_queues: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.manual_wal_flush: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.wal_compression: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.atomic_flush: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                 Options.log_readahead_size: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.allow_data_in_errors: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.db_host_id: __hostname__
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.max_background_jobs: 4
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.max_background_compactions: -1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.max_subcompactions: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                          Options.max_open_files: -1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                          Options.bytes_per_sync: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.max_background_flushes: -1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Compression algorithms supported:
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: #011kZSTD supported: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: #011kXpressCompression supported: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: #011kBZip2Compression supported: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: #011kLZ4Compression supported: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: #011kZlibCompression supported: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: #011kLZ4HCCompression supported: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: #011kSnappyCompression supported: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.compaction_filter: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f5118a2c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f5108558d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.compression: LZ4
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.num_levels: 7
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                           Options.bloom_locality: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                               Options.ttl: 2592000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                       Options.enable_blob_files: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                           Options.min_blob_size: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:           Options.merge_operator: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.compaction_filter: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f5118a2c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f5108558d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.compression: LZ4
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.num_levels: 7
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                           Options.bloom_locality: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                               Options.ttl: 2592000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                       Options.enable_blob_files: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                           Options.min_blob_size: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:           Options.merge_operator: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.compaction_filter: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f5118a2c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f5108558d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.compression: LZ4
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.num_levels: 7
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                           Options.bloom_locality: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                               Options.ttl: 2592000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                       Options.enable_blob_files: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                           Options.min_blob_size: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:           Options.merge_operator: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.compaction_filter: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f5118a2c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f5108558d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.compression: LZ4
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.num_levels: 7
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                           Options.bloom_locality: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                               Options.ttl: 2592000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                       Options.enable_blob_files: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                           Options.min_blob_size: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:           Options.merge_operator: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.compaction_filter: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f5118a2c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f5108558d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.compression: LZ4
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.num_levels: 7
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                           Options.bloom_locality: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                               Options.ttl: 2592000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                       Options.enable_blob_files: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                           Options.min_blob_size: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:           Options.merge_operator: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.compaction_filter: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f5118a2c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f5108558d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.compression: LZ4
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.num_levels: 7
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                           Options.bloom_locality: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                               Options.ttl: 2592000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                       Options.enable_blob_files: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                           Options.min_blob_size: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:           Options.merge_operator: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.compaction_filter: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f5118a2c60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f5108558d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.compression: LZ4
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.num_levels: 7
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                           Options.bloom_locality: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                               Options.ttl: 2592000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                       Options.enable_blob_files: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                           Options.min_blob_size: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:           Options.merge_operator: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.compaction_filter: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f5118a2c80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55f510855a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.compression: LZ4
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.num_levels: 7
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                           Options.bloom_locality: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                               Options.ttl: 2592000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                       Options.enable_blob_files: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                           Options.min_blob_size: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:           Options.merge_operator: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.compaction_filter: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f5118a2c80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55f510855a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.compression: LZ4
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.num_levels: 7
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                           Options.bloom_locality: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                               Options.ttl: 2592000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                       Options.enable_blob_files: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                           Options.min_blob_size: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:           Options.merge_operator: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.compaction_filter: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f5118a2c80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55f510855a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.compression: LZ4
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.num_levels: 7
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                           Options.bloom_locality: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                               Options.ttl: 2592000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                       Options.enable_blob_files: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                           Options.min_blob_size: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 7c3d27c9-2286-46e9-88b2-17051a4ef250
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769705431569236, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769705431571059, "job": 1, "event": "recovery_finished"}
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: freelist init
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: freelist _read_cfg
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bluefs umount
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bdev(0x55f511657800 /var/lib/ceph/osd/ceph-1/block) close
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bdev(0x55f511657800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bdev(0x55f511657800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bdev(0x55f511657800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bdev(0x55f511657800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bluefs mount
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bluefs mount shared_bdev_used = 27262976
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: RocksDB version: 7.9.2
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Git sha 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: DB SUMMARY
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: DB Session ID:  QBTL02ZKS2MNJMHL0RHZ
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: CURRENT file:  CURRENT
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: IDENTITY file:  IDENTITY
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                         Options.error_if_exists: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                       Options.create_if_missing: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                         Options.paranoid_checks: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                                     Options.env: 0x55f511a72a80
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                                Options.info_log: 0x55f5118a2960
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.max_file_opening_threads: 16
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                              Options.statistics: (nil)
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                               Options.use_fsync: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                       Options.max_log_file_size: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                         Options.allow_fallocate: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                        Options.use_direct_reads: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.create_missing_column_families: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                              Options.db_log_dir: 
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                                 Options.wal_dir: db.wal
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.advise_random_on_open: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                    Options.write_buffer_manager: 0x55f5108b7900
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                            Options.rate_limiter: (nil)
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.unordered_write: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                               Options.row_cache: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                              Options.wal_filter: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.allow_ingest_behind: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.two_write_queues: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.manual_wal_flush: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.wal_compression: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.atomic_flush: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                 Options.log_readahead_size: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.allow_data_in_errors: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.db_host_id: __hostname__
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.max_background_jobs: 4
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.max_background_compactions: -1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.max_subcompactions: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                          Options.max_open_files: -1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                          Options.bytes_per_sync: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.max_background_flushes: -1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Compression algorithms supported:
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: 	kZSTD supported: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: 	kXpressCompression supported: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: 	kBZip2Compression supported: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: 	kLZ4Compression supported: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: 	kZlibCompression supported: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: 	kLZ4HCCompression supported: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: 	kSnappyCompression supported: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.compaction_filter: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f5118a2bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55f5108558d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.compression: LZ4
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.num_levels: 7
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                           Options.bloom_locality: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                               Options.ttl: 2592000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                       Options.enable_blob_files: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                           Options.min_blob_size: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:           Options.merge_operator: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.compaction_filter: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f5118a2bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f5108558d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.compression: LZ4
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.num_levels: 7
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                           Options.bloom_locality: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                               Options.ttl: 2592000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                       Options.enable_blob_files: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                           Options.min_blob_size: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:           Options.merge_operator: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.compaction_filter: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f5118a2bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f5108558d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.compression: LZ4
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.num_levels: 7
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                           Options.bloom_locality: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                               Options.ttl: 2592000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                       Options.enable_blob_files: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                           Options.min_blob_size: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:           Options.merge_operator: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.compaction_filter: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f5118a2bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f5108558d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.compression: LZ4
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.num_levels: 7
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                           Options.bloom_locality: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                               Options.ttl: 2592000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                       Options.enable_blob_files: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                           Options.min_blob_size: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:           Options.merge_operator: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.compaction_filter: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f5118a2bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f5108558d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.compression: LZ4
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.num_levels: 7
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                           Options.bloom_locality: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                               Options.ttl: 2592000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                       Options.enable_blob_files: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                           Options.min_blob_size: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:           Options.merge_operator: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.compaction_filter: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f5118a2bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f5108558d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.compression: LZ4
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.num_levels: 7
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                           Options.bloom_locality: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                               Options.ttl: 2592000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                       Options.enable_blob_files: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                           Options.min_blob_size: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:           Options.merge_operator: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.compaction_filter: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f5118a2bc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f5108558d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.compression: LZ4
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.num_levels: 7
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                           Options.bloom_locality: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                               Options.ttl: 2592000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                       Options.enable_blob_files: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                           Options.min_blob_size: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:           Options.merge_operator: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.compaction_filter: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f5118a30c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55f510855a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.compression: LZ4
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.num_levels: 7
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                           Options.bloom_locality: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                               Options.ttl: 2592000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                       Options.enable_blob_files: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                           Options.min_blob_size: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:           Options.merge_operator: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.compaction_filter: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f5118a30c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55f510855a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.compression: LZ4
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.num_levels: 7
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                           Options.bloom_locality: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                               Options.ttl: 2592000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                       Options.enable_blob_files: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                           Options.min_blob_size: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:           Options.merge_operator: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.compaction_filter: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f5118a30c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55f510855a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.compression: LZ4
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.num_levels: 7
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                           Options.bloom_locality: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                               Options.ttl: 2592000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                       Options.enable_blob_files: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                           Options.min_blob_size: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 7c3d27c9-2286-46e9-88b2-17051a4ef250
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769705431622369, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769705431629551, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769705431, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7c3d27c9-2286-46e9-88b2-17051a4ef250", "db_session_id": "QBTL02ZKS2MNJMHL0RHZ", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769705431655727, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769705431, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7c3d27c9-2286-46e9-88b2-17051a4ef250", "db_session_id": "QBTL02ZKS2MNJMHL0RHZ", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769705431659070, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769705431, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7c3d27c9-2286-46e9-88b2-17051a4ef250", "db_session_id": "QBTL02ZKS2MNJMHL0RHZ", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769705431681175, "job": 1, "event": "recovery_finished"}
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55f511abc000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: DB pointer 0x55f511a5c000
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: rocksdb: [db/db_impl/db_impl.cc:1111] 
** DB Stats **
Uptime(secs): 0.2 total, 0.2 interval
Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.2 total, 0.2 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55f5108558d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.2 total, 0.2 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55f5108558d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.2 total, 0.2 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55f5108558d0#2 capacity: 460.80 MB usag
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: _get_class not permitted to load lua
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: _get_class not permitted to load sdk
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: osd.1 0 load_pgs
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: osd.1 0 load_pgs opened 0 pgs
Jan 29 11:50:31 np0005601226 ceph-osd[86917]: osd.1 0 log_to_monitors true
Jan 29 11:50:31 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-1[86913]: 2026-01-29T16:50:31.785+0000 7fe8bb22a8c0 -1 osd.1 0 log_to_monitors true
Jan 29 11:50:31 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0)
Jan 29 11:50:31 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/88388076,v1:192.168.122.100:6807/88388076]' entity='osd.1' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} : dispatch
Jan 29 11:50:31 np0005601226 podman[87460]: 2026-01-29 16:50:31.897642906 +0000 UTC m=+0.066097775 container create acd5902a82fcf163f0a46e60b79ca77601ba58485a70b35d8c3450b57578d104 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_easley, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 11:50:31 np0005601226 systemd[1]: Started libpod-conmon-acd5902a82fcf163f0a46e60b79ca77601ba58485a70b35d8c3450b57578d104.scope.
Jan 29 11:50:31 np0005601226 podman[87460]: 2026-01-29 16:50:31.867446344 +0000 UTC m=+0.035901273 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:50:31 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:50:31 np0005601226 podman[87460]: 2026-01-29 16:50:31.987300535 +0000 UTC m=+0.155755424 container init acd5902a82fcf163f0a46e60b79ca77601ba58485a70b35d8c3450b57578d104 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_easley, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 11:50:31 np0005601226 podman[87460]: 2026-01-29 16:50:31.99172497 +0000 UTC m=+0.160179839 container start acd5902a82fcf163f0a46e60b79ca77601ba58485a70b35d8c3450b57578d104 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 29 11:50:31 np0005601226 sweet_easley[87477]: 167 167
Jan 29 11:50:31 np0005601226 systemd[1]: libpod-acd5902a82fcf163f0a46e60b79ca77601ba58485a70b35d8c3450b57578d104.scope: Deactivated successfully.
Jan 29 11:50:32 np0005601226 podman[87460]: 2026-01-29 16:50:32.009545383 +0000 UTC m=+0.178000272 container attach acd5902a82fcf163f0a46e60b79ca77601ba58485a70b35d8c3450b57578d104 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_easley, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Jan 29 11:50:32 np0005601226 podman[87460]: 2026-01-29 16:50:32.010020546 +0000 UTC m=+0.178475425 container died acd5902a82fcf163f0a46e60b79ca77601ba58485a70b35d8c3450b57578d104 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_easley, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 29 11:50:32 np0005601226 systemd[1]: var-lib-containers-storage-overlay-221bdb7c6512065e1687a20a41af4f61a61a415e4c65bba7a02ed64fbe2226d7-merged.mount: Deactivated successfully.
Jan 29 11:50:32 np0005601226 ceph-mgr[75527]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3740124297; not ready for session (expect reconnect)
Jan 29 11:50:32 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 29 11:50:32 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 29 11:50:32 np0005601226 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 29 11:50:32 np0005601226 podman[87460]: 2026-01-29 16:50:32.218355732 +0000 UTC m=+0.386810631 container remove acd5902a82fcf163f0a46e60b79ca77601ba58485a70b35d8c3450b57578d104 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_easley, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:50:32 np0005601226 systemd[1]: libpod-conmon-acd5902a82fcf163f0a46e60b79ca77601ba58485a70b35d8c3450b57578d104.scope: Deactivated successfully.
Jan 29 11:50:32 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:32 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:32 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Jan 29 11:50:32 np0005601226 ceph-mon[75233]: Deploying daemon osd.2 on compute-0
Jan 29 11:50:32 np0005601226 ceph-mon[75233]: from='osd.1 [v2:192.168.122.100:6806/88388076,v1:192.168.122.100:6807/88388076]' entity='osd.1' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} : dispatch
Jan 29 11:50:32 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 29 11:50:32 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Jan 29 11:50:32 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 29 11:50:32 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/88388076,v1:192.168.122.100:6807/88388076]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Jan 29 11:50:32 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e9 e9: 3 total, 0 up, 3 in
Jan 29 11:50:32 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e9: 3 total, 0 up, 3 in
Jan 29 11:50:32 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Jan 29 11:50:32 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/88388076,v1:192.168.122.100:6807/88388076]' entity='osd.1' cmd={"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 29 11:50:32 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e9 create-or-move crush item name 'osd.1' initial_weight 0.02 at location {host=compute-0,root=default}
Jan 29 11:50:32 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 29 11:50:32 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 29 11:50:32 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 29 11:50:32 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 29 11:50:32 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 29 11:50:32 np0005601226 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 29 11:50:32 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 29 11:50:32 np0005601226 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 29 11:50:32 np0005601226 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 29 11:50:32 np0005601226 podman[87506]: 2026-01-29 16:50:32.502847917 +0000 UTC m=+0.060958681 container create 739d738d036bd0374c9b494e815d5d7257173213ccc1a5f879bb17135312a59e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-2-activate-test, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 11:50:32 np0005601226 systemd[1]: Started libpod-conmon-739d738d036bd0374c9b494e815d5d7257173213ccc1a5f879bb17135312a59e.scope.
Jan 29 11:50:32 np0005601226 ceph-mgr[75527]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 29 11:50:32 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:50:32 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4fe7c66da996ff5deee82e83b7c9ed186d3f5896241ea8ccf2adbc917d797da/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:32 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4fe7c66da996ff5deee82e83b7c9ed186d3f5896241ea8ccf2adbc917d797da/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:32 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4fe7c66da996ff5deee82e83b7c9ed186d3f5896241ea8ccf2adbc917d797da/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:32 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4fe7c66da996ff5deee82e83b7c9ed186d3f5896241ea8ccf2adbc917d797da/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:32 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4fe7c66da996ff5deee82e83b7c9ed186d3f5896241ea8ccf2adbc917d797da/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:32 np0005601226 podman[87506]: 2026-01-29 16:50:32.471476162 +0000 UTC m=+0.029587026 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:50:32 np0005601226 podman[87506]: 2026-01-29 16:50:32.581036533 +0000 UTC m=+0.139147307 container init 739d738d036bd0374c9b494e815d5d7257173213ccc1a5f879bb17135312a59e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-2-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 11:50:32 np0005601226 ceph-osd[85858]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 33.558 iops: 8590.859 elapsed_sec: 0.349
Jan 29 11:50:32 np0005601226 ceph-osd[85858]: log_channel(cluster) log [WRN] : OSD bench result of 8590.858639 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 29 11:50:32 np0005601226 ceph-osd[85858]: osd.0 0 waiting for initial osdmap
Jan 29 11:50:32 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-0[85854]: 2026-01-29T16:50:32.584+0000 7f6e5e3e2640 -1 osd.0 0 waiting for initial osdmap
Jan 29 11:50:32 np0005601226 podman[87506]: 2026-01-29 16:50:32.588299027 +0000 UTC m=+0.146409791 container start 739d738d036bd0374c9b494e815d5d7257173213ccc1a5f879bb17135312a59e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-2-activate-test, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 29 11:50:32 np0005601226 podman[87506]: 2026-01-29 16:50:32.59442175 +0000 UTC m=+0.152532514 container attach 739d738d036bd0374c9b494e815d5d7257173213ccc1a5f879bb17135312a59e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-2-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 29 11:50:32 np0005601226 ceph-osd[85858]: osd.0 9 crush map has features 288514050185494528, adjusting msgr requires for clients
Jan 29 11:50:32 np0005601226 ceph-osd[85858]: osd.0 9 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Jan 29 11:50:32 np0005601226 ceph-osd[85858]: osd.0 9 crush map has features 3314932999778484224, adjusting msgr requires for osds
Jan 29 11:50:32 np0005601226 ceph-osd[85858]: osd.0 9 check_osdmap_features require_osd_release unknown -> tentacle
Jan 29 11:50:32 np0005601226 ceph-osd[85858]: osd.0 9 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 29 11:50:32 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-0[85854]: 2026-01-29T16:50:32.607+0000 7f6e589d5640 -1 osd.0 9 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 29 11:50:32 np0005601226 ceph-osd[85858]: osd.0 9 set_numa_affinity not setting numa affinity
Jan 29 11:50:32 np0005601226 ceph-osd[85858]: osd.0 9 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
Jan 29 11:50:32 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-2-activate-test[87523]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Jan 29 11:50:32 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-2-activate-test[87523]:                            [--no-systemd] [--no-tmpfs]
Jan 29 11:50:32 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-2-activate-test[87523]: ceph-volume activate: error: unrecognized arguments: --bad-option
Jan 29 11:50:32 np0005601226 systemd[1]: libpod-739d738d036bd0374c9b494e815d5d7257173213ccc1a5f879bb17135312a59e.scope: Deactivated successfully.
Jan 29 11:50:32 np0005601226 podman[87506]: 2026-01-29 16:50:32.764902309 +0000 UTC m=+0.323013103 container died 739d738d036bd0374c9b494e815d5d7257173213ccc1a5f879bb17135312a59e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-2-activate-test, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 11:50:32 np0005601226 systemd[1]: var-lib-containers-storage-overlay-a4fe7c66da996ff5deee82e83b7c9ed186d3f5896241ea8ccf2adbc917d797da-merged.mount: Deactivated successfully.
Jan 29 11:50:32 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Jan 29 11:50:32 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Jan 29 11:50:32 np0005601226 podman[87506]: 2026-01-29 16:50:32.798025673 +0000 UTC m=+0.356136437 container remove 739d738d036bd0374c9b494e815d5d7257173213ccc1a5f879bb17135312a59e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-2-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 29 11:50:32 np0005601226 systemd[1]: libpod-conmon-739d738d036bd0374c9b494e815d5d7257173213ccc1a5f879bb17135312a59e.scope: Deactivated successfully.
Jan 29 11:50:33 np0005601226 systemd[1]: Reloading.
Jan 29 11:50:33 np0005601226 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 11:50:33 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 11:50:33 np0005601226 ceph-mgr[75527]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3740124297; not ready for session (expect reconnect)
Jan 29 11:50:33 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 29 11:50:33 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 29 11:50:33 np0005601226 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 29 11:50:33 np0005601226 systemd[1]: Reloading.
Jan 29 11:50:33 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 11:50:33 np0005601226 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 11:50:33 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Jan 29 11:50:33 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 29 11:50:33 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/88388076,v1:192.168.122.100:6807/88388076]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 29 11:50:33 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e10 e10: 3 total, 1 up, 3 in
Jan 29 11:50:33 np0005601226 ceph-osd[86917]: osd.1 0 done with init, starting boot process
Jan 29 11:50:33 np0005601226 ceph-osd[86917]: osd.1 0 start_boot
Jan 29 11:50:33 np0005601226 ceph-osd[86917]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Jan 29 11:50:33 np0005601226 ceph-osd[86917]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Jan 29 11:50:33 np0005601226 ceph-osd[86917]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Jan 29 11:50:33 np0005601226 ceph-osd[86917]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Jan 29 11:50:33 np0005601226 ceph-osd[86917]: osd.1 0  bench count 12288000 bsize 4 KiB
Jan 29 11:50:33 np0005601226 ceph-mon[75233]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/3740124297,v1:192.168.122.100:6803/3740124297] boot
Jan 29 11:50:33 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e10: 3 total, 1 up, 3 in
Jan 29 11:50:33 np0005601226 ceph-osd[85858]: osd.0 10 state: booting -> active
Jan 29 11:50:33 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 29 11:50:33 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 29 11:50:33 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 29 11:50:33 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 29 11:50:33 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 29 11:50:33 np0005601226 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 29 11:50:33 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 29 11:50:33 np0005601226 ceph-mon[75233]: from='osd.1 [v2:192.168.122.100:6806/88388076,v1:192.168.122.100:6807/88388076]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Jan 29 11:50:33 np0005601226 ceph-mon[75233]: from='osd.1 [v2:192.168.122.100:6806/88388076,v1:192.168.122.100:6807/88388076]' entity='osd.1' cmd={"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 29 11:50:33 np0005601226 ceph-mon[75233]: OSD bench result of 8590.858639 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 29 11:50:33 np0005601226 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 29 11:50:33 np0005601226 ceph-mgr[75527]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/88388076; not ready for session (expect reconnect)
Jan 29 11:50:33 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 29 11:50:33 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 29 11:50:33 np0005601226 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 29 11:50:33 np0005601226 systemd[1]: Starting Ceph osd.2 for cc5c72e3-31e0-58b9-8731-456117d38f4a...
Jan 29 11:50:33 np0005601226 podman[87681]: 2026-01-29 16:50:33.71217605 +0000 UTC m=+0.052099201 container create c499dc8b506fce6066316c3880c94f46879e8ffb4d5f33cb35f753e500a42013 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-2-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:50:33 np0005601226 podman[87681]: 2026-01-29 16:50:33.683419948 +0000 UTC m=+0.023343119 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:50:33 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:50:33 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2b6f7dc21d397d6642006f3ea251bbdad155c665d1ff75f0ac15a462c42b420/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:33 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2b6f7dc21d397d6642006f3ea251bbdad155c665d1ff75f0ac15a462c42b420/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:33 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2b6f7dc21d397d6642006f3ea251bbdad155c665d1ff75f0ac15a462c42b420/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:33 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2b6f7dc21d397d6642006f3ea251bbdad155c665d1ff75f0ac15a462c42b420/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:33 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2b6f7dc21d397d6642006f3ea251bbdad155c665d1ff75f0ac15a462c42b420/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:33 np0005601226 podman[87681]: 2026-01-29 16:50:33.822071909 +0000 UTC m=+0.161995080 container init c499dc8b506fce6066316c3880c94f46879e8ffb4d5f33cb35f753e500a42013 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-2-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 29 11:50:33 np0005601226 podman[87681]: 2026-01-29 16:50:33.828576533 +0000 UTC m=+0.168499704 container start c499dc8b506fce6066316c3880c94f46879e8ffb4d5f33cb35f753e500a42013 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-2-activate, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 11:50:33 np0005601226 podman[87681]: 2026-01-29 16:50:33.842956138 +0000 UTC m=+0.182879309 container attach c499dc8b506fce6066316c3880c94f46879e8ffb4d5f33cb35f753e500a42013 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-2-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:50:33 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-2-activate[87696]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 29 11:50:33 np0005601226 bash[87681]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 29 11:50:34 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-2-activate[87696]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 29 11:50:34 np0005601226 bash[87681]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 29 11:50:34 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v29: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Jan 29 11:50:34 np0005601226 ceph-mgr[75527]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/88388076; not ready for session (expect reconnect)
Jan 29 11:50:34 np0005601226 lvm[87782]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 11:50:34 np0005601226 lvm[87782]: VG ceph_vg1 finished
Jan 29 11:50:34 np0005601226 lvm[87781]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 11:50:34 np0005601226 lvm[87781]: VG ceph_vg0 finished
Jan 29 11:50:34 np0005601226 lvm[87784]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 11:50:34 np0005601226 lvm[87784]: VG ceph_vg2 finished
Jan 29 11:50:34 np0005601226 ceph-mgr[75527]: [devicehealth INFO root] creating mgr pool
Jan 29 11:50:34 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Jan 29 11:50:34 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e10 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 29 11:50:34 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 29 11:50:34 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 29 11:50:34 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0)
Jan 29 11:50:34 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} : dispatch
Jan 29 11:50:34 np0005601226 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 29 11:50:34 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-2-activate[87696]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 29 11:50:34 np0005601226 bash[87681]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 29 11:50:34 np0005601226 bash[87681]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 29 11:50:34 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-2-activate[87696]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 29 11:50:34 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e11 e11: 3 total, 1 up, 3 in
Jan 29 11:50:34 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-2-activate[87696]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 29 11:50:34 np0005601226 bash[87681]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 29 11:50:34 np0005601226 ceph-mon[75233]: from='osd.1 [v2:192.168.122.100:6806/88388076,v1:192.168.122.100:6807/88388076]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 29 11:50:34 np0005601226 ceph-mon[75233]: osd.0 [v2:192.168.122.100:6802/3740124297,v1:192.168.122.100:6803/3740124297] boot
Jan 29 11:50:34 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e11: 3 total, 1 up, 3 in
Jan 29 11:50:34 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 29 11:50:34 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 29 11:50:34 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 29 11:50:34 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 29 11:50:34 np0005601226 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 29 11:50:34 np0005601226 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 29 11:50:34 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-2-activate[87696]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 29 11:50:34 np0005601226 bash[87681]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 29 11:50:34 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-2-activate[87696]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Jan 29 11:50:34 np0005601226 bash[87681]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Jan 29 11:50:34 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-2-activate[87696]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Jan 29 11:50:34 np0005601226 bash[87681]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Jan 29 11:50:34 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-2-activate[87696]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Jan 29 11:50:34 np0005601226 bash[87681]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Jan 29 11:50:34 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-2-activate[87696]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Jan 29 11:50:34 np0005601226 bash[87681]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Jan 29 11:50:34 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-2-activate[87696]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 29 11:50:34 np0005601226 bash[87681]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 29 11:50:34 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-2-activate[87696]: --> ceph-volume lvm activate successful for osd ID: 2
Jan 29 11:50:34 np0005601226 bash[87681]: --> ceph-volume lvm activate successful for osd ID: 2
Jan 29 11:50:34 np0005601226 systemd[1]: libpod-c499dc8b506fce6066316c3880c94f46879e8ffb4d5f33cb35f753e500a42013.scope: Deactivated successfully.
Jan 29 11:50:34 np0005601226 systemd[1]: libpod-c499dc8b506fce6066316c3880c94f46879e8ffb4d5f33cb35f753e500a42013.scope: Consumed 1.278s CPU time.
Jan 29 11:50:34 np0005601226 conmon[87696]: conmon c499dc8b506fce606631 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c499dc8b506fce6066316c3880c94f46879e8ffb4d5f33cb35f753e500a42013.scope/container/memory.events
Jan 29 11:50:34 np0005601226 podman[87681]: 2026-01-29 16:50:34.921824852 +0000 UTC m=+1.261748013 container died c499dc8b506fce6066316c3880c94f46879e8ffb4d5f33cb35f753e500a42013 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-2-activate, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True)
Jan 29 11:50:34 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e11 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:50:34 np0005601226 systemd[1]: var-lib-containers-storage-overlay-f2b6f7dc21d397d6642006f3ea251bbdad155c665d1ff75f0ac15a462c42b420-merged.mount: Deactivated successfully.
Jan 29 11:50:35 np0005601226 podman[87681]: 2026-01-29 16:50:35.055129531 +0000 UTC m=+1.395052692 container remove c499dc8b506fce6066316c3880c94f46879e8ffb4d5f33cb35f753e500a42013 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-2-activate, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 11:50:35 np0005601226 podman[87939]: 2026-01-29 16:50:35.259814915 +0000 UTC m=+0.074661437 container create 238789ad62449b41a47bfe29a872e83abc39837e9ebcbbcea27d659dbf318b89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:50:35 np0005601226 podman[87939]: 2026-01-29 16:50:35.2039994 +0000 UTC m=+0.018845902 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:50:35 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f931fa844f64d395cc50cfe76b75ac8dae4f1421eefd9c5fc6a7bea51ba474b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:35 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f931fa844f64d395cc50cfe76b75ac8dae4f1421eefd9c5fc6a7bea51ba474b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:35 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f931fa844f64d395cc50cfe76b75ac8dae4f1421eefd9c5fc6a7bea51ba474b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:35 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f931fa844f64d395cc50cfe76b75ac8dae4f1421eefd9c5fc6a7bea51ba474b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:35 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f931fa844f64d395cc50cfe76b75ac8dae4f1421eefd9c5fc6a7bea51ba474b/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:35 np0005601226 ceph-mgr[75527]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/88388076; not ready for session (expect reconnect)
Jan 29 11:50:35 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 29 11:50:35 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 29 11:50:35 np0005601226 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 29 11:50:35 np0005601226 podman[87939]: 2026-01-29 16:50:35.407074449 +0000 UTC m=+0.221921011 container init 238789ad62449b41a47bfe29a872e83abc39837e9ebcbbcea27d659dbf318b89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:50:35 np0005601226 podman[87939]: 2026-01-29 16:50:35.414738065 +0000 UTC m=+0.229584577 container start 238789ad62449b41a47bfe29a872e83abc39837e9ebcbbcea27d659dbf318b89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:50:35 np0005601226 bash[87939]: 238789ad62449b41a47bfe29a872e83abc39837e9ebcbbcea27d659dbf318b89
Jan 29 11:50:35 np0005601226 systemd[1]: Started Ceph osd.2 for cc5c72e3-31e0-58b9-8731-456117d38f4a.
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: set uid:gid to 167:167 (ceph:ceph)
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: pidfile_write: ignore empty --pid-file
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bdev(0x55a688f18000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bdev(0x55a688f18000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bdev(0x55a688f18000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bdev(0x55a688f18000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bdev(0x55a688f18000 /var/lib/ceph/osd/ceph-2/block) close
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bdev(0x55a688f18000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bdev(0x55a688f18000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bdev(0x55a688f18000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bdev(0x55a688f18000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bdev(0x55a688f18000 /var/lib/ceph/osd/ceph-2/block) close
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bdev(0x55a688f18000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bdev(0x55a688f18000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bdev(0x55a688f18000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bdev(0x55a688f18000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bdev(0x55a688f18000 /var/lib/ceph/osd/ceph-2/block) close
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bdev(0x55a688f18000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bdev(0x55a688f18000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bdev(0x55a688f18000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bdev(0x55a688f18000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bdev(0x55a688f18000 /var/lib/ceph/osd/ceph-2/block) close
Jan 29 11:50:35 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 11:50:35 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:35 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bdev(0x55a688f18000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bdev(0x55a688f18000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bdev(0x55a688f18000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bdev(0x55a688f18000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bdev(0x55a688f18000 /var/lib/ceph/osd/ceph-2/block) close
Jan 29 11:50:35 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bdev(0x55a688f18000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bdev(0x55a688f18000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bdev(0x55a688f18000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bdev(0x55a688f18000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bdev(0x55a688f18400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bdev(0x55a688f18400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bdev(0x55a688f18400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bdev(0x55a688f18400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bdev(0x55a688f18400 /var/lib/ceph/osd/ceph-2/block) close
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bdev(0x55a688f18000 /var/lib/ceph/osd/ceph-2/block) close
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: load: jerasure load: lrc 
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bdev(0x55a688f19c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bdev(0x55a688f19c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bdev(0x55a688f19c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bdev(0x55a688f19c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bdev(0x55a688f19c00 /var/lib/ceph/osd/ceph-2/block) close
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bdev(0x55a688f19c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bdev(0x55a688f19c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bdev(0x55a688f19c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bdev(0x55a688f19c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bdev(0x55a688f19c00 /var/lib/ceph/osd/ceph-2/block) close
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bdev(0x55a688f19c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bdev(0x55a688f19c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bdev(0x55a688f19c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bdev(0x55a688f19c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bdev(0x55a688f19c00 /var/lib/ceph/osd/ceph-2/block) close
Jan 29 11:50:35 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Jan 29 11:50:35 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e11 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 29 11:50:35 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} : dispatch
Jan 29 11:50:35 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:35 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bdev(0x55a688f19c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bdev(0x55a688f19c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bdev(0x55a688f19c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bdev(0x55a688f19c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bdev(0x55a688f19c00 /var/lib/ceph/osd/ceph-2/block) close
Jan 29 11:50:35 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Jan 29 11:50:35 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e12 e12: 3 total, 1 up, 3 in
Jan 29 11:50:35 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e12 crush map has features 3314933000852226048, adjusting msgr requires
Jan 29 11:50:35 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e12 crush map has features 288514051259236352, adjusting msgr requires
Jan 29 11:50:35 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e12 crush map has features 288514051259236352, adjusting msgr requires
Jan 29 11:50:35 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e12 crush map has features 288514051259236352, adjusting msgr requires
Jan 29 11:50:35 np0005601226 ceph-osd[85858]: osd.0 12 crush map has features 288514051259236352, adjusting msgr requires for clients
Jan 29 11:50:35 np0005601226 ceph-osd[85858]: osd.0 12 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Jan 29 11:50:35 np0005601226 ceph-osd[85858]: osd.0 12 crush map has features 3314933000852226048, adjusting msgr requires for osds
Jan 29 11:50:35 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e12: 3 total, 1 up, 3 in
Jan 29 11:50:35 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 29 11:50:35 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 29 11:50:35 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 29 11:50:35 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 29 11:50:35 np0005601226 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 29 11:50:35 np0005601226 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bdev(0x55a688f19c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bdev(0x55a688f19c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bdev(0x55a688f19c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bdev(0x55a688f19c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bdev(0x55a688f19c00 /var/lib/ceph/osd/ceph-2/block) close
Jan 29 11:50:35 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0)
Jan 29 11:50:35 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} : dispatch
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bdev(0x55a688f19c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bdev(0x55a688f19c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bdev(0x55a688f19c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bdev(0x55a688f19c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bdev(0x55a689baf800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bdev(0x55a689baf800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bdev(0x55a689baf800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bdev(0x55a689baf800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bluefs mount
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bluefs mount shared_bdev_used = 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: RocksDB version: 7.9.2
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Git sha 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: DB SUMMARY
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: DB Session ID:  LZ5GGV1QUSUDLFGH8I1W
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: CURRENT file:  CURRENT
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: IDENTITY file:  IDENTITY
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                         Options.error_if_exists: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                       Options.create_if_missing: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                         Options.paranoid_checks: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                                     Options.env: 0x55a688da9ea0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                                Options.info_log: 0x55a689dfa8a0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.max_file_opening_threads: 16
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                              Options.statistics: (nil)
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                               Options.use_fsync: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                       Options.max_log_file_size: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                         Options.allow_fallocate: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                        Options.use_direct_reads: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.create_missing_column_families: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                              Options.db_log_dir: 
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                                 Options.wal_dir: db.wal
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.advise_random_on_open: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                    Options.write_buffer_manager: 0x55a688e0eb40
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                            Options.rate_limiter: (nil)
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.unordered_write: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                               Options.row_cache: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                              Options.wal_filter: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.allow_ingest_behind: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.two_write_queues: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.manual_wal_flush: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.wal_compression: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.atomic_flush: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                 Options.log_readahead_size: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.allow_data_in_errors: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.db_host_id: __hostname__
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.max_background_jobs: 4
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.max_background_compactions: -1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.max_subcompactions: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                          Options.max_open_files: -1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                          Options.bytes_per_sync: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.max_background_flushes: -1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Compression algorithms supported:
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: 	kZSTD supported: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: 	kXpressCompression supported: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: 	kBZip2Compression supported: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: 	kLZ4Compression supported: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: 	kZlibCompression supported: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: 	kLZ4HCCompression supported: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: 	kSnappyCompression supported: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.compaction_filter: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a689dfac60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a688dad8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.compression: LZ4
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.num_levels: 7
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                           Options.bloom_locality: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                               Options.ttl: 2592000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                       Options.enable_blob_files: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                           Options.min_blob_size: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:           Options.merge_operator: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.compaction_filter: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a689dfac60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a688dad8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.compression: LZ4
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.num_levels: 7
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                           Options.bloom_locality: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                               Options.ttl: 2592000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                       Options.enable_blob_files: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                           Options.min_blob_size: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:           Options.merge_operator: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.compaction_filter: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a689dfac60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a688dad8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.compression: LZ4
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.num_levels: 7
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                           Options.bloom_locality: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                               Options.ttl: 2592000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                       Options.enable_blob_files: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                           Options.min_blob_size: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:           Options.merge_operator: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.compaction_filter: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a689dfac60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a688dad8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.compression: LZ4
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.num_levels: 7
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                           Options.bloom_locality: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                               Options.ttl: 2592000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                       Options.enable_blob_files: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                           Options.min_blob_size: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:           Options.merge_operator: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.compaction_filter: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a689dfac60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a688dad8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.compression: LZ4
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.num_levels: 7
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                           Options.bloom_locality: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                               Options.ttl: 2592000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                       Options.enable_blob_files: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                           Options.min_blob_size: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:           Options.merge_operator: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.compaction_filter: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a689dfac60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a688dad8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.compression: LZ4
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.num_levels: 7
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                           Options.bloom_locality: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                               Options.ttl: 2592000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                       Options.enable_blob_files: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                           Options.min_blob_size: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:           Options.merge_operator: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.compaction_filter: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a689dfac60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a688dad8d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.compression: LZ4
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.num_levels: 7
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                           Options.bloom_locality: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                               Options.ttl: 2592000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                       Options.enable_blob_files: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                           Options.min_blob_size: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:           Options.merge_operator: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.compaction_filter: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a689dfac80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a688dada30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.compression: LZ4
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.num_levels: 7
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                           Options.bloom_locality: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                               Options.ttl: 2592000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                       Options.enable_blob_files: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                           Options.min_blob_size: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:           Options.merge_operator: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.compaction_filter: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a689dfac80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a688dada30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.compression: LZ4
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.num_levels: 7
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                           Options.bloom_locality: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                               Options.ttl: 2592000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                       Options.enable_blob_files: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                           Options.min_blob_size: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:           Options.merge_operator: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.compaction_filter: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a689dfac80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a688dada30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.compression: LZ4
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.num_levels: 7
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                           Options.bloom_locality: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                               Options.ttl: 2592000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                       Options.enable_blob_files: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                           Options.min_blob_size: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 5da4b76a-606e-4c1a-a55d-283a034bd94f
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769705435846044, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769705435847710, "job": 1, "event": "recovery_finished"}
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: freelist init
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: freelist _read_cfg
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bluefs umount
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bdev(0x55a689baf800 /var/lib/ceph/osd/ceph-2/block) close
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bdev(0x55a689baf800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bdev(0x55a689baf800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bdev(0x55a689baf800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bdev(0x55a689baf800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bluefs mount
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bluefs mount shared_bdev_used = 27262976
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: RocksDB version: 7.9.2
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Git sha 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: DB SUMMARY
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: DB Session ID:  LZ5GGV1QUSUDLFGH8I1X
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: CURRENT file:  CURRENT
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: IDENTITY file:  IDENTITY
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                         Options.error_if_exists: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                       Options.create_if_missing: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                         Options.paranoid_checks: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                                     Options.env: 0x55a689fcaa80
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                                Options.info_log: 0x55a689dfa960
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.max_file_opening_threads: 16
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                              Options.statistics: (nil)
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                               Options.use_fsync: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                       Options.max_log_file_size: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                         Options.allow_fallocate: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                        Options.use_direct_reads: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.create_missing_column_families: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                              Options.db_log_dir: 
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                                 Options.wal_dir: db.wal
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.advise_random_on_open: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                    Options.write_buffer_manager: 0x55a688e0f900
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                            Options.rate_limiter: (nil)
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.unordered_write: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                               Options.row_cache: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                              Options.wal_filter: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.allow_ingest_behind: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.two_write_queues: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.manual_wal_flush: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.wal_compression: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.atomic_flush: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                 Options.log_readahead_size: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.allow_data_in_errors: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.db_host_id: __hostname__
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.max_background_jobs: 4
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.max_background_compactions: -1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.max_subcompactions: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                          Options.max_open_files: -1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                          Options.bytes_per_sync: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.max_background_flushes: -1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Compression algorithms supported:
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: 	kZSTD supported: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: 	kXpressCompression supported: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: 	kBZip2Compression supported: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: 	kLZ4Compression supported: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: 	kZlibCompression supported: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: 	kLZ4HCCompression supported: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: 	kSnappyCompression supported: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.compaction_filter: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a689dfabc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a688dad8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.compression: LZ4
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.num_levels: 7
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                           Options.bloom_locality: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                               Options.ttl: 2592000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                       Options.enable_blob_files: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                           Options.min_blob_size: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:           Options.merge_operator: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.compaction_filter: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a689dfabc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a688dad8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.compression: LZ4
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.num_levels: 7
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                           Options.bloom_locality: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                               Options.ttl: 2592000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                       Options.enable_blob_files: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                           Options.min_blob_size: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:           Options.merge_operator: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.compaction_filter: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a689dfabc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a688dad8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.compression: LZ4
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.num_levels: 7
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                           Options.bloom_locality: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                               Options.ttl: 2592000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                       Options.enable_blob_files: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                           Options.min_blob_size: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:           Options.merge_operator: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.compaction_filter: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a689dfabc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a688dad8d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.compression: LZ4
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.num_levels: 7
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                           Options.bloom_locality: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                               Options.ttl: 2592000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                       Options.enable_blob_files: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                           Options.min_blob_size: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:           Options.merge_operator: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.compaction_filter: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a689dfabc0)
                                                  cache_index_and_filter_blocks: 1
                                                  cache_index_and_filter_blocks_with_high_priority: 0
                                                  pin_l0_filter_and_index_blocks_in_cache: 0
                                                  pin_top_level_index_and_filter: 1
                                                  index_type: 0
                                                  data_block_index_type: 0
                                                  index_shortening: 1
                                                  data_block_hash_table_util_ratio: 0.750000
                                                  checksum: 4
                                                  no_block_cache: 0
                                                  block_cache: 0x55a688dad8d0
                                                  block_cache_name: BinnedLRUCache
                                                  block_cache_options:
                                                    capacity : 483183820
                                                    num_shard_bits : 4
                                                    strict_capacity_limit : 0
                                                    high_pri_pool_ratio: 0.000
                                                  block_cache_compressed: (nil)
                                                  persistent_cache: (nil)
                                                  block_size: 4096
                                                  block_size_deviation: 10
                                                  block_restart_interval: 16
                                                  index_block_restart_interval: 1
                                                  metadata_block_size: 4096
                                                  partition_filters: 0
                                                  use_delta_encoding: 1
                                                  filter_policy: bloomfilter
                                                  whole_key_filtering: 1
                                                  verify_compression: 0
                                                  read_amp_bytes_per_bit: 0
                                                  format_version: 5
                                                  enable_index_compression: 1
                                                  block_align: 0
                                                  max_auto_readahead_size: 262144
                                                  prepopulate_block_cache: 0
                                                  initial_auto_readahead_size: 8192
                                                  num_file_reads_for_auto_readahead: 2
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.compression: LZ4
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.num_levels: 7
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                           Options.bloom_locality: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                               Options.ttl: 2592000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                       Options.enable_blob_files: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                           Options.min_blob_size: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:           Options.merge_operator: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.compaction_filter: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a689dfabc0)
                                                  cache_index_and_filter_blocks: 1
                                                  cache_index_and_filter_blocks_with_high_priority: 0
                                                  pin_l0_filter_and_index_blocks_in_cache: 0
                                                  pin_top_level_index_and_filter: 1
                                                  index_type: 0
                                                  data_block_index_type: 0
                                                  index_shortening: 1
                                                  data_block_hash_table_util_ratio: 0.750000
                                                  checksum: 4
                                                  no_block_cache: 0
                                                  block_cache: 0x55a688dad8d0
                                                  block_cache_name: BinnedLRUCache
                                                  block_cache_options:
                                                    capacity : 483183820
                                                    num_shard_bits : 4
                                                    strict_capacity_limit : 0
                                                    high_pri_pool_ratio: 0.000
                                                  block_cache_compressed: (nil)
                                                  persistent_cache: (nil)
                                                  block_size: 4096
                                                  block_size_deviation: 10
                                                  block_restart_interval: 16
                                                  index_block_restart_interval: 1
                                                  metadata_block_size: 4096
                                                  partition_filters: 0
                                                  use_delta_encoding: 1
                                                  filter_policy: bloomfilter
                                                  whole_key_filtering: 1
                                                  verify_compression: 0
                                                  read_amp_bytes_per_bit: 0
                                                  format_version: 5
                                                  enable_index_compression: 1
                                                  block_align: 0
                                                  max_auto_readahead_size: 262144
                                                  prepopulate_block_cache: 0
                                                  initial_auto_readahead_size: 8192
                                                  num_file_reads_for_auto_readahead: 2
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.compression: LZ4
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.num_levels: 7
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                           Options.bloom_locality: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                               Options.ttl: 2592000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                       Options.enable_blob_files: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                           Options.min_blob_size: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:           Options.merge_operator: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.compaction_filter: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a689dfabc0)
                                                  cache_index_and_filter_blocks: 1
                                                  cache_index_and_filter_blocks_with_high_priority: 0
                                                  pin_l0_filter_and_index_blocks_in_cache: 0
                                                  pin_top_level_index_and_filter: 1
                                                  index_type: 0
                                                  data_block_index_type: 0
                                                  index_shortening: 1
                                                  data_block_hash_table_util_ratio: 0.750000
                                                  checksum: 4
                                                  no_block_cache: 0
                                                  block_cache: 0x55a688dad8d0
                                                  block_cache_name: BinnedLRUCache
                                                  block_cache_options:
                                                    capacity : 483183820
                                                    num_shard_bits : 4
                                                    strict_capacity_limit : 0
                                                    high_pri_pool_ratio: 0.000
                                                  block_cache_compressed: (nil)
                                                  persistent_cache: (nil)
                                                  block_size: 4096
                                                  block_size_deviation: 10
                                                  block_restart_interval: 16
                                                  index_block_restart_interval: 1
                                                  metadata_block_size: 4096
                                                  partition_filters: 0
                                                  use_delta_encoding: 1
                                                  filter_policy: bloomfilter
                                                  whole_key_filtering: 1
                                                  verify_compression: 0
                                                  read_amp_bytes_per_bit: 0
                                                  format_version: 5
                                                  enable_index_compression: 1
                                                  block_align: 0
                                                  max_auto_readahead_size: 262144
                                                  prepopulate_block_cache: 0
                                                  initial_auto_readahead_size: 8192
                                                  num_file_reads_for_auto_readahead: 2
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.compression: LZ4
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.num_levels: 7
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                           Options.bloom_locality: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                               Options.ttl: 2592000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                       Options.enable_blob_files: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                           Options.min_blob_size: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:           Options.merge_operator: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.compaction_filter: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a689dfb0c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a688dada30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.compression: LZ4
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.num_levels: 7
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                           Options.bloom_locality: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                               Options.ttl: 2592000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                       Options.enable_blob_files: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                           Options.min_blob_size: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:           Options.merge_operator: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.compaction_filter: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a689dfb0c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a688dada30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.compression: LZ4
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.num_levels: 7
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                           Options.bloom_locality: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                               Options.ttl: 2592000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                       Options.enable_blob_files: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                           Options.min_blob_size: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:           Options.merge_operator: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.compaction_filter: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.compaction_filter_factory: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:  Options.sst_partitioner_factory: None
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a689dfb0c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a688dada30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.write_buffer_size: 16777216
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:  Options.max_write_buffer_number: 64
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.compression: LZ4
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:       Options.prefix_extractor: nullptr
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.num_levels: 7
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.compression_opts.level: 32767
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.compression_opts.strategy: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                  Options.compression_opts.enabled: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                        Options.arena_block_size: 1048576
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.disable_auto_compactions: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.inplace_update_support: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                           Options.bloom_locality: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                    Options.max_successive_merges: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.paranoid_file_checks: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.force_consistency_checks: 1
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.report_bg_io_stats: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                               Options.ttl: 2592000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                       Options.enable_blob_files: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                           Options.min_blob_size: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                          Options.blob_file_size: 268435456
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb:                Options.blob_file_starting_level: 0
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 5da4b76a-606e-4c1a-a55d-283a034bd94f
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769705435911604, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769705435939775, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769705435, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5da4b76a-606e-4c1a-a55d-283a034bd94f", "db_session_id": "LZ5GGV1QUSUDLFGH8I1X", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769705435979045, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769705435, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5da4b76a-606e-4c1a-a55d-283a034bd94f", "db_session_id": "LZ5GGV1QUSUDLFGH8I1X", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 29 11:50:35 np0005601226 ceph-osd[87958]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769705435982995, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769705435, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5da4b76a-606e-4c1a-a55d-283a034bd94f", "db_session_id": "LZ5GGV1QUSUDLFGH8I1X", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 29 11:50:35 np0005601226 podman[88262]: 2026-01-29 16:50:35.982455609 +0000 UTC m=+0.073993638 container create 448812271f88d1abfb25ce2909c5fa29da6eebe237b407184827c91375157170 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 29 11:50:36 np0005601226 ceph-osd[87958]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769705436006058, "job": 1, "event": "recovery_finished"}
Jan 29 11:50:36 np0005601226 ceph-osd[87958]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Jan 29 11:50:36 np0005601226 podman[88262]: 2026-01-29 16:50:35.927297704 +0000 UTC m=+0.018835743 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:50:36 np0005601226 systemd[1]: Started libpod-conmon-448812271f88d1abfb25ce2909c5fa29da6eebe237b407184827c91375157170.scope.
Jan 29 11:50:36 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:50:36 np0005601226 podman[88262]: 2026-01-29 16:50:36.086806803 +0000 UTC m=+0.178344872 container init 448812271f88d1abfb25ce2909c5fa29da6eebe237b407184827c91375157170 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_mendeleev, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 29 11:50:36 np0005601226 podman[88262]: 2026-01-29 16:50:36.095511888 +0000 UTC m=+0.187049917 container start 448812271f88d1abfb25ce2909c5fa29da6eebe237b407184827c91375157170 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_mendeleev, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 29 11:50:36 np0005601226 wonderful_mendeleev[88454]: 167 167
Jan 29 11:50:36 np0005601226 systemd[1]: libpod-448812271f88d1abfb25ce2909c5fa29da6eebe237b407184827c91375157170.scope: Deactivated successfully.
Jan 29 11:50:36 np0005601226 podman[88262]: 2026-01-29 16:50:36.109232656 +0000 UTC m=+0.200770675 container attach 448812271f88d1abfb25ce2909c5fa29da6eebe237b407184827c91375157170 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 29 11:50:36 np0005601226 podman[88262]: 2026-01-29 16:50:36.109526564 +0000 UTC m=+0.201064583 container died 448812271f88d1abfb25ce2909c5fa29da6eebe237b407184827c91375157170 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_mendeleev, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True)
Jan 29 11:50:36 np0005601226 ceph-osd[87958]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55a68a014000
Jan 29 11:50:36 np0005601226 ceph-osd[87958]: rocksdb: DB pointer 0x55a689fb4000
Jan 29 11:50:36 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 29 11:50:36 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Jan 29 11:50:36 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Jan 29 11:50:36 np0005601226 ceph-osd[87958]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 29 11:50:36 np0005601226 ceph-osd[87958]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.3 total, 0.3 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.028       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.028       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.028       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.028       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.3 total, 0.3 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55a688dad8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.3 total, 0.3 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55a688dad8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.3 total, 0.3 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55a688dad8d0#2 capacity: 460.80 MB usag
Jan 29 11:50:36 np0005601226 ceph-osd[87958]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Jan 29 11:50:36 np0005601226 ceph-osd[87958]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Jan 29 11:50:36 np0005601226 ceph-osd[87958]: _get_class not permitted to load lua
Jan 29 11:50:36 np0005601226 ceph-osd[87958]: _get_class not permitted to load sdk
Jan 29 11:50:36 np0005601226 ceph-osd[87958]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Jan 29 11:50:36 np0005601226 ceph-osd[87958]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Jan 29 11:50:36 np0005601226 ceph-osd[87958]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Jan 29 11:50:36 np0005601226 ceph-osd[87958]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Jan 29 11:50:36 np0005601226 ceph-osd[87958]: osd.2 0 load_pgs
Jan 29 11:50:36 np0005601226 ceph-osd[87958]: osd.2 0 load_pgs opened 0 pgs
Jan 29 11:50:36 np0005601226 ceph-osd[87958]: osd.2 0 log_to_monitors true
Jan 29 11:50:36 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-2[87954]: 2026-01-29T16:50:36.148+0000 7f96acb4b8c0 -1 osd.2 0 log_to_monitors true
Jan 29 11:50:36 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0)
Jan 29 11:50:36 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/1041515297,v1:192.168.122.100:6811/1041515297]' entity='osd.2' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} : dispatch
Jan 29 11:50:36 np0005601226 systemd[1]: var-lib-containers-storage-overlay-2cc333865a903c6565f100e5929ffa52a3bce6aa5db26ff090d2fdbe1e4fd054-merged.mount: Deactivated successfully.
Jan 29 11:50:36 np0005601226 podman[88262]: 2026-01-29 16:50:36.202454265 +0000 UTC m=+0.293992304 container remove 448812271f88d1abfb25ce2909c5fa29da6eebe237b407184827c91375157170 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_mendeleev, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 29 11:50:36 np0005601226 systemd[1]: libpod-conmon-448812271f88d1abfb25ce2909c5fa29da6eebe237b407184827c91375157170.scope: Deactivated successfully.
Jan 29 11:50:36 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v32: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Jan 29 11:50:36 np0005601226 podman[88513]: 2026-01-29 16:50:36.388482742 +0000 UTC m=+0.068622737 container create e0c749469eb95d3ec346a326c81f72d60c086cbfad8b01bac55ac03ad2048376 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_borg, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 29 11:50:36 np0005601226 ceph-mgr[75527]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/88388076; not ready for session (expect reconnect)
Jan 29 11:50:36 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 29 11:50:36 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 29 11:50:36 np0005601226 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 29 11:50:36 np0005601226 podman[88513]: 2026-01-29 16:50:36.347288911 +0000 UTC m=+0.027428916 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:50:36 np0005601226 systemd[1]: Started libpod-conmon-e0c749469eb95d3ec346a326c81f72d60c086cbfad8b01bac55ac03ad2048376.scope.
Jan 29 11:50:36 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:50:36 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc39e764771518165a6f5c1f4bd85e36d690399afa68bd4fd67df224d8303b83/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:36 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc39e764771518165a6f5c1f4bd85e36d690399afa68bd4fd67df224d8303b83/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:36 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc39e764771518165a6f5c1f4bd85e36d690399afa68bd4fd67df224d8303b83/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:36 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc39e764771518165a6f5c1f4bd85e36d690399afa68bd4fd67df224d8303b83/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:36 np0005601226 podman[88513]: 2026-01-29 16:50:36.500943894 +0000 UTC m=+0.181083889 container init e0c749469eb95d3ec346a326c81f72d60c086cbfad8b01bac55ac03ad2048376 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_borg, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 11:50:36 np0005601226 podman[88513]: 2026-01-29 16:50:36.507548811 +0000 UTC m=+0.187688836 container start e0c749469eb95d3ec346a326c81f72d60c086cbfad8b01bac55ac03ad2048376 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_borg, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 29 11:50:36 np0005601226 podman[88513]: 2026-01-29 16:50:36.519355634 +0000 UTC m=+0.199495619 container attach e0c749469eb95d3ec346a326c81f72d60c086cbfad8b01bac55ac03ad2048376 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:50:36 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Jan 29 11:50:36 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Jan 29 11:50:36 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/1041515297,v1:192.168.122.100:6811/1041515297]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Jan 29 11:50:36 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e13 e13: 3 total, 1 up, 3 in
Jan 29 11:50:36 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e13: 3 total, 1 up, 3 in
Jan 29 11:50:36 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Jan 29 11:50:36 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/1041515297,v1:192.168.122.100:6811/1041515297]' entity='osd.2' cmd={"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 29 11:50:36 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e13 create-or-move crush item name 'osd.2' initial_weight 0.02 at location {host=compute-0,root=default}
Jan 29 11:50:36 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 29 11:50:36 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 29 11:50:36 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 29 11:50:36 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 29 11:50:36 np0005601226 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 29 11:50:36 np0005601226 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 29 11:50:36 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Jan 29 11:50:36 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} : dispatch
Jan 29 11:50:36 np0005601226 ceph-mon[75233]: from='osd.2 [v2:192.168.122.100:6810/1041515297,v1:192.168.122.100:6811/1041515297]' entity='osd.2' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} : dispatch
Jan 29 11:50:37 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Jan 29 11:50:37 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Jan 29 11:50:37 np0005601226 lvm[88604]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 11:50:37 np0005601226 lvm[88604]: VG ceph_vg0 finished
Jan 29 11:50:37 np0005601226 lvm[88606]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 11:50:37 np0005601226 lvm[88606]: VG ceph_vg1 finished
Jan 29 11:50:37 np0005601226 lvm[88607]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 11:50:37 np0005601226 lvm[88607]: VG ceph_vg2 finished
Jan 29 11:50:37 np0005601226 unruffled_borg[88529]: {}
Jan 29 11:50:37 np0005601226 ceph-osd[86917]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 34.950 iops: 8947.109 elapsed_sec: 0.335
Jan 29 11:50:37 np0005601226 ceph-osd[86917]: log_channel(cluster) log [WRN] : OSD bench result of 8947.109040 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 29 11:50:37 np0005601226 ceph-osd[86917]: osd.1 0 waiting for initial osdmap
Jan 29 11:50:37 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-1[86913]: 2026-01-29T16:50:37.333+0000 7fe8b71ac640 -1 osd.1 0 waiting for initial osdmap
Jan 29 11:50:37 np0005601226 ceph-osd[86917]: osd.1 13 crush map has features 288514051259236352, adjusting msgr requires for clients
Jan 29 11:50:37 np0005601226 ceph-osd[86917]: osd.1 13 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Jan 29 11:50:37 np0005601226 ceph-osd[86917]: osd.1 13 crush map has features 3314933000852226048, adjusting msgr requires for osds
Jan 29 11:50:37 np0005601226 ceph-osd[86917]: osd.1 13 check_osdmap_features require_osd_release unknown -> tentacle
Jan 29 11:50:37 np0005601226 systemd[1]: libpod-e0c749469eb95d3ec346a326c81f72d60c086cbfad8b01bac55ac03ad2048376.scope: Deactivated successfully.
Jan 29 11:50:37 np0005601226 systemd[1]: libpod-e0c749469eb95d3ec346a326c81f72d60c086cbfad8b01bac55ac03ad2048376.scope: Consumed 1.154s CPU time.
Jan 29 11:50:37 np0005601226 podman[88513]: 2026-01-29 16:50:37.349329986 +0000 UTC m=+1.029470001 container died e0c749469eb95d3ec346a326c81f72d60c086cbfad8b01bac55ac03ad2048376 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_borg, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 11:50:37 np0005601226 ceph-osd[86917]: osd.1 13 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 29 11:50:37 np0005601226 ceph-osd[86917]: osd.1 13 set_numa_affinity not setting numa affinity
Jan 29 11:50:37 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-1[86913]: 2026-01-29T16:50:37.365+0000 7fe8b1fb1640 -1 osd.1 13 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 29 11:50:37 np0005601226 ceph-osd[86917]: osd.1 13 _collect_metadata loop4:  no unique device id for loop4: fallback method has no model nor serial no unique device path for loop4: no symlink to loop4 in /dev/disk/by-path
Jan 29 11:50:37 np0005601226 systemd[1]: var-lib-containers-storage-overlay-fc39e764771518165a6f5c1f4bd85e36d690399afa68bd4fd67df224d8303b83-merged.mount: Deactivated successfully.
Jan 29 11:50:37 np0005601226 podman[88513]: 2026-01-29 16:50:37.394090368 +0000 UTC m=+1.074230373 container remove e0c749469eb95d3ec346a326c81f72d60c086cbfad8b01bac55ac03ad2048376 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_borg, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 11:50:37 np0005601226 ceph-mgr[75527]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/88388076; not ready for session (expect reconnect)
Jan 29 11:50:37 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 29 11:50:37 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 29 11:50:37 np0005601226 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 29 11:50:37 np0005601226 systemd[1]: libpod-conmon-e0c749469eb95d3ec346a326c81f72d60c086cbfad8b01bac55ac03ad2048376.scope: Deactivated successfully.
Jan 29 11:50:37 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 11:50:37 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:37 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 11:50:37 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:37 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Jan 29 11:50:37 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/1041515297,v1:192.168.122.100:6811/1041515297]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 29 11:50:37 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e14 e14: 3 total, 2 up, 3 in
Jan 29 11:50:37 np0005601226 ceph-osd[87958]: osd.2 0 done with init, starting boot process
Jan 29 11:50:37 np0005601226 ceph-osd[87958]: osd.2 0 start_boot
Jan 29 11:50:37 np0005601226 ceph-osd[87958]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Jan 29 11:50:37 np0005601226 ceph-osd[87958]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Jan 29 11:50:37 np0005601226 ceph-osd[87958]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Jan 29 11:50:37 np0005601226 ceph-osd[87958]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Jan 29 11:50:37 np0005601226 ceph-osd[87958]: osd.2 0  bench count 12288000 bsize 4 KiB
Jan 29 11:50:37 np0005601226 ceph-mon[75233]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6806/88388076,v1:192.168.122.100:6807/88388076] boot
Jan 29 11:50:37 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e14: 3 total, 2 up, 3 in
Jan 29 11:50:37 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 29 11:50:37 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 29 11:50:37 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 29 11:50:37 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 29 11:50:37 np0005601226 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 29 11:50:37 np0005601226 ceph-mgr[75527]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/1041515297; not ready for session (expect reconnect)
Jan 29 11:50:37 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 29 11:50:37 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 29 11:50:37 np0005601226 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 29 11:50:37 np0005601226 ceph-osd[86917]: osd.1 14 state: booting -> active
Jan 29 11:50:37 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 14 pg[1.0( empty local-lis/les=0/0 n=0 ec=12/12 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 pi=[12,14)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:50:37 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Jan 29 11:50:37 np0005601226 ceph-mon[75233]: from='osd.2 [v2:192.168.122.100:6810/1041515297,v1:192.168.122.100:6811/1041515297]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Jan 29 11:50:37 np0005601226 ceph-mon[75233]: from='osd.2 [v2:192.168.122.100:6810/1041515297,v1:192.168.122.100:6811/1041515297]' entity='osd.2' cmd={"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 29 11:50:37 np0005601226 ceph-mon[75233]: OSD bench result of 8947.109040 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 29 11:50:37 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:37 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:37 np0005601226 ceph-mon[75233]: from='osd.2 [v2:192.168.122.100:6810/1041515297,v1:192.168.122.100:6811/1041515297]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 29 11:50:37 np0005601226 ceph-mon[75233]: osd.1 [v2:192.168.122.100:6806/88388076,v1:192.168.122.100:6807/88388076] boot
Jan 29 11:50:38 np0005601226 podman[88742]: 2026-01-29 16:50:38.146466062 +0000 UTC m=+0.099288192 container exec 79fb58d438a044b744a5a22031968fb38c458a504a88d1d9ce361ec7f4a99a0b (image=quay.io/ceph/ceph:v20, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 29 11:50:38 np0005601226 podman[88742]: 2026-01-29 16:50:38.272035673 +0000 UTC m=+0.224857713 container exec_died 79fb58d438a044b744a5a22031968fb38c458a504a88d1d9ce361ec7f4a99a0b (image=quay.io/ceph/ceph:v20, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-mon-compute-0, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 29 11:50:38 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v35: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Jan 29 11:50:38 np0005601226 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2026-01-29_16:50:38
Jan 29 11:50:38 np0005601226 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 11:50:38 np0005601226 ceph-mgr[75527]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Jan 29 11:50:38 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Jan 29 11:50:38 np0005601226 ceph-mgr[75527]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/1041515297; not ready for session (expect reconnect)
Jan 29 11:50:38 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 29 11:50:38 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 29 11:50:38 np0005601226 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 29 11:50:38 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e15 e15: 3 total, 2 up, 3 in
Jan 29 11:50:38 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e15: 3 total, 2 up, 3 in
Jan 29 11:50:38 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 29 11:50:38 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 29 11:50:38 np0005601226 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 29 11:50:38 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 11:50:38 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 15 pg[1.0( empty local-lis/les=14/15 n=0 ec=12/12 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 pi=[12,14)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:50:39 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:39 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 11:50:39 np0005601226 ceph-mgr[75527]: [devicehealth INFO root] creating main.db for devicehealth
Jan 29 11:50:39 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:39 np0005601226 ceph-mgr[75527]: [devicehealth INFO root] Check health
Jan 29 11:50:39 np0005601226 ceph-mgr[75527]: [devicehealth ERROR root] Fail to parse JSON result from daemon osd.2 ()
Jan 29 11:50:39 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Jan 29 11:50:39 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Jan 29 11:50:39 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 29 11:50:39 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Jan 29 11:50:39 np0005601226 podman[88967]: 2026-01-29 16:50:39.559571173 +0000 UTC m=+0.052188093 container create c01a6c49623f2ec88931f79314a805399b53b96f00c5d2e575a82837459f546f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_driscoll, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 29 11:50:39 np0005601226 systemd[1]: Started libpod-conmon-c01a6c49623f2ec88931f79314a805399b53b96f00c5d2e575a82837459f546f.scope.
Jan 29 11:50:39 np0005601226 podman[88967]: 2026-01-29 16:50:39.527846007 +0000 UTC m=+0.020462957 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:50:39 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:50:39 np0005601226 podman[88967]: 2026-01-29 16:50:39.678511338 +0000 UTC m=+0.171128278 container init c01a6c49623f2ec88931f79314a805399b53b96f00c5d2e575a82837459f546f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_driscoll, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 29 11:50:39 np0005601226 podman[88967]: 2026-01-29 16:50:39.688109549 +0000 UTC m=+0.180726489 container start c01a6c49623f2ec88931f79314a805399b53b96f00c5d2e575a82837459f546f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_driscoll, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 29 11:50:39 np0005601226 condescending_driscoll[88983]: 167 167
Jan 29 11:50:39 np0005601226 systemd[1]: libpod-c01a6c49623f2ec88931f79314a805399b53b96f00c5d2e575a82837459f546f.scope: Deactivated successfully.
Jan 29 11:50:39 np0005601226 podman[88967]: 2026-01-29 16:50:39.713481914 +0000 UTC m=+0.206098844 container attach c01a6c49623f2ec88931f79314a805399b53b96f00c5d2e575a82837459f546f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_driscoll, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:50:39 np0005601226 podman[88967]: 2026-01-29 16:50:39.714618636 +0000 UTC m=+0.207235566 container died c01a6c49623f2ec88931f79314a805399b53b96f00c5d2e575a82837459f546f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_driscoll, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 29 11:50:39 np0005601226 systemd[1]: var-lib-containers-storage-overlay-c0654799ea1d72bc193564ae168a1239a11de6c3fb2c28fe6d3366c6b18c3013-merged.mount: Deactivated successfully.
Jan 29 11:50:39 np0005601226 ceph-mgr[75527]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/1041515297; not ready for session (expect reconnect)
Jan 29 11:50:39 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 29 11:50:39 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 29 11:50:39 np0005601226 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 29 11:50:39 np0005601226 podman[88967]: 2026-01-29 16:50:39.852940388 +0000 UTC m=+0.345557348 container remove c01a6c49623f2ec88931f79314a805399b53b96f00c5d2e575a82837459f546f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 11:50:39 np0005601226 systemd[1]: libpod-conmon-c01a6c49623f2ec88931f79314a805399b53b96f00c5d2e575a82837459f546f.scope: Deactivated successfully.
Jan 29 11:50:39 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Jan 29 11:50:39 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e16 e16: 3 total, 2 up, 3 in
Jan 29 11:50:39 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e16: 3 total, 2 up, 3 in
Jan 29 11:50:39 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e16 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:50:39 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 29 11:50:39 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 29 11:50:39 np0005601226 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 29 11:50:39 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:39 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:39 np0005601226 ceph-mon[75233]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Jan 29 11:50:39 np0005601226 ceph-mon[75233]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Jan 29 11:50:40 np0005601226 podman[89010]: 2026-01-29 16:50:40.019343492 +0000 UTC m=+0.065523129 container create 2ce5ce0f2a65c33fda252c794727a5e83146ecf81b0349a11e2d1d295e6d71eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_driscoll, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:50:40 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.zvopdr(active, since 61s)
Jan 29 11:50:40 np0005601226 systemd[1]: Started libpod-conmon-2ce5ce0f2a65c33fda252c794727a5e83146ecf81b0349a11e2d1d295e6d71eb.scope.
Jan 29 11:50:40 np0005601226 podman[89010]: 2026-01-29 16:50:39.979506268 +0000 UTC m=+0.025685935 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:50:40 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:50:40 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66fe2490404f9bf4b26c792c812a93a8f4b386d9d681cc83d88222e829de70ff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:40 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66fe2490404f9bf4b26c792c812a93a8f4b386d9d681cc83d88222e829de70ff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:40 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66fe2490404f9bf4b26c792c812a93a8f4b386d9d681cc83d88222e829de70ff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:40 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66fe2490404f9bf4b26c792c812a93a8f4b386d9d681cc83d88222e829de70ff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:40 np0005601226 podman[89010]: 2026-01-29 16:50:40.134727167 +0000 UTC m=+0.180906804 container init 2ce5ce0f2a65c33fda252c794727a5e83146ecf81b0349a11e2d1d295e6d71eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 29 11:50:40 np0005601226 podman[89010]: 2026-01-29 16:50:40.143553616 +0000 UTC m=+0.189733253 container start 2ce5ce0f2a65c33fda252c794727a5e83146ecf81b0349a11e2d1d295e6d71eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_driscoll, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 11:50:40 np0005601226 podman[89010]: 2026-01-29 16:50:40.157001935 +0000 UTC m=+0.203181562 container attach 2ce5ce0f2a65c33fda252c794727a5e83146ecf81b0349a11e2d1d295e6d71eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_driscoll, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 29 11:50:40 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v38: 1 pgs: 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Jan 29 11:50:40 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 11:50:40 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Jan 29 11:50:40 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 1 (current 1)
Jan 29 11:50:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 11:50:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 11:50:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 11:50:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 11:50:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 11:50:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 11:50:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 11:50:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 11:50:40 np0005601226 confident_driscoll[89026]: [
Jan 29 11:50:40 np0005601226 confident_driscoll[89026]:    {
Jan 29 11:50:40 np0005601226 confident_driscoll[89026]:        "available": false,
Jan 29 11:50:40 np0005601226 confident_driscoll[89026]:        "being_replaced": false,
Jan 29 11:50:40 np0005601226 confident_driscoll[89026]:        "ceph_device_lvm": false,
Jan 29 11:50:40 np0005601226 confident_driscoll[89026]:        "device_id": "QEMU_DVD-ROM_QM00001",
Jan 29 11:50:40 np0005601226 confident_driscoll[89026]:        "lsm_data": {},
Jan 29 11:50:40 np0005601226 confident_driscoll[89026]:        "lvs": [],
Jan 29 11:50:40 np0005601226 confident_driscoll[89026]:        "path": "/dev/sr0",
Jan 29 11:50:40 np0005601226 confident_driscoll[89026]:        "rejected_reasons": [
Jan 29 11:50:40 np0005601226 confident_driscoll[89026]:            "Has a FileSystem",
Jan 29 11:50:40 np0005601226 confident_driscoll[89026]:            "Insufficient space (<5GB)"
Jan 29 11:50:40 np0005601226 confident_driscoll[89026]:        ],
Jan 29 11:50:40 np0005601226 confident_driscoll[89026]:        "sys_api": {
Jan 29 11:50:40 np0005601226 confident_driscoll[89026]:            "actuators": null,
Jan 29 11:50:40 np0005601226 confident_driscoll[89026]:            "device_nodes": [
Jan 29 11:50:40 np0005601226 confident_driscoll[89026]:                "sr0"
Jan 29 11:50:40 np0005601226 confident_driscoll[89026]:            ],
Jan 29 11:50:40 np0005601226 confident_driscoll[89026]:            "devname": "sr0",
Jan 29 11:50:40 np0005601226 confident_driscoll[89026]:            "human_readable_size": "482.00 KB",
Jan 29 11:50:40 np0005601226 confident_driscoll[89026]:            "id_bus": "ata",
Jan 29 11:50:40 np0005601226 confident_driscoll[89026]:            "model": "QEMU DVD-ROM",
Jan 29 11:50:40 np0005601226 confident_driscoll[89026]:            "nr_requests": "2",
Jan 29 11:50:40 np0005601226 confident_driscoll[89026]:            "parent": "/dev/sr0",
Jan 29 11:50:40 np0005601226 confident_driscoll[89026]:            "partitions": {},
Jan 29 11:50:40 np0005601226 confident_driscoll[89026]:            "path": "/dev/sr0",
Jan 29 11:50:40 np0005601226 confident_driscoll[89026]:            "removable": "1",
Jan 29 11:50:40 np0005601226 confident_driscoll[89026]:            "rev": "2.5+",
Jan 29 11:50:40 np0005601226 confident_driscoll[89026]:            "ro": "0",
Jan 29 11:50:40 np0005601226 confident_driscoll[89026]:            "rotational": "1",
Jan 29 11:50:40 np0005601226 confident_driscoll[89026]:            "sas_address": "",
Jan 29 11:50:40 np0005601226 confident_driscoll[89026]:            "sas_device_handle": "",
Jan 29 11:50:40 np0005601226 confident_driscoll[89026]:            "scheduler_mode": "mq-deadline",
Jan 29 11:50:40 np0005601226 confident_driscoll[89026]:            "sectors": 0,
Jan 29 11:50:40 np0005601226 confident_driscoll[89026]:            "sectorsize": "2048",
Jan 29 11:50:40 np0005601226 confident_driscoll[89026]:            "size": 493568.0,
Jan 29 11:50:40 np0005601226 confident_driscoll[89026]:            "support_discard": "2048",
Jan 29 11:50:40 np0005601226 confident_driscoll[89026]:            "type": "disk",
Jan 29 11:50:40 np0005601226 confident_driscoll[89026]:            "vendor": "QEMU"
Jan 29 11:50:40 np0005601226 confident_driscoll[89026]:        }
Jan 29 11:50:40 np0005601226 confident_driscoll[89026]:    }
Jan 29 11:50:40 np0005601226 confident_driscoll[89026]: ]
Jan 29 11:50:40 np0005601226 systemd[1]: libpod-2ce5ce0f2a65c33fda252c794727a5e83146ecf81b0349a11e2d1d295e6d71eb.scope: Deactivated successfully.
Jan 29 11:50:40 np0005601226 podman[89010]: 2026-01-29 16:50:40.742720117 +0000 UTC m=+0.788899744 container died 2ce5ce0f2a65c33fda252c794727a5e83146ecf81b0349a11e2d1d295e6d71eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_driscoll, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 29 11:50:40 np0005601226 ceph-mgr[75527]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/1041515297; not ready for session (expect reconnect)
Jan 29 11:50:40 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 29 11:50:40 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 29 11:50:40 np0005601226 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 29 11:50:40 np0005601226 systemd[1]: var-lib-containers-storage-overlay-66fe2490404f9bf4b26c792c812a93a8f4b386d9d681cc83d88222e829de70ff-merged.mount: Deactivated successfully.
Jan 29 11:50:40 np0005601226 podman[89010]: 2026-01-29 16:50:40.931753789 +0000 UTC m=+0.977933426 container remove 2ce5ce0f2a65c33fda252c794727a5e83146ecf81b0349a11e2d1d295e6d71eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_driscoll, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 29 11:50:40 np0005601226 systemd[1]: libpod-conmon-2ce5ce0f2a65c33fda252c794727a5e83146ecf81b0349a11e2d1d295e6d71eb.scope: Deactivated successfully.
Jan 29 11:50:40 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 11:50:41 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:41 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 11:50:41 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : mgrmap e12: compute-0.zvopdr(active, since 62s)
Jan 29 11:50:41 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:41 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 11:50:41 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:41 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 11:50:41 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:41 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Jan 29 11:50:41 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Jan 29 11:50:41 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Jan 29 11:50:41 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Jan 29 11:50:41 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Jan 29 11:50:41 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Jan 29 11:50:41 np0005601226 ceph-mgr[75527]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 43686k
Jan 29 11:50:41 np0005601226 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 43686k
Jan 29 11:50:41 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Jan 29 11:50:41 np0005601226 ceph-mgr[75527]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 44734464: error parsing value: Value '44734464' is below minimum 939524096
Jan 29 11:50:41 np0005601226 ceph-mgr[75527]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 44734464: error parsing value: Value '44734464' is below minimum 939524096
Jan 29 11:50:41 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 11:50:41 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 11:50:41 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 11:50:41 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 11:50:41 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 11:50:41 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:41 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 11:50:41 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 11:50:41 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 11:50:41 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 11:50:41 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 11:50:41 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 11:50:41 np0005601226 ceph-osd[87958]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 35.388 iops: 9059.206 elapsed_sec: 0.331
Jan 29 11:50:41 np0005601226 ceph-osd[87958]: log_channel(cluster) log [WRN] : OSD bench result of 9059.205687 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 29 11:50:41 np0005601226 ceph-osd[87958]: osd.2 0 waiting for initial osdmap
Jan 29 11:50:41 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-2[87954]: 2026-01-29T16:50:41.548+0000 7f96a8acd640 -1 osd.2 0 waiting for initial osdmap
Jan 29 11:50:41 np0005601226 ceph-osd[87958]: osd.2 16 crush map has features 288514051259236352, adjusting msgr requires for clients
Jan 29 11:50:41 np0005601226 ceph-osd[87958]: osd.2 16 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Jan 29 11:50:41 np0005601226 ceph-osd[87958]: osd.2 16 crush map has features 3314933000852226048, adjusting msgr requires for osds
Jan 29 11:50:41 np0005601226 ceph-osd[87958]: osd.2 16 check_osdmap_features require_osd_release unknown -> tentacle
Jan 29 11:50:41 np0005601226 ceph-osd[87958]: osd.2 16 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 29 11:50:41 np0005601226 ceph-osd[87958]: osd.2 16 set_numa_affinity not setting numa affinity
Jan 29 11:50:41 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-osd-2[87954]: 2026-01-29T16:50:41.577+0000 7f96a38d2640 -1 osd.2 16 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 29 11:50:41 np0005601226 ceph-osd[87958]: osd.2 16 _collect_metadata loop5:  no unique device id for loop5: fallback method has no model nor serial no unique device path for loop5: no symlink to loop5 in /dev/disk/by-path
Jan 29 11:50:41 np0005601226 podman[89918]: 2026-01-29 16:50:41.617911834 +0000 UTC m=+0.045243737 container create ab0c0639ef7f20ec7b0200b903bc6e7e37b57be20aec05e8d8d301b38fa0d7b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_ardinghelli, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:50:41 np0005601226 python3[89905]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid cc5c72e3-31e0-58b9-8731-456117d38f4a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:50:41 np0005601226 systemd[1]: Started libpod-conmon-ab0c0639ef7f20ec7b0200b903bc6e7e37b57be20aec05e8d8d301b38fa0d7b9.scope.
Jan 29 11:50:41 np0005601226 podman[89918]: 2026-01-29 16:50:41.593792594 +0000 UTC m=+0.021124507 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:50:41 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:50:41 np0005601226 podman[89918]: 2026-01-29 16:50:41.719320925 +0000 UTC m=+0.146652868 container init ab0c0639ef7f20ec7b0200b903bc6e7e37b57be20aec05e8d8d301b38fa0d7b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_ardinghelli, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:50:41 np0005601226 podman[89934]: 2026-01-29 16:50:41.723374489 +0000 UTC m=+0.068385019 container create 780111bc8152a23f99e81e33d2717d3caf62b8a1385e0b19a70175cf4d01393a (image=quay.io/ceph/ceph:v20, name=nostalgic_swartz, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 11:50:41 np0005601226 podman[89918]: 2026-01-29 16:50:41.724731297 +0000 UTC m=+0.152063200 container start ab0c0639ef7f20ec7b0200b903bc6e7e37b57be20aec05e8d8d301b38fa0d7b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 29 11:50:41 np0005601226 podman[89918]: 2026-01-29 16:50:41.728110343 +0000 UTC m=+0.155442266 container attach ab0c0639ef7f20ec7b0200b903bc6e7e37b57be20aec05e8d8d301b38fa0d7b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 29 11:50:41 np0005601226 recursing_ardinghelli[89941]: 167 167
Jan 29 11:50:41 np0005601226 systemd[1]: libpod-ab0c0639ef7f20ec7b0200b903bc6e7e37b57be20aec05e8d8d301b38fa0d7b9.scope: Deactivated successfully.
Jan 29 11:50:41 np0005601226 podman[89918]: 2026-01-29 16:50:41.730001896 +0000 UTC m=+0.157333799 container died ab0c0639ef7f20ec7b0200b903bc6e7e37b57be20aec05e8d8d301b38fa0d7b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_ardinghelli, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 29 11:50:41 np0005601226 systemd[1]: var-lib-containers-storage-overlay-b9951347854104be07fcdf7252eb5c97bdb7ce22ef84760871f69ec8536e3e6f-merged.mount: Deactivated successfully.
Jan 29 11:50:41 np0005601226 systemd[1]: Started libpod-conmon-780111bc8152a23f99e81e33d2717d3caf62b8a1385e0b19a70175cf4d01393a.scope.
Jan 29 11:50:41 np0005601226 podman[89918]: 2026-01-29 16:50:41.776044565 +0000 UTC m=+0.203376498 container remove ab0c0639ef7f20ec7b0200b903bc6e7e37b57be20aec05e8d8d301b38fa0d7b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_ardinghelli, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 29 11:50:41 np0005601226 podman[89934]: 2026-01-29 16:50:41.701670306 +0000 UTC m=+0.046680846 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:50:41 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:50:41 np0005601226 systemd[1]: libpod-conmon-ab0c0639ef7f20ec7b0200b903bc6e7e37b57be20aec05e8d8d301b38fa0d7b9.scope: Deactivated successfully.
Jan 29 11:50:41 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/914b53fb4591c65fe2615a1f1e4a7b9e21095c4f3bf547d5a81b3a4e247f6b3f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:41 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/914b53fb4591c65fe2615a1f1e4a7b9e21095c4f3bf547d5a81b3a4e247f6b3f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:41 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/914b53fb4591c65fe2615a1f1e4a7b9e21095c4f3bf547d5a81b3a4e247f6b3f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:41 np0005601226 podman[89934]: 2026-01-29 16:50:41.826789756 +0000 UTC m=+0.171800296 container init 780111bc8152a23f99e81e33d2717d3caf62b8a1385e0b19a70175cf4d01393a (image=quay.io/ceph/ceph:v20, name=nostalgic_swartz, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 11:50:41 np0005601226 podman[89934]: 2026-01-29 16:50:41.830710447 +0000 UTC m=+0.175720987 container start 780111bc8152a23f99e81e33d2717d3caf62b8a1385e0b19a70175cf4d01393a (image=quay.io/ceph/ceph:v20, name=nostalgic_swartz, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 29 11:50:41 np0005601226 ceph-mgr[75527]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/1041515297; not ready for session (expect reconnect)
Jan 29 11:50:41 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 29 11:50:41 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 29 11:50:41 np0005601226 ceph-mgr[75527]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 29 11:50:41 np0005601226 podman[89934]: 2026-01-29 16:50:41.833804364 +0000 UTC m=+0.178814904 container attach 780111bc8152a23f99e81e33d2717d3caf62b8a1385e0b19a70175cf4d01393a (image=quay.io/ceph/ceph:v20, name=nostalgic_swartz, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 11:50:41 np0005601226 podman[89978]: 2026-01-29 16:50:41.91693934 +0000 UTC m=+0.047590964 container create b8f38d4343acbf89dc03042c1afe830d527a4fa4d092ef07202d40d55b010454 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_austin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 11:50:41 np0005601226 systemd[1]: Started libpod-conmon-b8f38d4343acbf89dc03042c1afe830d527a4fa4d092ef07202d40d55b010454.scope.
Jan 29 11:50:41 np0005601226 podman[89978]: 2026-01-29 16:50:41.89035944 +0000 UTC m=+0.021011134 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:50:41 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:50:41 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38511587db73d3c5a170bd8202a0ae264819d700362a6950fb083ecbb0c579e7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:41 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38511587db73d3c5a170bd8202a0ae264819d700362a6950fb083ecbb0c579e7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:41 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38511587db73d3c5a170bd8202a0ae264819d700362a6950fb083ecbb0c579e7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:41 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38511587db73d3c5a170bd8202a0ae264819d700362a6950fb083ecbb0c579e7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:41 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38511587db73d3c5a170bd8202a0ae264819d700362a6950fb083ecbb0c579e7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:42 np0005601226 podman[89978]: 2026-01-29 16:50:42.013027149 +0000 UTC m=+0.143678793 container init b8f38d4343acbf89dc03042c1afe830d527a4fa4d092ef07202d40d55b010454 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_austin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True)
Jan 29 11:50:42 np0005601226 podman[89978]: 2026-01-29 16:50:42.022788935 +0000 UTC m=+0.153440539 container start b8f38d4343acbf89dc03042c1afe830d527a4fa4d092ef07202d40d55b010454 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_austin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 29 11:50:42 np0005601226 podman[89978]: 2026-01-29 16:50:42.028154356 +0000 UTC m=+0.158806040 container attach b8f38d4343acbf89dc03042c1afe830d527a4fa4d092ef07202d40d55b010454 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_austin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:50:42 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:42 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:42 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:42 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:42 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Jan 29 11:50:42 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Jan 29 11:50:42 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Jan 29 11:50:42 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 11:50:42 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:42 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 11:50:42 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Jan 29 11:50:42 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e17 e17: 3 total, 3 up, 3 in
Jan 29 11:50:42 np0005601226 ceph-mon[75233]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.100:6810/1041515297,v1:192.168.122.100:6811/1041515297] boot
Jan 29 11:50:42 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 3 up, 3 in
Jan 29 11:50:42 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 29 11:50:42 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 29 11:50:42 np0005601226 ceph-osd[87958]: osd.2 17 state: booting -> active
Jan 29 11:50:42 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v40: 1 pgs: 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Jan 29 11:50:42 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 29 11:50:42 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3871770188' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 29 11:50:42 np0005601226 nostalgic_swartz[89967]: 
Jan 29 11:50:42 np0005601226 nostalgic_swartz[89967]: {"fsid":"cc5c72e3-31e0-58b9-8731-456117d38f4a","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":87,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":17,"num_osds":3,"num_up_osds":3,"osd_up_since":1769705442,"num_in_osds":3,"osd_in_since":1769705419,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"creating+peering","count":1}],"num_pgs":1,"num_pools":1,"num_objects":0,"data_bytes":0,"bytes_used":474648576,"bytes_avail":42466635776,"bytes_total":42941284352,"inactive_pgs_ratio":1},"fsmap":{"epoch":1,"btime":"2026-01-29T16:49:11:643666+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-01-29T16:50:40.323321+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Jan 29 11:50:42 np0005601226 systemd[1]: libpod-780111bc8152a23f99e81e33d2717d3caf62b8a1385e0b19a70175cf4d01393a.scope: Deactivated successfully.
Jan 29 11:50:42 np0005601226 podman[89934]: 2026-01-29 16:50:42.367720485 +0000 UTC m=+0.712731015 container died 780111bc8152a23f99e81e33d2717d3caf62b8a1385e0b19a70175cf4d01393a (image=quay.io/ceph/ceph:v20, name=nostalgic_swartz, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 29 11:50:42 np0005601226 systemd[1]: var-lib-containers-storage-overlay-914b53fb4591c65fe2615a1f1e4a7b9e21095c4f3bf547d5a81b3a4e247f6b3f-merged.mount: Deactivated successfully.
Jan 29 11:50:42 np0005601226 epic_austin[90014]: --> passed data devices: 0 physical, 3 LVM
Jan 29 11:50:42 np0005601226 epic_austin[90014]: --> All data devices are unavailable
Jan 29 11:50:42 np0005601226 systemd[1]: libpod-b8f38d4343acbf89dc03042c1afe830d527a4fa4d092ef07202d40d55b010454.scope: Deactivated successfully.
Jan 29 11:50:42 np0005601226 podman[89978]: 2026-01-29 16:50:42.51784012 +0000 UTC m=+0.648491764 container died b8f38d4343acbf89dc03042c1afe830d527a4fa4d092ef07202d40d55b010454 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_austin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 29 11:50:42 np0005601226 systemd[1]: var-lib-containers-storage-overlay-38511587db73d3c5a170bd8202a0ae264819d700362a6950fb083ecbb0c579e7-merged.mount: Deactivated successfully.
Jan 29 11:50:42 np0005601226 podman[89978]: 2026-01-29 16:50:42.605352488 +0000 UTC m=+0.736004092 container remove b8f38d4343acbf89dc03042c1afe830d527a4fa4d092ef07202d40d55b010454 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_austin, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:50:42 np0005601226 systemd[1]: libpod-conmon-b8f38d4343acbf89dc03042c1afe830d527a4fa4d092ef07202d40d55b010454.scope: Deactivated successfully.
Jan 29 11:50:42 np0005601226 podman[89934]: 2026-01-29 16:50:42.64301735 +0000 UTC m=+0.988027910 container remove 780111bc8152a23f99e81e33d2717d3caf62b8a1385e0b19a70175cf4d01393a (image=quay.io/ceph/ceph:v20, name=nostalgic_swartz, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 11:50:42 np0005601226 systemd[1]: libpod-conmon-780111bc8152a23f99e81e33d2717d3caf62b8a1385e0b19a70175cf4d01393a.scope: Deactivated successfully.
Jan 29 11:50:43 np0005601226 podman[90149]: 2026-01-29 16:50:43.055561538 +0000 UTC m=+0.039563427 container create 09a6e29b48f028cb3c152adde08a6829b169cfb08b4eae42887104c49f1d9682 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_liskov, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 11:50:43 np0005601226 python3[90136]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid cc5c72e3-31e0-58b9-8731-456117d38f4a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:50:43 np0005601226 ceph-mon[75233]: Adjusting osd_memory_target on compute-0 to 43686k
Jan 29 11:50:43 np0005601226 ceph-mon[75233]: Unable to set osd_memory_target on compute-0 to 44734464: error parsing value: Value '44734464' is below minimum 939524096
Jan 29 11:50:43 np0005601226 ceph-mon[75233]: OSD bench result of 9059.205687 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 29 11:50:43 np0005601226 ceph-mon[75233]: osd.2 [v2:192.168.122.100:6810/1041515297,v1:192.168.122.100:6811/1041515297] boot
Jan 29 11:50:43 np0005601226 systemd[1]: Started libpod-conmon-09a6e29b48f028cb3c152adde08a6829b169cfb08b4eae42887104c49f1d9682.scope.
Jan 29 11:50:43 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:50:43 np0005601226 podman[90166]: 2026-01-29 16:50:43.12794602 +0000 UTC m=+0.044656821 container create 0fa7cb9cbb3ad6dbcfd98387947a81100649fd4a378e978e081937faeaa16880 (image=quay.io/ceph/ceph:v20, name=optimistic_mendeleev, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 29 11:50:43 np0005601226 podman[90149]: 2026-01-29 16:50:43.133752553 +0000 UTC m=+0.117754492 container init 09a6e29b48f028cb3c152adde08a6829b169cfb08b4eae42887104c49f1d9682 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_liskov, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 29 11:50:43 np0005601226 podman[90149]: 2026-01-29 16:50:43.039956028 +0000 UTC m=+0.023957957 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:50:43 np0005601226 podman[90149]: 2026-01-29 16:50:43.141752408 +0000 UTC m=+0.125754317 container start 09a6e29b48f028cb3c152adde08a6829b169cfb08b4eae42887104c49f1d9682 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_liskov, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 11:50:43 np0005601226 jolly_liskov[90174]: 167 167
Jan 29 11:50:43 np0005601226 systemd[1]: libpod-09a6e29b48f028cb3c152adde08a6829b169cfb08b4eae42887104c49f1d9682.scope: Deactivated successfully.
Jan 29 11:50:43 np0005601226 podman[90149]: 2026-01-29 16:50:43.149351803 +0000 UTC m=+0.133353742 container attach 09a6e29b48f028cb3c152adde08a6829b169cfb08b4eae42887104c49f1d9682 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 29 11:50:43 np0005601226 podman[90149]: 2026-01-29 16:50:43.149845457 +0000 UTC m=+0.133847396 container died 09a6e29b48f028cb3c152adde08a6829b169cfb08b4eae42887104c49f1d9682 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_liskov, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 29 11:50:43 np0005601226 systemd[1]: Started libpod-conmon-0fa7cb9cbb3ad6dbcfd98387947a81100649fd4a378e978e081937faeaa16880.scope.
Jan 29 11:50:43 np0005601226 systemd[1]: var-lib-containers-storage-overlay-43c575451513618cce9fc7381b0126207d678bd890a964a676dbfde403a70ebc-merged.mount: Deactivated successfully.
Jan 29 11:50:43 np0005601226 podman[90149]: 2026-01-29 16:50:43.196261286 +0000 UTC m=+0.180263185 container remove 09a6e29b48f028cb3c152adde08a6829b169cfb08b4eae42887104c49f1d9682 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_liskov, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 29 11:50:43 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:50:43 np0005601226 podman[90166]: 2026-01-29 16:50:43.103796398 +0000 UTC m=+0.020507229 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:50:43 np0005601226 systemd[1]: libpod-conmon-09a6e29b48f028cb3c152adde08a6829b169cfb08b4eae42887104c49f1d9682.scope: Deactivated successfully.
Jan 29 11:50:43 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df35d1b2c80264ad434ec03962634042c3560631e133111ce4c4a4e924bd712d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:43 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df35d1b2c80264ad434ec03962634042c3560631e133111ce4c4a4e924bd712d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:43 np0005601226 podman[90166]: 2026-01-29 16:50:43.253454089 +0000 UTC m=+0.170164900 container init 0fa7cb9cbb3ad6dbcfd98387947a81100649fd4a378e978e081937faeaa16880 (image=quay.io/ceph/ceph:v20, name=optimistic_mendeleev, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:50:43 np0005601226 podman[90166]: 2026-01-29 16:50:43.258111792 +0000 UTC m=+0.174822593 container start 0fa7cb9cbb3ad6dbcfd98387947a81100649fd4a378e978e081937faeaa16880 (image=quay.io/ceph/ceph:v20, name=optimistic_mendeleev, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 29 11:50:43 np0005601226 podman[90166]: 2026-01-29 16:50:43.266078216 +0000 UTC m=+0.182789017 container attach 0fa7cb9cbb3ad6dbcfd98387947a81100649fd4a378e978e081937faeaa16880 (image=quay.io/ceph/ceph:v20, name=optimistic_mendeleev, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 29 11:50:43 np0005601226 podman[90208]: 2026-01-29 16:50:43.325565844 +0000 UTC m=+0.033835595 container create fb961b954edf4b9bcd8123cc578f3dc472dd871025a683aaa56985cc592ad7c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_archimedes, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 29 11:50:43 np0005601226 systemd[1]: Started libpod-conmon-fb961b954edf4b9bcd8123cc578f3dc472dd871025a683aaa56985cc592ad7c8.scope.
Jan 29 11:50:43 np0005601226 podman[90208]: 2026-01-29 16:50:43.311779685 +0000 UTC m=+0.020049456 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:50:43 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:50:43 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a52fdb4f24c85ea0cdc46df7ee87c22bb41eed8af60306da7cc022ebd1ced00/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:43 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a52fdb4f24c85ea0cdc46df7ee87c22bb41eed8af60306da7cc022ebd1ced00/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:43 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a52fdb4f24c85ea0cdc46df7ee87c22bb41eed8af60306da7cc022ebd1ced00/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:43 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a52fdb4f24c85ea0cdc46df7ee87c22bb41eed8af60306da7cc022ebd1ced00/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:43 np0005601226 podman[90208]: 2026-01-29 16:50:43.429819404 +0000 UTC m=+0.138089175 container init fb961b954edf4b9bcd8123cc578f3dc472dd871025a683aaa56985cc592ad7c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 11:50:43 np0005601226 podman[90208]: 2026-01-29 16:50:43.435938668 +0000 UTC m=+0.144208419 container start fb961b954edf4b9bcd8123cc578f3dc472dd871025a683aaa56985cc592ad7c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 29 11:50:43 np0005601226 podman[90208]: 2026-01-29 16:50:43.481725799 +0000 UTC m=+0.189995570 container attach fb961b954edf4b9bcd8123cc578f3dc472dd871025a683aaa56985cc592ad7c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 29 11:50:43 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 29 11:50:43 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3058196051' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]: {
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:    "0": [
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:        {
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:            "devices": [
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:                "/dev/loop3"
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:            ],
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:            "lv_name": "ceph_lv0",
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:            "lv_size": "21470642176",
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:            "lv_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:            "name": "ceph_lv0",
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:            "tags": {
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:                "ceph.block_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:                "ceph.cephx_lockbox_secret": "",
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:                "ceph.cluster_name": "ceph",
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:                "ceph.crush_device_class": "",
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:                "ceph.encrypted": "0",
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:                "ceph.objectstore": "bluestore",
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:                "ceph.osd_fsid": "0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1",
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:                "ceph.osd_id": "0",
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:                "ceph.type": "block",
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:                "ceph.vdo": "0",
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:                "ceph.with_tpm": "0"
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:            },
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:            "type": "block",
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:            "vg_name": "ceph_vg0"
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:        }
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:    ],
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:    "1": [
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:        {
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:            "devices": [
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:                "/dev/loop4"
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:            ],
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:            "lv_name": "ceph_lv1",
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:            "lv_size": "21470642176",
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b59b9ee3-7bef-4274-a8bf-0f9cce011ae7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:            "lv_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:            "name": "ceph_lv1",
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:            "tags": {
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:                "ceph.block_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:                "ceph.cephx_lockbox_secret": "",
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:                "ceph.cluster_name": "ceph",
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:                "ceph.crush_device_class": "",
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:                "ceph.encrypted": "0",
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:                "ceph.objectstore": "bluestore",
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:                "ceph.osd_fsid": "b59b9ee3-7bef-4274-a8bf-0f9cce011ae7",
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:                "ceph.osd_id": "1",
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:                "ceph.type": "block",
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:                "ceph.vdo": "0",
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:                "ceph.with_tpm": "0"
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:            },
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:            "type": "block",
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:            "vg_name": "ceph_vg1"
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:        }
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:    ],
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:    "2": [
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:        {
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:            "devices": [
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:                "/dev/loop5"
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:            ],
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:            "lv_name": "ceph_lv2",
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:            "lv_size": "21470642176",
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=791d3808-828d-4f85-a3de-28df49f6a6ef,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:            "lv_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:            "name": "ceph_lv2",
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:            "tags": {
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:                "ceph.block_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:                "ceph.cephx_lockbox_secret": "",
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:                "ceph.cluster_name": "ceph",
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:                "ceph.crush_device_class": "",
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:                "ceph.encrypted": "0",
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:                "ceph.objectstore": "bluestore",
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:                "ceph.osd_fsid": "791d3808-828d-4f85-a3de-28df49f6a6ef",
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:                "ceph.osd_id": "2",
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:                "ceph.type": "block",
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:                "ceph.vdo": "0",
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:                "ceph.with_tpm": "0"
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:            },
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:            "type": "block",
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:            "vg_name": "ceph_vg2"
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:        }
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]:    ]
Jan 29 11:50:43 np0005601226 nifty_archimedes[90243]: }
Jan 29 11:50:43 np0005601226 systemd[1]: libpod-fb961b954edf4b9bcd8123cc578f3dc472dd871025a683aaa56985cc592ad7c8.scope: Deactivated successfully.
Jan 29 11:50:43 np0005601226 podman[90208]: 2026-01-29 16:50:43.719762973 +0000 UTC m=+0.428032734 container died fb961b954edf4b9bcd8123cc578f3dc472dd871025a683aaa56985cc592ad7c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_archimedes, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 29 11:50:43 np0005601226 systemd[1]: var-lib-containers-storage-overlay-3a52fdb4f24c85ea0cdc46df7ee87c22bb41eed8af60306da7cc022ebd1ced00-merged.mount: Deactivated successfully.
Jan 29 11:50:43 np0005601226 podman[90208]: 2026-01-29 16:50:43.767315714 +0000 UTC m=+0.475585465 container remove fb961b954edf4b9bcd8123cc578f3dc472dd871025a683aaa56985cc592ad7c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_archimedes, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True)
Jan 29 11:50:43 np0005601226 systemd[1]: libpod-conmon-fb961b954edf4b9bcd8123cc578f3dc472dd871025a683aaa56985cc592ad7c8.scope: Deactivated successfully.
Jan 29 11:50:44 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Jan 29 11:50:44 np0005601226 ceph-mon[75233]: from='client.? 192.168.122.100:0/3058196051' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 29 11:50:44 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3058196051' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 29 11:50:44 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e18 e18: 3 total, 3 up, 3 in
Jan 29 11:50:44 np0005601226 optimistic_mendeleev[90194]: pool 'vms' created
Jan 29 11:50:44 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 3 up, 3 in
Jan 29 11:50:44 np0005601226 systemd[1]: libpod-0fa7cb9cbb3ad6dbcfd98387947a81100649fd4a378e978e081937faeaa16880.scope: Deactivated successfully.
Jan 29 11:50:44 np0005601226 podman[90166]: 2026-01-29 16:50:44.133744861 +0000 UTC m=+1.050455662 container died 0fa7cb9cbb3ad6dbcfd98387947a81100649fd4a378e978e081937faeaa16880 (image=quay.io/ceph/ceph:v20, name=optimistic_mendeleev, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 29 11:50:44 np0005601226 systemd[1]: var-lib-containers-storage-overlay-df35d1b2c80264ad434ec03962634042c3560631e133111ce4c4a4e924bd712d-merged.mount: Deactivated successfully.
Jan 29 11:50:44 np0005601226 podman[90327]: 2026-01-29 16:50:44.163262624 +0000 UTC m=+0.055643981 container create cc8398c58aa8451804a31a87eebc9a6eb051487ccec79b0a6db65d17b080a97b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_kare, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 29 11:50:44 np0005601226 podman[90166]: 2026-01-29 16:50:44.177173696 +0000 UTC m=+1.093884487 container remove 0fa7cb9cbb3ad6dbcfd98387947a81100649fd4a378e978e081937faeaa16880 (image=quay.io/ceph/ceph:v20, name=optimistic_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 29 11:50:44 np0005601226 systemd[1]: libpod-conmon-0fa7cb9cbb3ad6dbcfd98387947a81100649fd4a378e978e081937faeaa16880.scope: Deactivated successfully.
Jan 29 11:50:44 np0005601226 systemd[1]: Started libpod-conmon-cc8398c58aa8451804a31a87eebc9a6eb051487ccec79b0a6db65d17b080a97b.scope.
Jan 29 11:50:44 np0005601226 podman[90327]: 2026-01-29 16:50:44.12803985 +0000 UTC m=+0.020421327 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:50:44 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:50:44 np0005601226 podman[90327]: 2026-01-29 16:50:44.259585701 +0000 UTC m=+0.151967068 container init cc8398c58aa8451804a31a87eebc9a6eb051487ccec79b0a6db65d17b080a97b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_kare, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 29 11:50:44 np0005601226 podman[90327]: 2026-01-29 16:50:44.26737049 +0000 UTC m=+0.159751857 container start cc8398c58aa8451804a31a87eebc9a6eb051487ccec79b0a6db65d17b080a97b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 29 11:50:44 np0005601226 sad_kare[90357]: 167 167
Jan 29 11:50:44 np0005601226 systemd[1]: libpod-cc8398c58aa8451804a31a87eebc9a6eb051487ccec79b0a6db65d17b080a97b.scope: Deactivated successfully.
Jan 29 11:50:44 np0005601226 podman[90327]: 2026-01-29 16:50:44.285678866 +0000 UTC m=+0.178060233 container attach cc8398c58aa8451804a31a87eebc9a6eb051487ccec79b0a6db65d17b080a97b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_kare, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:50:44 np0005601226 podman[90327]: 2026-01-29 16:50:44.286387807 +0000 UTC m=+0.178769174 container died cc8398c58aa8451804a31a87eebc9a6eb051487ccec79b0a6db65d17b080a97b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_kare, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 11:50:44 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v42: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Jan 29 11:50:44 np0005601226 systemd[1]: var-lib-containers-storage-overlay-8a7fa339bea74d576fda1b04ef44103768773993a9a4d5348a8afadfe2e35cbb-merged.mount: Deactivated successfully.
Jan 29 11:50:44 np0005601226 podman[90327]: 2026-01-29 16:50:44.400725362 +0000 UTC m=+0.293106729 container remove cc8398c58aa8451804a31a87eebc9a6eb051487ccec79b0a6db65d17b080a97b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_kare, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 29 11:50:44 np0005601226 systemd[1]: libpod-conmon-cc8398c58aa8451804a31a87eebc9a6eb051487ccec79b0a6db65d17b080a97b.scope: Deactivated successfully.
Jan 29 11:50:44 np0005601226 python3[90398]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid cc5c72e3-31e0-58b9-8731-456117d38f4a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:50:44 np0005601226 podman[90402]: 2026-01-29 16:50:44.529565816 +0000 UTC m=+0.055956959 container create 40ea4f8c314ac40b6c462e23153259cc39439517fe6069de8c904a7905f7d5c5 (image=quay.io/ceph/ceph:v20, name=cranky_mendel, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:50:44 np0005601226 systemd[1]: Started libpod-conmon-40ea4f8c314ac40b6c462e23153259cc39439517fe6069de8c904a7905f7d5c5.scope.
Jan 29 11:50:44 np0005601226 podman[90417]: 2026-01-29 16:50:44.569679007 +0000 UTC m=+0.071569339 container create e5f3783eb71478c3746e5ae2c04254dd59c91994a8d7c2d7d104a910fa3df36a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_borg, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 29 11:50:44 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:50:44 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20f8e8e80a89d008348c9fd6fed6dea9c856d5e72654d0ad2dc92c593bb35508/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:44 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20f8e8e80a89d008348c9fd6fed6dea9c856d5e72654d0ad2dc92c593bb35508/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:44 np0005601226 podman[90402]: 2026-01-29 16:50:44.497593295 +0000 UTC m=+0.023984418 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:50:44 np0005601226 podman[90402]: 2026-01-29 16:50:44.608667687 +0000 UTC m=+0.135058810 container init 40ea4f8c314ac40b6c462e23153259cc39439517fe6069de8c904a7905f7d5c5 (image=quay.io/ceph/ceph:v20, name=cranky_mendel, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:50:44 np0005601226 podman[90402]: 2026-01-29 16:50:44.614175713 +0000 UTC m=+0.140566816 container start 40ea4f8c314ac40b6c462e23153259cc39439517fe6069de8c904a7905f7d5c5 (image=quay.io/ceph/ceph:v20, name=cranky_mendel, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 29 11:50:44 np0005601226 systemd[1]: Started libpod-conmon-e5f3783eb71478c3746e5ae2c04254dd59c91994a8d7c2d7d104a910fa3df36a.scope.
Jan 29 11:50:44 np0005601226 podman[90402]: 2026-01-29 16:50:44.623775834 +0000 UTC m=+0.150166967 container attach 40ea4f8c314ac40b6c462e23153259cc39439517fe6069de8c904a7905f7d5c5 (image=quay.io/ceph/ceph:v20, name=cranky_mendel, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 29 11:50:44 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:50:44 np0005601226 podman[90417]: 2026-01-29 16:50:44.542041589 +0000 UTC m=+0.043931941 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:50:44 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/719ae5f7beb420e214081ca5d46eb85f792a915bf7aa0e9445b3aab3cf16d6e6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:44 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/719ae5f7beb420e214081ca5d46eb85f792a915bf7aa0e9445b3aab3cf16d6e6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:44 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/719ae5f7beb420e214081ca5d46eb85f792a915bf7aa0e9445b3aab3cf16d6e6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:44 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/719ae5f7beb420e214081ca5d46eb85f792a915bf7aa0e9445b3aab3cf16d6e6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:44 np0005601226 podman[90417]: 2026-01-29 16:50:44.66015385 +0000 UTC m=+0.162044192 container init e5f3783eb71478c3746e5ae2c04254dd59c91994a8d7c2d7d104a910fa3df36a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_borg, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 11:50:44 np0005601226 podman[90417]: 2026-01-29 16:50:44.666623663 +0000 UTC m=+0.168513985 container start e5f3783eb71478c3746e5ae2c04254dd59c91994a8d7c2d7d104a910fa3df36a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_borg, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 29 11:50:44 np0005601226 podman[90417]: 2026-01-29 16:50:44.670145822 +0000 UTC m=+0.172036244 container attach e5f3783eb71478c3746e5ae2c04254dd59c91994a8d7c2d7d104a910fa3df36a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_borg, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:50:44 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 18 pg[2.0( empty local-lis/les=0/0 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [2] r=0 lpr=18 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:50:44 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e18 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:50:44 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 29 11:50:44 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1271518728' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 29 11:50:45 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Jan 29 11:50:45 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1271518728' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 29 11:50:45 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e19 e19: 3 total, 3 up, 3 in
Jan 29 11:50:45 np0005601226 cranky_mendel[90436]: pool 'volumes' created
Jan 29 11:50:45 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 3 up, 3 in
Jan 29 11:50:45 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 19 pg[3.0( empty local-lis/les=0/0 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [1] r=0 lpr=19 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:50:45 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 19 pg[2.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [2] r=0 lpr=18 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:50:45 np0005601226 ceph-mon[75233]: from='client.? 192.168.122.100:0/3058196051' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 29 11:50:45 np0005601226 ceph-mon[75233]: from='client.? 192.168.122.100:0/1271518728' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 29 11:50:45 np0005601226 systemd[1]: libpod-40ea4f8c314ac40b6c462e23153259cc39439517fe6069de8c904a7905f7d5c5.scope: Deactivated successfully.
Jan 29 11:50:45 np0005601226 podman[90524]: 2026-01-29 16:50:45.155960116 +0000 UTC m=+0.024235195 container died 40ea4f8c314ac40b6c462e23153259cc39439517fe6069de8c904a7905f7d5c5 (image=quay.io/ceph/ceph:v20, name=cranky_mendel, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 29 11:50:45 np0005601226 systemd[1]: var-lib-containers-storage-overlay-20f8e8e80a89d008348c9fd6fed6dea9c856d5e72654d0ad2dc92c593bb35508-merged.mount: Deactivated successfully.
Jan 29 11:50:45 np0005601226 podman[90524]: 2026-01-29 16:50:45.185347445 +0000 UTC m=+0.053622504 container remove 40ea4f8c314ac40b6c462e23153259cc39439517fe6069de8c904a7905f7d5c5 (image=quay.io/ceph/ceph:v20, name=cranky_mendel, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 11:50:45 np0005601226 systemd[1]: libpod-conmon-40ea4f8c314ac40b6c462e23153259cc39439517fe6069de8c904a7905f7d5c5.scope: Deactivated successfully.
Jan 29 11:50:45 np0005601226 lvm[90575]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 11:50:45 np0005601226 lvm[90575]: VG ceph_vg0 finished
Jan 29 11:50:45 np0005601226 lvm[90581]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 11:50:45 np0005601226 lvm[90581]: VG ceph_vg1 finished
Jan 29 11:50:45 np0005601226 lvm[90584]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 11:50:45 np0005601226 lvm[90584]: VG ceph_vg2 finished
Jan 29 11:50:45 np0005601226 charming_borg[90443]: {}
Jan 29 11:50:45 np0005601226 systemd[1]: libpod-e5f3783eb71478c3746e5ae2c04254dd59c91994a8d7c2d7d104a910fa3df36a.scope: Deactivated successfully.
Jan 29 11:50:45 np0005601226 podman[90417]: 2026-01-29 16:50:45.442959961 +0000 UTC m=+0.944850313 container died e5f3783eb71478c3746e5ae2c04254dd59c91994a8d7c2d7d104a910fa3df36a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 29 11:50:45 np0005601226 systemd[1]: libpod-e5f3783eb71478c3746e5ae2c04254dd59c91994a8d7c2d7d104a910fa3df36a.scope: Consumed 1.043s CPU time.
Jan 29 11:50:45 np0005601226 python3[90583]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid cc5c72e3-31e0-58b9-8731-456117d38f4a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:50:45 np0005601226 systemd[1]: var-lib-containers-storage-overlay-719ae5f7beb420e214081ca5d46eb85f792a915bf7aa0e9445b3aab3cf16d6e6-merged.mount: Deactivated successfully.
Jan 29 11:50:45 np0005601226 podman[90417]: 2026-01-29 16:50:45.487382295 +0000 UTC m=+0.989272627 container remove e5f3783eb71478c3746e5ae2c04254dd59c91994a8d7c2d7d104a910fa3df36a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_borg, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 29 11:50:45 np0005601226 systemd[1]: libpod-conmon-e5f3783eb71478c3746e5ae2c04254dd59c91994a8d7c2d7d104a910fa3df36a.scope: Deactivated successfully.
Jan 29 11:50:45 np0005601226 podman[90594]: 2026-01-29 16:50:45.520656462 +0000 UTC m=+0.044033852 container create 50ee7e25a44414085d2db769b889308e487aabe53ce676e15e5e5eafce51e37d (image=quay.io/ceph/ceph:v20, name=zen_galileo, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 29 11:50:45 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 11:50:45 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:45 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 11:50:45 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:45 np0005601226 systemd[1]: Started libpod-conmon-50ee7e25a44414085d2db769b889308e487aabe53ce676e15e5e5eafce51e37d.scope.
Jan 29 11:50:45 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:50:45 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb55505682381278e51686e7676d703711a9a0fcd252c8752332243a187be967/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:45 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb55505682381278e51686e7676d703711a9a0fcd252c8752332243a187be967/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:45 np0005601226 podman[90594]: 2026-01-29 16:50:45.595393221 +0000 UTC m=+0.118770681 container init 50ee7e25a44414085d2db769b889308e487aabe53ce676e15e5e5eafce51e37d (image=quay.io/ceph/ceph:v20, name=zen_galileo, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 29 11:50:45 np0005601226 podman[90594]: 2026-01-29 16:50:45.501451071 +0000 UTC m=+0.024828471 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:50:45 np0005601226 podman[90594]: 2026-01-29 16:50:45.603462919 +0000 UTC m=+0.126840299 container start 50ee7e25a44414085d2db769b889308e487aabe53ce676e15e5e5eafce51e37d (image=quay.io/ceph/ceph:v20, name=zen_galileo, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 29 11:50:45 np0005601226 podman[90594]: 2026-01-29 16:50:45.607581455 +0000 UTC m=+0.130958865 container attach 50ee7e25a44414085d2db769b889308e487aabe53ce676e15e5e5eafce51e37d (image=quay.io/ceph/ceph:v20, name=zen_galileo, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 29 11:50:46 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 29 11:50:46 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/592480364' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 29 11:50:46 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Jan 29 11:50:46 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/592480364' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 29 11:50:46 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e20 e20: 3 total, 3 up, 3 in
Jan 29 11:50:46 np0005601226 zen_galileo[90618]: pool 'backups' created
Jan 29 11:50:46 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 3 up, 3 in
Jan 29 11:50:46 np0005601226 ceph-mon[75233]: from='client.? 192.168.122.100:0/1271518728' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 29 11:50:46 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:46 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:46 np0005601226 ceph-mon[75233]: from='client.? 192.168.122.100:0/592480364' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 29 11:50:46 np0005601226 ceph-mon[75233]: from='client.? 192.168.122.100:0/592480364' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 29 11:50:46 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 20 pg[3.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [1] r=0 lpr=19 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:50:46 np0005601226 systemd[1]: libpod-50ee7e25a44414085d2db769b889308e487aabe53ce676e15e5e5eafce51e37d.scope: Deactivated successfully.
Jan 29 11:50:46 np0005601226 podman[90594]: 2026-01-29 16:50:46.134302023 +0000 UTC m=+0.657679403 container died 50ee7e25a44414085d2db769b889308e487aabe53ce676e15e5e5eafce51e37d (image=quay.io/ceph/ceph:v20, name=zen_galileo, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 11:50:46 np0005601226 podman[90761]: 2026-01-29 16:50:46.154890523 +0000 UTC m=+0.078330041 container exec 79fb58d438a044b744a5a22031968fb38c458a504a88d1d9ce361ec7f4a99a0b (image=quay.io/ceph/ceph:v20, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-mon-compute-0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 29 11:50:46 np0005601226 systemd[1]: var-lib-containers-storage-overlay-fb55505682381278e51686e7676d703711a9a0fcd252c8752332243a187be967-merged.mount: Deactivated successfully.
Jan 29 11:50:46 np0005601226 podman[90594]: 2026-01-29 16:50:46.181590437 +0000 UTC m=+0.704967817 container remove 50ee7e25a44414085d2db769b889308e487aabe53ce676e15e5e5eafce51e37d (image=quay.io/ceph/ceph:v20, name=zen_galileo, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:50:46 np0005601226 systemd[1]: libpod-conmon-50ee7e25a44414085d2db769b889308e487aabe53ce676e15e5e5eafce51e37d.scope: Deactivated successfully.
Jan 29 11:50:46 np0005601226 podman[90761]: 2026-01-29 16:50:46.234665073 +0000 UTC m=+0.158104541 container exec_died 79fb58d438a044b744a5a22031968fb38c458a504a88d1d9ce361ec7f4a99a0b (image=quay.io/ceph/ceph:v20, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-mon-compute-0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 11:50:46 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v45: 4 pgs: 3 unknown, 1 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Jan 29 11:50:46 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 20 pg[4.0( empty local-lis/les=0/0 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [0] r=0 lpr=20 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:50:46 np0005601226 python3[90847]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid cc5c72e3-31e0-58b9-8731-456117d38f4a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:50:46 np0005601226 podman[90880]: 2026-01-29 16:50:46.498940078 +0000 UTC m=+0.036272064 container create 62126446a582888e7ffaacaa28e8c0c1f968cb048c33397fb366f3b243ea5e52 (image=quay.io/ceph/ceph:v20, name=agitated_jemison, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 11:50:46 np0005601226 systemd[1]: Started libpod-conmon-62126446a582888e7ffaacaa28e8c0c1f968cb048c33397fb366f3b243ea5e52.scope.
Jan 29 11:50:46 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:50:46 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e140543d33a5eef0779468d60d2c5e18f23503bcfedc965aed112975d2356c43/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:46 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e140543d33a5eef0779468d60d2c5e18f23503bcfedc965aed112975d2356c43/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:46 np0005601226 podman[90880]: 2026-01-29 16:50:46.481890037 +0000 UTC m=+0.019222053 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:50:46 np0005601226 podman[90880]: 2026-01-29 16:50:46.583445753 +0000 UTC m=+0.120777729 container init 62126446a582888e7ffaacaa28e8c0c1f968cb048c33397fb366f3b243ea5e52 (image=quay.io/ceph/ceph:v20, name=agitated_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 29 11:50:46 np0005601226 podman[90880]: 2026-01-29 16:50:46.588845814 +0000 UTC m=+0.126177830 container start 62126446a582888e7ffaacaa28e8c0c1f968cb048c33397fb366f3b243ea5e52 (image=quay.io/ceph/ceph:v20, name=agitated_jemison, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 29 11:50:46 np0005601226 podman[90880]: 2026-01-29 16:50:46.593128245 +0000 UTC m=+0.130460341 container attach 62126446a582888e7ffaacaa28e8c0c1f968cb048c33397fb366f3b243ea5e52 (image=quay.io/ceph/ceph:v20, name=agitated_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 29 11:50:46 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 11:50:46 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:46 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 11:50:46 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:46 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 11:50:46 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 11:50:46 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 11:50:46 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 11:50:46 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 11:50:46 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:46 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 11:50:46 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 11:50:46 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 11:50:46 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 11:50:46 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 11:50:46 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 11:50:46 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 29 11:50:46 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3452065519' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 29 11:50:47 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Jan 29 11:50:47 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3452065519' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 29 11:50:47 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e21 e21: 3 total, 3 up, 3 in
Jan 29 11:50:47 np0005601226 agitated_jemison[90913]: pool 'images' created
Jan 29 11:50:47 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 3 up, 3 in
Jan 29 11:50:47 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 21 pg[4.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [0] r=0 lpr=20 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:50:47 np0005601226 systemd[1]: libpod-62126446a582888e7ffaacaa28e8c0c1f968cb048c33397fb366f3b243ea5e52.scope: Deactivated successfully.
Jan 29 11:50:47 np0005601226 podman[90880]: 2026-01-29 16:50:47.168000901 +0000 UTC m=+0.705332907 container died 62126446a582888e7ffaacaa28e8c0c1f968cb048c33397fb366f3b243ea5e52 (image=quay.io/ceph/ceph:v20, name=agitated_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 11:50:47 np0005601226 systemd[1]: var-lib-containers-storage-overlay-e140543d33a5eef0779468d60d2c5e18f23503bcfedc965aed112975d2356c43-merged.mount: Deactivated successfully.
Jan 29 11:50:47 np0005601226 podman[90880]: 2026-01-29 16:50:47.244712525 +0000 UTC m=+0.782044521 container remove 62126446a582888e7ffaacaa28e8c0c1f968cb048c33397fb366f3b243ea5e52 (image=quay.io/ceph/ceph:v20, name=agitated_jemison, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 29 11:50:47 np0005601226 systemd[1]: libpod-conmon-62126446a582888e7ffaacaa28e8c0c1f968cb048c33397fb366f3b243ea5e52.scope: Deactivated successfully.
Jan 29 11:50:47 np0005601226 podman[91064]: 2026-01-29 16:50:47.341016082 +0000 UTC m=+0.055333422 container create 2ab39218565af60abfbe197cc2babc78e2e8b1a0ff08f400198f28fffd139af3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_rhodes, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 29 11:50:47 np0005601226 systemd[1]: Started libpod-conmon-2ab39218565af60abfbe197cc2babc78e2e8b1a0ff08f400198f28fffd139af3.scope.
Jan 29 11:50:47 np0005601226 podman[91064]: 2026-01-29 16:50:47.315606534 +0000 UTC m=+0.029923874 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:50:47 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:50:47 np0005601226 podman[91064]: 2026-01-29 16:50:47.441566108 +0000 UTC m=+0.155883488 container init 2ab39218565af60abfbe197cc2babc78e2e8b1a0ff08f400198f28fffd139af3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_rhodes, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 11:50:47 np0005601226 podman[91064]: 2026-01-29 16:50:47.449839351 +0000 UTC m=+0.164156651 container start 2ab39218565af60abfbe197cc2babc78e2e8b1a0ff08f400198f28fffd139af3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_rhodes, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 29 11:50:47 np0005601226 funny_rhodes[91105]: 167 167
Jan 29 11:50:47 np0005601226 systemd[1]: libpod-2ab39218565af60abfbe197cc2babc78e2e8b1a0ff08f400198f28fffd139af3.scope: Deactivated successfully.
Jan 29 11:50:47 np0005601226 podman[91064]: 2026-01-29 16:50:47.456393506 +0000 UTC m=+0.170710846 container attach 2ab39218565af60abfbe197cc2babc78e2e8b1a0ff08f400198f28fffd139af3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_rhodes, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 11:50:47 np0005601226 podman[91064]: 2026-01-29 16:50:47.45690398 +0000 UTC m=+0.171221320 container died 2ab39218565af60abfbe197cc2babc78e2e8b1a0ff08f400198f28fffd139af3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_rhodes, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True)
Jan 29 11:50:47 np0005601226 systemd[1]: var-lib-containers-storage-overlay-b5709290ca6bb850f25eb43dc0d540de9a5c203016187a51768742b69559bcf7-merged.mount: Deactivated successfully.
Jan 29 11:50:47 np0005601226 podman[91064]: 2026-01-29 16:50:47.502684512 +0000 UTC m=+0.217001822 container remove 2ab39218565af60abfbe197cc2babc78e2e8b1a0ff08f400198f28fffd139af3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_rhodes, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 29 11:50:47 np0005601226 systemd[1]: libpod-conmon-2ab39218565af60abfbe197cc2babc78e2e8b1a0ff08f400198f28fffd139af3.scope: Deactivated successfully.
Jan 29 11:50:47 np0005601226 python3[91107]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid cc5c72e3-31e0-58b9-8731-456117d38f4a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:50:47 np0005601226 podman[91127]: 2026-01-29 16:50:47.638320048 +0000 UTC m=+0.045354530 container create 97b500d250e318cd27effaf8fc645bf6d4c429c3ac873fc825d927def4c6332a (image=quay.io/ceph/ceph:v20, name=gracious_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:50:47 np0005601226 systemd[1]: Started libpod-conmon-97b500d250e318cd27effaf8fc645bf6d4c429c3ac873fc825d927def4c6332a.scope.
Jan 29 11:50:47 np0005601226 podman[91138]: 2026-01-29 16:50:47.684335416 +0000 UTC m=+0.067687381 container create b70363895cb3d93dceb369d25d4bb6a077b51afff15dc393e3f29f97fb0d124c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_euclid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:50:47 np0005601226 podman[91127]: 2026-01-29 16:50:47.617513091 +0000 UTC m=+0.024547603 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:50:47 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:50:47 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/905cd30e9320686227a96e842472481936d7bbf403e3024024bc76702c0e0aba/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:47 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/905cd30e9320686227a96e842472481936d7bbf403e3024024bc76702c0e0aba/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:47 np0005601226 systemd[1]: Started libpod-conmon-b70363895cb3d93dceb369d25d4bb6a077b51afff15dc393e3f29f97fb0d124c.scope.
Jan 29 11:50:47 np0005601226 podman[91127]: 2026-01-29 16:50:47.739462151 +0000 UTC m=+0.146496673 container init 97b500d250e318cd27effaf8fc645bf6d4c429c3ac873fc825d927def4c6332a (image=quay.io/ceph/ceph:v20, name=gracious_tharp, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 11:50:47 np0005601226 podman[91127]: 2026-01-29 16:50:47.748041573 +0000 UTC m=+0.155076045 container start 97b500d250e318cd27effaf8fc645bf6d4c429c3ac873fc825d927def4c6332a (image=quay.io/ceph/ceph:v20, name=gracious_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 29 11:50:47 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:50:47 np0005601226 podman[91127]: 2026-01-29 16:50:47.753931979 +0000 UTC m=+0.160966461 container attach 97b500d250e318cd27effaf8fc645bf6d4c429c3ac873fc825d927def4c6332a (image=quay.io/ceph/ceph:v20, name=gracious_tharp, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 11:50:47 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7e12026d608d3faf8a5400becbfee33d04879e56b4b67fcb04747c762df41c5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:47 np0005601226 podman[91138]: 2026-01-29 16:50:47.660966747 +0000 UTC m=+0.044318762 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:50:47 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7e12026d608d3faf8a5400becbfee33d04879e56b4b67fcb04747c762df41c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:47 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7e12026d608d3faf8a5400becbfee33d04879e56b4b67fcb04747c762df41c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:47 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7e12026d608d3faf8a5400becbfee33d04879e56b4b67fcb04747c762df41c5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:47 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7e12026d608d3faf8a5400becbfee33d04879e56b4b67fcb04747c762df41c5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:47 np0005601226 podman[91138]: 2026-01-29 16:50:47.777669358 +0000 UTC m=+0.161021373 container init b70363895cb3d93dceb369d25d4bb6a077b51afff15dc393e3f29f97fb0d124c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 11:50:47 np0005601226 podman[91138]: 2026-01-29 16:50:47.791238811 +0000 UTC m=+0.174590786 container start b70363895cb3d93dceb369d25d4bb6a077b51afff15dc393e3f29f97fb0d124c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_euclid, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 29 11:50:47 np0005601226 podman[91138]: 2026-01-29 16:50:47.795367698 +0000 UTC m=+0.178719663 container attach b70363895cb3d93dceb369d25d4bb6a077b51afff15dc393e3f29f97fb0d124c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_euclid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 11:50:47 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:47 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:47 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 11:50:47 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:47 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 11:50:47 np0005601226 ceph-mon[75233]: from='client.? 192.168.122.100:0/3452065519' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 29 11:50:47 np0005601226 ceph-mon[75233]: from='client.? 192.168.122.100:0/3452065519' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 29 11:50:48 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 21 pg[5.0( empty local-lis/les=0/0 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [2] r=0 lpr=21 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:50:48 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 29 11:50:48 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/250638509' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 29 11:50:48 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Jan 29 11:50:48 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/250638509' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 29 11:50:48 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e22 e22: 3 total, 3 up, 3 in
Jan 29 11:50:48 np0005601226 gracious_tharp[91161]: pool 'cephfs.cephfs.meta' created
Jan 29 11:50:48 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 3 up, 3 in
Jan 29 11:50:48 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 22 pg[5.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [2] r=0 lpr=21 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:50:48 np0005601226 systemd[1]: libpod-97b500d250e318cd27effaf8fc645bf6d4c429c3ac873fc825d927def4c6332a.scope: Deactivated successfully.
Jan 29 11:50:48 np0005601226 focused_euclid[91166]: --> passed data devices: 0 physical, 3 LVM
Jan 29 11:50:48 np0005601226 focused_euclid[91166]: --> All data devices are unavailable
Jan 29 11:50:48 np0005601226 podman[91210]: 2026-01-29 16:50:48.253146901 +0000 UTC m=+0.030886472 container died 97b500d250e318cd27effaf8fc645bf6d4c429c3ac873fc825d927def4c6332a (image=quay.io/ceph/ceph:v20, name=gracious_tharp, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 29 11:50:48 np0005601226 systemd[1]: var-lib-containers-storage-overlay-905cd30e9320686227a96e842472481936d7bbf403e3024024bc76702c0e0aba-merged.mount: Deactivated successfully.
Jan 29 11:50:48 np0005601226 systemd[1]: libpod-b70363895cb3d93dceb369d25d4bb6a077b51afff15dc393e3f29f97fb0d124c.scope: Deactivated successfully.
Jan 29 11:50:48 np0005601226 podman[91138]: 2026-01-29 16:50:48.293431638 +0000 UTC m=+0.676783613 container died b70363895cb3d93dceb369d25d4bb6a077b51afff15dc393e3f29f97fb0d124c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_euclid, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 11:50:48 np0005601226 podman[91210]: 2026-01-29 16:50:48.307212946 +0000 UTC m=+0.084952487 container remove 97b500d250e318cd27effaf8fc645bf6d4c429c3ac873fc825d927def4c6332a (image=quay.io/ceph/ceph:v20, name=gracious_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:50:48 np0005601226 systemd[1]: libpod-conmon-97b500d250e318cd27effaf8fc645bf6d4c429c3ac873fc825d927def4c6332a.scope: Deactivated successfully.
Jan 29 11:50:48 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v48: 6 pgs: 4 unknown, 2 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Jan 29 11:50:48 np0005601226 systemd[1]: var-lib-containers-storage-overlay-e7e12026d608d3faf8a5400becbfee33d04879e56b4b67fcb04747c762df41c5-merged.mount: Deactivated successfully.
Jan 29 11:50:48 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 22 pg[6.0( empty local-lis/les=0/0 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [0] r=0 lpr=22 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:50:48 np0005601226 podman[91138]: 2026-01-29 16:50:48.366079867 +0000 UTC m=+0.749431842 container remove b70363895cb3d93dceb369d25d4bb6a077b51afff15dc393e3f29f97fb0d124c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True)
Jan 29 11:50:48 np0005601226 systemd[1]: libpod-conmon-b70363895cb3d93dceb369d25d4bb6a077b51afff15dc393e3f29f97fb0d124c.scope: Deactivated successfully.
Jan 29 11:50:48 np0005601226 python3[91284]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid cc5c72e3-31e0-58b9-8731-456117d38f4a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:50:48 np0005601226 podman[91311]: 2026-01-29 16:50:48.644331935 +0000 UTC m=+0.041912823 container create be802a4f157dc8b83999842962f91032bcad0d91937593a65a951b14bfda58a8 (image=quay.io/ceph/ceph:v20, name=frosty_villani, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 29 11:50:48 np0005601226 systemd[1]: Started libpod-conmon-be802a4f157dc8b83999842962f91032bcad0d91937593a65a951b14bfda58a8.scope.
Jan 29 11:50:48 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:50:48 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b99ff101549823c3a72b1229f5309ee0aceb82e7f0659f999eeed26e5be35712/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:48 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b99ff101549823c3a72b1229f5309ee0aceb82e7f0659f999eeed26e5be35712/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:48 np0005601226 podman[91311]: 2026-01-29 16:50:48.625943157 +0000 UTC m=+0.023524095 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:50:48 np0005601226 podman[91311]: 2026-01-29 16:50:48.728735367 +0000 UTC m=+0.126316275 container init be802a4f157dc8b83999842962f91032bcad0d91937593a65a951b14bfda58a8 (image=quay.io/ceph/ceph:v20, name=frosty_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 29 11:50:48 np0005601226 podman[91311]: 2026-01-29 16:50:48.733075969 +0000 UTC m=+0.130656857 container start be802a4f157dc8b83999842962f91032bcad0d91937593a65a951b14bfda58a8 (image=quay.io/ceph/ceph:v20, name=frosty_villani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 29 11:50:48 np0005601226 podman[91311]: 2026-01-29 16:50:48.739655215 +0000 UTC m=+0.137236123 container attach be802a4f157dc8b83999842962f91032bcad0d91937593a65a951b14bfda58a8 (image=quay.io/ceph/ceph:v20, name=frosty_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 29 11:50:48 np0005601226 podman[91342]: 2026-01-29 16:50:48.766122251 +0000 UTC m=+0.045497194 container create ca0594bb3c0d415ec70fc62258c7fecfd072c92bb68e9381e3ad2a9f60970dc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_newton, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 11:50:48 np0005601226 systemd[1]: Started libpod-conmon-ca0594bb3c0d415ec70fc62258c7fecfd072c92bb68e9381e3ad2a9f60970dc8.scope.
Jan 29 11:50:48 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:50:48 np0005601226 podman[91342]: 2026-01-29 16:50:48.748319359 +0000 UTC m=+0.027694392 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:50:48 np0005601226 podman[91342]: 2026-01-29 16:50:48.915932536 +0000 UTC m=+0.195307499 container init ca0594bb3c0d415ec70fc62258c7fecfd072c92bb68e9381e3ad2a9f60970dc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_newton, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 29 11:50:48 np0005601226 podman[91342]: 2026-01-29 16:50:48.922502232 +0000 UTC m=+0.201877205 container start ca0594bb3c0d415ec70fc62258c7fecfd072c92bb68e9381e3ad2a9f60970dc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_newton, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 29 11:50:48 np0005601226 quirky_newton[91361]: 167 167
Jan 29 11:50:48 np0005601226 systemd[1]: libpod-ca0594bb3c0d415ec70fc62258c7fecfd072c92bb68e9381e3ad2a9f60970dc8.scope: Deactivated successfully.
Jan 29 11:50:48 np0005601226 conmon[91361]: conmon ca0594bb3c0d415ec70f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ca0594bb3c0d415ec70fc62258c7fecfd072c92bb68e9381e3ad2a9f60970dc8.scope/container/memory.events
Jan 29 11:50:48 np0005601226 ceph-mon[75233]: from='client.? 192.168.122.100:0/250638509' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 29 11:50:48 np0005601226 ceph-mon[75233]: from='client.? 192.168.122.100:0/250638509' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 29 11:50:48 np0005601226 podman[91342]: 2026-01-29 16:50:48.978914853 +0000 UTC m=+0.258289876 container attach ca0594bb3c0d415ec70fc62258c7fecfd072c92bb68e9381e3ad2a9f60970dc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_newton, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 29 11:50:48 np0005601226 podman[91342]: 2026-01-29 16:50:48.997650242 +0000 UTC m=+0.277025225 container died ca0594bb3c0d415ec70fc62258c7fecfd072c92bb68e9381e3ad2a9f60970dc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_newton, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Jan 29 11:50:49 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Jan 29 11:50:49 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 29 11:50:49 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4200190651' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 29 11:50:49 np0005601226 systemd[1]: var-lib-containers-storage-overlay-09837223a0df630e9544bae709aba2e292991b023f3ab0400b204fa3de4a75d8-merged.mount: Deactivated successfully.
Jan 29 11:50:49 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e23 e23: 3 total, 3 up, 3 in
Jan 29 11:50:49 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 3 up, 3 in
Jan 29 11:50:49 np0005601226 podman[91385]: 2026-01-29 16:50:49.663501354 +0000 UTC m=+0.712588942 container remove ca0594bb3c0d415ec70fc62258c7fecfd072c92bb68e9381e3ad2a9f60970dc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_newton, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 29 11:50:49 np0005601226 systemd[1]: libpod-conmon-ca0594bb3c0d415ec70fc62258c7fecfd072c92bb68e9381e3ad2a9f60970dc8.scope: Deactivated successfully.
Jan 29 11:50:49 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 23 pg[6.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [0] r=0 lpr=22 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:50:49 np0005601226 podman[91409]: 2026-01-29 16:50:49.782503601 +0000 UTC m=+0.042729406 container create 27560f9ff6af46a2b94340af9e1c2ceec6522ffe962aa9279385d61539b991c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True)
Jan 29 11:50:49 np0005601226 systemd[1]: Started libpod-conmon-27560f9ff6af46a2b94340af9e1c2ceec6522ffe962aa9279385d61539b991c8.scope.
Jan 29 11:50:49 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:50:49 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/520f299215216f5b96c345bd9802f0dc959b4ceab806f51317c126261089dfce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:49 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/520f299215216f5b96c345bd9802f0dc959b4ceab806f51317c126261089dfce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:49 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/520f299215216f5b96c345bd9802f0dc959b4ceab806f51317c126261089dfce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:49 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/520f299215216f5b96c345bd9802f0dc959b4ceab806f51317c126261089dfce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:49 np0005601226 podman[91409]: 2026-01-29 16:50:49.760413098 +0000 UTC m=+0.020638883 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:50:49 np0005601226 podman[91409]: 2026-01-29 16:50:49.881286478 +0000 UTC m=+0.141512263 container init 27560f9ff6af46a2b94340af9e1c2ceec6522ffe962aa9279385d61539b991c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_goodall, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 11:50:49 np0005601226 podman[91409]: 2026-01-29 16:50:49.892169705 +0000 UTC m=+0.152395470 container start 27560f9ff6af46a2b94340af9e1c2ceec6522ffe962aa9279385d61539b991c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:50:49 np0005601226 podman[91409]: 2026-01-29 16:50:49.898489433 +0000 UTC m=+0.158715198 container attach 27560f9ff6af46a2b94340af9e1c2ceec6522ffe962aa9279385d61539b991c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_goodall, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 29 11:50:49 np0005601226 ceph-mon[75233]: from='client.? 192.168.122.100:0/4200190651' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 29 11:50:49 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e23 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:50:50 np0005601226 confident_goodall[91426]: {
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:    "0": [
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:        {
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:            "devices": [
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:                "/dev/loop3"
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:            ],
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:            "lv_name": "ceph_lv0",
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:            "lv_size": "21470642176",
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:            "lv_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:            "name": "ceph_lv0",
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:            "tags": {
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:                "ceph.block_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:                "ceph.cephx_lockbox_secret": "",
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:                "ceph.cluster_name": "ceph",
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:                "ceph.crush_device_class": "",
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:                "ceph.encrypted": "0",
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:                "ceph.objectstore": "bluestore",
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:                "ceph.osd_fsid": "0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1",
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:                "ceph.osd_id": "0",
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:                "ceph.type": "block",
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:                "ceph.vdo": "0",
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:                "ceph.with_tpm": "0"
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:            },
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:            "type": "block",
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:            "vg_name": "ceph_vg0"
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:        }
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:    ],
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:    "1": [
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:        {
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:            "devices": [
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:                "/dev/loop4"
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:            ],
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:            "lv_name": "ceph_lv1",
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:            "lv_size": "21470642176",
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b59b9ee3-7bef-4274-a8bf-0f9cce011ae7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:            "lv_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:            "name": "ceph_lv1",
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:            "tags": {
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:                "ceph.block_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:                "ceph.cephx_lockbox_secret": "",
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:                "ceph.cluster_name": "ceph",
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:                "ceph.crush_device_class": "",
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:                "ceph.encrypted": "0",
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:                "ceph.objectstore": "bluestore",
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:                "ceph.osd_fsid": "b59b9ee3-7bef-4274-a8bf-0f9cce011ae7",
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:                "ceph.osd_id": "1",
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:                "ceph.type": "block",
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:                "ceph.vdo": "0",
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:                "ceph.with_tpm": "0"
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:            },
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:            "type": "block",
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:            "vg_name": "ceph_vg1"
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:        }
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:    ],
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:    "2": [
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:        {
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:            "devices": [
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:                "/dev/loop5"
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:            ],
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:            "lv_name": "ceph_lv2",
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:            "lv_size": "21470642176",
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=791d3808-828d-4f85-a3de-28df49f6a6ef,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:            "lv_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:            "name": "ceph_lv2",
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:            "tags": {
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:                "ceph.block_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:                "ceph.cephx_lockbox_secret": "",
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:                "ceph.cluster_name": "ceph",
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:                "ceph.crush_device_class": "",
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:                "ceph.encrypted": "0",
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:                "ceph.objectstore": "bluestore",
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:                "ceph.osd_fsid": "791d3808-828d-4f85-a3de-28df49f6a6ef",
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:                "ceph.osd_id": "2",
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:                "ceph.type": "block",
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:                "ceph.vdo": "0",
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:                "ceph.with_tpm": "0"
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:            },
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:            "type": "block",
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:            "vg_name": "ceph_vg2"
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:        }
Jan 29 11:50:50 np0005601226 confident_goodall[91426]:    ]
Jan 29 11:50:50 np0005601226 confident_goodall[91426]: }
Jan 29 11:50:50 np0005601226 systemd[1]: libpod-27560f9ff6af46a2b94340af9e1c2ceec6522ffe962aa9279385d61539b991c8.scope: Deactivated successfully.
Jan 29 11:50:50 np0005601226 podman[91409]: 2026-01-29 16:50:50.19973848 +0000 UTC m=+0.459964275 container died 27560f9ff6af46a2b94340af9e1c2ceec6522ffe962aa9279385d61539b991c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Jan 29 11:50:50 np0005601226 systemd[1]: var-lib-containers-storage-overlay-520f299215216f5b96c345bd9802f0dc959b4ceab806f51317c126261089dfce-merged.mount: Deactivated successfully.
Jan 29 11:50:50 np0005601226 podman[91409]: 2026-01-29 16:50:50.247566629 +0000 UTC m=+0.507792404 container remove 27560f9ff6af46a2b94340af9e1c2ceec6522ffe962aa9279385d61539b991c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_goodall, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 11:50:50 np0005601226 systemd[1]: libpod-conmon-27560f9ff6af46a2b94340af9e1c2ceec6522ffe962aa9279385d61539b991c8.scope: Deactivated successfully.
Jan 29 11:50:50 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v50: 6 pgs: 1 creating+peering, 1 unknown, 4 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:50:50 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Jan 29 11:50:50 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4200190651' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 29 11:50:50 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e24 e24: 3 total, 3 up, 3 in
Jan 29 11:50:50 np0005601226 frosty_villani[91328]: pool 'cephfs.cephfs.data' created
Jan 29 11:50:50 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 3 up, 3 in
Jan 29 11:50:50 np0005601226 systemd[1]: libpod-be802a4f157dc8b83999842962f91032bcad0d91937593a65a951b14bfda58a8.scope: Deactivated successfully.
Jan 29 11:50:50 np0005601226 conmon[91328]: conmon be802a4f157dc8b83999 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-be802a4f157dc8b83999842962f91032bcad0d91937593a65a951b14bfda58a8.scope/container/memory.events
Jan 29 11:50:50 np0005601226 podman[91311]: 2026-01-29 16:50:50.548411875 +0000 UTC m=+1.945992763 container died be802a4f157dc8b83999842962f91032bcad0d91937593a65a951b14bfda58a8 (image=quay.io/ceph/ceph:v20, name=frosty_villani, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 11:50:50 np0005601226 systemd[1]: var-lib-containers-storage-overlay-b99ff101549823c3a72b1229f5309ee0aceb82e7f0659f999eeed26e5be35712-merged.mount: Deactivated successfully.
Jan 29 11:50:50 np0005601226 podman[91311]: 2026-01-29 16:50:50.59040619 +0000 UTC m=+1.987987088 container remove be802a4f157dc8b83999842962f91032bcad0d91937593a65a951b14bfda58a8 (image=quay.io/ceph/ceph:v20, name=frosty_villani, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 11:50:50 np0005601226 systemd[1]: libpod-conmon-be802a4f157dc8b83999842962f91032bcad0d91937593a65a951b14bfda58a8.scope: Deactivated successfully.
Jan 29 11:50:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 24 pg[7.0( empty local-lis/les=0/0 n=0 ec=24/24 lis/c=0/0 les/c/f=0/0/0 sis=24) [1] r=0 lpr=24 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:50:50 np0005601226 podman[91521]: 2026-01-29 16:50:50.743808318 +0000 UTC m=+0.061575888 container create 6a804b07889de6e368f01c9c5b5dcffdbf5469945a19a63d8e16260683bbb907 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_jepsen, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 11:50:50 np0005601226 podman[91521]: 2026-01-29 16:50:50.702874473 +0000 UTC m=+0.020642133 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:50:50 np0005601226 systemd[1]: Started libpod-conmon-6a804b07889de6e368f01c9c5b5dcffdbf5469945a19a63d8e16260683bbb907.scope.
Jan 29 11:50:50 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:50:50 np0005601226 podman[91521]: 2026-01-29 16:50:50.860994733 +0000 UTC m=+0.178762353 container init 6a804b07889de6e368f01c9c5b5dcffdbf5469945a19a63d8e16260683bbb907 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_jepsen, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 29 11:50:50 np0005601226 podman[91521]: 2026-01-29 16:50:50.86796708 +0000 UTC m=+0.185734650 container start 6a804b07889de6e368f01c9c5b5dcffdbf5469945a19a63d8e16260683bbb907 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_jepsen, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 29 11:50:50 np0005601226 gifted_jepsen[91562]: 167 167
Jan 29 11:50:50 np0005601226 systemd[1]: libpod-6a804b07889de6e368f01c9c5b5dcffdbf5469945a19a63d8e16260683bbb907.scope: Deactivated successfully.
Jan 29 11:50:50 np0005601226 podman[91521]: 2026-01-29 16:50:50.878498737 +0000 UTC m=+0.196266337 container attach 6a804b07889de6e368f01c9c5b5dcffdbf5469945a19a63d8e16260683bbb907 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_jepsen, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 11:50:50 np0005601226 podman[91521]: 2026-01-29 16:50:50.879349621 +0000 UTC m=+0.197117201 container died 6a804b07889de6e368f01c9c5b5dcffdbf5469945a19a63d8e16260683bbb907 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 11:50:50 np0005601226 python3[91564]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid cc5c72e3-31e0-58b9-8731-456117d38f4a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:50:50 np0005601226 systemd[1]: var-lib-containers-storage-overlay-52304428148bbeaa2315b86fbbb4294a094c2463c9219138200331b3435646be-merged.mount: Deactivated successfully.
Jan 29 11:50:51 np0005601226 podman[91521]: 2026-01-29 16:50:51.039681953 +0000 UTC m=+0.357449523 container remove 6a804b07889de6e368f01c9c5b5dcffdbf5469945a19a63d8e16260683bbb907 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_jepsen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 29 11:50:51 np0005601226 systemd[1]: libpod-conmon-6a804b07889de6e368f01c9c5b5dcffdbf5469945a19a63d8e16260683bbb907.scope: Deactivated successfully.
Jan 29 11:50:51 np0005601226 podman[91580]: 2026-01-29 16:50:51.101269991 +0000 UTC m=+0.141379580 container create cfeb893708d88b63054ba575cbc97eaa4d011ff61abe1d2c7ecb81fbb18f1e65 (image=quay.io/ceph/ceph:v20, name=frosty_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 11:50:51 np0005601226 systemd[1]: Started libpod-conmon-cfeb893708d88b63054ba575cbc97eaa4d011ff61abe1d2c7ecb81fbb18f1e65.scope.
Jan 29 11:50:51 np0005601226 podman[91580]: 2026-01-29 16:50:51.064817792 +0000 UTC m=+0.104927401 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:50:51 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:50:51 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04a65c73e4133f78fed60c5f9c7a4c5f9ff7255cacb95c21ae4b33e415458e53/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:51 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04a65c73e4133f78fed60c5f9c7a4c5f9ff7255cacb95c21ae4b33e415458e53/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:51 np0005601226 podman[91603]: 2026-01-29 16:50:51.194801729 +0000 UTC m=+0.065158719 container create 2b9f09d3c3f985cc23a8e36fd38502a1a2c60f090d93a2e91a7544a3239e6450 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 29 11:50:51 np0005601226 podman[91580]: 2026-01-29 16:50:51.222422448 +0000 UTC m=+0.262532088 container init cfeb893708d88b63054ba575cbc97eaa4d011ff61abe1d2c7ecb81fbb18f1e65 (image=quay.io/ceph/ceph:v20, name=frosty_cori, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 29 11:50:51 np0005601226 podman[91580]: 2026-01-29 16:50:51.231542265 +0000 UTC m=+0.271651824 container start cfeb893708d88b63054ba575cbc97eaa4d011ff61abe1d2c7ecb81fbb18f1e65 (image=quay.io/ceph/ceph:v20, name=frosty_cori, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 29 11:50:51 np0005601226 podman[91580]: 2026-01-29 16:50:51.248354279 +0000 UTC m=+0.288463858 container attach cfeb893708d88b63054ba575cbc97eaa4d011ff61abe1d2c7ecb81fbb18f1e65 (image=quay.io/ceph/ceph:v20, name=frosty_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 29 11:50:51 np0005601226 podman[91603]: 2026-01-29 16:50:51.16045238 +0000 UTC m=+0.030809450 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:50:51 np0005601226 systemd[1]: Started libpod-conmon-2b9f09d3c3f985cc23a8e36fd38502a1a2c60f090d93a2e91a7544a3239e6450.scope.
Jan 29 11:50:51 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:50:51 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9d85d266904ba44d10610944c496d5055687e8474ef20f2336f156b5bd09a78/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:51 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9d85d266904ba44d10610944c496d5055687e8474ef20f2336f156b5bd09a78/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:51 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9d85d266904ba44d10610944c496d5055687e8474ef20f2336f156b5bd09a78/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:51 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9d85d266904ba44d10610944c496d5055687e8474ef20f2336f156b5bd09a78/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:51 np0005601226 podman[91603]: 2026-01-29 16:50:51.380417335 +0000 UTC m=+0.250774325 container init 2b9f09d3c3f985cc23a8e36fd38502a1a2c60f090d93a2e91a7544a3239e6450 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 29 11:50:51 np0005601226 podman[91603]: 2026-01-29 16:50:51.384976274 +0000 UTC m=+0.255333254 container start 2b9f09d3c3f985cc23a8e36fd38502a1a2c60f090d93a2e91a7544a3239e6450 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 11:50:51 np0005601226 podman[91603]: 2026-01-29 16:50:51.43164718 +0000 UTC m=+0.302004170 container attach 2b9f09d3c3f985cc23a8e36fd38502a1a2c60f090d93a2e91a7544a3239e6450 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_nobel, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 11:50:51 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Jan 29 11:50:51 np0005601226 ceph-mon[75233]: from='client.? 192.168.122.100:0/4200190651' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 29 11:50:51 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e25 e25: 3 total, 3 up, 3 in
Jan 29 11:50:51 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 3 up, 3 in
Jan 29 11:50:51 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 25 pg[7.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=0/0 les/c/f=0/0/0 sis=24) [1] r=0 lpr=24 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:50:51 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0)
Jan 29 11:50:51 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2255880742' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} : dispatch
Jan 29 11:50:52 np0005601226 lvm[91724]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 11:50:52 np0005601226 lvm[91727]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 11:50:52 np0005601226 lvm[91724]: VG ceph_vg0 finished
Jan 29 11:50:52 np0005601226 lvm[91727]: VG ceph_vg1 finished
Jan 29 11:50:52 np0005601226 lvm[91729]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 11:50:52 np0005601226 lvm[91729]: VG ceph_vg2 finished
Jan 29 11:50:52 np0005601226 thirsty_nobel[91626]: {}
Jan 29 11:50:52 np0005601226 systemd[1]: libpod-2b9f09d3c3f985cc23a8e36fd38502a1a2c60f090d93a2e91a7544a3239e6450.scope: Deactivated successfully.
Jan 29 11:50:52 np0005601226 systemd[1]: libpod-2b9f09d3c3f985cc23a8e36fd38502a1a2c60f090d93a2e91a7544a3239e6450.scope: Consumed 1.175s CPU time.
Jan 29 11:50:52 np0005601226 podman[91603]: 2026-01-29 16:50:52.255114168 +0000 UTC m=+1.125471158 container died 2b9f09d3c3f985cc23a8e36fd38502a1a2c60f090d93a2e91a7544a3239e6450 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 29 11:50:52 np0005601226 systemd[1]: var-lib-containers-storage-overlay-a9d85d266904ba44d10610944c496d5055687e8474ef20f2336f156b5bd09a78-merged.mount: Deactivated successfully.
Jan 29 11:50:52 np0005601226 podman[91603]: 2026-01-29 16:50:52.301137527 +0000 UTC m=+1.171494517 container remove 2b9f09d3c3f985cc23a8e36fd38502a1a2c60f090d93a2e91a7544a3239e6450 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_nobel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:50:52 np0005601226 systemd[1]: libpod-conmon-2b9f09d3c3f985cc23a8e36fd38502a1a2c60f090d93a2e91a7544a3239e6450.scope: Deactivated successfully.
Jan 29 11:50:52 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v53: 7 pgs: 1 creating+peering, 2 unknown, 4 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:50:52 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 11:50:52 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:52 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 11:50:52 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:52 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Jan 29 11:50:52 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2255880742' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Jan 29 11:50:52 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e26 e26: 3 total, 3 up, 3 in
Jan 29 11:50:52 np0005601226 ceph-mon[75233]: from='client.? 192.168.122.100:0/2255880742' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} : dispatch
Jan 29 11:50:52 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:52 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:50:52 np0005601226 frosty_cori[91615]: enabled application 'rbd' on pool 'vms'
Jan 29 11:50:52 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 3 up, 3 in
Jan 29 11:50:52 np0005601226 podman[91580]: 2026-01-29 16:50:52.590650453 +0000 UTC m=+1.630760042 container died cfeb893708d88b63054ba575cbc97eaa4d011ff61abe1d2c7ecb81fbb18f1e65 (image=quay.io/ceph/ceph:v20, name=frosty_cori, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:50:52 np0005601226 systemd[1]: libpod-cfeb893708d88b63054ba575cbc97eaa4d011ff61abe1d2c7ecb81fbb18f1e65.scope: Deactivated successfully.
Jan 29 11:50:52 np0005601226 systemd[1]: var-lib-containers-storage-overlay-04a65c73e4133f78fed60c5f9c7a4c5f9ff7255cacb95c21ae4b33e415458e53-merged.mount: Deactivated successfully.
Jan 29 11:50:52 np0005601226 podman[91580]: 2026-01-29 16:50:52.784389488 +0000 UTC m=+1.824499067 container remove cfeb893708d88b63054ba575cbc97eaa4d011ff61abe1d2c7ecb81fbb18f1e65 (image=quay.io/ceph/ceph:v20, name=frosty_cori, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 29 11:50:52 np0005601226 systemd[1]: libpod-conmon-cfeb893708d88b63054ba575cbc97eaa4d011ff61abe1d2c7ecb81fbb18f1e65.scope: Deactivated successfully.
Jan 29 11:50:53 np0005601226 python3[91807]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid cc5c72e3-31e0-58b9-8731-456117d38f4a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:50:53 np0005601226 podman[91808]: 2026-01-29 16:50:53.147281564 +0000 UTC m=+0.055011553 container create 3268a2dfcb2b96837a514b7f0a7b8677cb4d945304c14ebe1790bbe857d95c1f (image=quay.io/ceph/ceph:v20, name=determined_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:50:53 np0005601226 systemd[1]: Started libpod-conmon-3268a2dfcb2b96837a514b7f0a7b8677cb4d945304c14ebe1790bbe857d95c1f.scope.
Jan 29 11:50:53 np0005601226 podman[91808]: 2026-01-29 16:50:53.129503443 +0000 UTC m=+0.037233462 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:50:53 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:50:53 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7e527706b8975c29bbd1bcce2e53f768066d8a5cd75a5bfdda172a0616c4bdb/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:53 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7e527706b8975c29bbd1bcce2e53f768066d8a5cd75a5bfdda172a0616c4bdb/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:53 np0005601226 podman[91808]: 2026-01-29 16:50:53.254888269 +0000 UTC m=+0.162618348 container init 3268a2dfcb2b96837a514b7f0a7b8677cb4d945304c14ebe1790bbe857d95c1f (image=quay.io/ceph/ceph:v20, name=determined_goldwasser, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 29 11:50:53 np0005601226 podman[91808]: 2026-01-29 16:50:53.261823415 +0000 UTC m=+0.169553414 container start 3268a2dfcb2b96837a514b7f0a7b8677cb4d945304c14ebe1790bbe857d95c1f (image=quay.io/ceph/ceph:v20, name=determined_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 29 11:50:53 np0005601226 podman[91808]: 2026-01-29 16:50:53.266351574 +0000 UTC m=+0.174081613 container attach 3268a2dfcb2b96837a514b7f0a7b8677cb4d945304c14ebe1790bbe857d95c1f (image=quay.io/ceph/ceph:v20, name=determined_goldwasser, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:50:53 np0005601226 ceph-mon[75233]: from='client.? 192.168.122.100:0/2255880742' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Jan 29 11:50:53 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0)
Jan 29 11:50:53 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1154272223' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} : dispatch
Jan 29 11:50:54 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v55: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:50:54 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Jan 29 11:50:54 np0005601226 ceph-mon[75233]: from='client.? 192.168.122.100:0/1154272223' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} : dispatch
Jan 29 11:50:54 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1154272223' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Jan 29 11:50:54 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e27 e27: 3 total, 3 up, 3 in
Jan 29 11:50:54 np0005601226 determined_goldwasser[91824]: enabled application 'rbd' on pool 'volumes'
Jan 29 11:50:54 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 3 up, 3 in
Jan 29 11:50:54 np0005601226 systemd[1]: libpod-3268a2dfcb2b96837a514b7f0a7b8677cb4d945304c14ebe1790bbe857d95c1f.scope: Deactivated successfully.
Jan 29 11:50:54 np0005601226 conmon[91824]: conmon 3268a2dfcb2b96837a51 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3268a2dfcb2b96837a514b7f0a7b8677cb4d945304c14ebe1790bbe857d95c1f.scope/container/memory.events
Jan 29 11:50:54 np0005601226 podman[91808]: 2026-01-29 16:50:54.869560727 +0000 UTC m=+1.777290726 container died 3268a2dfcb2b96837a514b7f0a7b8677cb4d945304c14ebe1790bbe857d95c1f (image=quay.io/ceph/ceph:v20, name=determined_goldwasser, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 29 11:50:54 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e27 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:50:55 np0005601226 systemd[1]: var-lib-containers-storage-overlay-c7e527706b8975c29bbd1bcce2e53f768066d8a5cd75a5bfdda172a0616c4bdb-merged.mount: Deactivated successfully.
Jan 29 11:50:55 np0005601226 podman[91808]: 2026-01-29 16:50:55.451403279 +0000 UTC m=+2.359133278 container remove 3268a2dfcb2b96837a514b7f0a7b8677cb4d945304c14ebe1790bbe857d95c1f (image=quay.io/ceph/ceph:v20, name=determined_goldwasser, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 11:50:55 np0005601226 systemd[1]: libpod-conmon-3268a2dfcb2b96837a514b7f0a7b8677cb4d945304c14ebe1790bbe857d95c1f.scope: Deactivated successfully.
Jan 29 11:50:55 np0005601226 python3[91888]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid cc5c72e3-31e0-58b9-8731-456117d38f4a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:50:55 np0005601226 podman[91889]: 2026-01-29 16:50:55.777749495 +0000 UTC m=+0.060000204 container create 82f0a54abf2c223f955ca31a13fbc4e0e0bdd254f3bf8e4901b9b2317dd256af (image=quay.io/ceph/ceph:v20, name=amazing_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 11:50:55 np0005601226 systemd[1]: Started libpod-conmon-82f0a54abf2c223f955ca31a13fbc4e0e0bdd254f3bf8e4901b9b2317dd256af.scope.
Jan 29 11:50:55 np0005601226 podman[91889]: 2026-01-29 16:50:55.751535035 +0000 UTC m=+0.033785784 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:50:55 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:50:55 np0005601226 ceph-mon[75233]: from='client.? 192.168.122.100:0/1154272223' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Jan 29 11:50:55 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d039a9af1bc21a4403ebd244a4df43fc40846cca6af681bc7e3ffd433c9b30b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:55 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d039a9af1bc21a4403ebd244a4df43fc40846cca6af681bc7e3ffd433c9b30b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:55 np0005601226 podman[91889]: 2026-01-29 16:50:55.870439789 +0000 UTC m=+0.152690548 container init 82f0a54abf2c223f955ca31a13fbc4e0e0bdd254f3bf8e4901b9b2317dd256af (image=quay.io/ceph/ceph:v20, name=amazing_heisenberg, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 29 11:50:55 np0005601226 podman[91889]: 2026-01-29 16:50:55.87790601 +0000 UTC m=+0.160156709 container start 82f0a54abf2c223f955ca31a13fbc4e0e0bdd254f3bf8e4901b9b2317dd256af (image=quay.io/ceph/ceph:v20, name=amazing_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 29 11:50:55 np0005601226 podman[91889]: 2026-01-29 16:50:55.883028444 +0000 UTC m=+0.165279203 container attach 82f0a54abf2c223f955ca31a13fbc4e0e0bdd254f3bf8e4901b9b2317dd256af (image=quay.io/ceph/ceph:v20, name=amazing_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0)
Jan 29 11:50:56 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v57: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:50:56 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0)
Jan 29 11:50:56 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3409911387' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} : dispatch
Jan 29 11:50:56 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Jan 29 11:50:56 np0005601226 ceph-mon[75233]: from='client.? 192.168.122.100:0/3409911387' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} : dispatch
Jan 29 11:50:56 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3409911387' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Jan 29 11:50:56 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e28 e28: 3 total, 3 up, 3 in
Jan 29 11:50:56 np0005601226 amazing_heisenberg[91904]: enabled application 'rbd' on pool 'backups'
Jan 29 11:50:56 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 3 up, 3 in
Jan 29 11:50:56 np0005601226 systemd[1]: libpod-82f0a54abf2c223f955ca31a13fbc4e0e0bdd254f3bf8e4901b9b2317dd256af.scope: Deactivated successfully.
Jan 29 11:50:56 np0005601226 podman[91889]: 2026-01-29 16:50:56.899151667 +0000 UTC m=+1.181402356 container died 82f0a54abf2c223f955ca31a13fbc4e0e0bdd254f3bf8e4901b9b2317dd256af (image=quay.io/ceph/ceph:v20, name=amazing_heisenberg, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 29 11:50:56 np0005601226 systemd[1]: var-lib-containers-storage-overlay-1d039a9af1bc21a4403ebd244a4df43fc40846cca6af681bc7e3ffd433c9b30b-merged.mount: Deactivated successfully.
Jan 29 11:50:56 np0005601226 podman[91889]: 2026-01-29 16:50:56.951424351 +0000 UTC m=+1.233675020 container remove 82f0a54abf2c223f955ca31a13fbc4e0e0bdd254f3bf8e4901b9b2317dd256af (image=quay.io/ceph/ceph:v20, name=amazing_heisenberg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 29 11:50:56 np0005601226 systemd[1]: libpod-conmon-82f0a54abf2c223f955ca31a13fbc4e0e0bdd254f3bf8e4901b9b2317dd256af.scope: Deactivated successfully.
Jan 29 11:50:57 np0005601226 python3[91968]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid cc5c72e3-31e0-58b9-8731-456117d38f4a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:50:57 np0005601226 podman[91969]: 2026-01-29 16:50:57.351831316 +0000 UTC m=+0.060297882 container create 6726c2f0b2cfa78e3474785eed29c1f3b2844e65c5fd89137544e7dde7cc808a (image=quay.io/ceph/ceph:v20, name=quirky_albattani, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 11:50:57 np0005601226 podman[91969]: 2026-01-29 16:50:57.312773065 +0000 UTC m=+0.021239601 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:50:57 np0005601226 systemd[1]: Started libpod-conmon-6726c2f0b2cfa78e3474785eed29c1f3b2844e65c5fd89137544e7dde7cc808a.scope.
Jan 29 11:50:57 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:50:57 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ba65e6cbb8572e929b95c9f39d937a3526d0236b280c595a86c96fbaf16d56b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:57 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ba65e6cbb8572e929b95c9f39d937a3526d0236b280c595a86c96fbaf16d56b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:57 np0005601226 podman[91969]: 2026-01-29 16:50:57.472623474 +0000 UTC m=+0.181090080 container init 6726c2f0b2cfa78e3474785eed29c1f3b2844e65c5fd89137544e7dde7cc808a (image=quay.io/ceph/ceph:v20, name=quirky_albattani, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 29 11:50:57 np0005601226 podman[91969]: 2026-01-29 16:50:57.480456645 +0000 UTC m=+0.188923201 container start 6726c2f0b2cfa78e3474785eed29c1f3b2844e65c5fd89137544e7dde7cc808a (image=quay.io/ceph/ceph:v20, name=quirky_albattani, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 11:50:57 np0005601226 podman[91969]: 2026-01-29 16:50:57.485130697 +0000 UTC m=+0.193597393 container attach 6726c2f0b2cfa78e3474785eed29c1f3b2844e65c5fd89137544e7dde7cc808a (image=quay.io/ceph/ceph:v20, name=quirky_albattani, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 29 11:50:57 np0005601226 ceph-mon[75233]: from='client.? 192.168.122.100:0/3409911387' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Jan 29 11:50:57 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0)
Jan 29 11:50:57 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/672445364' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} : dispatch
Jan 29 11:50:58 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v59: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:50:58 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Jan 29 11:50:58 np0005601226 ceph-mon[75233]: from='client.? 192.168.122.100:0/672445364' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} : dispatch
Jan 29 11:50:58 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/672445364' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Jan 29 11:50:58 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e29 e29: 3 total, 3 up, 3 in
Jan 29 11:50:58 np0005601226 quirky_albattani[91984]: enabled application 'rbd' on pool 'images'
Jan 29 11:50:58 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 3 up, 3 in
Jan 29 11:50:58 np0005601226 systemd[1]: libpod-6726c2f0b2cfa78e3474785eed29c1f3b2844e65c5fd89137544e7dde7cc808a.scope: Deactivated successfully.
Jan 29 11:50:58 np0005601226 podman[92009]: 2026-01-29 16:50:58.977113482 +0000 UTC m=+0.043626811 container died 6726c2f0b2cfa78e3474785eed29c1f3b2844e65c5fd89137544e7dde7cc808a (image=quay.io/ceph/ceph:v20, name=quirky_albattani, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 29 11:50:59 np0005601226 systemd[1]: var-lib-containers-storage-overlay-4ba65e6cbb8572e929b95c9f39d937a3526d0236b280c595a86c96fbaf16d56b-merged.mount: Deactivated successfully.
Jan 29 11:50:59 np0005601226 podman[92009]: 2026-01-29 16:50:59.032119635 +0000 UTC m=+0.098632994 container remove 6726c2f0b2cfa78e3474785eed29c1f3b2844e65c5fd89137544e7dde7cc808a (image=quay.io/ceph/ceph:v20, name=quirky_albattani, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 29 11:50:59 np0005601226 systemd[1]: libpod-conmon-6726c2f0b2cfa78e3474785eed29c1f3b2844e65c5fd89137544e7dde7cc808a.scope: Deactivated successfully.
Jan 29 11:50:59 np0005601226 python3[92049]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid cc5c72e3-31e0-58b9-8731-456117d38f4a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:50:59 np0005601226 podman[92050]: 2026-01-29 16:50:59.459733897 +0000 UTC m=+0.077035214 container create d1795124c049f5db5e4117d84777d8b2cb8dd8a6f2465f35b65ad2da0408e99a (image=quay.io/ceph/ceph:v20, name=wizardly_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:50:59 np0005601226 podman[92050]: 2026-01-29 16:50:59.414437889 +0000 UTC m=+0.031739246 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:50:59 np0005601226 systemd[1]: Started libpod-conmon-d1795124c049f5db5e4117d84777d8b2cb8dd8a6f2465f35b65ad2da0408e99a.scope.
Jan 29 11:50:59 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:50:59 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfd94cbd86d8c3aa9e039b8be7ef0e493fb88b1ace8d4a3a044ee793ecea3e97/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:59 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfd94cbd86d8c3aa9e039b8be7ef0e493fb88b1ace8d4a3a044ee793ecea3e97/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:50:59 np0005601226 podman[92050]: 2026-01-29 16:50:59.669777342 +0000 UTC m=+0.287078619 container init d1795124c049f5db5e4117d84777d8b2cb8dd8a6f2465f35b65ad2da0408e99a (image=quay.io/ceph/ceph:v20, name=wizardly_aryabhata, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 29 11:50:59 np0005601226 podman[92050]: 2026-01-29 16:50:59.674727731 +0000 UTC m=+0.292029008 container start d1795124c049f5db5e4117d84777d8b2cb8dd8a6f2465f35b65ad2da0408e99a (image=quay.io/ceph/ceph:v20, name=wizardly_aryabhata, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Jan 29 11:50:59 np0005601226 podman[92050]: 2026-01-29 16:50:59.797758851 +0000 UTC m=+0.415060168 container attach d1795124c049f5db5e4117d84777d8b2cb8dd8a6f2465f35b65ad2da0408e99a (image=quay.io/ceph/ceph:v20, name=wizardly_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:50:59 np0005601226 ceph-mon[75233]: from='client.? 192.168.122.100:0/672445364' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Jan 29 11:50:59 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e29 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:51:00 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0)
Jan 29 11:51:00 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/536836083' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} : dispatch
Jan 29 11:51:00 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v61: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:51:00 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Jan 29 11:51:00 np0005601226 ceph-mon[75233]: from='client.? 192.168.122.100:0/536836083' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} : dispatch
Jan 29 11:51:00 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/536836083' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Jan 29 11:51:00 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e30 e30: 3 total, 3 up, 3 in
Jan 29 11:51:00 np0005601226 wizardly_aryabhata[92066]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Jan 29 11:51:00 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 3 up, 3 in
Jan 29 11:51:00 np0005601226 systemd[1]: libpod-d1795124c049f5db5e4117d84777d8b2cb8dd8a6f2465f35b65ad2da0408e99a.scope: Deactivated successfully.
Jan 29 11:51:00 np0005601226 podman[92050]: 2026-01-29 16:51:00.964981707 +0000 UTC m=+1.582283014 container died d1795124c049f5db5e4117d84777d8b2cb8dd8a6f2465f35b65ad2da0408e99a (image=quay.io/ceph/ceph:v20, name=wizardly_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 29 11:51:00 np0005601226 systemd[1]: var-lib-containers-storage-overlay-cfd94cbd86d8c3aa9e039b8be7ef0e493fb88b1ace8d4a3a044ee793ecea3e97-merged.mount: Deactivated successfully.
Jan 29 11:51:01 np0005601226 podman[92050]: 2026-01-29 16:51:01.015334827 +0000 UTC m=+1.632636134 container remove d1795124c049f5db5e4117d84777d8b2cb8dd8a6f2465f35b65ad2da0408e99a (image=quay.io/ceph/ceph:v20, name=wizardly_aryabhata, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:51:01 np0005601226 systemd[1]: libpod-conmon-d1795124c049f5db5e4117d84777d8b2cb8dd8a6f2465f35b65ad2da0408e99a.scope: Deactivated successfully.
Jan 29 11:51:01 np0005601226 python3[92126]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid cc5c72e3-31e0-58b9-8731-456117d38f4a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:51:01 np0005601226 podman[92127]: 2026-01-29 16:51:01.44865637 +0000 UTC m=+0.105017863 container create c7f0cd934536e0902ba66ba418cefe0763447929502be977f9a7be9ed946f730 (image=quay.io/ceph/ceph:v20, name=vigilant_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 11:51:01 np0005601226 podman[92127]: 2026-01-29 16:51:01.376911037 +0000 UTC m=+0.033272610 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:51:01 np0005601226 systemd[1]: Started libpod-conmon-c7f0cd934536e0902ba66ba418cefe0763447929502be977f9a7be9ed946f730.scope.
Jan 29 11:51:01 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:51:01 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a06bb14b805dae3218bf83d0729b79127efe8afcdfcdb375b3fe5541bae1c78b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:01 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a06bb14b805dae3218bf83d0729b79127efe8afcdfcdb375b3fe5541bae1c78b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:01 np0005601226 podman[92127]: 2026-01-29 16:51:01.608318354 +0000 UTC m=+0.264679877 container init c7f0cd934536e0902ba66ba418cefe0763447929502be977f9a7be9ed946f730 (image=quay.io/ceph/ceph:v20, name=vigilant_gould, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 11:51:01 np0005601226 podman[92127]: 2026-01-29 16:51:01.615546548 +0000 UTC m=+0.271908062 container start c7f0cd934536e0902ba66ba418cefe0763447929502be977f9a7be9ed946f730 (image=quay.io/ceph/ceph:v20, name=vigilant_gould, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True)
Jan 29 11:51:01 np0005601226 podman[92127]: 2026-01-29 16:51:01.660657291 +0000 UTC m=+0.317018804 container attach c7f0cd934536e0902ba66ba418cefe0763447929502be977f9a7be9ed946f730 (image=quay.io/ceph/ceph:v20, name=vigilant_gould, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 11:51:01 np0005601226 ceph-mon[75233]: from='client.? 192.168.122.100:0/536836083' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Jan 29 11:51:02 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0)
Jan 29 11:51:02 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2568602746' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} : dispatch
Jan 29 11:51:02 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v63: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:51:02 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Jan 29 11:51:02 np0005601226 ceph-mon[75233]: from='client.? 192.168.122.100:0/2568602746' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} : dispatch
Jan 29 11:51:02 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2568602746' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Jan 29 11:51:02 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Jan 29 11:51:02 np0005601226 vigilant_gould[92142]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Jan 29 11:51:02 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Jan 29 11:51:02 np0005601226 systemd[1]: libpod-c7f0cd934536e0902ba66ba418cefe0763447929502be977f9a7be9ed946f730.scope: Deactivated successfully.
Jan 29 11:51:02 np0005601226 podman[92127]: 2026-01-29 16:51:02.97372021 +0000 UTC m=+1.630081693 container died c7f0cd934536e0902ba66ba418cefe0763447929502be977f9a7be9ed946f730 (image=quay.io/ceph/ceph:v20, name=vigilant_gould, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 11:51:02 np0005601226 systemd[1]: var-lib-containers-storage-overlay-a06bb14b805dae3218bf83d0729b79127efe8afcdfcdb375b3fe5541bae1c78b-merged.mount: Deactivated successfully.
Jan 29 11:51:03 np0005601226 podman[92127]: 2026-01-29 16:51:03.011088663 +0000 UTC m=+1.667450146 container remove c7f0cd934536e0902ba66ba418cefe0763447929502be977f9a7be9ed946f730 (image=quay.io/ceph/ceph:v20, name=vigilant_gould, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 29 11:51:03 np0005601226 systemd[1]: libpod-conmon-c7f0cd934536e0902ba66ba418cefe0763447929502be977f9a7be9ed946f730.scope: Deactivated successfully.
Jan 29 11:51:03 np0005601226 python3[92254]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 29 11:51:03 np0005601226 ceph-mon[75233]: from='client.? 192.168.122.100:0/2568602746' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Jan 29 11:51:04 np0005601226 python3[92325]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769705463.6274076-36556-211526690977134/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=0a1ea65aada399f80274d3cc2047646f2797712b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:51:04 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v65: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:51:04 np0005601226 python3[92427]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 29 11:51:05 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:51:05 np0005601226 python3[92502]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769705464.4333832-36570-259753177787617/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=5dd01ceee511aedceb5046f3bac7a620b0d24717 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:51:05 np0005601226 python3[92552]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid cc5c72e3-31e0-58b9-8731-456117d38f4a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:51:05 np0005601226 podman[92553]: 2026-01-29 16:51:05.511813424 +0000 UTC m=+0.041010538 container create f2d91c2540575e582bfb35419a38a413ca969539599bc0bed4b68b40bf8ca165 (image=quay.io/ceph/ceph:v20, name=nervous_nightingale, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 11:51:05 np0005601226 systemd[1]: Started libpod-conmon-f2d91c2540575e582bfb35419a38a413ca969539599bc0bed4b68b40bf8ca165.scope.
Jan 29 11:51:05 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:51:05 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bc2cc1d4ea06ad2715c42f6e460609a09de19e0617617674f59c3a778711dae/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:05 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bc2cc1d4ea06ad2715c42f6e460609a09de19e0617617674f59c3a778711dae/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:05 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bc2cc1d4ea06ad2715c42f6e460609a09de19e0617617674f59c3a778711dae/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:05 np0005601226 podman[92553]: 2026-01-29 16:51:05.587649344 +0000 UTC m=+0.116846438 container init f2d91c2540575e582bfb35419a38a413ca969539599bc0bed4b68b40bf8ca165 (image=quay.io/ceph/ceph:v20, name=nervous_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 29 11:51:05 np0005601226 podman[92553]: 2026-01-29 16:51:05.492253082 +0000 UTC m=+0.021450196 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:51:05 np0005601226 podman[92553]: 2026-01-29 16:51:05.591531742 +0000 UTC m=+0.120728836 container start f2d91c2540575e582bfb35419a38a413ca969539599bc0bed4b68b40bf8ca165 (image=quay.io/ceph/ceph:v20, name=nervous_nightingale, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:51:05 np0005601226 podman[92553]: 2026-01-29 16:51:05.59676419 +0000 UTC m=+0.125961274 container attach f2d91c2540575e582bfb35419a38a413ca969539599bc0bed4b68b40bf8ca165 (image=quay.io/ceph/ceph:v20, name=nervous_nightingale, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 29 11:51:05 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Jan 29 11:51:05 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/84605313' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 29 11:51:05 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/84605313' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 29 11:51:05 np0005601226 nervous_nightingale[92568]: 
Jan 29 11:51:05 np0005601226 nervous_nightingale[92568]: [global]
Jan 29 11:51:05 np0005601226 nervous_nightingale[92568]: #011fsid = cc5c72e3-31e0-58b9-8731-456117d38f4a
Jan 29 11:51:05 np0005601226 nervous_nightingale[92568]: #011mon_host = 192.168.122.100
Jan 29 11:51:05 np0005601226 nervous_nightingale[92568]: #011rgw_keystone_api_version = 3
Jan 29 11:51:05 np0005601226 systemd[1]: libpod-f2d91c2540575e582bfb35419a38a413ca969539599bc0bed4b68b40bf8ca165.scope: Deactivated successfully.
Jan 29 11:51:05 np0005601226 podman[92553]: 2026-01-29 16:51:05.97501298 +0000 UTC m=+0.504210064 container died f2d91c2540575e582bfb35419a38a413ca969539599bc0bed4b68b40bf8ca165 (image=quay.io/ceph/ceph:v20, name=nervous_nightingale, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 11:51:05 np0005601226 ceph-mon[75233]: from='client.? 192.168.122.100:0/84605313' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 29 11:51:05 np0005601226 ceph-mon[75233]: from='client.? 192.168.122.100:0/84605313' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 29 11:51:05 np0005601226 systemd[1]: var-lib-containers-storage-overlay-2bc2cc1d4ea06ad2715c42f6e460609a09de19e0617617674f59c3a778711dae-merged.mount: Deactivated successfully.
Jan 29 11:51:06 np0005601226 podman[92553]: 2026-01-29 16:51:06.015664617 +0000 UTC m=+0.544861701 container remove f2d91c2540575e582bfb35419a38a413ca969539599bc0bed4b68b40bf8ca165 (image=quay.io/ceph/ceph:v20, name=nervous_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 29 11:51:06 np0005601226 systemd[1]: libpod-conmon-f2d91c2540575e582bfb35419a38a413ca969539599bc0bed4b68b40bf8ca165.scope: Deactivated successfully.
Jan 29 11:51:06 np0005601226 python3[92680]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid cc5c72e3-31e0-58b9-8731-456117d38f4a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:51:06 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v66: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:51:06 np0005601226 podman[92713]: 2026-01-29 16:51:06.376638419 +0000 UTC m=+0.040584346 container create dad04c8546f8cfa16756e8d882c07ff644fad19ff73e359782401af826c23071 (image=quay.io/ceph/ceph:v20, name=lucid_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 29 11:51:06 np0005601226 systemd[1]: Started libpod-conmon-dad04c8546f8cfa16756e8d882c07ff644fad19ff73e359782401af826c23071.scope.
Jan 29 11:51:06 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:51:06 np0005601226 podman[92734]: 2026-01-29 16:51:06.439451541 +0000 UTC m=+0.070493079 container exec 79fb58d438a044b744a5a22031968fb38c458a504a88d1d9ce361ec7f4a99a0b (image=quay.io/ceph/ceph:v20, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-mon-compute-0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 29 11:51:06 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c657f86fe721c01b29939f026c477de3f409545f2525e5994a2e50b66b56167/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:06 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c657f86fe721c01b29939f026c477de3f409545f2525e5994a2e50b66b56167/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:06 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c657f86fe721c01b29939f026c477de3f409545f2525e5994a2e50b66b56167/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:06 np0005601226 podman[92713]: 2026-01-29 16:51:06.356504261 +0000 UTC m=+0.020450218 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:51:06 np0005601226 podman[92713]: 2026-01-29 16:51:06.462319486 +0000 UTC m=+0.126265413 container init dad04c8546f8cfa16756e8d882c07ff644fad19ff73e359782401af826c23071 (image=quay.io/ceph/ceph:v20, name=lucid_elgamal, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:51:06 np0005601226 podman[92713]: 2026-01-29 16:51:06.466623857 +0000 UTC m=+0.130569764 container start dad04c8546f8cfa16756e8d882c07ff644fad19ff73e359782401af826c23071 (image=quay.io/ceph/ceph:v20, name=lucid_elgamal, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 29 11:51:06 np0005601226 podman[92713]: 2026-01-29 16:51:06.470960969 +0000 UTC m=+0.134906906 container attach dad04c8546f8cfa16756e8d882c07ff644fad19ff73e359782401af826c23071 (image=quay.io/ceph/ceph:v20, name=lucid_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 11:51:06 np0005601226 podman[92734]: 2026-01-29 16:51:06.518520041 +0000 UTC m=+0.149561549 container exec_died 79fb58d438a044b744a5a22031968fb38c458a504a88d1d9ce361ec7f4a99a0b (image=quay.io/ceph/ceph:v20, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-mon-compute-0, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 11:51:06 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 11:51:06 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:51:06 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 11:51:06 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:51:06 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0)
Jan 29 11:51:06 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1030561180' entity='client.admin' 
Jan 29 11:51:06 np0005601226 lucid_elgamal[92753]: set ssl_option
Jan 29 11:51:06 np0005601226 systemd[1]: libpod-dad04c8546f8cfa16756e8d882c07ff644fad19ff73e359782401af826c23071.scope: Deactivated successfully.
Jan 29 11:51:06 np0005601226 podman[92713]: 2026-01-29 16:51:06.994493637 +0000 UTC m=+0.658439544 container died dad04c8546f8cfa16756e8d882c07ff644fad19ff73e359782401af826c23071 (image=quay.io/ceph/ceph:v20, name=lucid_elgamal, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 29 11:51:07 np0005601226 systemd[1]: var-lib-containers-storage-overlay-5c657f86fe721c01b29939f026c477de3f409545f2525e5994a2e50b66b56167-merged.mount: Deactivated successfully.
Jan 29 11:51:07 np0005601226 podman[92713]: 2026-01-29 16:51:07.042704968 +0000 UTC m=+0.706650875 container remove dad04c8546f8cfa16756e8d882c07ff644fad19ff73e359782401af826c23071 (image=quay.io/ceph/ceph:v20, name=lucid_elgamal, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 11:51:07 np0005601226 systemd[1]: libpod-conmon-dad04c8546f8cfa16756e8d882c07ff644fad19ff73e359782401af826c23071.scope: Deactivated successfully.
Jan 29 11:51:07 np0005601226 python3[93010]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid cc5c72e3-31e0-58b9-8731-456117d38f4a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:51:07 np0005601226 podman[93024]: 2026-01-29 16:51:07.408467695 +0000 UTC m=+0.043779066 container create 9326c336cf1cb964f4195d57e91d42ea9c6fa900499e9f168da4d2970a350a6b (image=quay.io/ceph/ceph:v20, name=suspicious_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:51:07 np0005601226 systemd[1]: Started libpod-conmon-9326c336cf1cb964f4195d57e91d42ea9c6fa900499e9f168da4d2970a350a6b.scope.
Jan 29 11:51:07 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:51:07 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 11:51:07 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 11:51:07 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ea4d7531b5df3567ce11adc39191756cadccc11b19eb7fd2c137984a1c3bb58/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:07 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ea4d7531b5df3567ce11adc39191756cadccc11b19eb7fd2c137984a1c3bb58/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:07 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ea4d7531b5df3567ce11adc39191756cadccc11b19eb7fd2c137984a1c3bb58/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:07 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 11:51:07 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 11:51:07 np0005601226 podman[93024]: 2026-01-29 16:51:07.474774765 +0000 UTC m=+0.110086156 container init 9326c336cf1cb964f4195d57e91d42ea9c6fa900499e9f168da4d2970a350a6b (image=quay.io/ceph/ceph:v20, name=suspicious_snyder, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 29 11:51:07 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 11:51:07 np0005601226 podman[93024]: 2026-01-29 16:51:07.479480988 +0000 UTC m=+0.114792329 container start 9326c336cf1cb964f4195d57e91d42ea9c6fa900499e9f168da4d2970a350a6b (image=quay.io/ceph/ceph:v20, name=suspicious_snyder, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 11:51:07 np0005601226 podman[93024]: 2026-01-29 16:51:07.388300657 +0000 UTC m=+0.023611998 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:51:07 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:51:07 np0005601226 podman[93024]: 2026-01-29 16:51:07.483391168 +0000 UTC m=+0.118702529 container attach 9326c336cf1cb964f4195d57e91d42ea9c6fa900499e9f168da4d2970a350a6b (image=quay.io/ceph/ceph:v20, name=suspicious_snyder, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 29 11:51:07 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 11:51:07 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 11:51:07 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 11:51:07 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 11:51:07 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 11:51:07 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 11:51:07 np0005601226 podman[93129]: 2026-01-29 16:51:07.863037248 +0000 UTC m=+0.057933696 container create f0ead4401bba563efc81ddc7e88b659db50ff0d0cd513fd5cfabefb06a0b0a89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_buck, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:51:07 np0005601226 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.14234 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 11:51:07 np0005601226 ceph-mgr[75527]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0
Jan 29 11:51:07 np0005601226 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Jan 29 11:51:07 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 29 11:51:07 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:51:07 np0005601226 suspicious_snyder[93045]: Scheduled rgw.rgw update...
Jan 29 11:51:07 np0005601226 podman[93024]: 2026-01-29 16:51:07.89467931 +0000 UTC m=+0.529990671 container died 9326c336cf1cb964f4195d57e91d42ea9c6fa900499e9f168da4d2970a350a6b (image=quay.io/ceph/ceph:v20, name=suspicious_snyder, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 29 11:51:07 np0005601226 systemd[1]: Started libpod-conmon-f0ead4401bba563efc81ddc7e88b659db50ff0d0cd513fd5cfabefb06a0b0a89.scope.
Jan 29 11:51:07 np0005601226 systemd[1]: libpod-9326c336cf1cb964f4195d57e91d42ea9c6fa900499e9f168da4d2970a350a6b.scope: Deactivated successfully.
Jan 29 11:51:07 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:51:07 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:51:07 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:51:07 np0005601226 ceph-mon[75233]: from='client.? 192.168.122.100:0/1030561180' entity='client.admin' 
Jan 29 11:51:07 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 11:51:07 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:51:07 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 11:51:07 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:51:07 np0005601226 podman[93129]: 2026-01-29 16:51:07.921850937 +0000 UTC m=+0.116747295 container init f0ead4401bba563efc81ddc7e88b659db50ff0d0cd513fd5cfabefb06a0b0a89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_buck, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 11:51:07 np0005601226 podman[93129]: 2026-01-29 16:51:07.926176509 +0000 UTC m=+0.121072837 container start f0ead4401bba563efc81ddc7e88b659db50ff0d0cd513fd5cfabefb06a0b0a89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_buck, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 29 11:51:07 np0005601226 podman[93129]: 2026-01-29 16:51:07.832971399 +0000 UTC m=+0.027868007 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:51:07 np0005601226 cool_buck[93147]: 167 167
Jan 29 11:51:07 np0005601226 systemd[1]: var-lib-containers-storage-overlay-4ea4d7531b5df3567ce11adc39191756cadccc11b19eb7fd2c137984a1c3bb58-merged.mount: Deactivated successfully.
Jan 29 11:51:07 np0005601226 systemd[1]: libpod-f0ead4401bba563efc81ddc7e88b659db50ff0d0cd513fd5cfabefb06a0b0a89.scope: Deactivated successfully.
Jan 29 11:51:07 np0005601226 podman[93024]: 2026-01-29 16:51:07.958541152 +0000 UTC m=+0.593852493 container remove 9326c336cf1cb964f4195d57e91d42ea9c6fa900499e9f168da4d2970a350a6b (image=quay.io/ceph/ceph:v20, name=suspicious_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 29 11:51:07 np0005601226 podman[93129]: 2026-01-29 16:51:07.96803672 +0000 UTC m=+0.162933048 container attach f0ead4401bba563efc81ddc7e88b659db50ff0d0cd513fd5cfabefb06a0b0a89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_buck, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 11:51:07 np0005601226 podman[93129]: 2026-01-29 16:51:07.968369699 +0000 UTC m=+0.163266027 container died f0ead4401bba563efc81ddc7e88b659db50ff0d0cd513fd5cfabefb06a0b0a89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_buck, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:51:07 np0005601226 systemd[1]: var-lib-containers-storage-overlay-cc264ecf9de3ffec205f314f36bf7803b46ee416edf0231135b9fd72544256cb-merged.mount: Deactivated successfully.
Jan 29 11:51:08 np0005601226 podman[93129]: 2026-01-29 16:51:08.010089216 +0000 UTC m=+0.204985564 container remove f0ead4401bba563efc81ddc7e88b659db50ff0d0cd513fd5cfabefb06a0b0a89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_buck, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 29 11:51:08 np0005601226 systemd[1]: libpod-conmon-9326c336cf1cb964f4195d57e91d42ea9c6fa900499e9f168da4d2970a350a6b.scope: Deactivated successfully.
Jan 29 11:51:08 np0005601226 systemd[1]: libpod-conmon-f0ead4401bba563efc81ddc7e88b659db50ff0d0cd513fd5cfabefb06a0b0a89.scope: Deactivated successfully.
Jan 29 11:51:08 np0005601226 podman[93182]: 2026-01-29 16:51:08.119345128 +0000 UTC m=+0.036177122 container create 97951b8618156ba8068881a49523c249df0e9fb5a694ef3101b97cb9f5b4bc03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 29 11:51:08 np0005601226 systemd[1]: Started libpod-conmon-97951b8618156ba8068881a49523c249df0e9fb5a694ef3101b97cb9f5b4bc03.scope.
Jan 29 11:51:08 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:51:08 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/840645df52c358a81c079a2859fb252142983f4b71ea13126529566c225328ea/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:08 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/840645df52c358a81c079a2859fb252142983f4b71ea13126529566c225328ea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:08 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/840645df52c358a81c079a2859fb252142983f4b71ea13126529566c225328ea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:08 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/840645df52c358a81c079a2859fb252142983f4b71ea13126529566c225328ea/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:08 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/840645df52c358a81c079a2859fb252142983f4b71ea13126529566c225328ea/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:08 np0005601226 podman[93182]: 2026-01-29 16:51:08.10098617 +0000 UTC m=+0.017818174 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:51:08 np0005601226 podman[93182]: 2026-01-29 16:51:08.19777171 +0000 UTC m=+0.114603704 container init 97951b8618156ba8068881a49523c249df0e9fb5a694ef3101b97cb9f5b4bc03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_galileo, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 11:51:08 np0005601226 podman[93182]: 2026-01-29 16:51:08.202914944 +0000 UTC m=+0.119746928 container start 97951b8618156ba8068881a49523c249df0e9fb5a694ef3101b97cb9f5b4bc03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_galileo, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 11:51:08 np0005601226 podman[93182]: 2026-01-29 16:51:08.21018899 +0000 UTC m=+0.127021004 container attach 97951b8618156ba8068881a49523c249df0e9fb5a694ef3101b97cb9f5b4bc03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_galileo, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 11:51:08 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v67: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:51:08 np0005601226 stoic_galileo[93198]: --> passed data devices: 0 physical, 3 LVM
Jan 29 11:51:08 np0005601226 stoic_galileo[93198]: --> All data devices are unavailable
Jan 29 11:51:08 np0005601226 systemd[1]: libpod-97951b8618156ba8068881a49523c249df0e9fb5a694ef3101b97cb9f5b4bc03.scope: Deactivated successfully.
Jan 29 11:51:08 np0005601226 podman[93182]: 2026-01-29 16:51:08.597656779 +0000 UTC m=+0.514488783 container died 97951b8618156ba8068881a49523c249df0e9fb5a694ef3101b97cb9f5b4bc03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_galileo, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:51:08 np0005601226 systemd[1]: var-lib-containers-storage-overlay-840645df52c358a81c079a2859fb252142983f4b71ea13126529566c225328ea-merged.mount: Deactivated successfully.
Jan 29 11:51:08 np0005601226 podman[93182]: 2026-01-29 16:51:08.649151812 +0000 UTC m=+0.565983806 container remove 97951b8618156ba8068881a49523c249df0e9fb5a694ef3101b97cb9f5b4bc03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Jan 29 11:51:08 np0005601226 systemd[1]: libpod-conmon-97951b8618156ba8068881a49523c249df0e9fb5a694ef3101b97cb9f5b4bc03.scope: Deactivated successfully.
Jan 29 11:51:08 np0005601226 python3[93305]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 29 11:51:08 np0005601226 ceph-mon[75233]: Saving service rgw.rgw spec with placement compute-0
Jan 29 11:51:09 np0005601226 podman[93440]: 2026-01-29 16:51:09.049084764 +0000 UTC m=+0.052012608 container create 67407663764c762511125c1262fec8054af6b103200a377a0c2b12b529702424 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 29 11:51:09 np0005601226 systemd[1]: Started libpod-conmon-67407663764c762511125c1262fec8054af6b103200a377a0c2b12b529702424.scope.
Jan 29 11:51:09 np0005601226 python3[93433]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769705468.5854776-36611-196918482211463/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=e359e26d9e42bc107a0de03375144cf8590b6f68 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:51:09 np0005601226 podman[93440]: 2026-01-29 16:51:09.027619158 +0000 UTC m=+0.030547032 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:51:09 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:51:09 np0005601226 podman[93440]: 2026-01-29 16:51:09.139574916 +0000 UTC m=+0.142502760 container init 67407663764c762511125c1262fec8054af6b103200a377a0c2b12b529702424 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 29 11:51:09 np0005601226 podman[93440]: 2026-01-29 16:51:09.143883997 +0000 UTC m=+0.146811841 container start 67407663764c762511125c1262fec8054af6b103200a377a0c2b12b529702424 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_shirley, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:51:09 np0005601226 upbeat_shirley[93457]: 167 167
Jan 29 11:51:09 np0005601226 systemd[1]: libpod-67407663764c762511125c1262fec8054af6b103200a377a0c2b12b529702424.scope: Deactivated successfully.
Jan 29 11:51:09 np0005601226 podman[93440]: 2026-01-29 16:51:09.147997993 +0000 UTC m=+0.150925867 container attach 67407663764c762511125c1262fec8054af6b103200a377a0c2b12b529702424 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_shirley, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 29 11:51:09 np0005601226 podman[93440]: 2026-01-29 16:51:09.148341294 +0000 UTC m=+0.151269098 container died 67407663764c762511125c1262fec8054af6b103200a377a0c2b12b529702424 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_shirley, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 29 11:51:09 np0005601226 systemd[1]: var-lib-containers-storage-overlay-72bed7d16e70a53b80c8d0c0c06592354ed28cb0cb56b5ff68988e597e073d5b-merged.mount: Deactivated successfully.
Jan 29 11:51:09 np0005601226 podman[93440]: 2026-01-29 16:51:09.183151755 +0000 UTC m=+0.186079589 container remove 67407663764c762511125c1262fec8054af6b103200a377a0c2b12b529702424 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_shirley, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 29 11:51:09 np0005601226 systemd[1]: libpod-conmon-67407663764c762511125c1262fec8054af6b103200a377a0c2b12b529702424.scope: Deactivated successfully.
Jan 29 11:51:09 np0005601226 podman[93504]: 2026-01-29 16:51:09.313571034 +0000 UTC m=+0.042544001 container create 7c2d3756e2dcf3ab96b63f14a927eb4a22270fb3d41d45b621b42224895a4d84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_jackson, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 11:51:09 np0005601226 systemd[1]: Started libpod-conmon-7c2d3756e2dcf3ab96b63f14a927eb4a22270fb3d41d45b621b42224895a4d84.scope.
Jan 29 11:51:09 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:51:09 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2de0275c7026231dcea706870813ad4315cd7214ead6fd45eceb810cded96f92/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:09 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2de0275c7026231dcea706870813ad4315cd7214ead6fd45eceb810cded96f92/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:09 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2de0275c7026231dcea706870813ad4315cd7214ead6fd45eceb810cded96f92/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:09 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2de0275c7026231dcea706870813ad4315cd7214ead6fd45eceb810cded96f92/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:09 np0005601226 podman[93504]: 2026-01-29 16:51:09.294365352 +0000 UTC m=+0.023338329 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:51:09 np0005601226 podman[93504]: 2026-01-29 16:51:09.39142266 +0000 UTC m=+0.120395677 container init 7c2d3756e2dcf3ab96b63f14a927eb4a22270fb3d41d45b621b42224895a4d84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_jackson, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:51:09 np0005601226 podman[93504]: 2026-01-29 16:51:09.405235949 +0000 UTC m=+0.134208886 container start 7c2d3756e2dcf3ab96b63f14a927eb4a22270fb3d41d45b621b42224895a4d84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_jackson, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:51:09 np0005601226 podman[93504]: 2026-01-29 16:51:09.40988295 +0000 UTC m=+0.138855917 container attach 7c2d3756e2dcf3ab96b63f14a927eb4a22270fb3d41d45b621b42224895a4d84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 11:51:09 np0005601226 python3[93549]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid cc5c72e3-31e0-58b9-8731-456117d38f4a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 '#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:51:09 np0005601226 podman[93550]: 2026-01-29 16:51:09.625698239 +0000 UTC m=+0.049731185 container create 8d85bc9bdfbb41cea70000901efeaef22d3811602d7e3164dd88e498756dcd66 (image=quay.io/ceph/ceph:v20, name=beautiful_pascal, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 29 11:51:09 np0005601226 systemd[1]: Started libpod-conmon-8d85bc9bdfbb41cea70000901efeaef22d3811602d7e3164dd88e498756dcd66.scope.
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]: {
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:    "0": [
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:        {
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:            "devices": [
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:                "/dev/loop3"
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:            ],
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:            "lv_name": "ceph_lv0",
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:            "lv_size": "21470642176",
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:            "lv_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:            "name": "ceph_lv0",
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:            "tags": {
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:                "ceph.block_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:                "ceph.cephx_lockbox_secret": "",
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:                "ceph.cluster_name": "ceph",
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:                "ceph.crush_device_class": "",
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:                "ceph.encrypted": "0",
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:                "ceph.objectstore": "bluestore",
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:                "ceph.osd_fsid": "0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1",
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:                "ceph.osd_id": "0",
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:                "ceph.type": "block",
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:                "ceph.vdo": "0",
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:                "ceph.with_tpm": "0"
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:            },
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:            "type": "block",
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:            "vg_name": "ceph_vg0"
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:        }
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:    ],
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:    "1": [
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:        {
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:            "devices": [
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:                "/dev/loop4"
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:            ],
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:            "lv_name": "ceph_lv1",
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:            "lv_size": "21470642176",
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b59b9ee3-7bef-4274-a8bf-0f9cce011ae7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:            "lv_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:            "name": "ceph_lv1",
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:            "tags": {
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:                "ceph.block_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:                "ceph.cephx_lockbox_secret": "",
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:                "ceph.cluster_name": "ceph",
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:                "ceph.crush_device_class": "",
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:                "ceph.encrypted": "0",
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:                "ceph.objectstore": "bluestore",
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:                "ceph.osd_fsid": "b59b9ee3-7bef-4274-a8bf-0f9cce011ae7",
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:                "ceph.osd_id": "1",
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:                "ceph.type": "block",
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:                "ceph.vdo": "0",
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:                "ceph.with_tpm": "0"
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:            },
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:            "type": "block",
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:            "vg_name": "ceph_vg1"
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:        }
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:    ],
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:    "2": [
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:        {
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:            "devices": [
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:                "/dev/loop5"
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:            ],
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:            "lv_name": "ceph_lv2",
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:            "lv_size": "21470642176",
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=791d3808-828d-4f85-a3de-28df49f6a6ef,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:            "lv_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:            "name": "ceph_lv2",
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:            "tags": {
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:                "ceph.block_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:                "ceph.cephx_lockbox_secret": "",
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:                "ceph.cluster_name": "ceph",
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:                "ceph.crush_device_class": "",
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:                "ceph.encrypted": "0",
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:                "ceph.objectstore": "bluestore",
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:                "ceph.osd_fsid": "791d3808-828d-4f85-a3de-28df49f6a6ef",
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:                "ceph.osd_id": "2",
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:                "ceph.type": "block",
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:                "ceph.vdo": "0",
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:                "ceph.with_tpm": "0"
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:            },
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:            "type": "block",
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:            "vg_name": "ceph_vg2"
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:        }
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]:    ]
Jan 29 11:51:09 np0005601226 exciting_jackson[93519]: }
Jan 29 11:51:09 np0005601226 podman[93550]: 2026-01-29 16:51:09.599315454 +0000 UTC m=+0.023348470 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:51:09 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:51:09 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd814f11153f72f75ccba9338a641077bb0ac1477c6feee8e879d90e7e269999/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:09 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd814f11153f72f75ccba9338a641077bb0ac1477c6feee8e879d90e7e269999/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:09 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd814f11153f72f75ccba9338a641077bb0ac1477c6feee8e879d90e7e269999/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:09 np0005601226 podman[93550]: 2026-01-29 16:51:09.713035752 +0000 UTC m=+0.137068728 container init 8d85bc9bdfbb41cea70000901efeaef22d3811602d7e3164dd88e498756dcd66 (image=quay.io/ceph/ceph:v20, name=beautiful_pascal, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 11:51:09 np0005601226 systemd[1]: libpod-7c2d3756e2dcf3ab96b63f14a927eb4a22270fb3d41d45b621b42224895a4d84.scope: Deactivated successfully.
Jan 29 11:51:09 np0005601226 podman[93504]: 2026-01-29 16:51:09.716078428 +0000 UTC m=+0.445051375 container died 7c2d3756e2dcf3ab96b63f14a927eb4a22270fb3d41d45b621b42224895a4d84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_jackson, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 29 11:51:09 np0005601226 podman[93550]: 2026-01-29 16:51:09.718823235 +0000 UTC m=+0.142856161 container start 8d85bc9bdfbb41cea70000901efeaef22d3811602d7e3164dd88e498756dcd66 (image=quay.io/ceph/ceph:v20, name=beautiful_pascal, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:51:09 np0005601226 podman[93550]: 2026-01-29 16:51:09.73246076 +0000 UTC m=+0.156493696 container attach 8d85bc9bdfbb41cea70000901efeaef22d3811602d7e3164dd88e498756dcd66 (image=quay.io/ceph/ceph:v20, name=beautiful_pascal, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:51:09 np0005601226 systemd[1]: var-lib-containers-storage-overlay-2de0275c7026231dcea706870813ad4315cd7214ead6fd45eceb810cded96f92-merged.mount: Deactivated successfully.
Jan 29 11:51:09 np0005601226 podman[93504]: 2026-01-29 16:51:09.763489925 +0000 UTC m=+0.492462862 container remove 7c2d3756e2dcf3ab96b63f14a927eb4a22270fb3d41d45b621b42224895a4d84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_jackson, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 11:51:09 np0005601226 systemd[1]: libpod-conmon-7c2d3756e2dcf3ab96b63f14a927eb4a22270fb3d41d45b621b42224895a4d84.scope: Deactivated successfully.
Jan 29 11:51:10 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:51:10 np0005601226 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.14236 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 11:51:10 np0005601226 ceph-mgr[75527]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Jan 29 11:51:10 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0)
Jan 29 11:51:10 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} : dispatch
Jan 29 11:51:10 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0)
Jan 29 11:51:10 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} : dispatch
Jan 29 11:51:10 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0)
Jan 29 11:51:10 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} : dispatch
Jan 29 11:51:10 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Jan 29 11:51:10 np0005601226 ceph-mon[75233]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 29 11:51:10 np0005601226 ceph-mon[75233]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Jan 29 11:51:10 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-mon-compute-0[75229]: 2026-01-29T16:51:10.130+0000 7f2d84bfb640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 29 11:51:10 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Jan 29 11:51:10 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).mds e2 new map
Jan 29 11:51:10 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).mds e2 print_map#012e2#012btime 2026-01-29T16:51:10:131384+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-29T16:51:10.131174+0000#012modified#0112026-01-29T16:51:10.131174+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 0 members: #012 #012 
Jan 29 11:51:10 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Jan 29 11:51:10 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Jan 29 11:51:10 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Jan 29 11:51:10 np0005601226 ceph-mgr[75527]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Jan 29 11:51:10 np0005601226 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Jan 29 11:51:10 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 29 11:51:10 np0005601226 podman[93667]: 2026-01-29 16:51:10.146235241 +0000 UTC m=+0.044305610 container create 13e3815e44cddd80ca6b09014c125629f2055cf5d370b48b7195a86e8022a433 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 11:51:10 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:51:10 np0005601226 ceph-mgr[75527]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Jan 29 11:51:10 np0005601226 systemd[1]: libpod-8d85bc9bdfbb41cea70000901efeaef22d3811602d7e3164dd88e498756dcd66.scope: Deactivated successfully.
Jan 29 11:51:10 np0005601226 podman[93550]: 2026-01-29 16:51:10.16852935 +0000 UTC m=+0.592562276 container died 8d85bc9bdfbb41cea70000901efeaef22d3811602d7e3164dd88e498756dcd66 (image=quay.io/ceph/ceph:v20, name=beautiful_pascal, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:51:10 np0005601226 systemd[1]: Started libpod-conmon-13e3815e44cddd80ca6b09014c125629f2055cf5d370b48b7195a86e8022a433.scope.
Jan 29 11:51:10 np0005601226 systemd[1]: var-lib-containers-storage-overlay-fd814f11153f72f75ccba9338a641077bb0ac1477c6feee8e879d90e7e269999-merged.mount: Deactivated successfully.
Jan 29 11:51:10 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:51:10 np0005601226 podman[93550]: 2026-01-29 16:51:10.214943559 +0000 UTC m=+0.638976495 container remove 8d85bc9bdfbb41cea70000901efeaef22d3811602d7e3164dd88e498756dcd66 (image=quay.io/ceph/ceph:v20, name=beautiful_pascal, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS)
Jan 29 11:51:10 np0005601226 systemd[1]: libpod-conmon-8d85bc9bdfbb41cea70000901efeaef22d3811602d7e3164dd88e498756dcd66.scope: Deactivated successfully.
Jan 29 11:51:10 np0005601226 podman[93667]: 2026-01-29 16:51:10.123973313 +0000 UTC m=+0.022043722 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:51:10 np0005601226 podman[93667]: 2026-01-29 16:51:10.229981093 +0000 UTC m=+0.128051522 container init 13e3815e44cddd80ca6b09014c125629f2055cf5d370b48b7195a86e8022a433 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_dewdney, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 29 11:51:10 np0005601226 podman[93667]: 2026-01-29 16:51:10.234929793 +0000 UTC m=+0.133000192 container start 13e3815e44cddd80ca6b09014c125629f2055cf5d370b48b7195a86e8022a433 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_dewdney, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 11:51:10 np0005601226 interesting_dewdney[93692]: 167 167
Jan 29 11:51:10 np0005601226 systemd[1]: libpod-13e3815e44cddd80ca6b09014c125629f2055cf5d370b48b7195a86e8022a433.scope: Deactivated successfully.
Jan 29 11:51:10 np0005601226 podman[93667]: 2026-01-29 16:51:10.239306827 +0000 UTC m=+0.137377216 container attach 13e3815e44cddd80ca6b09014c125629f2055cf5d370b48b7195a86e8022a433 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 11:51:10 np0005601226 podman[93667]: 2026-01-29 16:51:10.240432899 +0000 UTC m=+0.138503288 container died 13e3815e44cddd80ca6b09014c125629f2055cf5d370b48b7195a86e8022a433 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_dewdney, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:51:10 np0005601226 systemd[1]: var-lib-containers-storage-overlay-581f0a5ff2658e2f71b83b03191086a291f3239c2a36d262a0b2fd15e75c0e60-merged.mount: Deactivated successfully.
Jan 29 11:51:10 np0005601226 podman[93667]: 2026-01-29 16:51:10.276110155 +0000 UTC m=+0.174180534 container remove 13e3815e44cddd80ca6b09014c125629f2055cf5d370b48b7195a86e8022a433 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_dewdney, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 11:51:10 np0005601226 systemd[1]: libpod-conmon-13e3815e44cddd80ca6b09014c125629f2055cf5d370b48b7195a86e8022a433.scope: Deactivated successfully.
Jan 29 11:51:10 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v69: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:51:10 np0005601226 podman[93727]: 2026-01-29 16:51:10.40253242 +0000 UTC m=+0.043455646 container create d95ccc2513d90f69c16f4c4e7e695e26893ca816b441144934a1ea91a52a46d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_poincare, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 11:51:10 np0005601226 systemd[1]: Started libpod-conmon-d95ccc2513d90f69c16f4c4e7e695e26893ca816b441144934a1ea91a52a46d1.scope.
Jan 29 11:51:10 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:51:10 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd19c9763dcda75e49a9601d94af7cfc432bef881791cb0b35349b030c2c83e0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:10 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd19c9763dcda75e49a9601d94af7cfc432bef881791cb0b35349b030c2c83e0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:10 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd19c9763dcda75e49a9601d94af7cfc432bef881791cb0b35349b030c2c83e0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:10 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd19c9763dcda75e49a9601d94af7cfc432bef881791cb0b35349b030c2c83e0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:10 np0005601226 podman[93727]: 2026-01-29 16:51:10.383142004 +0000 UTC m=+0.024065220 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:51:10 np0005601226 podman[93727]: 2026-01-29 16:51:10.502572173 +0000 UTC m=+0.143495469 container init d95ccc2513d90f69c16f4c4e7e695e26893ca816b441144934a1ea91a52a46d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_poincare, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 11:51:10 np0005601226 podman[93727]: 2026-01-29 16:51:10.510524558 +0000 UTC m=+0.151447774 container start d95ccc2513d90f69c16f4c4e7e695e26893ca816b441144934a1ea91a52a46d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_poincare, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 29 11:51:10 np0005601226 podman[93727]: 2026-01-29 16:51:10.516714232 +0000 UTC m=+0.157637478 container attach d95ccc2513d90f69c16f4c4e7e695e26893ca816b441144934a1ea91a52a46d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:51:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 11:51:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 11:51:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 11:51:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 11:51:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 11:51:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 11:51:10 np0005601226 python3[93762]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid cc5c72e3-31e0-58b9-8731-456117d38f4a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:51:10 np0005601226 podman[93771]: 2026-01-29 16:51:10.630897512 +0000 UTC m=+0.048785866 container create fb043243d7e0081ac56c0129e376e7e3764295ce5ef8a730e119494efbb24ade (image=quay.io/ceph/ceph:v20, name=sweet_easley, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:51:10 np0005601226 systemd[1]: Started libpod-conmon-fb043243d7e0081ac56c0129e376e7e3764295ce5ef8a730e119494efbb24ade.scope.
Jan 29 11:51:10 np0005601226 podman[93771]: 2026-01-29 16:51:10.607359399 +0000 UTC m=+0.025247813 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:51:10 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:51:10 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45a8239d4cad83946fd9423a007802db32fb10df4859cdc157cc707a2028c675/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:10 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45a8239d4cad83946fd9423a007802db32fb10df4859cdc157cc707a2028c675/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:10 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45a8239d4cad83946fd9423a007802db32fb10df4859cdc157cc707a2028c675/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:10 np0005601226 podman[93771]: 2026-01-29 16:51:10.737531981 +0000 UTC m=+0.155420365 container init fb043243d7e0081ac56c0129e376e7e3764295ce5ef8a730e119494efbb24ade (image=quay.io/ceph/ceph:v20, name=sweet_easley, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 11:51:10 np0005601226 podman[93771]: 2026-01-29 16:51:10.743960092 +0000 UTC m=+0.161848426 container start fb043243d7e0081ac56c0129e376e7e3764295ce5ef8a730e119494efbb24ade (image=quay.io/ceph/ceph:v20, name=sweet_easley, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 29 11:51:10 np0005601226 podman[93771]: 2026-01-29 16:51:10.749342884 +0000 UTC m=+0.167231218 container attach fb043243d7e0081ac56c0129e376e7e3764295ce5ef8a730e119494efbb24ade (image=quay.io/ceph/ceph:v20, name=sweet_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 29 11:51:10 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} : dispatch
Jan 29 11:51:10 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} : dispatch
Jan 29 11:51:10 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} : dispatch
Jan 29 11:51:10 np0005601226 ceph-mon[75233]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 29 11:51:10 np0005601226 ceph-mon[75233]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Jan 29 11:51:10 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Jan 29 11:51:10 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:51:11 np0005601226 lvm[93881]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 11:51:11 np0005601226 lvm[93881]: VG ceph_vg0 finished
Jan 29 11:51:11 np0005601226 lvm[93884]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 11:51:11 np0005601226 lvm[93884]: VG ceph_vg1 finished
Jan 29 11:51:11 np0005601226 lvm[93886]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 11:51:11 np0005601226 lvm[93886]: VG ceph_vg2 finished
Jan 29 11:51:11 np0005601226 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.14238 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 11:51:11 np0005601226 ceph-mgr[75527]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Jan 29 11:51:11 np0005601226 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Jan 29 11:51:11 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 29 11:51:11 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:51:11 np0005601226 sweet_easley[93797]: Scheduled mds.cephfs update...
Jan 29 11:51:11 np0005601226 systemd[1]: libpod-fb043243d7e0081ac56c0129e376e7e3764295ce5ef8a730e119494efbb24ade.scope: Deactivated successfully.
Jan 29 11:51:11 np0005601226 podman[93771]: 2026-01-29 16:51:11.20299349 +0000 UTC m=+0.620881854 container died fb043243d7e0081ac56c0129e376e7e3764295ce5ef8a730e119494efbb24ade (image=quay.io/ceph/ceph:v20, name=sweet_easley, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 11:51:11 np0005601226 elastic_poincare[93766]: {}
Jan 29 11:51:11 np0005601226 systemd[1]: var-lib-containers-storage-overlay-45a8239d4cad83946fd9423a007802db32fb10df4859cdc157cc707a2028c675-merged.mount: Deactivated successfully.
Jan 29 11:51:11 np0005601226 podman[93771]: 2026-01-29 16:51:11.238277075 +0000 UTC m=+0.656165399 container remove fb043243d7e0081ac56c0129e376e7e3764295ce5ef8a730e119494efbb24ade (image=quay.io/ceph/ceph:v20, name=sweet_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 29 11:51:11 np0005601226 systemd[1]: libpod-conmon-fb043243d7e0081ac56c0129e376e7e3764295ce5ef8a730e119494efbb24ade.scope: Deactivated successfully.
Jan 29 11:51:11 np0005601226 systemd[1]: libpod-d95ccc2513d90f69c16f4c4e7e695e26893ca816b441144934a1ea91a52a46d1.scope: Deactivated successfully.
Jan 29 11:51:11 np0005601226 systemd[1]: libpod-d95ccc2513d90f69c16f4c4e7e695e26893ca816b441144934a1ea91a52a46d1.scope: Consumed 1.067s CPU time.
Jan 29 11:51:11 np0005601226 podman[93727]: 2026-01-29 16:51:11.257282632 +0000 UTC m=+0.898205858 container died d95ccc2513d90f69c16f4c4e7e695e26893ca816b441144934a1ea91a52a46d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 29 11:51:11 np0005601226 systemd[1]: var-lib-containers-storage-overlay-bd19c9763dcda75e49a9601d94af7cfc432bef881791cb0b35349b030c2c83e0-merged.mount: Deactivated successfully.
Jan 29 11:51:11 np0005601226 podman[93727]: 2026-01-29 16:51:11.301846829 +0000 UTC m=+0.942770065 container remove d95ccc2513d90f69c16f4c4e7e695e26893ca816b441144934a1ea91a52a46d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_poincare, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 11:51:11 np0005601226 systemd[1]: libpod-conmon-d95ccc2513d90f69c16f4c4e7e695e26893ca816b441144934a1ea91a52a46d1.scope: Deactivated successfully.
Jan 29 11:51:11 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 11:51:11 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:51:11 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 11:51:11 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:51:11 np0005601226 podman[94034]: 2026-01-29 16:51:11.888287192 +0000 UTC m=+0.044834917 container exec 79fb58d438a044b744a5a22031968fb38c458a504a88d1d9ce361ec7f4a99a0b (image=quay.io/ceph/ceph:v20, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-mon-compute-0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 29 11:51:11 np0005601226 ceph-mon[75233]: Saving service mds.cephfs spec with placement compute-0
Jan 29 11:51:11 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:51:11 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:51:11 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:51:12 np0005601226 podman[94034]: 2026-01-29 16:51:12.000666431 +0000 UTC m=+0.157214186 container exec_died 79fb58d438a044b744a5a22031968fb38c458a504a88d1d9ce361ec7f4a99a0b (image=quay.io/ceph/ceph:v20, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 29 11:51:12 np0005601226 python3[94153]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 29 11:51:12 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v70: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:51:12 np0005601226 python3[94301]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769705471.9546921-36659-200462311754082/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=06b885591518abc5ff796737c70f725941229789 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:51:12 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 11:51:12 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:51:12 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 11:51:12 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:51:12 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 11:51:12 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 11:51:12 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 11:51:12 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 11:51:12 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 11:51:12 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:51:12 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 11:51:12 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 11:51:12 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 11:51:12 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 11:51:12 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 11:51:12 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 11:51:12 np0005601226 podman[94446]: 2026-01-29 16:51:12.924106039 +0000 UTC m=+0.031863749 container create 2916b78767663f2472f5508d0f32c3f12fb268080b71f460814831d89b6072f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_ardinghelli, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 11:51:12 np0005601226 ceph-mon[75233]: Saving service mds.cephfs spec with placement compute-0
Jan 29 11:51:12 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:51:12 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:51:12 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 11:51:12 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:51:12 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 11:51:12 np0005601226 python3[94433]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid cc5c72e3-31e0-58b9-8731-456117d38f4a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:51:12 np0005601226 systemd[1]: Started libpod-conmon-2916b78767663f2472f5508d0f32c3f12fb268080b71f460814831d89b6072f0.scope.
Jan 29 11:51:12 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:51:12 np0005601226 podman[94446]: 2026-01-29 16:51:12.999337142 +0000 UTC m=+0.107094862 container init 2916b78767663f2472f5508d0f32c3f12fb268080b71f460814831d89b6072f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_ardinghelli, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:51:13 np0005601226 podman[94446]: 2026-01-29 16:51:13.005131405 +0000 UTC m=+0.112889105 container start 2916b78767663f2472f5508d0f32c3f12fb268080b71f460814831d89b6072f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_ardinghelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 29 11:51:13 np0005601226 podman[94446]: 2026-01-29 16:51:12.91100556 +0000 UTC m=+0.018763270 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:51:13 np0005601226 podman[94446]: 2026-01-29 16:51:13.007870422 +0000 UTC m=+0.115628122 container attach 2916b78767663f2472f5508d0f32c3f12fb268080b71f460814831d89b6072f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_ardinghelli, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 29 11:51:13 np0005601226 nice_ardinghelli[94463]: 167 167
Jan 29 11:51:13 np0005601226 systemd[1]: libpod-2916b78767663f2472f5508d0f32c3f12fb268080b71f460814831d89b6072f0.scope: Deactivated successfully.
Jan 29 11:51:13 np0005601226 podman[94446]: 2026-01-29 16:51:13.010282801 +0000 UTC m=+0.118040501 container died 2916b78767663f2472f5508d0f32c3f12fb268080b71f460814831d89b6072f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 11:51:13 np0005601226 systemd[1]: var-lib-containers-storage-overlay-fef1159a6716d823e00a3873c31dc8b17979059feffa0ef8ea0b62b2bdb653a5-merged.mount: Deactivated successfully.
Jan 29 11:51:13 np0005601226 podman[94462]: 2026-01-29 16:51:13.033514585 +0000 UTC m=+0.055063934 container create 702d352d1b166b8bc1b7893251348339200da0dab2a1f7891524a3ddbeef19e3 (image=quay.io/ceph/ceph:v20, name=tender_lehmann, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 11:51:13 np0005601226 podman[94446]: 2026-01-29 16:51:13.053970523 +0000 UTC m=+0.161728243 container remove 2916b78767663f2472f5508d0f32c3f12fb268080b71f460814831d89b6072f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_ardinghelli, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 29 11:51:13 np0005601226 podman[94462]: 2026-01-29 16:51:13.007125462 +0000 UTC m=+0.028674841 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:51:13 np0005601226 systemd[1]: Started libpod-conmon-702d352d1b166b8bc1b7893251348339200da0dab2a1f7891524a3ddbeef19e3.scope.
Jan 29 11:51:13 np0005601226 systemd[1]: libpod-conmon-2916b78767663f2472f5508d0f32c3f12fb268080b71f460814831d89b6072f0.scope: Deactivated successfully.
Jan 29 11:51:13 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:51:13 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c72eba9f4aa1d64adbc0da795307cf30d881c7592a234837b9a302bdcf874336/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:13 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c72eba9f4aa1d64adbc0da795307cf30d881c7592a234837b9a302bdcf874336/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:13 np0005601226 podman[94462]: 2026-01-29 16:51:13.150635489 +0000 UTC m=+0.172184848 container init 702d352d1b166b8bc1b7893251348339200da0dab2a1f7891524a3ddbeef19e3 (image=quay.io/ceph/ceph:v20, name=tender_lehmann, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 29 11:51:13 np0005601226 podman[94462]: 2026-01-29 16:51:13.159664885 +0000 UTC m=+0.181214234 container start 702d352d1b166b8bc1b7893251348339200da0dab2a1f7891524a3ddbeef19e3 (image=quay.io/ceph/ceph:v20, name=tender_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 11:51:13 np0005601226 podman[94462]: 2026-01-29 16:51:13.163714518 +0000 UTC m=+0.185263867 container attach 702d352d1b166b8bc1b7893251348339200da0dab2a1f7891524a3ddbeef19e3 (image=quay.io/ceph/ceph:v20, name=tender_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 29 11:51:13 np0005601226 podman[94504]: 2026-01-29 16:51:13.204428587 +0000 UTC m=+0.050193597 container create c357e7914091962ec335f5b8a9884686a5384c06b52dcb98d26e964caa907d56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 11:51:13 np0005601226 systemd[1]: Started libpod-conmon-c357e7914091962ec335f5b8a9884686a5384c06b52dcb98d26e964caa907d56.scope.
Jan 29 11:51:13 np0005601226 podman[94504]: 2026-01-29 16:51:13.179219966 +0000 UTC m=+0.024985066 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:51:13 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:51:13 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffe90cd884eb8e4daf139ac68329e2c7beb3f6944b066f9cfd2f4f0258070871/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:13 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffe90cd884eb8e4daf139ac68329e2c7beb3f6944b066f9cfd2f4f0258070871/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:13 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffe90cd884eb8e4daf139ac68329e2c7beb3f6944b066f9cfd2f4f0258070871/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:13 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffe90cd884eb8e4daf139ac68329e2c7beb3f6944b066f9cfd2f4f0258070871/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:13 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffe90cd884eb8e4daf139ac68329e2c7beb3f6944b066f9cfd2f4f0258070871/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:13 np0005601226 podman[94504]: 2026-01-29 16:51:13.316672303 +0000 UTC m=+0.162437553 container init c357e7914091962ec335f5b8a9884686a5384c06b52dcb98d26e964caa907d56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_solomon, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:51:13 np0005601226 podman[94504]: 2026-01-29 16:51:13.322992951 +0000 UTC m=+0.168757991 container start c357e7914091962ec335f5b8a9884686a5384c06b52dcb98d26e964caa907d56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_solomon, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 29 11:51:13 np0005601226 podman[94504]: 2026-01-29 16:51:13.326596863 +0000 UTC m=+0.172361893 container attach c357e7914091962ec335f5b8a9884686a5384c06b52dcb98d26e964caa907d56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_solomon, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3)
Jan 29 11:51:13 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth import"} v 0)
Jan 29 11:51:13 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1201914275' entity='client.admin' cmd={"prefix": "auth import"} : dispatch
Jan 29 11:51:13 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1201914275' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Jan 29 11:51:13 np0005601226 systemd[1]: libpod-702d352d1b166b8bc1b7893251348339200da0dab2a1f7891524a3ddbeef19e3.scope: Deactivated successfully.
Jan 29 11:51:13 np0005601226 podman[94462]: 2026-01-29 16:51:13.648474513 +0000 UTC m=+0.670023862 container died 702d352d1b166b8bc1b7893251348339200da0dab2a1f7891524a3ddbeef19e3 (image=quay.io/ceph/ceph:v20, name=tender_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 11:51:13 np0005601226 systemd[1]: var-lib-containers-storage-overlay-c72eba9f4aa1d64adbc0da795307cf30d881c7592a234837b9a302bdcf874336-merged.mount: Deactivated successfully.
Jan 29 11:51:13 np0005601226 podman[94462]: 2026-01-29 16:51:13.691332761 +0000 UTC m=+0.712882110 container remove 702d352d1b166b8bc1b7893251348339200da0dab2a1f7891524a3ddbeef19e3 (image=quay.io/ceph/ceph:v20, name=tender_lehmann, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 11:51:13 np0005601226 systemd[1]: libpod-conmon-702d352d1b166b8bc1b7893251348339200da0dab2a1f7891524a3ddbeef19e3.scope: Deactivated successfully.
Jan 29 11:51:13 np0005601226 eloquent_solomon[94531]: --> passed data devices: 0 physical, 3 LVM
Jan 29 11:51:13 np0005601226 eloquent_solomon[94531]: --> All data devices are unavailable
Jan 29 11:51:13 np0005601226 systemd[1]: libpod-c357e7914091962ec335f5b8a9884686a5384c06b52dcb98d26e964caa907d56.scope: Deactivated successfully.
Jan 29 11:51:13 np0005601226 podman[94504]: 2026-01-29 16:51:13.735324173 +0000 UTC m=+0.581089183 container died c357e7914091962ec335f5b8a9884686a5384c06b52dcb98d26e964caa907d56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_solomon, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 11:51:13 np0005601226 podman[94504]: 2026-01-29 16:51:13.773727736 +0000 UTC m=+0.619492746 container remove c357e7914091962ec335f5b8a9884686a5384c06b52dcb98d26e964caa907d56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_solomon, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:51:13 np0005601226 systemd[1]: libpod-conmon-c357e7914091962ec335f5b8a9884686a5384c06b52dcb98d26e964caa907d56.scope: Deactivated successfully.
Jan 29 11:51:13 np0005601226 systemd[1]: var-lib-containers-storage-overlay-ffe90cd884eb8e4daf139ac68329e2c7beb3f6944b066f9cfd2f4f0258070871-merged.mount: Deactivated successfully.
Jan 29 11:51:13 np0005601226 ceph-mon[75233]: from='client.? 192.168.122.100:0/1201914275' entity='client.admin' cmd={"prefix": "auth import"} : dispatch
Jan 29 11:51:13 np0005601226 ceph-mon[75233]: from='client.? 192.168.122.100:0/1201914275' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Jan 29 11:51:14 np0005601226 podman[94655]: 2026-01-29 16:51:14.167322148 +0000 UTC m=+0.030709457 container create 2fef3a8b418e4cd3f1cccb36b7659e3af23d6a6644b0ea2d809c44344071d907 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_elgamal, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 11:51:14 np0005601226 systemd[1]: Started libpod-conmon-2fef3a8b418e4cd3f1cccb36b7659e3af23d6a6644b0ea2d809c44344071d907.scope.
Jan 29 11:51:14 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:51:14 np0005601226 podman[94655]: 2026-01-29 16:51:14.226591051 +0000 UTC m=+0.089978380 container init 2fef3a8b418e4cd3f1cccb36b7659e3af23d6a6644b0ea2d809c44344071d907 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_elgamal, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 29 11:51:14 np0005601226 podman[94655]: 2026-01-29 16:51:14.23156183 +0000 UTC m=+0.094949139 container start 2fef3a8b418e4cd3f1cccb36b7659e3af23d6a6644b0ea2d809c44344071d907 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 11:51:14 np0005601226 systemd[1]: libpod-2fef3a8b418e4cd3f1cccb36b7659e3af23d6a6644b0ea2d809c44344071d907.scope: Deactivated successfully.
Jan 29 11:51:14 np0005601226 podman[94655]: 2026-01-29 16:51:14.234861783 +0000 UTC m=+0.098249092 container attach 2fef3a8b418e4cd3f1cccb36b7659e3af23d6a6644b0ea2d809c44344071d907 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_elgamal, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 11:51:14 np0005601226 quizzical_elgamal[94694]: 167 167
Jan 29 11:51:14 np0005601226 conmon[94694]: conmon 2fef3a8b418e4cd3f1cc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2fef3a8b418e4cd3f1cccb36b7659e3af23d6a6644b0ea2d809c44344071d907.scope/container/memory.events
Jan 29 11:51:14 np0005601226 podman[94655]: 2026-01-29 16:51:14.235627335 +0000 UTC m=+0.099014654 container died 2fef3a8b418e4cd3f1cccb36b7659e3af23d6a6644b0ea2d809c44344071d907 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_elgamal, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 11:51:14 np0005601226 podman[94655]: 2026-01-29 16:51:14.154030604 +0000 UTC m=+0.017417943 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:51:14 np0005601226 systemd[1]: var-lib-containers-storage-overlay-78a0a2e834cbfec18d8db006510db0c73c9db8fca1a492af0a309b01979214e8-merged.mount: Deactivated successfully.
Jan 29 11:51:14 np0005601226 podman[94655]: 2026-01-29 16:51:14.267657649 +0000 UTC m=+0.131044958 container remove 2fef3a8b418e4cd3f1cccb36b7659e3af23d6a6644b0ea2d809c44344071d907 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 29 11:51:14 np0005601226 systemd[1]: libpod-conmon-2fef3a8b418e4cd3f1cccb36b7659e3af23d6a6644b0ea2d809c44344071d907.scope: Deactivated successfully.
Jan 29 11:51:14 np0005601226 python3[94690]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid cc5c72e3-31e0-58b9-8731-456117d38f4a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:51:14 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v71: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:51:14 np0005601226 podman[94714]: 2026-01-29 16:51:14.345759912 +0000 UTC m=+0.034722981 container create de3e6b7780c5dfe93db562a051e1f0110cb0fed8d7f8e2613f43366d492ade28 (image=quay.io/ceph/ceph:v20, name=vigorous_ritchie, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:51:14 np0005601226 podman[94731]: 2026-01-29 16:51:14.376126398 +0000 UTC m=+0.042026027 container create 43e39753b5fc56853ed564dfa4fff824cf868db7c4d660d3fdce4de75eb0ea92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_brown, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 29 11:51:14 np0005601226 systemd[1]: Started libpod-conmon-de3e6b7780c5dfe93db562a051e1f0110cb0fed8d7f8e2613f43366d492ade28.scope.
Jan 29 11:51:14 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:51:14 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bec711172f69b6b82efeaca73519521b77f4e8d2e42d1ff99ef453f1dc56f6d2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:14 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bec711172f69b6b82efeaca73519521b77f4e8d2e42d1ff99ef453f1dc56f6d2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:14 np0005601226 systemd[1]: Started libpod-conmon-43e39753b5fc56853ed564dfa4fff824cf868db7c4d660d3fdce4de75eb0ea92.scope.
Jan 29 11:51:14 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:51:14 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07afc762535bc080bbe9b52ab8d1f6bb72b1ab9a7421bdd8bd91bc92b79f8fd9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:14 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07afc762535bc080bbe9b52ab8d1f6bb72b1ab9a7421bdd8bd91bc92b79f8fd9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:14 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07afc762535bc080bbe9b52ab8d1f6bb72b1ab9a7421bdd8bd91bc92b79f8fd9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:14 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07afc762535bc080bbe9b52ab8d1f6bb72b1ab9a7421bdd8bd91bc92b79f8fd9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:14 np0005601226 podman[94714]: 2026-01-29 16:51:14.421092776 +0000 UTC m=+0.110084326 container init de3e6b7780c5dfe93db562a051e1f0110cb0fed8d7f8e2613f43366d492ade28 (image=quay.io/ceph/ceph:v20, name=vigorous_ritchie, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 29 11:51:14 np0005601226 podman[94714]: 2026-01-29 16:51:14.328015841 +0000 UTC m=+0.016978940 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:51:14 np0005601226 podman[94714]: 2026-01-29 16:51:14.429412501 +0000 UTC m=+0.118375580 container start de3e6b7780c5dfe93db562a051e1f0110cb0fed8d7f8e2613f43366d492ade28 (image=quay.io/ceph/ceph:v20, name=vigorous_ritchie, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 29 11:51:14 np0005601226 podman[94731]: 2026-01-29 16:51:14.43223025 +0000 UTC m=+0.098129879 container init 43e39753b5fc56853ed564dfa4fff824cf868db7c4d660d3fdce4de75eb0ea92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_brown, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 11:51:14 np0005601226 podman[94731]: 2026-01-29 16:51:14.437787037 +0000 UTC m=+0.103686666 container start 43e39753b5fc56853ed564dfa4fff824cf868db7c4d660d3fdce4de75eb0ea92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_brown, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 29 11:51:14 np0005601226 podman[94714]: 2026-01-29 16:51:14.439375813 +0000 UTC m=+0.128338912 container attach de3e6b7780c5dfe93db562a051e1f0110cb0fed8d7f8e2613f43366d492ade28 (image=quay.io/ceph/ceph:v20, name=vigorous_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 11:51:14 np0005601226 podman[94731]: 2026-01-29 16:51:14.445288239 +0000 UTC m=+0.111187868 container attach 43e39753b5fc56853ed564dfa4fff824cf868db7c4d660d3fdce4de75eb0ea92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_brown, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 11:51:14 np0005601226 podman[94731]: 2026-01-29 16:51:14.357155313 +0000 UTC m=+0.023054962 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]: {
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:    "0": [
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:        {
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:            "devices": [
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:                "/dev/loop3"
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:            ],
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:            "lv_name": "ceph_lv0",
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:            "lv_size": "21470642176",
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:            "lv_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:            "name": "ceph_lv0",
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:            "tags": {
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:                "ceph.block_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:                "ceph.cephx_lockbox_secret": "",
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:                "ceph.cluster_name": "ceph",
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:                "ceph.crush_device_class": "",
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:                "ceph.encrypted": "0",
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:                "ceph.objectstore": "bluestore",
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:                "ceph.osd_fsid": "0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1",
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:                "ceph.osd_id": "0",
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:                "ceph.type": "block",
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:                "ceph.vdo": "0",
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:                "ceph.with_tpm": "0"
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:            },
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:            "type": "block",
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:            "vg_name": "ceph_vg0"
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:        }
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:    ],
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:    "1": [
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:        {
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:            "devices": [
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:                "/dev/loop4"
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:            ],
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:            "lv_name": "ceph_lv1",
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:            "lv_size": "21470642176",
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b59b9ee3-7bef-4274-a8bf-0f9cce011ae7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:            "lv_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:            "name": "ceph_lv1",
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:            "tags": {
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:                "ceph.block_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:                "ceph.cephx_lockbox_secret": "",
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:                "ceph.cluster_name": "ceph",
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:                "ceph.crush_device_class": "",
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:                "ceph.encrypted": "0",
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:                "ceph.objectstore": "bluestore",
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:                "ceph.osd_fsid": "b59b9ee3-7bef-4274-a8bf-0f9cce011ae7",
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:                "ceph.osd_id": "1",
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:                "ceph.type": "block",
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:                "ceph.vdo": "0",
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:                "ceph.with_tpm": "0"
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:            },
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:            "type": "block",
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:            "vg_name": "ceph_vg1"
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:        }
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:    ],
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:    "2": [
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:        {
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:            "devices": [
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:                "/dev/loop5"
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:            ],
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:            "lv_name": "ceph_lv2",
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:            "lv_size": "21470642176",
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=791d3808-828d-4f85-a3de-28df49f6a6ef,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:            "lv_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:            "name": "ceph_lv2",
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:            "tags": {
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:                "ceph.block_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:                "ceph.cephx_lockbox_secret": "",
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:                "ceph.cluster_name": "ceph",
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:                "ceph.crush_device_class": "",
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:                "ceph.encrypted": "0",
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:                "ceph.objectstore": "bluestore",
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:                "ceph.osd_fsid": "791d3808-828d-4f85-a3de-28df49f6a6ef",
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:                "ceph.osd_id": "2",
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:                "ceph.type": "block",
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:                "ceph.vdo": "0",
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:                "ceph.with_tpm": "0"
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:            },
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:            "type": "block",
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:            "vg_name": "ceph_vg2"
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:        }
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]:    ]
Jan 29 11:51:14 np0005601226 hopeful_brown[94754]: }
Jan 29 11:51:14 np0005601226 systemd[1]: libpod-43e39753b5fc56853ed564dfa4fff824cf868db7c4d660d3fdce4de75eb0ea92.scope: Deactivated successfully.
Jan 29 11:51:14 np0005601226 podman[94783]: 2026-01-29 16:51:14.753449452 +0000 UTC m=+0.019589044 container died 43e39753b5fc56853ed564dfa4fff824cf868db7c4d660d3fdce4de75eb0ea92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_brown, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 29 11:51:14 np0005601226 podman[94783]: 2026-01-29 16:51:14.81541913 +0000 UTC m=+0.081558702 container remove 43e39753b5fc56853ed564dfa4fff824cf868db7c4d660d3fdce4de75eb0ea92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 29 11:51:14 np0005601226 systemd[1]: libpod-conmon-43e39753b5fc56853ed564dfa4fff824cf868db7c4d660d3fdce4de75eb0ea92.scope: Deactivated successfully.
Jan 29 11:51:14 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 29 11:51:14 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1868248999' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 29 11:51:14 np0005601226 vigorous_ritchie[94749]: 
Jan 29 11:51:14 np0005601226 vigorous_ritchie[94749]: {"fsid":"cc5c72e3-31e0-58b9-8731-456117d38f4a","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":120,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":32,"num_osds":3,"num_up_osds":3,"osd_up_since":1769705442,"num_in_osds":3,"osd_in_since":1769705419,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":7}],"num_pgs":7,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":83861504,"bytes_avail":64328065024,"bytes_total":64411926528},"fsmap":{"epoch":2,"btime":"2026-01-29T16:51:10:131384+0000","id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-01-29T16:50:40.323321+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Jan 29 11:51:14 np0005601226 systemd[1]: libpod-de3e6b7780c5dfe93db562a051e1f0110cb0fed8d7f8e2613f43366d492ade28.scope: Deactivated successfully.
Jan 29 11:51:14 np0005601226 podman[94714]: 2026-01-29 16:51:14.922658395 +0000 UTC m=+0.611621474 container died de3e6b7780c5dfe93db562a051e1f0110cb0fed8d7f8e2613f43366d492ade28 (image=quay.io/ceph/ceph:v20, name=vigorous_ritchie, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 29 11:51:14 np0005601226 systemd[1]: var-lib-containers-storage-overlay-07afc762535bc080bbe9b52ab8d1f6bb72b1ab9a7421bdd8bd91bc92b79f8fd9-merged.mount: Deactivated successfully.
Jan 29 11:51:14 np0005601226 systemd[1]: var-lib-containers-storage-overlay-bec711172f69b6b82efeaca73519521b77f4e8d2e42d1ff99ef453f1dc56f6d2-merged.mount: Deactivated successfully.
Jan 29 11:51:14 np0005601226 podman[94714]: 2026-01-29 16:51:14.960943135 +0000 UTC m=+0.649906214 container remove de3e6b7780c5dfe93db562a051e1f0110cb0fed8d7f8e2613f43366d492ade28 (image=quay.io/ceph/ceph:v20, name=vigorous_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 29 11:51:14 np0005601226 systemd[1]: libpod-conmon-de3e6b7780c5dfe93db562a051e1f0110cb0fed8d7f8e2613f43366d492ade28.scope: Deactivated successfully.
Jan 29 11:51:15 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:51:15 np0005601226 podman[94900]: 2026-01-29 16:51:15.201977644 +0000 UTC m=+0.032944040 container create 22bf05fd0be7af8eb504e8d05b67b84335c9e4ca99c248aa2a5aeacad140117e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_davinci, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 29 11:51:15 np0005601226 python3[94887]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid cc5c72e3-31e0-58b9-8731-456117d38f4a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:51:15 np0005601226 systemd[1]: Started libpod-conmon-22bf05fd0be7af8eb504e8d05b67b84335c9e4ca99c248aa2a5aeacad140117e.scope.
Jan 29 11:51:15 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:51:15 np0005601226 podman[94916]: 2026-01-29 16:51:15.261880854 +0000 UTC m=+0.032412616 container create 8a7569fb81e218d5cefcb50997d81d13b051a7c723431b487922bdf04d36d25a (image=quay.io/ceph/ceph:v20, name=heuristic_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 29 11:51:15 np0005601226 podman[94900]: 2026-01-29 16:51:15.265797374 +0000 UTC m=+0.096763780 container init 22bf05fd0be7af8eb504e8d05b67b84335c9e4ca99c248aa2a5aeacad140117e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 29 11:51:15 np0005601226 podman[94900]: 2026-01-29 16:51:15.270865117 +0000 UTC m=+0.101831493 container start 22bf05fd0be7af8eb504e8d05b67b84335c9e4ca99c248aa2a5aeacad140117e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 29 11:51:15 np0005601226 fervent_davinci[94922]: 167 167
Jan 29 11:51:15 np0005601226 systemd[1]: libpod-22bf05fd0be7af8eb504e8d05b67b84335c9e4ca99c248aa2a5aeacad140117e.scope: Deactivated successfully.
Jan 29 11:51:15 np0005601226 conmon[94922]: conmon 22bf05fd0be7af8eb504 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-22bf05fd0be7af8eb504e8d05b67b84335c9e4ca99c248aa2a5aeacad140117e.scope/container/memory.events
Jan 29 11:51:15 np0005601226 podman[94900]: 2026-01-29 16:51:15.279781869 +0000 UTC m=+0.110748275 container attach 22bf05fd0be7af8eb504e8d05b67b84335c9e4ca99c248aa2a5aeacad140117e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 29 11:51:15 np0005601226 podman[94900]: 2026-01-29 16:51:15.280005945 +0000 UTC m=+0.110972321 container died 22bf05fd0be7af8eb504e8d05b67b84335c9e4ca99c248aa2a5aeacad140117e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 11:51:15 np0005601226 podman[94900]: 2026-01-29 16:51:15.187436574 +0000 UTC m=+0.018402970 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:51:15 np0005601226 systemd[1]: Started libpod-conmon-8a7569fb81e218d5cefcb50997d81d13b051a7c723431b487922bdf04d36d25a.scope.
Jan 29 11:51:15 np0005601226 systemd[1]: var-lib-containers-storage-overlay-ea559a29c4d4d967682de72e680c77d98c71f36fc59d0599552b96d749e489ad-merged.mount: Deactivated successfully.
Jan 29 11:51:15 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:51:15 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f302bbea6c571e683d8e07441a546092a4442318e1ae3ea77dd3905502de1cf4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:15 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f302bbea6c571e683d8e07441a546092a4442318e1ae3ea77dd3905502de1cf4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:15 np0005601226 podman[94916]: 2026-01-29 16:51:15.318325465 +0000 UTC m=+0.088857257 container init 8a7569fb81e218d5cefcb50997d81d13b051a7c723431b487922bdf04d36d25a (image=quay.io/ceph/ceph:v20, name=heuristic_proskuriakova, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:51:15 np0005601226 podman[94916]: 2026-01-29 16:51:15.322497483 +0000 UTC m=+0.093029245 container start 8a7569fb81e218d5cefcb50997d81d13b051a7c723431b487922bdf04d36d25a (image=quay.io/ceph/ceph:v20, name=heuristic_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 11:51:15 np0005601226 podman[94916]: 2026-01-29 16:51:15.326494686 +0000 UTC m=+0.097026448 container attach 8a7569fb81e218d5cefcb50997d81d13b051a7c723431b487922bdf04d36d25a (image=quay.io/ceph/ceph:v20, name=heuristic_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 29 11:51:15 np0005601226 podman[94900]: 2026-01-29 16:51:15.322662238 +0000 UTC m=+0.153628624 container remove 22bf05fd0be7af8eb504e8d05b67b84335c9e4ca99c248aa2a5aeacad140117e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_davinci, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:51:15 np0005601226 systemd[1]: libpod-conmon-22bf05fd0be7af8eb504e8d05b67b84335c9e4ca99c248aa2a5aeacad140117e.scope: Deactivated successfully.
Jan 29 11:51:15 np0005601226 podman[94916]: 2026-01-29 16:51:15.247519629 +0000 UTC m=+0.018051391 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:51:15 np0005601226 podman[94981]: 2026-01-29 16:51:15.465300281 +0000 UTC m=+0.037193819 container create 003083735e2d2172ed4990d6afb354880fff5bfc712fba864f1931b4c24a2610 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_moore, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 29 11:51:15 np0005601226 systemd[1]: Started libpod-conmon-003083735e2d2172ed4990d6afb354880fff5bfc712fba864f1931b4c24a2610.scope.
Jan 29 11:51:15 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:51:15 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/444dac615787b118300649e8e25fe5f38cc33082d0ee848357b0deabe1165dee/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:15 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/444dac615787b118300649e8e25fe5f38cc33082d0ee848357b0deabe1165dee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:15 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/444dac615787b118300649e8e25fe5f38cc33082d0ee848357b0deabe1165dee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:15 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/444dac615787b118300649e8e25fe5f38cc33082d0ee848357b0deabe1165dee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:15 np0005601226 podman[94981]: 2026-01-29 16:51:15.532575749 +0000 UTC m=+0.104469287 container init 003083735e2d2172ed4990d6afb354880fff5bfc712fba864f1931b4c24a2610 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_moore, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 11:51:15 np0005601226 podman[94981]: 2026-01-29 16:51:15.536652574 +0000 UTC m=+0.108546092 container start 003083735e2d2172ed4990d6afb354880fff5bfc712fba864f1931b4c24a2610 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 29 11:51:15 np0005601226 podman[94981]: 2026-01-29 16:51:15.539649749 +0000 UTC m=+0.111543277 container attach 003083735e2d2172ed4990d6afb354880fff5bfc712fba864f1931b4c24a2610 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_moore, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 29 11:51:15 np0005601226 podman[94981]: 2026-01-29 16:51:15.449903007 +0000 UTC m=+0.021796555 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:51:15 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 11:51:15 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3909700369' entity='client.admin' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 11:51:15 np0005601226 heuristic_proskuriakova[94951]: 
Jan 29 11:51:15 np0005601226 heuristic_proskuriakova[94951]: {"epoch":1,"fsid":"cc5c72e3-31e0-58b9-8731-456117d38f4a","modified":"2026-01-29T16:49:09.219895Z","created":"2026-01-29T16:49:09.219895Z","min_mon_release":20,"min_mon_release_name":"tentacle","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid","tentacle"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
Jan 29 11:51:15 np0005601226 heuristic_proskuriakova[94951]: dumped monmap epoch 1
Jan 29 11:51:15 np0005601226 systemd[1]: libpod-8a7569fb81e218d5cefcb50997d81d13b051a7c723431b487922bdf04d36d25a.scope: Deactivated successfully.
Jan 29 11:51:15 np0005601226 podman[94916]: 2026-01-29 16:51:15.82754733 +0000 UTC m=+0.598079102 container died 8a7569fb81e218d5cefcb50997d81d13b051a7c723431b487922bdf04d36d25a (image=quay.io/ceph/ceph:v20, name=heuristic_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 29 11:51:15 np0005601226 podman[94916]: 2026-01-29 16:51:15.862454095 +0000 UTC m=+0.632985857 container remove 8a7569fb81e218d5cefcb50997d81d13b051a7c723431b487922bdf04d36d25a (image=quay.io/ceph/ceph:v20, name=heuristic_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 11:51:15 np0005601226 systemd[1]: libpod-conmon-8a7569fb81e218d5cefcb50997d81d13b051a7c723431b487922bdf04d36d25a.scope: Deactivated successfully.
Jan 29 11:51:15 np0005601226 systemd[1]: var-lib-containers-storage-overlay-f302bbea6c571e683d8e07441a546092a4442318e1ae3ea77dd3905502de1cf4-merged.mount: Deactivated successfully.
Jan 29 11:51:16 np0005601226 lvm[95089]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 11:51:16 np0005601226 lvm[95089]: VG ceph_vg1 finished
Jan 29 11:51:16 np0005601226 lvm[95086]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 11:51:16 np0005601226 lvm[95086]: VG ceph_vg0 finished
Jan 29 11:51:16 np0005601226 lvm[95091]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 11:51:16 np0005601226 lvm[95091]: VG ceph_vg2 finished
Jan 29 11:51:16 np0005601226 lvm[95098]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 11:51:16 np0005601226 lvm[95098]: VG ceph_vg0 finished
Jan 29 11:51:16 np0005601226 funny_moore[94998]: {}
Jan 29 11:51:16 np0005601226 systemd[1]: libpod-003083735e2d2172ed4990d6afb354880fff5bfc712fba864f1931b4c24a2610.scope: Deactivated successfully.
Jan 29 11:51:16 np0005601226 podman[94981]: 2026-01-29 16:51:16.296844658 +0000 UTC m=+0.868738176 container died 003083735e2d2172ed4990d6afb354880fff5bfc712fba864f1931b4c24a2610 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 29 11:51:16 np0005601226 systemd[1]: libpod-003083735e2d2172ed4990d6afb354880fff5bfc712fba864f1931b4c24a2610.scope: Consumed 1.010s CPU time.
Jan 29 11:51:16 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v72: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:51:16 np0005601226 python3[95119]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid cc5c72e3-31e0-58b9-8731-456117d38f4a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:51:16 np0005601226 systemd[1]: var-lib-containers-storage-overlay-444dac615787b118300649e8e25fe5f38cc33082d0ee848357b0deabe1165dee-merged.mount: Deactivated successfully.
Jan 29 11:51:16 np0005601226 podman[95132]: 2026-01-29 16:51:16.427863854 +0000 UTC m=+0.028370042 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:51:16 np0005601226 podman[94981]: 2026-01-29 16:51:16.596359556 +0000 UTC m=+1.168253094 container remove 003083735e2d2172ed4990d6afb354880fff5bfc712fba864f1931b4c24a2610 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_moore, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:51:16 np0005601226 systemd[1]: libpod-conmon-003083735e2d2172ed4990d6afb354880fff5bfc712fba864f1931b4c24a2610.scope: Deactivated successfully.
Jan 29 11:51:16 np0005601226 podman[95132]: 2026-01-29 16:51:16.628645747 +0000 UTC m=+0.229151915 container create 2a850cc9a5de7e86f9c8e92de58ee91a6853f9dd913b508c822b561f64027bf2 (image=quay.io/ceph/ceph:v20, name=elastic_napier, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:51:16 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 11:51:16 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:51:16 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 11:51:16 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:51:16 np0005601226 ceph-mgr[75527]: [progress INFO root] update: starting ev 7f9b19ea-e98e-4a31-9857-898da1b801a7 (Updating rgw.rgw deployment (+1 -> 1))
Jan 29 11:51:16 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.gtpysq", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Jan 29 11:51:16 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.gtpysq", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} : dispatch
Jan 29 11:51:16 np0005601226 systemd[1]: Started libpod-conmon-2a850cc9a5de7e86f9c8e92de58ee91a6853f9dd913b508c822b561f64027bf2.scope.
Jan 29 11:51:16 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.gtpysq", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 29 11:51:16 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Jan 29 11:51:16 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:51:16 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 11:51:16 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 11:51:16 np0005601226 ceph-mgr[75527]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.gtpysq on compute-0
Jan 29 11:51:16 np0005601226 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.gtpysq on compute-0
Jan 29 11:51:16 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:51:16 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8b7fc134fa7d658e00311ceb2e193a65af0e3a2ce878d9779afee94fa1fbcf8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:16 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8b7fc134fa7d658e00311ceb2e193a65af0e3a2ce878d9779afee94fa1fbcf8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:16 np0005601226 podman[95132]: 2026-01-29 16:51:16.725809128 +0000 UTC m=+0.326315326 container init 2a850cc9a5de7e86f9c8e92de58ee91a6853f9dd913b508c822b561f64027bf2 (image=quay.io/ceph/ceph:v20, name=elastic_napier, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 11:51:16 np0005601226 podman[95132]: 2026-01-29 16:51:16.730998984 +0000 UTC m=+0.331505122 container start 2a850cc9a5de7e86f9c8e92de58ee91a6853f9dd913b508c822b561f64027bf2 (image=quay.io/ceph/ceph:v20, name=elastic_napier, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:51:16 np0005601226 podman[95132]: 2026-01-29 16:51:16.734343459 +0000 UTC m=+0.334849647 container attach 2a850cc9a5de7e86f9c8e92de58ee91a6853f9dd913b508c822b561f64027bf2 (image=quay.io/ceph/ceph:v20, name=elastic_napier, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 29 11:51:17 np0005601226 podman[95262]: 2026-01-29 16:51:17.154534772 +0000 UTC m=+0.034779333 container create 2713a6b827624195f81cde2921138f3c313b8aa13f5c990c2e22a67187d07184 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 29 11:51:17 np0005601226 systemd[1]: Started libpod-conmon-2713a6b827624195f81cde2921138f3c313b8aa13f5c990c2e22a67187d07184.scope.
Jan 29 11:51:17 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:51:17 np0005601226 podman[95262]: 2026-01-29 16:51:17.227084648 +0000 UTC m=+0.107329259 container init 2713a6b827624195f81cde2921138f3c313b8aa13f5c990c2e22a67187d07184 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_yalow, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 29 11:51:17 np0005601226 podman[95262]: 2026-01-29 16:51:17.232274434 +0000 UTC m=+0.112519035 container start 2713a6b827624195f81cde2921138f3c313b8aa13f5c990c2e22a67187d07184 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 11:51:17 np0005601226 podman[95262]: 2026-01-29 16:51:17.138708765 +0000 UTC m=+0.018953356 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:51:17 np0005601226 magical_yalow[95278]: 167 167
Jan 29 11:51:17 np0005601226 systemd[1]: libpod-2713a6b827624195f81cde2921138f3c313b8aa13f5c990c2e22a67187d07184.scope: Deactivated successfully.
Jan 29 11:51:17 np0005601226 conmon[95278]: conmon 2713a6b827624195f81c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2713a6b827624195f81cde2921138f3c313b8aa13f5c990c2e22a67187d07184.scope/container/memory.events
Jan 29 11:51:17 np0005601226 podman[95262]: 2026-01-29 16:51:17.238669565 +0000 UTC m=+0.118914216 container attach 2713a6b827624195f81cde2921138f3c313b8aa13f5c990c2e22a67187d07184 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 29 11:51:17 np0005601226 podman[95262]: 2026-01-29 16:51:17.239141348 +0000 UTC m=+0.119385899 container died 2713a6b827624195f81cde2921138f3c313b8aa13f5c990c2e22a67187d07184 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_yalow, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 11:51:17 np0005601226 systemd[1]: var-lib-containers-storage-overlay-8d0339485be7fa9886f9b4859869e2b45676e6e030661cdd30bee3c6d2fa29cb-merged.mount: Deactivated successfully.
Jan 29 11:51:17 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0)
Jan 29 11:51:17 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3341406128' entity='client.admin' cmd={"prefix": "auth get", "entity": "client.openstack"} : dispatch
Jan 29 11:51:17 np0005601226 elastic_napier[95148]: [client.openstack]
Jan 29 11:51:17 np0005601226 elastic_napier[95148]: #011key = AQBfj3tpAAAAABAAZekTQ22xEFAz0za+SnmgoQ==
Jan 29 11:51:17 np0005601226 elastic_napier[95148]: #011caps mgr = "allow *"
Jan 29 11:51:17 np0005601226 elastic_napier[95148]: #011caps mon = "profile rbd"
Jan 29 11:51:17 np0005601226 elastic_napier[95148]: #011caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Jan 29 11:51:17 np0005601226 podman[95262]: 2026-01-29 16:51:17.282526982 +0000 UTC m=+0.162771543 container remove 2713a6b827624195f81cde2921138f3c313b8aa13f5c990c2e22a67187d07184 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_yalow, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 29 11:51:17 np0005601226 systemd[1]: libpod-2a850cc9a5de7e86f9c8e92de58ee91a6853f9dd913b508c822b561f64027bf2.scope: Deactivated successfully.
Jan 29 11:51:17 np0005601226 podman[95132]: 2026-01-29 16:51:17.28708081 +0000 UTC m=+0.887586958 container died 2a850cc9a5de7e86f9c8e92de58ee91a6853f9dd913b508c822b561f64027bf2 (image=quay.io/ceph/ceph:v20, name=elastic_napier, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 11:51:17 np0005601226 systemd[1]: libpod-conmon-2713a6b827624195f81cde2921138f3c313b8aa13f5c990c2e22a67187d07184.scope: Deactivated successfully.
Jan 29 11:51:17 np0005601226 systemd[1]: var-lib-containers-storage-overlay-e8b7fc134fa7d658e00311ceb2e193a65af0e3a2ce878d9779afee94fa1fbcf8-merged.mount: Deactivated successfully.
Jan 29 11:51:17 np0005601226 systemd[1]: Reloading.
Jan 29 11:51:17 np0005601226 podman[95132]: 2026-01-29 16:51:17.337562064 +0000 UTC m=+0.938068212 container remove 2a850cc9a5de7e86f9c8e92de58ee91a6853f9dd913b508c822b561f64027bf2 (image=quay.io/ceph/ceph:v20, name=elastic_napier, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 11:51:17 np0005601226 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 11:51:17 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 11:51:17 np0005601226 systemd[1]: libpod-conmon-2a850cc9a5de7e86f9c8e92de58ee91a6853f9dd913b508c822b561f64027bf2.scope: Deactivated successfully.
Jan 29 11:51:17 np0005601226 systemd[1]: Reloading.
Jan 29 11:51:17 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:51:17 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:51:17 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.gtpysq", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} : dispatch
Jan 29 11:51:17 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.gtpysq", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 29 11:51:17 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:51:17 np0005601226 ceph-mon[75233]: Deploying daemon rgw.rgw.compute-0.gtpysq on compute-0
Jan 29 11:51:17 np0005601226 ceph-mon[75233]: from='client.? 192.168.122.100:0/3341406128' entity='client.admin' cmd={"prefix": "auth get", "entity": "client.openstack"} : dispatch
Jan 29 11:51:17 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 11:51:17 np0005601226 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 11:51:17 np0005601226 systemd[1]: Starting Ceph rgw.rgw.compute-0.gtpysq for cc5c72e3-31e0-58b9-8731-456117d38f4a...
Jan 29 11:51:18 np0005601226 podman[95433]: 2026-01-29 16:51:18.085906024 +0000 UTC m=+0.045545617 container create 5c717c91b2db3640677561c5b6e8ce82531c7bcb99d94f5a62fb0111536958c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-rgw-rgw-compute-0-gtpysq, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 29 11:51:18 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea71dfb3e87c2e6889492dc660f460a9a826e20044146adcdcdf655e5bbfac10/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:18 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea71dfb3e87c2e6889492dc660f460a9a826e20044146adcdcdf655e5bbfac10/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:18 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea71dfb3e87c2e6889492dc660f460a9a826e20044146adcdcdf655e5bbfac10/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:18 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea71dfb3e87c2e6889492dc660f460a9a826e20044146adcdcdf655e5bbfac10/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.gtpysq supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:18 np0005601226 podman[95433]: 2026-01-29 16:51:18.061450163 +0000 UTC m=+0.021089806 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:51:18 np0005601226 podman[95433]: 2026-01-29 16:51:18.165139729 +0000 UTC m=+0.124779322 container init 5c717c91b2db3640677561c5b6e8ce82531c7bcb99d94f5a62fb0111536958c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-rgw-rgw-compute-0-gtpysq, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 29 11:51:18 np0005601226 podman[95433]: 2026-01-29 16:51:18.176828888 +0000 UTC m=+0.136468451 container start 5c717c91b2db3640677561c5b6e8ce82531c7bcb99d94f5a62fb0111536958c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-rgw-rgw-compute-0-gtpysq, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Jan 29 11:51:18 np0005601226 bash[95433]: 5c717c91b2db3640677561c5b6e8ce82531c7bcb99d94f5a62fb0111536958c3
Jan 29 11:51:18 np0005601226 systemd[1]: Started Ceph rgw.rgw.compute-0.gtpysq for cc5c72e3-31e0-58b9-8731-456117d38f4a.
Jan 29 11:51:18 np0005601226 radosgw[95453]: deferred set uid:gid to 167:167 (ceph:ceph)
Jan 29 11:51:18 np0005601226 radosgw[95453]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process radosgw, pid 2
Jan 29 11:51:18 np0005601226 radosgw[95453]: framework: beast
Jan 29 11:51:18 np0005601226 radosgw[95453]: framework conf key: endpoint, val: 192.168.122.100:8082
Jan 29 11:51:18 np0005601226 radosgw[95453]: init_numa not setting numa affinity
Jan 29 11:51:18 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 11:51:18 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:51:18 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 11:51:18 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:51:18 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 29 11:51:18 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:51:18 np0005601226 ceph-mgr[75527]: [progress INFO root] complete: finished ev 7f9b19ea-e98e-4a31-9857-898da1b801a7 (Updating rgw.rgw deployment (+1 -> 1))
Jan 29 11:51:18 np0005601226 ceph-mgr[75527]: [progress INFO root] Completed event 7f9b19ea-e98e-4a31-9857-898da1b801a7 (Updating rgw.rgw deployment (+1 -> 1)) in 2 seconds
Jan 29 11:51:18 np0005601226 ceph-mgr[75527]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0
Jan 29 11:51:18 np0005601226 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Jan 29 11:51:18 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 29 11:51:18 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:51:18 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 29 11:51:18 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:51:18 np0005601226 ceph-mgr[75527]: [progress INFO root] update: starting ev d00910e0-13c7-4e15-85c5-075736cbd85e (Updating mds.cephfs deployment (+1 -> 1))
Jan 29 11:51:18 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.cflubi", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Jan 29 11:51:18 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.cflubi", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Jan 29 11:51:18 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.cflubi", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 29 11:51:18 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 11:51:18 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 11:51:18 np0005601226 ceph-mgr[75527]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.cflubi on compute-0
Jan 29 11:51:18 np0005601226 ceph-mgr[75527]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.cflubi on compute-0
Jan 29 11:51:18 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v73: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:51:18 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Jan 29 11:51:18 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Jan 29 11:51:18 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Jan 29 11:51:18 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0)
Jan 29 11:51:18 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1884779344' entity='client.rgw.rgw.compute-0.gtpysq' cmd={"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} : dispatch
Jan 29 11:51:18 np0005601226 ansible-async_wrapper.py[95681]: Invoked with j125693458161 30 /home/zuul/.ansible/tmp/ansible-tmp-1769705478.3341417-36731-87354049333505/AnsiballZ_command.py _
Jan 29 11:51:18 np0005601226 ansible-async_wrapper.py[95714]: Starting module and watcher
Jan 29 11:51:18 np0005601226 ansible-async_wrapper.py[95714]: Start watching 95715 (30)
Jan 29 11:51:18 np0005601226 ansible-async_wrapper.py[95715]: Start module (95715)
Jan 29 11:51:18 np0005601226 ansible-async_wrapper.py[95681]: Return async_wrapper task started.
Jan 29 11:51:18 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 33 pg[8.0( empty local-lis/les=0/0 n=0 ec=33/33 lis/c=0/0 les/c/f=0/0/0 sis=33) [1] r=0 lpr=33 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:18 np0005601226 podman[95727]: 2026-01-29 16:51:18.93552641 +0000 UTC m=+0.046115392 container create c359704164fa68b15e75bcbb76a61d6a59516b7eafb19566f1a6549543c6b116 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 11:51:18 np0005601226 systemd[1]: Started libpod-conmon-c359704164fa68b15e75bcbb76a61d6a59516b7eafb19566f1a6549543c6b116.scope.
Jan 29 11:51:19 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:51:19 np0005601226 podman[95727]: 2026-01-29 16:51:18.911760879 +0000 UTC m=+0.022349921 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:51:19 np0005601226 python3[95716]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid cc5c72e3-31e0-58b9-8731-456117d38f4a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:51:19 np0005601226 podman[95727]: 2026-01-29 16:51:19.018180281 +0000 UTC m=+0.128769283 container init c359704164fa68b15e75bcbb76a61d6a59516b7eafb19566f1a6549543c6b116 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 29 11:51:19 np0005601226 podman[95727]: 2026-01-29 16:51:19.025081346 +0000 UTC m=+0.135670328 container start c359704164fa68b15e75bcbb76a61d6a59516b7eafb19566f1a6549543c6b116 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 29 11:51:19 np0005601226 laughing_hypatia[95744]: 167 167
Jan 29 11:51:19 np0005601226 systemd[1]: libpod-c359704164fa68b15e75bcbb76a61d6a59516b7eafb19566f1a6549543c6b116.scope: Deactivated successfully.
Jan 29 11:51:19 np0005601226 podman[95727]: 2026-01-29 16:51:19.033235926 +0000 UTC m=+0.143824958 container attach c359704164fa68b15e75bcbb76a61d6a59516b7eafb19566f1a6549543c6b116 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_hypatia, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 11:51:19 np0005601226 podman[95727]: 2026-01-29 16:51:19.033656978 +0000 UTC m=+0.144245970 container died c359704164fa68b15e75bcbb76a61d6a59516b7eafb19566f1a6549543c6b116 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 29 11:51:19 np0005601226 systemd[1]: var-lib-containers-storage-overlay-7134bdd92caa202a4ccdb008bd9a8338bfa8f4d3cc81b77fd67b90d06ac1d888-merged.mount: Deactivated successfully.
Jan 29 11:51:19 np0005601226 podman[95727]: 2026-01-29 16:51:19.088221917 +0000 UTC m=+0.198810889 container remove c359704164fa68b15e75bcbb76a61d6a59516b7eafb19566f1a6549543c6b116 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_hypatia, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 29 11:51:19 np0005601226 podman[95747]: 2026-01-29 16:51:19.113072848 +0000 UTC m=+0.085073591 container create 61434e24869c99ca26d22aa4b84176d7b4bffd5c58d43ff6c7b6d6c60ec3ad5b (image=quay.io/ceph/ceph:v20, name=eloquent_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:51:19 np0005601226 podman[95747]: 2026-01-29 16:51:19.054976439 +0000 UTC m=+0.026977192 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:51:19 np0005601226 systemd[1]: Started libpod-conmon-61434e24869c99ca26d22aa4b84176d7b4bffd5c58d43ff6c7b6d6c60ec3ad5b.scope.
Jan 29 11:51:19 np0005601226 systemd[1]: libpod-conmon-c359704164fa68b15e75bcbb76a61d6a59516b7eafb19566f1a6549543c6b116.scope: Deactivated successfully.
Jan 29 11:51:19 np0005601226 systemd[1]: Reloading.
Jan 29 11:51:19 np0005601226 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 11:51:19 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 11:51:19 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:51:19 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:51:19 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:51:19 np0005601226 ceph-mon[75233]: Saving service rgw.rgw spec with placement compute-0
Jan 29 11:51:19 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:51:19 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:51:19 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.cflubi", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Jan 29 11:51:19 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.cflubi", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 29 11:51:19 np0005601226 ceph-mon[75233]: Deploying daemon mds.cephfs.compute-0.cflubi on compute-0
Jan 29 11:51:19 np0005601226 ceph-mon[75233]: from='client.? 192.168.122.100:0/1884779344' entity='client.rgw.rgw.compute-0.gtpysq' cmd={"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} : dispatch
Jan 29 11:51:19 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:51:19 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/961d58b523c14847e13ba6798f32dfdf3f6f2d19d48ee91e62817c79fca5d28e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:19 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/961d58b523c14847e13ba6798f32dfdf3f6f2d19d48ee91e62817c79fca5d28e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:19 np0005601226 podman[95747]: 2026-01-29 16:51:19.416452135 +0000 UTC m=+0.388452908 container init 61434e24869c99ca26d22aa4b84176d7b4bffd5c58d43ff6c7b6d6c60ec3ad5b (image=quay.io/ceph/ceph:v20, name=eloquent_williams, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 29 11:51:19 np0005601226 podman[95747]: 2026-01-29 16:51:19.423798152 +0000 UTC m=+0.395798905 container start 61434e24869c99ca26d22aa4b84176d7b4bffd5c58d43ff6c7b6d6c60ec3ad5b (image=quay.io/ceph/ceph:v20, name=eloquent_williams, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 29 11:51:19 np0005601226 podman[95747]: 2026-01-29 16:51:19.433868357 +0000 UTC m=+0.405869120 container attach 61434e24869c99ca26d22aa4b84176d7b4bffd5c58d43ff6c7b6d6c60ec3ad5b (image=quay.io/ceph/ceph:v20, name=eloquent_williams, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 29 11:51:19 np0005601226 systemd[1]: Reloading.
Jan 29 11:51:19 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 11:51:19 np0005601226 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 11:51:19 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Jan 29 11:51:19 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1884779344' entity='client.rgw.rgw.compute-0.gtpysq' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Jan 29 11:51:19 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Jan 29 11:51:19 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Jan 29 11:51:19 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 34 pg[8.0( empty local-lis/les=33/34 n=0 ec=33/33 lis/c=0/0 les/c/f=0/0/0 sis=33) [1] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:19 np0005601226 systemd[1]: Starting Ceph mds.cephfs.compute-0.cflubi for cc5c72e3-31e0-58b9-8731-456117d38f4a...
Jan 29 11:51:19 np0005601226 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.14251 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 29 11:51:19 np0005601226 eloquent_williams[95776]: 
Jan 29 11:51:19 np0005601226 eloquent_williams[95776]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 29 11:51:19 np0005601226 podman[96489]: 2026-01-29 16:51:19.889735376 +0000 UTC m=+0.051409781 container create 469d57b85c2b67a02fd3b4b54de5c8273a30a3c59d006b1002ca9f7272a9c177 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-mds-cephfs-compute-0-cflubi, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:51:19 np0005601226 systemd[1]: libpod-61434e24869c99ca26d22aa4b84176d7b4bffd5c58d43ff6c7b6d6c60ec3ad5b.scope: Deactivated successfully.
Jan 29 11:51:19 np0005601226 podman[95747]: 2026-01-29 16:51:19.901372945 +0000 UTC m=+0.873373678 container died 61434e24869c99ca26d22aa4b84176d7b4bffd5c58d43ff6c7b6d6c60ec3ad5b (image=quay.io/ceph/ceph:v20, name=eloquent_williams, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 29 11:51:19 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d5943b21c19540be0477d5d1cf1df59508824e9ca3c8b4c5a800f5c104de53f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:19 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d5943b21c19540be0477d5d1cf1df59508824e9ca3c8b4c5a800f5c104de53f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:19 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d5943b21c19540be0477d5d1cf1df59508824e9ca3c8b4c5a800f5c104de53f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:19 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d5943b21c19540be0477d5d1cf1df59508824e9ca3c8b4c5a800f5c104de53f/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.cflubi supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:19 np0005601226 systemd[1]: var-lib-containers-storage-overlay-961d58b523c14847e13ba6798f32dfdf3f6f2d19d48ee91e62817c79fca5d28e-merged.mount: Deactivated successfully.
Jan 29 11:51:19 np0005601226 podman[96489]: 2026-01-29 16:51:19.954015469 +0000 UTC m=+0.115689884 container init 469d57b85c2b67a02fd3b4b54de5c8273a30a3c59d006b1002ca9f7272a9c177 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-mds-cephfs-compute-0-cflubi, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:51:19 np0005601226 podman[96489]: 2026-01-29 16:51:19.860085519 +0000 UTC m=+0.021759934 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:51:19 np0005601226 podman[96489]: 2026-01-29 16:51:19.958152946 +0000 UTC m=+0.119827351 container start 469d57b85c2b67a02fd3b4b54de5c8273a30a3c59d006b1002ca9f7272a9c177 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-mds-cephfs-compute-0-cflubi, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 11:51:19 np0005601226 bash[96489]: 469d57b85c2b67a02fd3b4b54de5c8273a30a3c59d006b1002ca9f7272a9c177
Jan 29 11:51:19 np0005601226 systemd[1]: Started Ceph mds.cephfs.compute-0.cflubi for cc5c72e3-31e0-58b9-8731-456117d38f4a.
Jan 29 11:51:19 np0005601226 ceph-mds[96568]: set uid:gid to 167:167 (ceph:ceph)
Jan 29 11:51:19 np0005601226 podman[95747]: 2026-01-29 16:51:19.993899744 +0000 UTC m=+0.965900477 container remove 61434e24869c99ca26d22aa4b84176d7b4bffd5c58d43ff6c7b6d6c60ec3ad5b (image=quay.io/ceph/ceph:v20, name=eloquent_williams, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 29 11:51:19 np0005601226 ceph-mds[96568]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mds, pid 2
Jan 29 11:51:19 np0005601226 ceph-mds[96568]: main not setting numa affinity
Jan 29 11:51:19 np0005601226 ceph-mds[96568]: pidfile_write: ignore empty --pid-file
Jan 29 11:51:19 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-mds-cephfs-compute-0-cflubi[96540]: starting mds.cephfs.compute-0.cflubi at 
Jan 29 11:51:19 np0005601226 systemd[1]: libpod-conmon-61434e24869c99ca26d22aa4b84176d7b4bffd5c58d43ff6c7b6d6c60ec3ad5b.scope: Deactivated successfully.
Jan 29 11:51:20 np0005601226 ceph-mds[96568]: mds.cephfs.compute-0.cflubi Updating MDS map to version 2 from mon.0
Jan 29 11:51:20 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e34 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:51:20 np0005601226 ansible-async_wrapper.py[95715]: Module complete (95715)
Jan 29 11:51:20 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 11:51:20 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:51:20 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 11:51:20 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:51:20 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 29 11:51:20 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:51:20 np0005601226 ceph-mgr[75527]: [progress INFO root] complete: finished ev d00910e0-13c7-4e15-85c5-075736cbd85e (Updating mds.cephfs deployment (+1 -> 1))
Jan 29 11:51:20 np0005601226 ceph-mgr[75527]: [progress INFO root] Completed event d00910e0-13c7-4e15-85c5-075736cbd85e (Updating mds.cephfs deployment (+1 -> 1)) in 2 seconds
Jan 29 11:51:20 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0)
Jan 29 11:51:20 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:51:20 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 29 11:51:20 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:51:20 np0005601226 python3[96570]: ansible-ansible.legacy.async_status Invoked with jid=j125693458161.95681 mode=status _async_dir=/root/.ansible_async
Jan 29 11:51:20 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v76: 8 pgs: 1 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:51:20 np0005601226 python3[96711]: ansible-ansible.legacy.async_status Invoked with jid=j125693458161.95681 mode=cleanup _async_dir=/root/.ansible_async
Jan 29 11:51:20 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Jan 29 11:51:20 np0005601226 ceph-mon[75233]: from='client.? 192.168.122.100:0/1884779344' entity='client.rgw.rgw.compute-0.gtpysq' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Jan 29 11:51:20 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:51:20 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:51:20 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:51:20 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:51:20 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:51:20 np0005601226 podman[96757]: 2026-01-29 16:51:20.680946184 +0000 UTC m=+0.077775984 container exec 79fb58d438a044b744a5a22031968fb38c458a504a88d1d9ce361ec7f4a99a0b (image=quay.io/ceph/ceph:v20, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-mon-compute-0, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 29 11:51:20 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).mds e3 new map
Jan 29 11:51:20 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).mds e3 print_map#012e3#012btime 2026-01-29T16:51:20:674866+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-29T16:51:10.131174+0000#012modified#0112026-01-29T16:51:10.131174+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 0 members: #012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.cflubi{-1:14256} state up:standby seq 1 addr [v2:192.168.122.100:6814/2924524324,v1:192.168.122.100:6815/2924524324] compat {c=[1],r=[1],i=[1fff]}]
Jan 29 11:51:20 np0005601226 ceph-mds[96568]: mds.cephfs.compute-0.cflubi Updating MDS map to version 3 from mon.0
Jan 29 11:51:20 np0005601226 ceph-mds[96568]: mds.cephfs.compute-0.cflubi Monitors have assigned me to become a standby
Jan 29 11:51:20 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Jan 29 11:51:20 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Jan 29 11:51:20 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/2924524324,v1:192.168.122.100:6815/2924524324] up:boot
Jan 29 11:51:20 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.100:6814/2924524324,v1:192.168.122.100:6815/2924524324] as mds.0
Jan 29 11:51:20 np0005601226 ceph-mon[75233]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.cflubi assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Jan 29 11:51:20 np0005601226 ceph-mon[75233]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Jan 29 11:51:20 np0005601226 ceph-mon[75233]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Jan 29 11:51:20 np0005601226 ceph-mon[75233]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 29 11:51:20 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Jan 29 11:51:20 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.cflubi"} v 0)
Jan 29 11:51:20 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "mds metadata", "who": "cephfs.compute-0.cflubi"} : dispatch
Jan 29 11:51:20 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).mds e3 all = 0
Jan 29 11:51:20 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Jan 29 11:51:20 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/211546978' entity='client.rgw.rgw.compute-0.gtpysq' cmd={"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} : dispatch
Jan 29 11:51:20 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).mds e4 new map
Jan 29 11:51:20 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).mds e4 print_map#012e4#012btime 2026-01-29T16:51:20:699430+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-29T16:51:10.131174+0000#012modified#0112026-01-29T16:51:20.699422+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=14256}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 0 members: #012[mds.cephfs.compute-0.cflubi{0:14256} state up:creating seq 1 addr [v2:192.168.122.100:6814/2924524324,v1:192.168.122.100:6815/2924524324] compat {c=[1],r=[1],i=[1fff]}]#012 #012 
Jan 29 11:51:20 np0005601226 ceph-mds[96568]: mds.cephfs.compute-0.cflubi Updating MDS map to version 4 from mon.0
Jan 29 11:51:20 np0005601226 ceph-mds[96568]: mds.0.4 handle_mds_map I am now mds.0.4
Jan 29 11:51:20 np0005601226 ceph-mds[96568]: mds.0.4 handle_mds_map state change up:standby --> up:creating
Jan 29 11:51:20 np0005601226 ceph-mds[96568]: mds.0.cache creating system inode with ino:0x1
Jan 29 11:51:20 np0005601226 ceph-mds[96568]: mds.0.cache creating system inode with ino:0x100
Jan 29 11:51:20 np0005601226 ceph-mds[96568]: mds.0.cache creating system inode with ino:0x600
Jan 29 11:51:20 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.cflubi=up:creating}
Jan 29 11:51:20 np0005601226 ceph-mds[96568]: mds.0.cache creating system inode with ino:0x601
Jan 29 11:51:20 np0005601226 ceph-mds[96568]: mds.0.cache creating system inode with ino:0x602
Jan 29 11:51:20 np0005601226 ceph-mds[96568]: mds.0.cache creating system inode with ino:0x603
Jan 29 11:51:20 np0005601226 ceph-mds[96568]: mds.0.cache creating system inode with ino:0x604
Jan 29 11:51:20 np0005601226 ceph-mds[96568]: mds.0.cache creating system inode with ino:0x605
Jan 29 11:51:20 np0005601226 ceph-mds[96568]: mds.0.cache creating system inode with ino:0x606
Jan 29 11:51:20 np0005601226 ceph-mds[96568]: mds.0.cache creating system inode with ino:0x607
Jan 29 11:51:20 np0005601226 ceph-mds[96568]: mds.0.cache creating system inode with ino:0x608
Jan 29 11:51:20 np0005601226 ceph-mds[96568]: mds.0.cache creating system inode with ino:0x609
Jan 29 11:51:20 np0005601226 ceph-mds[96568]: mds.0.4 creating_done
Jan 29 11:51:20 np0005601226 ceph-mon[75233]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.cflubi is now active in filesystem cephfs as rank 0
Jan 29 11:51:20 np0005601226 podman[96757]: 2026-01-29 16:51:20.777805717 +0000 UTC m=+0.174635487 container exec_died 79fb58d438a044b744a5a22031968fb38c458a504a88d1d9ce361ec7f4a99a0b (image=quay.io/ceph/ceph:v20, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-mon-compute-0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 29 11:51:20 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 35 pg[9.0( empty local-lis/les=0/0 n=0 ec=35/35 lis/c=0/0 les/c/f=0/0/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:20 np0005601226 python3[96812]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid cc5c72e3-31e0-58b9-8731-456117d38f4a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:51:20 np0005601226 podman[96848]: 2026-01-29 16:51:20.907518155 +0000 UTC m=+0.030361068 container create 2db9abdf0a58654f5e1f265eba70411f277c4b0af9872e332c2bed56bb133f6a (image=quay.io/ceph/ceph:v20, name=recursing_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 11:51:20 np0005601226 ceph-mgr[75527]: [progress INFO root] Writing back 5 completed events
Jan 29 11:51:20 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 29 11:51:20 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:51:20 np0005601226 systemd[1]: Started libpod-conmon-2db9abdf0a58654f5e1f265eba70411f277c4b0af9872e332c2bed56bb133f6a.scope.
Jan 29 11:51:20 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:51:20 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d041aac99ffc4e8a728557a08a5086ed43d2fe97d70fc128de724c0f0f0899b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:20 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d041aac99ffc4e8a728557a08a5086ed43d2fe97d70fc128de724c0f0f0899b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:20 np0005601226 podman[96848]: 2026-01-29 16:51:20.979694822 +0000 UTC m=+0.102537755 container init 2db9abdf0a58654f5e1f265eba70411f277c4b0af9872e332c2bed56bb133f6a (image=quay.io/ceph/ceph:v20, name=recursing_bhabha, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 11:51:20 np0005601226 podman[96848]: 2026-01-29 16:51:20.985484505 +0000 UTC m=+0.108327418 container start 2db9abdf0a58654f5e1f265eba70411f277c4b0af9872e332c2bed56bb133f6a (image=quay.io/ceph/ceph:v20, name=recursing_bhabha, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 29 11:51:20 np0005601226 podman[96848]: 2026-01-29 16:51:20.988929082 +0000 UTC m=+0.111772025 container attach 2db9abdf0a58654f5e1f265eba70411f277c4b0af9872e332c2bed56bb133f6a (image=quay.io/ceph/ceph:v20, name=recursing_bhabha, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 11:51:20 np0005601226 podman[96848]: 2026-01-29 16:51:20.89527429 +0000 UTC m=+0.018117233 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:51:21 np0005601226 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.14258 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 29 11:51:21 np0005601226 recursing_bhabha[96883]: 
Jan 29 11:51:21 np0005601226 recursing_bhabha[96883]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 29 11:51:21 np0005601226 systemd[1]: libpod-2db9abdf0a58654f5e1f265eba70411f277c4b0af9872e332c2bed56bb133f6a.scope: Deactivated successfully.
Jan 29 11:51:21 np0005601226 podman[96848]: 2026-01-29 16:51:21.391307872 +0000 UTC m=+0.514150785 container died 2db9abdf0a58654f5e1f265eba70411f277c4b0af9872e332c2bed56bb133f6a (image=quay.io/ceph/ceph:v20, name=recursing_bhabha, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 11:51:21 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 11:51:21 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:51:21 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 11:51:21 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:51:21 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 11:51:21 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 11:51:21 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 11:51:21 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 11:51:21 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 11:51:21 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:51:21 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 11:51:21 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 11:51:21 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 11:51:21 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 11:51:21 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 11:51:21 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 11:51:21 np0005601226 systemd[1]: var-lib-containers-storage-overlay-0d041aac99ffc4e8a728557a08a5086ed43d2fe97d70fc128de724c0f0f0899b-merged.mount: Deactivated successfully.
Jan 29 11:51:21 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Jan 29 11:51:21 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/211546978' entity='client.rgw.rgw.compute-0.gtpysq' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 29 11:51:21 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Jan 29 11:51:21 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Jan 29 11:51:21 np0005601226 podman[96848]: 2026-01-29 16:51:21.887250712 +0000 UTC m=+1.010093625 container remove 2db9abdf0a58654f5e1f265eba70411f277c4b0af9872e332c2bed56bb133f6a (image=quay.io/ceph/ceph:v20, name=recursing_bhabha, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:51:21 np0005601226 systemd[1]: libpod-conmon-2db9abdf0a58654f5e1f265eba70411f277c4b0af9872e332c2bed56bb133f6a.scope: Deactivated successfully.
Jan 29 11:51:21 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).mds e5 new map
Jan 29 11:51:21 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).mds e5 print_map#012e5#012btime 2026-01-29T16:51:21:742369+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-29T16:51:10.131174+0000#012modified#0112026-01-29T16:51:21.742367+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=14256}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 14256 members: 14256#012[mds.cephfs.compute-0.cflubi{0:14256} state up:active seq 2 join_fscid=1 addr [v2:192.168.122.100:6814/2924524324,v1:192.168.122.100:6815/2924524324] compat {c=[1],r=[1],i=[1fff]}]#012 #012 
Jan 29 11:51:21 np0005601226 ceph-mon[75233]: daemon mds.cephfs.compute-0.cflubi assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Jan 29 11:51:21 np0005601226 ceph-mon[75233]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Jan 29 11:51:21 np0005601226 ceph-mon[75233]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Jan 29 11:51:21 np0005601226 ceph-mon[75233]: Cluster is now healthy
Jan 29 11:51:21 np0005601226 ceph-mon[75233]: from='client.? 192.168.122.100:0/211546978' entity='client.rgw.rgw.compute-0.gtpysq' cmd={"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} : dispatch
Jan 29 11:51:21 np0005601226 ceph-mon[75233]: daemon mds.cephfs.compute-0.cflubi is now active in filesystem cephfs as rank 0
Jan 29 11:51:21 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:51:21 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:51:21 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:51:21 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 11:51:21 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:51:21 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 11:51:21 np0005601226 ceph-mds[96568]: mds.cephfs.compute-0.cflubi Updating MDS map to version 5 from mon.0
Jan 29 11:51:21 np0005601226 ceph-mds[96568]: mds.0.4 handle_mds_map I am now mds.0.4
Jan 29 11:51:21 np0005601226 ceph-mds[96568]: mds.0.4 handle_mds_map state change up:creating --> up:active
Jan 29 11:51:21 np0005601226 ceph-mds[96568]: mds.0.4 recovery_done -- successful recovery!
Jan 29 11:51:21 np0005601226 ceph-mds[96568]: mds.0.4 active_start
Jan 29 11:51:21 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 36 pg[9.0( empty local-lis/les=35/36 n=0 ec=35/35 lis/c=0/0 les/c/f=0/0/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:22 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/2924524324,v1:192.168.122.100:6815/2924524324] up:active
Jan 29 11:51:22 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.cflubi=up:active}
Jan 29 11:51:22 np0005601226 podman[97092]: 2026-01-29 16:51:22.047712897 +0000 UTC m=+0.069909802 container create fa0d3f9e776dad0126fa06919c1a1e1395ea822d3cf92e93d4c6ac9b34c2d229 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_burnell, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 29 11:51:22 np0005601226 podman[97092]: 2026-01-29 16:51:21.996292398 +0000 UTC m=+0.018489323 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:51:22 np0005601226 systemd[1]: Started libpod-conmon-fa0d3f9e776dad0126fa06919c1a1e1395ea822d3cf92e93d4c6ac9b34c2d229.scope.
Jan 29 11:51:22 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:51:22 np0005601226 podman[97092]: 2026-01-29 16:51:22.146014051 +0000 UTC m=+0.168210956 container init fa0d3f9e776dad0126fa06919c1a1e1395ea822d3cf92e93d4c6ac9b34c2d229 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_burnell, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 29 11:51:22 np0005601226 podman[97092]: 2026-01-29 16:51:22.151586358 +0000 UTC m=+0.173783263 container start fa0d3f9e776dad0126fa06919c1a1e1395ea822d3cf92e93d4c6ac9b34c2d229 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 29 11:51:22 np0005601226 jovial_burnell[97111]: 167 167
Jan 29 11:51:22 np0005601226 systemd[1]: libpod-fa0d3f9e776dad0126fa06919c1a1e1395ea822d3cf92e93d4c6ac9b34c2d229.scope: Deactivated successfully.
Jan 29 11:51:22 np0005601226 podman[97092]: 2026-01-29 16:51:22.173184337 +0000 UTC m=+0.195381252 container attach fa0d3f9e776dad0126fa06919c1a1e1395ea822d3cf92e93d4c6ac9b34c2d229 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_burnell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 29 11:51:22 np0005601226 podman[97092]: 2026-01-29 16:51:22.174012781 +0000 UTC m=+0.196209686 container died fa0d3f9e776dad0126fa06919c1a1e1395ea822d3cf92e93d4c6ac9b34c2d229 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_burnell, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 29 11:51:22 np0005601226 systemd[1]: var-lib-containers-storage-overlay-fabdb0730fa673a635a2e782d980d552dad878fcd7582a6f701af0f1f3149625-merged.mount: Deactivated successfully.
Jan 29 11:51:22 np0005601226 podman[97092]: 2026-01-29 16:51:22.319650359 +0000 UTC m=+0.341847304 container remove fa0d3f9e776dad0126fa06919c1a1e1395ea822d3cf92e93d4c6ac9b34c2d229 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_burnell, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:51:22 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v79: 9 pgs: 2 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:51:22 np0005601226 systemd[1]: libpod-conmon-fa0d3f9e776dad0126fa06919c1a1e1395ea822d3cf92e93d4c6ac9b34c2d229.scope: Deactivated successfully.
Jan 29 11:51:22 np0005601226 podman[97135]: 2026-01-29 16:51:22.484373015 +0000 UTC m=+0.074508133 container create 64ff69063e071536aefe57e9a807a1efb809349a4a14efcf8531c2bdc8139cfb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_brown, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 29 11:51:22 np0005601226 podman[97135]: 2026-01-29 16:51:22.449007467 +0000 UTC m=+0.039142655 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:51:22 np0005601226 systemd[1]: Started libpod-conmon-64ff69063e071536aefe57e9a807a1efb809349a4a14efcf8531c2bdc8139cfb.scope.
Jan 29 11:51:22 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:51:22 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77cb6acdcaadb96d6ef5d6f21a5cfd6b20506b1cf85456b32e5e7abcaff0473c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:22 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77cb6acdcaadb96d6ef5d6f21a5cfd6b20506b1cf85456b32e5e7abcaff0473c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:22 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77cb6acdcaadb96d6ef5d6f21a5cfd6b20506b1cf85456b32e5e7abcaff0473c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:22 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77cb6acdcaadb96d6ef5d6f21a5cfd6b20506b1cf85456b32e5e7abcaff0473c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:22 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77cb6acdcaadb96d6ef5d6f21a5cfd6b20506b1cf85456b32e5e7abcaff0473c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:22 np0005601226 podman[97135]: 2026-01-29 16:51:22.687861425 +0000 UTC m=+0.277996553 container init 64ff69063e071536aefe57e9a807a1efb809349a4a14efcf8531c2bdc8139cfb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_brown, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 29 11:51:22 np0005601226 podman[97135]: 2026-01-29 16:51:22.7022115 +0000 UTC m=+0.292346608 container start 64ff69063e071536aefe57e9a807a1efb809349a4a14efcf8531c2bdc8139cfb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_brown, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 29 11:51:22 np0005601226 podman[97135]: 2026-01-29 16:51:22.73413158 +0000 UTC m=+0.324266708 container attach 64ff69063e071536aefe57e9a807a1efb809349a4a14efcf8531c2bdc8139cfb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_brown, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 29 11:51:22 np0005601226 python3[97179]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid cc5c72e3-31e0-58b9-8731-456117d38f4a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:51:22 np0005601226 podman[97182]: 2026-01-29 16:51:22.860112764 +0000 UTC m=+0.066878228 container create ddd3b1991c4a2fd847917814765fb172fc8889c7c6de1ac59b2c5de11fad6b75 (image=quay.io/ceph/ceph:v20, name=fervent_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 11:51:22 np0005601226 podman[97182]: 2026-01-29 16:51:22.811593245 +0000 UTC m=+0.018358699 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:51:22 np0005601226 systemd[1]: Started libpod-conmon-ddd3b1991c4a2fd847917814765fb172fc8889c7c6de1ac59b2c5de11fad6b75.scope.
Jan 29 11:51:22 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:51:22 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01509cde19adcc1991189c52c039a9ada04ec8183e4c6a0b94ad2672acd23713/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:22 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01509cde19adcc1991189c52c039a9ada04ec8183e4c6a0b94ad2672acd23713/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:22 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Jan 29 11:51:22 np0005601226 podman[97182]: 2026-01-29 16:51:22.970187679 +0000 UTC m=+0.176953133 container init ddd3b1991c4a2fd847917814765fb172fc8889c7c6de1ac59b2c5de11fad6b75 (image=quay.io/ceph/ceph:v20, name=fervent_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 29 11:51:22 np0005601226 podman[97182]: 2026-01-29 16:51:22.974906252 +0000 UTC m=+0.181671696 container start ddd3b1991c4a2fd847917814765fb172fc8889c7c6de1ac59b2c5de11fad6b75 (image=quay.io/ceph/ceph:v20, name=fervent_yonath, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 11:51:22 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Jan 29 11:51:22 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Jan 29 11:51:23 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Jan 29 11:51:23 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/211546978' entity='client.rgw.rgw.compute-0.gtpysq' cmd={"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} : dispatch
Jan 29 11:51:23 np0005601226 podman[97182]: 2026-01-29 16:51:23.026300991 +0000 UTC m=+0.233066455 container attach ddd3b1991c4a2fd847917814765fb172fc8889c7c6de1ac59b2c5de11fad6b75 (image=quay.io/ceph/ceph:v20, name=fervent_yonath, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 29 11:51:23 np0005601226 ceph-mon[75233]: from='client.? 192.168.122.100:0/211546978' entity='client.rgw.rgw.compute-0.gtpysq' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 29 11:51:23 np0005601226 strange_brown[97164]: --> passed data devices: 0 physical, 3 LVM
Jan 29 11:51:23 np0005601226 strange_brown[97164]: --> All data devices are unavailable
Jan 29 11:51:23 np0005601226 systemd[1]: libpod-64ff69063e071536aefe57e9a807a1efb809349a4a14efcf8531c2bdc8139cfb.scope: Deactivated successfully.
Jan 29 11:51:23 np0005601226 podman[97135]: 2026-01-29 16:51:23.141978405 +0000 UTC m=+0.732113513 container died 64ff69063e071536aefe57e9a807a1efb809349a4a14efcf8531c2bdc8139cfb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_brown, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:51:23 np0005601226 systemd[1]: var-lib-containers-storage-overlay-77cb6acdcaadb96d6ef5d6f21a5cfd6b20506b1cf85456b32e5e7abcaff0473c-merged.mount: Deactivated successfully.
Jan 29 11:51:23 np0005601226 podman[97135]: 2026-01-29 16:51:23.210324463 +0000 UTC m=+0.800459571 container remove 64ff69063e071536aefe57e9a807a1efb809349a4a14efcf8531c2bdc8139cfb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_brown, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 11:51:23 np0005601226 systemd[1]: libpod-conmon-64ff69063e071536aefe57e9a807a1efb809349a4a14efcf8531c2bdc8139cfb.scope: Deactivated successfully.
Jan 29 11:51:23 np0005601226 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.14260 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 29 11:51:23 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.gtpysq", "name": "rgw_frontends"} v 0)
Jan 29 11:51:23 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.gtpysq", "name": "rgw_frontends"} : dispatch
Jan 29 11:51:23 np0005601226 fervent_yonath[97203]: 
Jan 29 11:51:23 np0005601226 fervent_yonath[97203]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_exit_timeout_secs": 120, "rgw_frontend_port": 8082}}]
Jan 29 11:51:23 np0005601226 systemd[1]: libpod-ddd3b1991c4a2fd847917814765fb172fc8889c7c6de1ac59b2c5de11fad6b75.scope: Deactivated successfully.
Jan 29 11:51:23 np0005601226 podman[97182]: 2026-01-29 16:51:23.421290314 +0000 UTC m=+0.628055758 container died ddd3b1991c4a2fd847917814765fb172fc8889c7c6de1ac59b2c5de11fad6b75 (image=quay.io/ceph/ceph:v20, name=fervent_yonath, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3)
Jan 29 11:51:23 np0005601226 systemd[1]: var-lib-containers-storage-overlay-01509cde19adcc1991189c52c039a9ada04ec8183e4c6a0b94ad2672acd23713-merged.mount: Deactivated successfully.
Jan 29 11:51:23 np0005601226 podman[97182]: 2026-01-29 16:51:23.520863993 +0000 UTC m=+0.727629457 container remove ddd3b1991c4a2fd847917814765fb172fc8889c7c6de1ac59b2c5de11fad6b75 (image=quay.io/ceph/ceph:v20, name=fervent_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 29 11:51:23 np0005601226 systemd[1]: libpod-conmon-ddd3b1991c4a2fd847917814765fb172fc8889c7c6de1ac59b2c5de11fad6b75.scope: Deactivated successfully.
Jan 29 11:51:23 np0005601226 podman[97326]: 2026-01-29 16:51:23.578316493 +0000 UTC m=+0.039813934 container create 313295539eaa97045325c20f221ca039f03b42a5d3c2db4db41cee39e28af524 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_engelbart, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 29 11:51:23 np0005601226 systemd[1]: Started libpod-conmon-313295539eaa97045325c20f221ca039f03b42a5d3c2db4db41cee39e28af524.scope.
Jan 29 11:51:23 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:51:23 np0005601226 podman[97326]: 2026-01-29 16:51:23.55942849 +0000 UTC m=+0.020925921 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:51:23 np0005601226 podman[97326]: 2026-01-29 16:51:23.685101645 +0000 UTC m=+0.146599086 container init 313295539eaa97045325c20f221ca039f03b42a5d3c2db4db41cee39e28af524 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_engelbart, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3)
Jan 29 11:51:23 np0005601226 podman[97326]: 2026-01-29 16:51:23.695711825 +0000 UTC m=+0.157209266 container start 313295539eaa97045325c20f221ca039f03b42a5d3c2db4db41cee39e28af524 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_engelbart, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 29 11:51:23 np0005601226 musing_engelbart[97343]: 167 167
Jan 29 11:51:23 np0005601226 systemd[1]: libpod-313295539eaa97045325c20f221ca039f03b42a5d3c2db4db41cee39e28af524.scope: Deactivated successfully.
Jan 29 11:51:23 np0005601226 podman[97326]: 2026-01-29 16:51:23.715239535 +0000 UTC m=+0.176737006 container attach 313295539eaa97045325c20f221ca039f03b42a5d3c2db4db41cee39e28af524 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 11:51:23 np0005601226 podman[97326]: 2026-01-29 16:51:23.715709589 +0000 UTC m=+0.177207040 container died 313295539eaa97045325c20f221ca039f03b42a5d3c2db4db41cee39e28af524 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 29 11:51:23 np0005601226 systemd[1]: var-lib-containers-storage-overlay-ca719e10a5664528b97a7feb95469064d77d660076af2ccdfa4d5357980ccc0b-merged.mount: Deactivated successfully.
Jan 29 11:51:23 np0005601226 ansible-async_wrapper.py[95714]: Done in kid B.
Jan 29 11:51:23 np0005601226 podman[97326]: 2026-01-29 16:51:23.880131116 +0000 UTC m=+0.341628547 container remove 313295539eaa97045325c20f221ca039f03b42a5d3c2db4db41cee39e28af524 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 29 11:51:23 np0005601226 systemd[1]: libpod-conmon-313295539eaa97045325c20f221ca039f03b42a5d3c2db4db41cee39e28af524.scope: Deactivated successfully.
Jan 29 11:51:23 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 37 pg[10.0( empty local-lis/les=0/0 n=0 ec=37/37 lis/c=0/0 les/c/f=0/0/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:23 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Jan 29 11:51:23 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/211546978' entity='client.rgw.rgw.compute-0.gtpysq' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 29 11:51:23 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Jan 29 11:51:23 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Jan 29 11:51:24 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 38 pg[10.0( empty local-lis/les=37/38 n=0 ec=37/37 lis/c=0/0 les/c/f=0/0/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:24 np0005601226 ceph-mon[75233]: from='client.? 192.168.122.100:0/211546978' entity='client.rgw.rgw.compute-0.gtpysq' cmd={"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} : dispatch
Jan 29 11:51:24 np0005601226 ceph-mon[75233]: from='client.? 192.168.122.100:0/211546978' entity='client.rgw.rgw.compute-0.gtpysq' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 29 11:51:24 np0005601226 podman[97368]: 2026-01-29 16:51:24.054605188 +0000 UTC m=+0.047829470 container create 764fa3abd8951eb776f515ecb1d9876ee74aab97dc53acb6e951b02048438e60 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_jemison, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 11:51:24 np0005601226 systemd[1]: Started libpod-conmon-764fa3abd8951eb776f515ecb1d9876ee74aab97dc53acb6e951b02048438e60.scope.
Jan 29 11:51:24 np0005601226 podman[97368]: 2026-01-29 16:51:24.031052294 +0000 UTC m=+0.024276616 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:51:24 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:51:24 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2829ec96c12ad379545f202814229d9a6cf0846d61b3658571542b5f2c22b37/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:24 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2829ec96c12ad379545f202814229d9a6cf0846d61b3658571542b5f2c22b37/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:24 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2829ec96c12ad379545f202814229d9a6cf0846d61b3658571542b5f2c22b37/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:24 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2829ec96c12ad379545f202814229d9a6cf0846d61b3658571542b5f2c22b37/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:24 np0005601226 podman[97368]: 2026-01-29 16:51:24.171010122 +0000 UTC m=+0.164234424 container init 764fa3abd8951eb776f515ecb1d9876ee74aab97dc53acb6e951b02048438e60 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Jan 29 11:51:24 np0005601226 podman[97368]: 2026-01-29 16:51:24.176562528 +0000 UTC m=+0.169786810 container start 764fa3abd8951eb776f515ecb1d9876ee74aab97dc53acb6e951b02048438e60 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:51:24 np0005601226 podman[97368]: 2026-01-29 16:51:24.180327855 +0000 UTC m=+0.173552137 container attach 764fa3abd8951eb776f515ecb1d9876ee74aab97dc53acb6e951b02048438e60 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 29 11:51:24 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v82: 10 pgs: 1 unknown, 9 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 5.2 KiB/s wr, 14 op/s
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]: {
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:    "0": [
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:        {
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:            "devices": [
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:                "/dev/loop3"
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:            ],
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:            "lv_name": "ceph_lv0",
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:            "lv_size": "21470642176",
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:            "lv_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:            "name": "ceph_lv0",
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:            "tags": {
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:                "ceph.block_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:                "ceph.cephx_lockbox_secret": "",
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:                "ceph.cluster_name": "ceph",
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:                "ceph.crush_device_class": "",
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:                "ceph.encrypted": "0",
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:                "ceph.objectstore": "bluestore",
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:                "ceph.osd_fsid": "0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1",
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:                "ceph.osd_id": "0",
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:                "ceph.type": "block",
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:                "ceph.vdo": "0",
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:                "ceph.with_tpm": "0"
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:            },
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:            "type": "block",
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:            "vg_name": "ceph_vg0"
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:        }
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:    ],
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:    "1": [
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:        {
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:            "devices": [
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:                "/dev/loop4"
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:            ],
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:            "lv_name": "ceph_lv1",
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:            "lv_size": "21470642176",
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b59b9ee3-7bef-4274-a8bf-0f9cce011ae7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:            "lv_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:            "name": "ceph_lv1",
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:            "tags": {
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:                "ceph.block_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:                "ceph.cephx_lockbox_secret": "",
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:                "ceph.cluster_name": "ceph",
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:                "ceph.crush_device_class": "",
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:                "ceph.encrypted": "0",
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:                "ceph.objectstore": "bluestore",
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:                "ceph.osd_fsid": "b59b9ee3-7bef-4274-a8bf-0f9cce011ae7",
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:                "ceph.osd_id": "1",
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:                "ceph.type": "block",
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:                "ceph.vdo": "0",
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:                "ceph.with_tpm": "0"
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:            },
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:            "type": "block",
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:            "vg_name": "ceph_vg1"
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:        }
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:    ],
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:    "2": [
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:        {
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:            "devices": [
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:                "/dev/loop5"
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:            ],
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:            "lv_name": "ceph_lv2",
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:            "lv_size": "21470642176",
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=791d3808-828d-4f85-a3de-28df49f6a6ef,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:            "lv_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:            "name": "ceph_lv2",
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:            "tags": {
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:                "ceph.block_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:                "ceph.cephx_lockbox_secret": "",
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:                "ceph.cluster_name": "ceph",
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:                "ceph.crush_device_class": "",
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:                "ceph.encrypted": "0",
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:                "ceph.objectstore": "bluestore",
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:                "ceph.osd_fsid": "791d3808-828d-4f85-a3de-28df49f6a6ef",
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:                "ceph.osd_id": "2",
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:                "ceph.type": "block",
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:                "ceph.vdo": "0",
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:                "ceph.with_tpm": "0"
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:            },
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:            "type": "block",
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:            "vg_name": "ceph_vg2"
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:        }
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]:    ]
Jan 29 11:51:24 np0005601226 intelligent_jemison[97386]: }
Jan 29 11:51:24 np0005601226 python3[97416]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid cc5c72e3-31e0-58b9-8731-456117d38f4a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:51:24 np0005601226 systemd[1]: libpod-764fa3abd8951eb776f515ecb1d9876ee74aab97dc53acb6e951b02048438e60.scope: Deactivated successfully.
Jan 29 11:51:24 np0005601226 podman[97368]: 2026-01-29 16:51:24.458840411 +0000 UTC m=+0.452064733 container died 764fa3abd8951eb776f515ecb1d9876ee74aab97dc53acb6e951b02048438e60 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_jemison, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 11:51:24 np0005601226 systemd[1]: var-lib-containers-storage-overlay-d2829ec96c12ad379545f202814229d9a6cf0846d61b3658571542b5f2c22b37-merged.mount: Deactivated successfully.
Jan 29 11:51:24 np0005601226 podman[97421]: 2026-01-29 16:51:24.514980655 +0000 UTC m=+0.049797366 container create 4065611a5c89ce4999d41bee873c20a3e81e69e487e51481a1fa9923bf14bfaa (image=quay.io/ceph/ceph:v20, name=sweet_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 11:51:24 np0005601226 podman[97368]: 2026-01-29 16:51:24.528551327 +0000 UTC m=+0.521775609 container remove 764fa3abd8951eb776f515ecb1d9876ee74aab97dc53acb6e951b02048438e60 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_jemison, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True)
Jan 29 11:51:24 np0005601226 systemd[1]: libpod-conmon-764fa3abd8951eb776f515ecb1d9876ee74aab97dc53acb6e951b02048438e60.scope: Deactivated successfully.
Jan 29 11:51:24 np0005601226 systemd[1]: Started libpod-conmon-4065611a5c89ce4999d41bee873c20a3e81e69e487e51481a1fa9923bf14bfaa.scope.
Jan 29 11:51:24 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:51:24 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/879b4f77672456a09e67b33b7c1e8bb46af58fd2db6884e23597dad6d0663d41/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:24 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/879b4f77672456a09e67b33b7c1e8bb46af58fd2db6884e23597dad6d0663d41/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:24 np0005601226 podman[97421]: 2026-01-29 16:51:24.487765386 +0000 UTC m=+0.022582387 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:51:24 np0005601226 podman[97421]: 2026-01-29 16:51:24.607455693 +0000 UTC m=+0.142272454 container init 4065611a5c89ce4999d41bee873c20a3e81e69e487e51481a1fa9923bf14bfaa (image=quay.io/ceph/ceph:v20, name=sweet_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 29 11:51:24 np0005601226 podman[97421]: 2026-01-29 16:51:24.612904287 +0000 UTC m=+0.147720998 container start 4065611a5c89ce4999d41bee873c20a3e81e69e487e51481a1fa9923bf14bfaa (image=quay.io/ceph/ceph:v20, name=sweet_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 29 11:51:24 np0005601226 podman[97421]: 2026-01-29 16:51:24.663571976 +0000 UTC m=+0.198388687 container attach 4065611a5c89ce4999d41bee873c20a3e81e69e487e51481a1fa9923bf14bfaa (image=quay.io/ceph/ceph:v20, name=sweet_feynman, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 29 11:51:24 np0005601226 podman[97531]: 2026-01-29 16:51:24.931091382 +0000 UTC m=+0.043554339 container create 9e5dfddd34b1ea9054e2ef5b7522cbd7224d74ed8066726f93b98249b5f88632 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_austin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 11:51:24 np0005601226 systemd[1]: Started libpod-conmon-9e5dfddd34b1ea9054e2ef5b7522cbd7224d74ed8066726f93b98249b5f88632.scope.
Jan 29 11:51:24 np0005601226 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.14262 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 29 11:51:24 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Jan 29 11:51:24 np0005601226 sweet_feynman[97445]: 
Jan 29 11:51:24 np0005601226 sweet_feynman[97445]: [{"container_id": "70a89fc4c7fa", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "0.20%", "created": "2026-01-29T16:49:59.672325Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2026-01-29T16:49:59.808148Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-29T16:51:21.389863Z", "memory_usage": 7808745, "pending_daemon_config": false, "ports": [], "service_name": "crash", "started": "2026-01-29T16:49:59.315991Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a@crash.compute-0", "version": "20.2.0"}, {"container_id": "469d57b85c2b", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "6.77%", "created": "2026-01-29T16:51:19.993605Z", "daemon_id": "cephfs.compute-0.cflubi", "daemon_name": "mds.cephfs.compute-0.cflubi", "daemon_type": "mds", "events": ["2026-01-29T16:51:20.060662Z daemon:mds.cephfs.compute-0.cflubi [INFO] \"Deployed mds.cephfs.compute-0.cflubi on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-29T16:51:21.390274Z", "memory_usage": 16095641, "pending_daemon_config": false, "ports": [], "service_name": "mds.cephfs", "started": "2026-01-29T16:51:19.863997Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a@mds.cephfs.compute-0.cflubi", "version": "20.2.0"}, {"container_id": "931753d3ff18", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph:v20", "cpu_percentage": "17.67%", "created": "2026-01-29T16:49:18.406090Z", "daemon_id": "compute-0.zvopdr", "daemon_name": "mgr.compute-0.zvopdr", "daemon_type": "mgr", "events": ["2026-01-29T16:50:04.771397Z daemon:mgr.compute-0.zvopdr [INFO] \"Reconfigured mgr.compute-0.zvopdr on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-29T16:51:21.389788Z", "memory_usage": 547251814, "pending_daemon_config": false, "ports": [9283, 8765], "service_name": "mgr", "started": "2026-01-29T16:49:17.983594Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a@mgr.compute-0.zvopdr", "version": "20.2.0"}, {"container_id": "79fb58d438a0", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph:v20", "cpu_percentage": "2.80%", "created": "2026-01-29T16:49:11.489910Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2026-01-29T16:50:03.601953Z daemon:mon.compute-0 [INFO] \"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-29T16:51:21.389650Z", "memory_request": 2147483648, "memory_usage": 41827696, "pending_daemon_config": false, "ports": [], "service_name": "mon", "started": "2026-01-29T16:49:14.643860Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a@mon.compute-0", "version": "20.2.0"}, {"container_id": "a3a212aa0fc1", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "1.75%", "created": "2026-01-29T16:50:26.766142Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2026-01-29T16:50:26.837024Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-29T16:51:21.389934Z", "memory_request": 4294967296, "memory_usage": 58395197, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-01-29T16:50:26.660384Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a@osd.0", "version": "20.2.0"}, {"container_id": "5904e6a7a5f4", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", 
"cpu_percentage": "1.87%", "created": "2026-01-29T16:50:31.157263Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": ["2026-01-29T16:50:31.336422Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-29T16:51:21.390001Z", "memory_request": 4294967296, "memory_usage": 61677240, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-01-29T16:50:30.987178Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a@osd.1", "version": "20.2.0"}, {"container_id": "238789ad6244", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "1.98%", "created": "2026-01-29T16:50:35.454381Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2026-01-29T16:50:35.583644Z daemon:osd.2 [INFO] \"Deployed osd.2 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-29T16:51:21.390078Z", "memory_request": 4294967296, "memory_usage": 56958648, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-01-29T16:50:35.207182Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a@osd.2", "version": "20.2.0"}, {"container_id": "5c717c91b2db", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], 
"container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac68
Jan 29 11:51:24 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:51:24 np0005601226 systemd[1]: libpod-4065611a5c89ce4999d41bee873c20a3e81e69e487e51481a1fa9923bf14bfaa.scope: Deactivated successfully.
Jan 29 11:51:24 np0005601226 conmon[97445]: conmon 4065611a5c89ce4999d4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4065611a5c89ce4999d41bee873c20a3e81e69e487e51481a1fa9923bf14bfaa.scope/container/memory.events
Jan 29 11:51:25 np0005601226 podman[97531]: 2026-01-29 16:51:24.909830762 +0000 UTC m=+0.022293809 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:51:25 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Jan 29 11:51:25 np0005601226 podman[97531]: 2026-01-29 16:51:25.060149422 +0000 UTC m=+0.172612459 container init 9e5dfddd34b1ea9054e2ef5b7522cbd7224d74ed8066726f93b98249b5f88632 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_austin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 29 11:51:25 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Jan 29 11:51:25 np0005601226 podman[97531]: 2026-01-29 16:51:25.071238245 +0000 UTC m=+0.183701202 container start 9e5dfddd34b1ea9054e2ef5b7522cbd7224d74ed8066726f93b98249b5f88632 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_austin, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Jan 29 11:51:25 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 39 pg[11.0( empty local-lis/les=0/0 n=0 ec=39/39 lis/c=0/0 les/c/f=0/0/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:25 np0005601226 nifty_austin[97547]: 167 167
Jan 29 11:51:25 np0005601226 systemd[1]: libpod-9e5dfddd34b1ea9054e2ef5b7522cbd7224d74ed8066726f93b98249b5f88632.scope: Deactivated successfully.
Jan 29 11:51:25 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Jan 29 11:51:25 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/211546978' entity='client.rgw.rgw.compute-0.gtpysq' cmd={"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} : dispatch
Jan 29 11:51:25 np0005601226 podman[97531]: 2026-01-29 16:51:25.144744648 +0000 UTC m=+0.257207695 container attach 9e5dfddd34b1ea9054e2ef5b7522cbd7224d74ed8066726f93b98249b5f88632 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_austin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 11:51:25 np0005601226 podman[97531]: 2026-01-29 16:51:25.147834326 +0000 UTC m=+0.260297323 container died 9e5dfddd34b1ea9054e2ef5b7522cbd7224d74ed8066726f93b98249b5f88632 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 29 11:51:25 np0005601226 systemd[1]: var-lib-containers-storage-overlay-ed9c2d4866d31653b1ab00d1d3fb795b13dc6fb5a357667c23fe326af8cf9687-merged.mount: Deactivated successfully.
Jan 29 11:51:25 np0005601226 rsyslogd[1007]: message too long (8842) with configured size 8096, begin of message is: [{"container_id": "70a89fc4c7fa", "container_image_digests": ["quay.io/ceph/ceph [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 29 11:51:25 np0005601226 podman[97531]: 2026-01-29 16:51:25.34193505 +0000 UTC m=+0.454398007 container remove 9e5dfddd34b1ea9054e2ef5b7522cbd7224d74ed8066726f93b98249b5f88632 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_austin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:51:25 np0005601226 systemd[1]: libpod-conmon-9e5dfddd34b1ea9054e2ef5b7522cbd7224d74ed8066726f93b98249b5f88632.scope: Deactivated successfully.
Jan 29 11:51:25 np0005601226 podman[97421]: 2026-01-29 16:51:25.426923198 +0000 UTC m=+0.961739909 container died 4065611a5c89ce4999d41bee873c20a3e81e69e487e51481a1fa9923bf14bfaa (image=quay.io/ceph/ceph:v20, name=sweet_feynman, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 11:51:25 np0005601226 podman[97587]: 2026-01-29 16:51:25.561145024 +0000 UTC m=+0.091439050 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:51:25 np0005601226 systemd[1]: var-lib-containers-storage-overlay-879b4f77672456a09e67b33b7c1e8bb46af58fd2db6884e23597dad6d0663d41-merged.mount: Deactivated successfully.
Jan 29 11:51:25 np0005601226 podman[97552]: 2026-01-29 16:51:25.716300131 +0000 UTC m=+0.712404196 container remove 4065611a5c89ce4999d41bee873c20a3e81e69e487e51481a1fa9923bf14bfaa (image=quay.io/ceph/ceph:v20, name=sweet_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 29 11:51:25 np0005601226 ceph-mds[96568]: mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Jan 29 11:51:25 np0005601226 systemd[1]: libpod-conmon-4065611a5c89ce4999d41bee873c20a3e81e69e487e51481a1fa9923bf14bfaa.scope: Deactivated successfully.
Jan 29 11:51:25 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-mds-cephfs-compute-0-cflubi[96540]: 2026-01-29T16:51:25.717+0000 7f3dca3bf640 -1 mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Jan 29 11:51:25 np0005601226 podman[97587]: 2026-01-29 16:51:25.771821237 +0000 UTC m=+0.302115283 container create d5b0253e411f2c3fb5ab266472f78921c732f0926498df0f046a9643e80a19c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_cohen, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:51:25 np0005601226 systemd[1]: Started libpod-conmon-d5b0253e411f2c3fb5ab266472f78921c732f0926498df0f046a9643e80a19c9.scope.
Jan 29 11:51:25 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:51:25 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b9bb8d2bbaa8a6b65bb3ea88d8a3f5ff1fc6af03c568a72334f5be0855b86ba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:25 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b9bb8d2bbaa8a6b65bb3ea88d8a3f5ff1fc6af03c568a72334f5be0855b86ba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:25 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b9bb8d2bbaa8a6b65bb3ea88d8a3f5ff1fc6af03c568a72334f5be0855b86ba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:25 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b9bb8d2bbaa8a6b65bb3ea88d8a3f5ff1fc6af03c568a72334f5be0855b86ba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:25 np0005601226 podman[97587]: 2026-01-29 16:51:25.920925683 +0000 UTC m=+0.451219779 container init d5b0253e411f2c3fb5ab266472f78921c732f0926498df0f046a9643e80a19c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_cohen, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 11:51:25 np0005601226 podman[97587]: 2026-01-29 16:51:25.93006491 +0000 UTC m=+0.460358936 container start d5b0253e411f2c3fb5ab266472f78921c732f0926498df0f046a9643e80a19c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_cohen, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 11:51:25 np0005601226 podman[97587]: 2026-01-29 16:51:25.936661017 +0000 UTC m=+0.466955033 container attach d5b0253e411f2c3fb5ab266472f78921c732f0926498df0f046a9643e80a19c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 29 11:51:26 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Jan 29 11:51:26 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/211546978' entity='client.rgw.rgw.compute-0.gtpysq' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 29 11:51:26 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Jan 29 11:51:26 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Jan 29 11:51:26 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Jan 29 11:51:26 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/211546978' entity='client.rgw.rgw.compute-0.gtpysq' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} : dispatch
Jan 29 11:51:26 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 40 pg[11.0( empty local-lis/les=39/40 n=0 ec=39/39 lis/c=0/0 les/c/f=0/0/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:26 np0005601226 ceph-mon[75233]: from='client.? 192.168.122.100:0/211546978' entity='client.rgw.rgw.compute-0.gtpysq' cmd={"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} : dispatch
Jan 29 11:51:26 np0005601226 ceph-mon[75233]: from='client.? 192.168.122.100:0/211546978' entity='client.rgw.rgw.compute-0.gtpysq' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 29 11:51:26 np0005601226 ceph-mon[75233]: from='client.? 192.168.122.100:0/211546978' entity='client.rgw.rgw.compute-0.gtpysq' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} : dispatch
Jan 29 11:51:26 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v85: 11 pgs: 2 unknown, 9 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 5.2 KiB/s wr, 14 op/s
Jan 29 11:51:26 np0005601226 lvm[97709]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 11:51:26 np0005601226 lvm[97710]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 11:51:26 np0005601226 lvm[97709]: VG ceph_vg0 finished
Jan 29 11:51:26 np0005601226 lvm[97710]: VG ceph_vg1 finished
Jan 29 11:51:26 np0005601226 lvm[97712]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 11:51:26 np0005601226 lvm[97712]: VG ceph_vg2 finished
Jan 29 11:51:26 np0005601226 python3[97701]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid cc5c72e3-31e0-58b9-8731-456117d38f4a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:51:26 np0005601226 podman[97713]: 2026-01-29 16:51:26.655884285 +0000 UTC m=+0.058560123 container create a5e3312d052bd29febd3daa908be164c111f2ffaa85f712cad5094136f6f3203 (image=quay.io/ceph/ceph:v20, name=heuristic_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 29 11:51:26 np0005601226 pensive_cohen[97605]: {}
Jan 29 11:51:26 np0005601226 systemd[1]: libpod-d5b0253e411f2c3fb5ab266472f78921c732f0926498df0f046a9643e80a19c9.scope: Deactivated successfully.
Jan 29 11:51:26 np0005601226 systemd[1]: libpod-d5b0253e411f2c3fb5ab266472f78921c732f0926498df0f046a9643e80a19c9.scope: Consumed 1.018s CPU time.
Jan 29 11:51:26 np0005601226 podman[97713]: 2026-01-29 16:51:26.620155147 +0000 UTC m=+0.022831035 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:51:26 np0005601226 podman[97587]: 2026-01-29 16:51:26.736113498 +0000 UTC m=+1.266407514 container died d5b0253e411f2c3fb5ab266472f78921c732f0926498df0f046a9643e80a19c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 29 11:51:26 np0005601226 systemd[1]: Started libpod-conmon-a5e3312d052bd29febd3daa908be164c111f2ffaa85f712cad5094136f6f3203.scope.
Jan 29 11:51:26 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:51:26 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/389e83bb56aa08a612394361d7d7d320db363b7456974b726befb2706ba929e9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:26 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/389e83bb56aa08a612394361d7d7d320db363b7456974b726befb2706ba929e9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:26 np0005601226 podman[97713]: 2026-01-29 16:51:26.8149045 +0000 UTC m=+0.217580328 container init a5e3312d052bd29febd3daa908be164c111f2ffaa85f712cad5094136f6f3203 (image=quay.io/ceph/ceph:v20, name=heuristic_elgamal, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:51:26 np0005601226 podman[97713]: 2026-01-29 16:51:26.821702962 +0000 UTC m=+0.224378760 container start a5e3312d052bd29febd3daa908be164c111f2ffaa85f712cad5094136f6f3203 (image=quay.io/ceph/ceph:v20, name=heuristic_elgamal, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:51:26 np0005601226 systemd[1]: var-lib-containers-storage-overlay-4b9bb8d2bbaa8a6b65bb3ea88d8a3f5ff1fc6af03c568a72334f5be0855b86ba-merged.mount: Deactivated successfully.
Jan 29 11:51:26 np0005601226 podman[97713]: 2026-01-29 16:51:26.907530183 +0000 UTC m=+0.310205991 container attach a5e3312d052bd29febd3daa908be164c111f2ffaa85f712cad5094136f6f3203 (image=quay.io/ceph/ceph:v20, name=heuristic_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 29 11:51:27 np0005601226 podman[97728]: 2026-01-29 16:51:27.006963097 +0000 UTC m=+0.295168306 container remove d5b0253e411f2c3fb5ab266472f78921c732f0926498df0f046a9643e80a19c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_cohen, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 11:51:27 np0005601226 systemd[1]: libpod-conmon-d5b0253e411f2c3fb5ab266472f78921c732f0926498df0f046a9643e80a19c9.scope: Deactivated successfully.
Jan 29 11:51:27 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 11:51:27 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:51:27 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Jan 29 11:51:27 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 11:51:27 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/211546978' entity='client.rgw.rgw.compute-0.gtpysq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 29 11:51:27 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Jan 29 11:51:27 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Jan 29 11:51:27 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:51:27 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 11:51:27 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:51:27 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 11:51:27 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:51:27 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 29 11:51:27 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3526070845' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 29 11:51:27 np0005601226 heuristic_elgamal[97742]: 
Jan 29 11:51:27 np0005601226 heuristic_elgamal[97742]: {"fsid":"cc5c72e3-31e0-58b9-8731-456117d38f4a","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":132,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":41,"num_osds":3,"num_up_osds":3,"osd_up_since":1769705442,"num_in_osds":3,"osd_in_since":1769705419,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":9},{"state_name":"unknown","count":2}],"num_pgs":11,"num_pools":11,"num_objects":30,"data_bytes":463390,"bytes_used":84111360,"bytes_avail":64327815168,"bytes_total":64411926528,"unknown_pgs_ratio":0.18181818723678589,"read_bytes_sec":1279,"write_bytes_sec":5374,"read_op_per_sec":0,"write_op_per_sec":13},"fsmap":{"epoch":5,"btime":"2026-01-29T16:51:21:742369+0000","id":1,"up":1,"in":1,"max":1,"by_rank":[{"filesystem_id":1,"rank":0,"name":"cephfs.compute-0.cflubi","status":"up:active","gid":14256}],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-01-29T16:50:40.323321+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Jan 29 11:51:27 np0005601226 systemd[1]: libpod-a5e3312d052bd29febd3daa908be164c111f2ffaa85f712cad5094136f6f3203.scope: Deactivated successfully.
Jan 29 11:51:27 np0005601226 podman[97713]: 2026-01-29 16:51:27.346502896 +0000 UTC m=+0.749178694 container died a5e3312d052bd29febd3daa908be164c111f2ffaa85f712cad5094136f6f3203 (image=quay.io/ceph/ceph:v20, name=heuristic_elgamal, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 29 11:51:27 np0005601226 systemd[1]: var-lib-containers-storage-overlay-389e83bb56aa08a612394361d7d7d320db363b7456974b726befb2706ba929e9-merged.mount: Deactivated successfully.
Jan 29 11:51:27 np0005601226 podman[97713]: 2026-01-29 16:51:27.42568564 +0000 UTC m=+0.828361438 container remove a5e3312d052bd29febd3daa908be164c111f2ffaa85f712cad5094136f6f3203 (image=quay.io/ceph/ceph:v20, name=heuristic_elgamal, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 11:51:27 np0005601226 systemd[1]: libpod-conmon-a5e3312d052bd29febd3daa908be164c111f2ffaa85f712cad5094136f6f3203.scope: Deactivated successfully.
Jan 29 11:51:27 np0005601226 radosgw[95453]: v1 topic migration: starting v1 topic migration..
Jan 29 11:51:27 np0005601226 radosgw[95453]: v1 topic migration: finished v1 topic migration
Jan 29 11:51:27 np0005601226 radosgw[95453]: framework: beast
Jan 29 11:51:27 np0005601226 radosgw[95453]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Jan 29 11:51:27 np0005601226 radosgw[95453]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Jan 29 11:51:27 np0005601226 radosgw[95453]: starting handler: beast
Jan 29 11:51:27 np0005601226 radosgw[95453]: set uid:gid to 167:167 (ceph:ceph)
Jan 29 11:51:27 np0005601226 radosgw[95453]: mgrc service_daemon_register rgw.14254 metadata {arch=x86_64,ceph_release=tentacle,ceph_version=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),ceph_version_short=20.2.0,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.gtpysq,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026,kernel_version=5.14.0-665.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864300,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=b1fe70e7-7c89-470c-917b-01721d902e77,zone_name=default,zonegroup_id=c9f71918-3132-456f-ab1a-0d98da1c6d56,zonegroup_name=default}
Jan 29 11:51:27 np0005601226 podman[97937]: 2026-01-29 16:51:27.76213608 +0000 UTC m=+0.068880884 container exec 79fb58d438a044b744a5a22031968fb38c458a504a88d1d9ce361ec7f4a99a0b (image=quay.io/ceph/ceph:v20, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-mon-compute-0, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 11:51:27 np0005601226 podman[97937]: 2026-01-29 16:51:27.844137653 +0000 UTC m=+0.150882497 container exec_died 79fb58d438a044b744a5a22031968fb38c458a504a88d1d9ce361ec7f4a99a0b (image=quay.io/ceph/ceph:v20, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:51:28 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:51:28 np0005601226 ceph-mon[75233]: from='client.? 192.168.122.100:0/211546978' entity='client.rgw.rgw.compute-0.gtpysq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 29 11:51:28 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:51:28 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:51:28 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:51:28 np0005601226 python3[98064]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid cc5c72e3-31e0-58b9-8731-456117d38f4a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:51:28 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v87: 11 pgs: 1 unknown, 10 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s wr, 10 op/s
Jan 29 11:51:28 np0005601226 podman[98107]: 2026-01-29 16:51:28.355395515 +0000 UTC m=+0.088726865 container create 1e26afa4327c35ba22c87c1adf6a04ad9c2a2507e526699c9453b09bfde0fd18 (image=quay.io/ceph/ceph:v20, name=amazing_gauss, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 29 11:51:28 np0005601226 podman[98107]: 2026-01-29 16:51:28.291229365 +0000 UTC m=+0.024560735 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:51:28 np0005601226 systemd[1]: Started libpod-conmon-1e26afa4327c35ba22c87c1adf6a04ad9c2a2507e526699c9453b09bfde0fd18.scope.
Jan 29 11:51:28 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:51:28 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb9d1fff6aa2debf3b9b3a455e0fc98b6784c113a57dc6c8ce0caada20990a39/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:28 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb9d1fff6aa2debf3b9b3a455e0fc98b6784c113a57dc6c8ce0caada20990a39/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:28 np0005601226 podman[98107]: 2026-01-29 16:51:28.464500412 +0000 UTC m=+0.197831802 container init 1e26afa4327c35ba22c87c1adf6a04ad9c2a2507e526699c9453b09bfde0fd18 (image=quay.io/ceph/ceph:v20, name=amazing_gauss, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 11:51:28 np0005601226 podman[98107]: 2026-01-29 16:51:28.470479881 +0000 UTC m=+0.203811241 container start 1e26afa4327c35ba22c87c1adf6a04ad9c2a2507e526699c9453b09bfde0fd18 (image=quay.io/ceph/ceph:v20, name=amazing_gauss, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 29 11:51:28 np0005601226 podman[98107]: 2026-01-29 16:51:28.480930645 +0000 UTC m=+0.214261985 container attach 1e26afa4327c35ba22c87c1adf6a04ad9c2a2507e526699c9453b09bfde0fd18 (image=quay.io/ceph/ceph:v20, name=amazing_gauss, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:51:28 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 11:51:28 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:51:28 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 11:51:28 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:51:28 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 11:51:28 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 11:51:28 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 11:51:28 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 11:51:28 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 11:51:28 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:51:28 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 11:51:28 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 11:51:28 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 11:51:28 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 11:51:28 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 11:51:28 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 11:51:28 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 29 11:51:28 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1925329975' entity='client.admin' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 29 11:51:28 np0005601226 amazing_gauss[98142]: 
Jan 29 11:51:28 np0005601226 amazing_gauss[98142]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, 
admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advance
d","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"7","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr_standby_modules","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.gtpysq","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Jan 29 11:51:28 np0005601226 systemd[1]: libpod-1e26afa4327c35ba22c87c1adf6a04ad9c2a2507e526699c9453b09bfde0fd18.scope: Deactivated successfully.
Jan 29 11:51:28 np0005601226 podman[98107]: 2026-01-29 16:51:28.874068855 +0000 UTC m=+0.607400195 container died 1e26afa4327c35ba22c87c1adf6a04ad9c2a2507e526699c9453b09bfde0fd18 (image=quay.io/ceph/ceph:v20, name=amazing_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:51:28 np0005601226 systemd[1]: var-lib-containers-storage-overlay-eb9d1fff6aa2debf3b9b3a455e0fc98b6784c113a57dc6c8ce0caada20990a39-merged.mount: Deactivated successfully.
Jan 29 11:51:28 np0005601226 podman[98107]: 2026-01-29 16:51:28.910332758 +0000 UTC m=+0.643664098 container remove 1e26afa4327c35ba22c87c1adf6a04ad9c2a2507e526699c9453b09bfde0fd18 (image=quay.io/ceph/ceph:v20, name=amazing_gauss, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:51:28 np0005601226 systemd[1]: libpod-conmon-1e26afa4327c35ba22c87c1adf6a04ad9c2a2507e526699c9453b09bfde0fd18.scope: Deactivated successfully.
Jan 29 11:51:29 np0005601226 podman[98256]: 2026-01-29 16:51:29.01107623 +0000 UTC m=+0.072852646 container create e5637d485809a6a5d36c37d2283cc867999c70c7c76c0135843123b4336d57a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_bassi, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 29 11:51:29 np0005601226 systemd[1]: Started libpod-conmon-e5637d485809a6a5d36c37d2283cc867999c70c7c76c0135843123b4336d57a0.scope.
Jan 29 11:51:29 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:51:29 np0005601226 podman[98256]: 2026-01-29 16:51:28.961371647 +0000 UTC m=+0.023148083 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:51:29 np0005601226 podman[98256]: 2026-01-29 16:51:29.064102426 +0000 UTC m=+0.125878862 container init e5637d485809a6a5d36c37d2283cc867999c70c7c76c0135843123b4336d57a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_bassi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 11:51:29 np0005601226 podman[98256]: 2026-01-29 16:51:29.068838059 +0000 UTC m=+0.130614495 container start e5637d485809a6a5d36c37d2283cc867999c70c7c76c0135843123b4336d57a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_bassi, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:51:29 np0005601226 angry_bassi[98272]: 167 167
Jan 29 11:51:29 np0005601226 systemd[1]: libpod-e5637d485809a6a5d36c37d2283cc867999c70c7c76c0135843123b4336d57a0.scope: Deactivated successfully.
Jan 29 11:51:29 np0005601226 podman[98256]: 2026-01-29 16:51:29.072812971 +0000 UTC m=+0.134589387 container attach e5637d485809a6a5d36c37d2283cc867999c70c7c76c0135843123b4336d57a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 29 11:51:29 np0005601226 podman[98256]: 2026-01-29 16:51:29.073544741 +0000 UTC m=+0.135321157 container died e5637d485809a6a5d36c37d2283cc867999c70c7c76c0135843123b4336d57a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_bassi, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 29 11:51:29 np0005601226 systemd[1]: var-lib-containers-storage-overlay-e56135a50d2edfd172b8c84abb1ae25577a2c24ed6c810c5808737686b968c05-merged.mount: Deactivated successfully.
Jan 29 11:51:29 np0005601226 podman[98256]: 2026-01-29 16:51:29.111161883 +0000 UTC m=+0.172938319 container remove e5637d485809a6a5d36c37d2283cc867999c70c7c76c0135843123b4336d57a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_bassi, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 11:51:29 np0005601226 systemd[1]: libpod-conmon-e5637d485809a6a5d36c37d2283cc867999c70c7c76c0135843123b4336d57a0.scope: Deactivated successfully.
Jan 29 11:51:29 np0005601226 podman[98294]: 2026-01-29 16:51:29.216244207 +0000 UTC m=+0.034916466 container create f09ea6e8af395df01773a1d95b6550c407e250ade4e06ffaf462c01c336759d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_haibt, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 11:51:29 np0005601226 systemd[1]: Started libpod-conmon-f09ea6e8af395df01773a1d95b6550c407e250ade4e06ffaf462c01c336759d5.scope.
Jan 29 11:51:29 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:51:29 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf7a532e8be6648cae0994b4c01f6a27a0de8b49cb2d5c9fe8d41703ec93238f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:29 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf7a532e8be6648cae0994b4c01f6a27a0de8b49cb2d5c9fe8d41703ec93238f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:29 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf7a532e8be6648cae0994b4c01f6a27a0de8b49cb2d5c9fe8d41703ec93238f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:29 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf7a532e8be6648cae0994b4c01f6a27a0de8b49cb2d5c9fe8d41703ec93238f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:29 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf7a532e8be6648cae0994b4c01f6a27a0de8b49cb2d5c9fe8d41703ec93238f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:29 np0005601226 podman[98294]: 2026-01-29 16:51:29.285995025 +0000 UTC m=+0.104667294 container init f09ea6e8af395df01773a1d95b6550c407e250ade4e06ffaf462c01c336759d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_haibt, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:51:29 np0005601226 podman[98294]: 2026-01-29 16:51:29.294822423 +0000 UTC m=+0.113494642 container start f09ea6e8af395df01773a1d95b6550c407e250ade4e06ffaf462c01c336759d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 29 11:51:29 np0005601226 podman[98294]: 2026-01-29 16:51:29.200244316 +0000 UTC m=+0.018916825 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:51:29 np0005601226 podman[98294]: 2026-01-29 16:51:29.30038701 +0000 UTC m=+0.119059279 container attach f09ea6e8af395df01773a1d95b6550c407e250ade4e06ffaf462c01c336759d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_haibt, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 11:51:29 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:51:29 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:51:29 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 11:51:29 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:51:29 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 11:51:29 np0005601226 fervent_haibt[98308]: --> passed data devices: 0 physical, 3 LVM
Jan 29 11:51:29 np0005601226 fervent_haibt[98308]: --> All data devices are unavailable
Jan 29 11:51:29 np0005601226 systemd[1]: libpod-f09ea6e8af395df01773a1d95b6550c407e250ade4e06ffaf462c01c336759d5.scope: Deactivated successfully.
Jan 29 11:51:29 np0005601226 podman[98294]: 2026-01-29 16:51:29.688139648 +0000 UTC m=+0.506811877 container died f09ea6e8af395df01773a1d95b6550c407e250ade4e06ffaf462c01c336759d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_haibt, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:51:29 np0005601226 systemd[1]: var-lib-containers-storage-overlay-cf7a532e8be6648cae0994b4c01f6a27a0de8b49cb2d5c9fe8d41703ec93238f-merged.mount: Deactivated successfully.
Jan 29 11:51:29 np0005601226 podman[98294]: 2026-01-29 16:51:29.730839173 +0000 UTC m=+0.549511382 container remove f09ea6e8af395df01773a1d95b6550c407e250ade4e06ffaf462c01c336759d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_haibt, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 11:51:29 np0005601226 systemd[1]: libpod-conmon-f09ea6e8af395df01773a1d95b6550c407e250ade4e06ffaf462c01c336759d5.scope: Deactivated successfully.
Jan 29 11:51:29 np0005601226 python3[98350]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid cc5c72e3-31e0-58b9-8731-456117d38f4a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:51:29 np0005601226 podman[98373]: 2026-01-29 16:51:29.838735206 +0000 UTC m=+0.043762085 container create 58ddb25a86c1944da38476ae87b120a7a798ece6b53c7601025b38dca11c187b (image=quay.io/ceph/ceph:v20, name=kind_cray, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:51:29 np0005601226 systemd[1]: Started libpod-conmon-58ddb25a86c1944da38476ae87b120a7a798ece6b53c7601025b38dca11c187b.scope.
Jan 29 11:51:29 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:51:29 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a429c21686673d880711d6a64b2674ef53f55315b1cb25b76200fd5c6bc2edf/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:29 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a429c21686673d880711d6a64b2674ef53f55315b1cb25b76200fd5c6bc2edf/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:29 np0005601226 podman[98373]: 2026-01-29 16:51:29.90973838 +0000 UTC m=+0.114765279 container init 58ddb25a86c1944da38476ae87b120a7a798ece6b53c7601025b38dca11c187b (image=quay.io/ceph/ceph:v20, name=kind_cray, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 29 11:51:29 np0005601226 podman[98373]: 2026-01-29 16:51:29.820637516 +0000 UTC m=+0.025664435 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:51:29 np0005601226 podman[98373]: 2026-01-29 16:51:29.917524629 +0000 UTC m=+0.122551528 container start 58ddb25a86c1944da38476ae87b120a7a798ece6b53c7601025b38dca11c187b (image=quay.io/ceph/ceph:v20, name=kind_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:51:29 np0005601226 podman[98373]: 2026-01-29 16:51:29.922323904 +0000 UTC m=+0.127350783 container attach 58ddb25a86c1944da38476ae87b120a7a798ece6b53c7601025b38dca11c187b (image=quay.io/ceph/ceph:v20, name=kind_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 29 11:51:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:51:30 np0005601226 podman[98466]: 2026-01-29 16:51:30.093593975 +0000 UTC m=+0.033188417 container create 1f8acbf8d319187d5385b6d1450273633099f3260adc5b3bd74ca5d90453a851 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_meitner, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:51:30 np0005601226 systemd[1]: Started libpod-conmon-1f8acbf8d319187d5385b6d1450273633099f3260adc5b3bd74ca5d90453a851.scope.
Jan 29 11:51:30 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:51:30 np0005601226 podman[98466]: 2026-01-29 16:51:30.145122319 +0000 UTC m=+0.084716781 container init 1f8acbf8d319187d5385b6d1450273633099f3260adc5b3bd74ca5d90453a851 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_meitner, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 11:51:30 np0005601226 podman[98466]: 2026-01-29 16:51:30.150430449 +0000 UTC m=+0.090024891 container start 1f8acbf8d319187d5385b6d1450273633099f3260adc5b3bd74ca5d90453a851 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:51:30 np0005601226 hungry_meitner[98482]: 167 167
Jan 29 11:51:30 np0005601226 podman[98466]: 2026-01-29 16:51:30.154143033 +0000 UTC m=+0.093737505 container attach 1f8acbf8d319187d5385b6d1450273633099f3260adc5b3bd74ca5d90453a851 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_meitner, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True)
Jan 29 11:51:30 np0005601226 systemd[1]: libpod-1f8acbf8d319187d5385b6d1450273633099f3260adc5b3bd74ca5d90453a851.scope: Deactivated successfully.
Jan 29 11:51:30 np0005601226 podman[98466]: 2026-01-29 16:51:30.154813963 +0000 UTC m=+0.094408415 container died 1f8acbf8d319187d5385b6d1450273633099f3260adc5b3bd74ca5d90453a851 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 29 11:51:30 np0005601226 podman[98466]: 2026-01-29 16:51:30.076604206 +0000 UTC m=+0.016198678 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:51:30 np0005601226 systemd[1]: var-lib-containers-storage-overlay-93b9d59610b71911f3d9126584bc27d62fec66d0449337235bf4dff856bbe42f-merged.mount: Deactivated successfully.
Jan 29 11:51:30 np0005601226 podman[98466]: 2026-01-29 16:51:30.202279402 +0000 UTC m=+0.141873844 container remove 1f8acbf8d319187d5385b6d1450273633099f3260adc5b3bd74ca5d90453a851 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_meitner, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 29 11:51:30 np0005601226 systemd[1]: libpod-conmon-1f8acbf8d319187d5385b6d1450273633099f3260adc5b3bd74ca5d90453a851.scope: Deactivated successfully.
Jan 29 11:51:30 np0005601226 podman[98505]: 2026-01-29 16:51:30.312264643 +0000 UTC m=+0.038326331 container create 29ad990f8f2ba0ee3e74794bc70457e73fcfd125fa05ed3f2bea89e686ea42a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 29 11:51:30 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v88: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 87 KiB/s rd, 11 KiB/s wr, 229 op/s
Jan 29 11:51:30 np0005601226 systemd[1]: Started libpod-conmon-29ad990f8f2ba0ee3e74794bc70457e73fcfd125fa05ed3f2bea89e686ea42a3.scope.
Jan 29 11:51:30 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:51:30 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e25ffcfa0ab37a93c8f62954437a7c7777c6a12c0b613787aba93e8c7abd903f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:30 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e25ffcfa0ab37a93c8f62954437a7c7777c6a12c0b613787aba93e8c7abd903f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:30 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e25ffcfa0ab37a93c8f62954437a7c7777c6a12c0b613787aba93e8c7abd903f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:30 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e25ffcfa0ab37a93c8f62954437a7c7777c6a12c0b613787aba93e8c7abd903f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:30 np0005601226 podman[98505]: 2026-01-29 16:51:30.29583211 +0000 UTC m=+0.021893808 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:51:30 np0005601226 podman[98505]: 2026-01-29 16:51:30.401262014 +0000 UTC m=+0.127323712 container init 29ad990f8f2ba0ee3e74794bc70457e73fcfd125fa05ed3f2bea89e686ea42a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 29 11:51:30 np0005601226 podman[98505]: 2026-01-29 16:51:30.406017798 +0000 UTC m=+0.132079476 container start 29ad990f8f2ba0ee3e74794bc70457e73fcfd125fa05ed3f2bea89e686ea42a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_moser, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 11:51:30 np0005601226 podman[98505]: 2026-01-29 16:51:30.409439315 +0000 UTC m=+0.135501003 container attach 29ad990f8f2ba0ee3e74794bc70457e73fcfd125fa05ed3f2bea89e686ea42a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_moser, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 11:51:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0)
Jan 29 11:51:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2600568686' entity='client.admin' cmd={"prefix": "osd get-require-min-compat-client"} : dispatch
Jan 29 11:51:30 np0005601226 kind_cray[98432]: mimic
Jan 29 11:51:30 np0005601226 systemd[1]: libpod-58ddb25a86c1944da38476ae87b120a7a798ece6b53c7601025b38dca11c187b.scope: Deactivated successfully.
Jan 29 11:51:30 np0005601226 podman[98373]: 2026-01-29 16:51:30.428375779 +0000 UTC m=+0.633402658 container died 58ddb25a86c1944da38476ae87b120a7a798ece6b53c7601025b38dca11c187b (image=quay.io/ceph/ceph:v20, name=kind_cray, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 11:51:30 np0005601226 podman[98373]: 2026-01-29 16:51:30.470678632 +0000 UTC m=+0.675705511 container remove 58ddb25a86c1944da38476ae87b120a7a798ece6b53c7601025b38dca11c187b (image=quay.io/ceph/ceph:v20, name=kind_cray, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 29 11:51:30 np0005601226 systemd[1]: libpod-conmon-58ddb25a86c1944da38476ae87b120a7a798ece6b53c7601025b38dca11c187b.scope: Deactivated successfully.
Jan 29 11:51:30 np0005601226 jovial_moser[98522]: {
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:    "0": [
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:        {
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:            "devices": [
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:                "/dev/loop3"
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:            ],
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:            "lv_name": "ceph_lv0",
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:            "lv_size": "21470642176",
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:            "lv_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:            "name": "ceph_lv0",
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:            "tags": {
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:                "ceph.block_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:                "ceph.cephx_lockbox_secret": "",
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:                "ceph.cluster_name": "ceph",
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:                "ceph.crush_device_class": "",
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:                "ceph.encrypted": "0",
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:                "ceph.objectstore": "bluestore",
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:                "ceph.osd_fsid": "0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1",
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:                "ceph.osd_id": "0",
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:                "ceph.type": "block",
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:                "ceph.vdo": "0",
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:                "ceph.with_tpm": "0"
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:            },
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:            "type": "block",
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:            "vg_name": "ceph_vg0"
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:        }
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:    ],
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:    "1": [
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:        {
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:            "devices": [
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:                "/dev/loop4"
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:            ],
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:            "lv_name": "ceph_lv1",
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:            "lv_size": "21470642176",
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b59b9ee3-7bef-4274-a8bf-0f9cce011ae7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:            "lv_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:            "name": "ceph_lv1",
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:            "tags": {
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:                "ceph.block_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:                "ceph.cephx_lockbox_secret": "",
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:                "ceph.cluster_name": "ceph",
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:                "ceph.crush_device_class": "",
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:                "ceph.encrypted": "0",
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:                "ceph.objectstore": "bluestore",
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:                "ceph.osd_fsid": "b59b9ee3-7bef-4274-a8bf-0f9cce011ae7",
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:                "ceph.osd_id": "1",
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:                "ceph.type": "block",
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:                "ceph.vdo": "0",
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:                "ceph.with_tpm": "0"
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:            },
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:            "type": "block",
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:            "vg_name": "ceph_vg1"
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:        }
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:    ],
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:    "2": [
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:        {
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:            "devices": [
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:                "/dev/loop5"
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:            ],
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:            "lv_name": "ceph_lv2",
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:            "lv_size": "21470642176",
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=791d3808-828d-4f85-a3de-28df49f6a6ef,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:            "lv_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:            "name": "ceph_lv2",
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:            "tags": {
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:                "ceph.block_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:                "ceph.cephx_lockbox_secret": "",
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:                "ceph.cluster_name": "ceph",
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:                "ceph.crush_device_class": "",
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:                "ceph.encrypted": "0",
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:                "ceph.objectstore": "bluestore",
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:                "ceph.osd_fsid": "791d3808-828d-4f85-a3de-28df49f6a6ef",
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:                "ceph.osd_id": "2",
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:                "ceph.type": "block",
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:                "ceph.vdo": "0",
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:                "ceph.with_tpm": "0"
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:            },
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:            "type": "block",
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:            "vg_name": "ceph_vg2"
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:        }
Jan 29 11:51:30 np0005601226 jovial_moser[98522]:    ]
Jan 29 11:51:30 np0005601226 jovial_moser[98522]: }
Jan 29 11:51:30 np0005601226 systemd[1]: libpod-29ad990f8f2ba0ee3e74794bc70457e73fcfd125fa05ed3f2bea89e686ea42a3.scope: Deactivated successfully.
Jan 29 11:51:30 np0005601226 podman[98505]: 2026-01-29 16:51:30.668394189 +0000 UTC m=+0.394455897 container died 29ad990f8f2ba0ee3e74794bc70457e73fcfd125fa05ed3f2bea89e686ea42a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_moser, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 11:51:30 np0005601226 systemd[1]: var-lib-containers-storage-overlay-e25ffcfa0ab37a93c8f62954437a7c7777c6a12c0b613787aba93e8c7abd903f-merged.mount: Deactivated successfully.
Jan 29 11:51:30 np0005601226 systemd[1]: var-lib-containers-storage-overlay-3a429c21686673d880711d6a64b2674ef53f55315b1cb25b76200fd5c6bc2edf-merged.mount: Deactivated successfully.
Jan 29 11:51:30 np0005601226 podman[98505]: 2026-01-29 16:51:30.71879216 +0000 UTC m=+0.444853838 container remove 29ad990f8f2ba0ee3e74794bc70457e73fcfd125fa05ed3f2bea89e686ea42a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 29 11:51:30 np0005601226 systemd[1]: libpod-conmon-29ad990f8f2ba0ee3e74794bc70457e73fcfd125fa05ed3f2bea89e686ea42a3.scope: Deactivated successfully.
Jan 29 11:51:31 np0005601226 podman[98616]: 2026-01-29 16:51:31.165448947 +0000 UTC m=+0.098338406 container create 28cf5aacf12e197d082bfce42f22d6954003eefdc18b8f35c3cd6c2dc974d7df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_dhawan, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 11:51:31 np0005601226 podman[98616]: 2026-01-29 16:51:31.089729537 +0000 UTC m=+0.022619006 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:51:31 np0005601226 systemd[1]: Started libpod-conmon-28cf5aacf12e197d082bfce42f22d6954003eefdc18b8f35c3cd6c2dc974d7df.scope.
Jan 29 11:51:31 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:51:31 np0005601226 podman[98616]: 2026-01-29 16:51:31.276872255 +0000 UTC m=+0.209761734 container init 28cf5aacf12e197d082bfce42f22d6954003eefdc18b8f35c3cd6c2dc974d7df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_dhawan, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:51:31 np0005601226 podman[98616]: 2026-01-29 16:51:31.282961298 +0000 UTC m=+0.215850747 container start 28cf5aacf12e197d082bfce42f22d6954003eefdc18b8f35c3cd6c2dc974d7df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_dhawan, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 11:51:31 np0005601226 podman[98616]: 2026-01-29 16:51:31.286668544 +0000 UTC m=+0.219558163 container attach 28cf5aacf12e197d082bfce42f22d6954003eefdc18b8f35c3cd6c2dc974d7df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:51:31 np0005601226 strange_dhawan[98646]: 167 167
Jan 29 11:51:31 np0005601226 systemd[1]: libpod-28cf5aacf12e197d082bfce42f22d6954003eefdc18b8f35c3cd6c2dc974d7df.scope: Deactivated successfully.
Jan 29 11:51:31 np0005601226 podman[98616]: 2026-01-29 16:51:31.287958181 +0000 UTC m=+0.220847630 container died 28cf5aacf12e197d082bfce42f22d6954003eefdc18b8f35c3cd6c2dc974d7df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_dhawan, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 29 11:51:31 np0005601226 systemd[1]: var-lib-containers-storage-overlay-6030cc9415114c6d9c1b56c145590dbe862a768a5297b1220be8c1c5046c3820-merged.mount: Deactivated successfully.
Jan 29 11:51:31 np0005601226 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 29 11:51:31 np0005601226 podman[98616]: 2026-01-29 16:51:31.325657327 +0000 UTC m=+0.258546776 container remove 28cf5aacf12e197d082bfce42f22d6954003eefdc18b8f35c3cd6c2dc974d7df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 29 11:51:31 np0005601226 systemd[1]: libpod-conmon-28cf5aacf12e197d082bfce42f22d6954003eefdc18b8f35c3cd6c2dc974d7df.scope: Deactivated successfully.
Jan 29 11:51:31 np0005601226 python3[98661]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid cc5c72e3-31e0-58b9-8731-456117d38f4a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:51:31 np0005601226 podman[98685]: 2026-01-29 16:51:31.443660282 +0000 UTC m=+0.044193711 container create ae5fc044950750c00544ec6f16892a8544d89ec9db2dffc6daae713afc08408f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_lichterman, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 29 11:51:31 np0005601226 podman[98699]: 2026-01-29 16:51:31.477647901 +0000 UTC m=+0.041061052 container create 0b7bc8bd0a49734933559ae9cfaa130dfe00caa24b6057fb47adbaf70455bf33 (image=quay.io/ceph/ceph:v20, name=blissful_rhodes, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True)
Jan 29 11:51:31 np0005601226 systemd[1]: Started libpod-conmon-ae5fc044950750c00544ec6f16892a8544d89ec9db2dffc6daae713afc08408f.scope.
Jan 29 11:51:31 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:51:31 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb3cd99693d184150c383ed383be5463a4f59336d370c14a683f1e1196816451/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:31 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb3cd99693d184150c383ed383be5463a4f59336d370c14a683f1e1196816451/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:31 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb3cd99693d184150c383ed383be5463a4f59336d370c14a683f1e1196816451/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:31 np0005601226 systemd[1]: Started libpod-conmon-0b7bc8bd0a49734933559ae9cfaa130dfe00caa24b6057fb47adbaf70455bf33.scope.
Jan 29 11:51:31 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb3cd99693d184150c383ed383be5463a4f59336d370c14a683f1e1196816451/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:31 np0005601226 podman[98685]: 2026-01-29 16:51:31.419815422 +0000 UTC m=+0.020348861 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:51:31 np0005601226 podman[98685]: 2026-01-29 16:51:31.528701447 +0000 UTC m=+0.129234896 container init ae5fc044950750c00544ec6f16892a8544d89ec9db2dffc6daae713afc08408f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 29 11:51:31 np0005601226 podman[98685]: 2026-01-29 16:51:31.535035337 +0000 UTC m=+0.135568766 container start ae5fc044950750c00544ec6f16892a8544d89ec9db2dffc6daae713afc08408f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 29 11:51:31 np0005601226 podman[98685]: 2026-01-29 16:51:31.538025874 +0000 UTC m=+0.138559323 container attach ae5fc044950750c00544ec6f16892a8544d89ec9db2dffc6daae713afc08408f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_lichterman, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 29 11:51:31 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:51:31 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abaaefe76c98892de470e93d010d8b7fde6afa2fb50fa7b0ad311caa27f421bd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:31 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abaaefe76c98892de470e93d010d8b7fde6afa2fb50fa7b0ad311caa27f421bd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:51:31 np0005601226 podman[98699]: 2026-01-29 16:51:31.459324608 +0000 UTC m=+0.022737779 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:51:31 np0005601226 podman[98699]: 2026-01-29 16:51:31.560410162 +0000 UTC m=+0.123823333 container init 0b7bc8bd0a49734933559ae9cfaa130dfe00caa24b6057fb47adbaf70455bf33 (image=quay.io/ceph/ceph:v20, name=blissful_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 29 11:51:31 np0005601226 podman[98699]: 2026-01-29 16:51:31.566257749 +0000 UTC m=+0.129670910 container start 0b7bc8bd0a49734933559ae9cfaa130dfe00caa24b6057fb47adbaf70455bf33 (image=quay.io/ceph/ceph:v20, name=blissful_rhodes, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 29 11:51:31 np0005601226 podman[98699]: 2026-01-29 16:51:31.570656504 +0000 UTC m=+0.134069685 container attach 0b7bc8bd0a49734933559ae9cfaa130dfe00caa24b6057fb47adbaf70455bf33 (image=quay.io/ceph/ceph:v20, name=blissful_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 29 11:51:32 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions", "format": "json"} v 0)
Jan 29 11:51:32 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3047199199' entity='client.admin' cmd={"prefix": "versions", "format": "json"} : dispatch
Jan 29 11:51:32 np0005601226 blissful_rhodes[98718]: 
Jan 29 11:51:32 np0005601226 blissful_rhodes[98718]: {"mon":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"mgr":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"osd":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":3},"mds":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"rgw":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"overall":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":7}}
Jan 29 11:51:32 np0005601226 systemd[1]: libpod-0b7bc8bd0a49734933559ae9cfaa130dfe00caa24b6057fb47adbaf70455bf33.scope: Deactivated successfully.
Jan 29 11:51:32 np0005601226 podman[98699]: 2026-01-29 16:51:32.105769125 +0000 UTC m=+0.669182286 container died 0b7bc8bd0a49734933559ae9cfaa130dfe00caa24b6057fb47adbaf70455bf33 (image=quay.io/ceph/ceph:v20, name=blissful_rhodes, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 11:51:32 np0005601226 lvm[98816]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 11:51:32 np0005601226 lvm[98816]: VG ceph_vg0 finished
Jan 29 11:51:32 np0005601226 systemd[1]: var-lib-containers-storage-overlay-abaaefe76c98892de470e93d010d8b7fde6afa2fb50fa7b0ad311caa27f421bd-merged.mount: Deactivated successfully.
Jan 29 11:51:32 np0005601226 lvm[98820]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 11:51:32 np0005601226 lvm[98820]: VG ceph_vg1 finished
Jan 29 11:51:32 np0005601226 podman[98699]: 2026-01-29 16:51:32.142180624 +0000 UTC m=+0.705593785 container remove 0b7bc8bd0a49734933559ae9cfaa130dfe00caa24b6057fb47adbaf70455bf33 (image=quay.io/ceph/ceph:v20, name=blissful_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Jan 29 11:51:32 np0005601226 lvm[98833]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 11:51:32 np0005601226 lvm[98833]: VG ceph_vg2 finished
Jan 29 11:51:32 np0005601226 systemd[1]: libpod-conmon-0b7bc8bd0a49734933559ae9cfaa130dfe00caa24b6057fb47adbaf70455bf33.scope: Deactivated successfully.
Jan 29 11:51:32 np0005601226 confident_lichterman[98713]: {}
Jan 29 11:51:32 np0005601226 systemd[1]: libpod-ae5fc044950750c00544ec6f16892a8544d89ec9db2dffc6daae713afc08408f.scope: Deactivated successfully.
Jan 29 11:51:32 np0005601226 podman[98685]: 2026-01-29 16:51:32.27249564 +0000 UTC m=+0.873029079 container died ae5fc044950750c00544ec6f16892a8544d89ec9db2dffc6daae713afc08408f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_lichterman, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 11:51:32 np0005601226 systemd[1]: var-lib-containers-storage-overlay-bb3cd99693d184150c383ed383be5463a4f59336d370c14a683f1e1196816451-merged.mount: Deactivated successfully.
Jan 29 11:51:32 np0005601226 podman[98685]: 2026-01-29 16:51:32.309759293 +0000 UTC m=+0.910292712 container remove ae5fc044950750c00544ec6f16892a8544d89ec9db2dffc6daae713afc08408f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 11:51:32 np0005601226 systemd[1]: libpod-conmon-ae5fc044950750c00544ec6f16892a8544d89ec9db2dffc6daae713afc08408f.scope: Deactivated successfully.
Jan 29 11:51:32 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v89: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 9.1 KiB/s wr, 189 op/s
Jan 29 11:51:32 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 11:51:32 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:51:32 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 11:51:32 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:51:32 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:51:32 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:51:34 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v90: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 66 KiB/s rd, 8.0 KiB/s wr, 172 op/s
Jan 29 11:51:35 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:51:36 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v91: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 6.6 KiB/s wr, 142 op/s
Jan 29 11:51:38 np0005601226 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2026-01-29_16:51:38
Jan 29 11:51:38 np0005601226 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 11:51:38 np0005601226 ceph-mgr[75527]: [balancer INFO root] do_upmap
Jan 29 11:51:38 np0005601226 ceph-mgr[75527]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root', 'backups', 'default.rgw.log', '.mgr', 'default.rgw.control', 'volumes', 'vms']
Jan 29 11:51:38 np0005601226 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 11:51:38 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v92: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 5.9 KiB/s wr, 126 op/s
Jan 29 11:51:40 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:51:40 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v93: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 5.5 KiB/s wr, 118 op/s
Jan 29 11:51:40 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 11:51:40 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:51:40 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 11:51:40 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:51:40 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 29 11:51:40 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:51:40 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 29 11:51:40 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:51:40 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 29 11:51:40 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:51:40 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 29 11:51:40 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:51:40 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 9.056086837372107e-07 of space, bias 4.0, pg target 0.001086730420484653 quantized to 16 (current 1)
Jan 29 11:51:40 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:51:40 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 29 11:51:40 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:51:40 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 1)
Jan 29 11:51:40 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:51:40 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 1)
Jan 29 11:51:40 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:51:40 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 29 11:51:40 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:51:40 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
Jan 29 11:51:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 11:51:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 11:51:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 11:51:40 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0)
Jan 29 11:51:40 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} : dispatch
Jan 29 11:51:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 11:51:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 11:51:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 11:51:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 11:51:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 11:51:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 11:51:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 11:51:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 11:51:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 11:51:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 11:51:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 11:51:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 11:51:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 11:51:40 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Jan 29 11:51:40 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} : dispatch
Jan 29 11:51:40 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Jan 29 11:51:40 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Jan 29 11:51:40 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Jan 29 11:51:40 np0005601226 ceph-mgr[75527]: [progress INFO root] update: starting ev 27c9b0d1-6c8e-4a87-b3af-188ad314394c (PG autoscaler increasing pool 2 PGs from 1 to 32)
Jan 29 11:51:40 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0)
Jan 29 11:51:40 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} : dispatch
Jan 29 11:51:41 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Jan 29 11:51:41 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Jan 29 11:51:41 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Jan 29 11:51:41 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Jan 29 11:51:41 np0005601226 ceph-mgr[75527]: [progress INFO root] update: starting ev 468204ee-1624-48c8-b9e8-7b8a58c08486 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Jan 29 11:51:41 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0)
Jan 29 11:51:41 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} : dispatch
Jan 29 11:51:41 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Jan 29 11:51:41 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} : dispatch
Jan 29 11:51:42 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v96: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:51:42 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0)
Jan 29 11:51:42 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 29 11:51:42 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0)
Jan 29 11:51:42 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 29 11:51:42 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Jan 29 11:51:43 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Jan 29 11:51:43 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 29 11:51:43 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 29 11:51:43 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Jan 29 11:51:43 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Jan 29 11:51:43 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 44 pg[2.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=44 pruub=13.996203423s) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active pruub 80.981101990s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:51:43 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 44 pg[2.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=44 pruub=13.996203423s) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown pruub 80.981101990s@ mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:43 np0005601226 ceph-mgr[75527]: [progress INFO root] update: starting ev f672d449-6013-490a-b93f-77a5a4728382 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Jan 29 11:51:43 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0)
Jan 29 11:51:43 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} : dispatch
Jan 29 11:51:43 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Jan 29 11:51:43 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} : dispatch
Jan 29 11:51:43 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 29 11:51:43 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 29 11:51:43 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 44 pg[3.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=44 pruub=14.298649788s) [1] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active pruub 86.353431702s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:51:43 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 44 pg[3.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=44 pruub=14.298649788s) [1] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown pruub 86.353431702s@ mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:44 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Jan 29 11:51:44 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Jan 29 11:51:44 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Jan 29 11:51:44 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Jan 29 11:51:44 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v99: 73 pgs: 62 unknown, 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:51:44 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 45 pg[3.1c( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [1] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:44 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 45 pg[3.1f( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [1] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:44 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 45 pg[3.1e( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [1] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:44 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 45 pg[3.1a( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [1] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:44 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 45 pg[3.1b( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [1] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:44 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 45 pg[3.19( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [1] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:44 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 45 pg[3.1d( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [1] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:44 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 45 pg[3.18( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [1] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:44 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 45 pg[3.6( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [1] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:44 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0)
Jan 29 11:51:44 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 45 pg[3.5( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [1] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:44 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 45 pg[3.3( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [1] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:44 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 45 pg[3.1( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [1] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:44 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 45 pg[3.a( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [1] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:44 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 29 11:51:44 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 45 pg[3.8( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [1] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:44 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 45 pg[3.b( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [1] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:44 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 45 pg[3.4( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [1] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:44 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 45 pg[2.1f( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:44 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 45 pg[3.2( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [1] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:44 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0)
Jan 29 11:51:44 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 45 pg[3.9( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [1] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:44 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 45 pg[3.7( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [1] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:44 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 29 11:51:44 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 45 pg[3.c( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [1] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:44 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 45 pg[3.d( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [1] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:44 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 45 pg[3.e( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [1] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:44 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 45 pg[3.10( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [1] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:44 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 45 pg[3.f( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [1] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:44 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 45 pg[3.12( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [1] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:44 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 45 pg[3.11( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [1] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:44 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 45 pg[3.13( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [1] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:44 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 45 pg[3.14( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [1] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:44 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 45 pg[3.15( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [1] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:44 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 45 pg[3.16( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [1] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:44 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 45 pg[3.17( empty local-lis/les=19/20 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [1] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:44 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 45 pg[2.1e( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:44 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 45 pg[2.1d( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:44 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 45 pg[2.1c( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:44 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 45 pg[2.1b( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:44 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 45 pg[2.a( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:44 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 45 pg[2.9( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:44 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 45 pg[2.6( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:44 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 45 pg[2.4( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:44 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 45 pg[2.3( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:44 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 45 pg[2.2( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:44 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 45 pg[2.1( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:44 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 45 pg[2.8( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:44 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 45 pg[2.7( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:44 np0005601226 ceph-mgr[75527]: [progress INFO root] update: starting ev 089b28d0-cd64-4e47-808a-d01c023786f1 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Jan 29 11:51:44 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 45 pg[2.b( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:44 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 45 pg[2.c( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:44 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 45 pg[2.d( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:44 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 45 pg[2.e( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:44 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 45 pg[2.f( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:44 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 45 pg[2.10( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:44 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 45 pg[2.5( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:44 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 45 pg[2.12( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:44 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 45 pg[2.11( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:44 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 45 pg[2.13( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:44 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 45 pg[2.14( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:44 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 45 pg[2.15( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:44 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 45 pg[2.17( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:44 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 45 pg[2.16( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:44 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 45 pg[2.18( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:44 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 45 pg[2.19( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:44 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 45 pg[2.1a( empty local-lis/les=18/19 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:44 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} v 0)
Jan 29 11:51:44 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} : dispatch
Jan 29 11:51:44 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 45 pg[3.1e( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [1] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:44 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 45 pg[2.1f( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:44 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Jan 29 11:51:44 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 29 11:51:44 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 29 11:51:44 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} : dispatch
Jan 29 11:51:44 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 45 pg[3.1f( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [1] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:44 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 45 pg[3.1b( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [1] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:44 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 45 pg[3.19( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [1] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:44 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 45 pg[3.6( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [1] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:44 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 45 pg[3.1a( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [1] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:44 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 45 pg[3.1c( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [1] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:44 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 45 pg[3.5( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [1] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:44 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 45 pg[3.1( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [1] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:44 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 45 pg[3.1d( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [1] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:44 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 45 pg[3.a( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [1] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:44 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 45 pg[3.b( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [1] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:44 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 45 pg[3.8( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [1] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:44 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 45 pg[3.4( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [1] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:44 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 45 pg[3.0( empty local-lis/les=44/45 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [1] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:44 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 45 pg[3.d( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [1] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:44 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 45 pg[3.18( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [1] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:44 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 45 pg[3.e( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [1] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:44 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 45 pg[3.7( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [1] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:44 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 45 pg[3.10( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [1] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:44 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 45 pg[3.9( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [1] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:44 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 45 pg[3.f( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [1] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:44 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 45 pg[3.12( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [1] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:44 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 45 pg[3.11( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [1] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:44 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 45 pg[3.3( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [1] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:44 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 45 pg[3.14( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [1] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:44 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 45 pg[3.16( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [1] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:44 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 45 pg[3.13( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [1] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:44 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 45 pg[3.2( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [1] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:44 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 45 pg[3.17( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [1] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:44 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 45 pg[3.c( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [1] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:44 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 45 pg[3.15( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=19/19 les/c/f=20/20/0 sis=44) [1] r=0 lpr=44 pi=[19,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:44 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 45 pg[2.1d( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:44 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 45 pg[2.1b( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:44 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 45 pg[2.1e( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:44 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 45 pg[2.1c( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:44 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 45 pg[2.6( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:44 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 45 pg[2.4( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:44 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 45 pg[2.3( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:44 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 45 pg[2.a( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:44 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 45 pg[2.0( empty local-lis/les=44/45 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:44 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 45 pg[2.2( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:44 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 45 pg[2.1( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:44 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 45 pg[2.8( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:44 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 45 pg[2.b( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:44 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 45 pg[2.7( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:44 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 45 pg[2.d( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:44 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 45 pg[2.e( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:44 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 45 pg[2.f( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:44 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 45 pg[2.10( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:44 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 45 pg[2.12( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:44 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 45 pg[2.5( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:44 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 45 pg[2.c( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:44 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 45 pg[2.9( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:44 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 45 pg[2.11( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:44 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 45 pg[2.15( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:44 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 45 pg[2.13( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:44 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 45 pg[2.17( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:44 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 45 pg[2.16( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:44 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 45 pg[2.19( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:44 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 45 pg[2.1a( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:44 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 45 pg[2.18( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:44 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 45 pg[2.14( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=18/18 les/c/f=19/19/0 sis=44) [2] r=0 lpr=44 pi=[18,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:45 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e45 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:51:45 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Jan 29 11:51:45 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Jan 29 11:51:45 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Jan 29 11:51:45 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Jan 29 11:51:45 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Jan 29 11:51:45 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Jan 29 11:51:45 np0005601226 ceph-mgr[75527]: [progress INFO root] update: starting ev de5fdfa2-01e1-4ae0-8075-5c2589293c62 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Jan 29 11:51:45 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0)
Jan 29 11:51:45 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} : dispatch
Jan 29 11:51:45 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Jan 29 11:51:45 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 29 11:51:45 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 29 11:51:45 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} : dispatch
Jan 29 11:51:45 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Jan 29 11:51:45 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Jan 29 11:51:45 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Jan 29 11:51:45 np0005601226 ceph-mgr[75527]: [progress WARNING root] Starting Global Recovery Event,124 pgs not in active + clean state
Jan 29 11:51:46 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Jan 29 11:51:46 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v101: 135 pgs: 124 unknown, 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:51:46 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0)
Jan 29 11:51:46 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} : dispatch
Jan 29 11:51:46 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Jan 29 11:51:46 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Jan 29 11:51:46 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Jan 29 11:51:46 np0005601226 ceph-mgr[75527]: [progress INFO root] update: starting ev 7e2b0fef-b354-4a37-bf47-7412ce9b3aad (PG autoscaler increasing pool 7 PGs from 1 to 32)
Jan 29 11:51:46 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0)
Jan 29 11:51:46 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} : dispatch
Jan 29 11:51:46 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} : dispatch
Jan 29 11:51:46 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} : dispatch
Jan 29 11:51:46 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Jan 29 11:51:46 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} : dispatch
Jan 29 11:51:47 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 46 pg[4.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=46 pruub=11.981534958s) [0] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active pruub 91.850341797s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:51:47 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 47 pg[4.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=46 pruub=11.981534958s) [0] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown pruub 91.850341797s@ mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:47 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 47 pg[4.1e( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:47 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 47 pg[4.2( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:47 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 47 pg[4.4( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:47 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 47 pg[4.3( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:47 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 47 pg[4.1( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:47 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 47 pg[4.18( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:47 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 47 pg[4.17( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:47 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 47 pg[4.19( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:47 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 47 pg[4.1b( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:47 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 47 pg[4.1c( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:47 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 47 pg[4.1d( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:47 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 47 pg[4.1a( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:47 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 47 pg[4.1f( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:47 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 47 pg[4.5( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:47 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 47 pg[4.6( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:47 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 47 pg[4.8( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:47 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 47 pg[4.7( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:47 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 47 pg[4.9( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:47 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 47 pg[4.b( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:47 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 47 pg[4.a( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:47 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 47 pg[4.c( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:47 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 47 pg[4.e( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:47 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 47 pg[4.d( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:47 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 47 pg[4.10( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:47 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 47 pg[4.f( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:47 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 47 pg[4.16( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:47 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 47 pg[4.11( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:47 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 47 pg[4.12( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:47 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 47 pg[4.15( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:47 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 47 pg[4.13( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:47 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 47 pg[4.14( empty local-lis/les=20/21 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:47 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Jan 29 11:51:47 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Jan 29 11:51:47 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Jan 29 11:51:47 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Jan 29 11:51:47 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Jan 29 11:51:47 np0005601226 ceph-mgr[75527]: [progress INFO root] update: starting ev f7ab59ce-3141-429a-9e82-7bcfd5f9e5ca (PG autoscaler increasing pool 8 PGs from 1 to 32)
Jan 29 11:51:47 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0)
Jan 29 11:51:47 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} : dispatch
Jan 29 11:51:47 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 46 pg[5.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=46 pruub=12.640565872s) [2] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active pruub 84.053581238s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:51:47 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 48 pg[6.0( v 36'39 (0'0,36'39] local-lis/les=22/23 n=22 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=48 pruub=14.132340431s) [0] r=0 lpr=48 pi=[22,48)/1 crt=36'39 lcod 35'38 mlcod 35'38 active pruub 94.369903564s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:51:47 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 48 pg[6.0( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=22/23 n=1 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=48 pruub=14.132340431s) [0] r=0 lpr=48 pi=[22,48)/1 crt=36'39 lcod 35'38 mlcod 0'0 unknown pruub 94.369903564s@ mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:47 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 47 pg[5.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=46 pruub=12.640565872s) [2] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown pruub 84.053581238s@ mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:47 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 47 pg[5.4( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [2] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:47 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 47 pg[5.5( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [2] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:47 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 47 pg[5.6( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [2] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:47 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 47 pg[5.7( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [2] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:47 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 47 pg[5.8( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [2] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:47 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 47 pg[5.16( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [2] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:47 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 47 pg[5.17( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [2] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:47 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 47 pg[5.18( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [2] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:47 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 47 pg[5.e( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [2] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:47 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 47 pg[5.f( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [2] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:47 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 47 pg[5.10( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [2] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:47 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 47 pg[5.11( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [2] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:47 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 47 pg[5.12( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [2] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:47 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 47 pg[5.13( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [2] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:47 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 47 pg[5.1( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [2] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:47 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 47 pg[5.2( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [2] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:47 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 47 pg[5.14( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [2] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:47 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 47 pg[5.15( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [2] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:47 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 47 pg[5.3( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [2] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:47 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 47 pg[5.1e( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [2] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:47 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 47 pg[5.1f( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [2] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:47 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 47 pg[5.9( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [2] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:47 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 47 pg[5.a( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [2] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:47 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 47 pg[5.b( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [2] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:47 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 47 pg[5.c( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [2] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:47 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 47 pg[5.d( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [2] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:47 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 47 pg[5.1c( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [2] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:47 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 47 pg[5.1d( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [2] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:47 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 47 pg[5.19( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [2] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:47 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 47 pg[5.1a( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [2] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:47 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 47 pg[5.1b( empty local-lis/les=21/22 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [2] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:47 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 48 pg[4.1f( empty local-lis/les=46/48 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:47 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 48 pg[4.1e( empty local-lis/les=46/48 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:47 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 48 pg[4.1c( empty local-lis/les=46/48 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:47 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 48 pg[4.8( empty local-lis/les=46/48 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:47 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 48 pg[4.7( empty local-lis/les=46/48 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:47 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 48 pg[4.6( empty local-lis/les=46/48 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:47 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 48 pg[4.1b( empty local-lis/les=46/48 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:47 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 48 pg[4.5( empty local-lis/les=46/48 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:47 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 48 pg[4.1a( empty local-lis/les=46/48 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:47 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 48 pg[4.a( empty local-lis/les=46/48 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:47 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 48 pg[4.b( empty local-lis/les=46/48 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:47 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 48 pg[4.9( empty local-lis/les=46/48 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:47 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 48 pg[4.19( empty local-lis/les=46/48 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:47 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 48 pg[4.3( empty local-lis/les=46/48 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:47 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 48 pg[4.4( empty local-lis/les=46/48 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:47 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 48 pg[4.2( empty local-lis/les=46/48 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:47 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 48 pg[4.1( empty local-lis/les=46/48 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:47 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 48 pg[4.c( empty local-lis/les=46/48 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:47 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 48 pg[4.d( empty local-lis/les=46/48 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:47 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 48 pg[4.0( empty local-lis/les=46/48 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:47 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 48 pg[4.e( empty local-lis/les=46/48 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:47 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 48 pg[4.f( empty local-lis/les=46/48 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:47 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 48 pg[4.11( empty local-lis/les=46/48 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:47 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 48 pg[4.10( empty local-lis/les=46/48 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:47 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 48 pg[4.13( empty local-lis/les=46/48 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:47 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 48 pg[4.12( empty local-lis/les=46/48 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:47 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 48 pg[4.14( empty local-lis/les=46/48 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:47 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 48 pg[4.15( empty local-lis/les=46/48 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:47 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 48 pg[4.16( empty local-lis/les=46/48 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:47 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 48 pg[4.18( empty local-lis/les=46/48 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:47 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 48 pg[4.17( empty local-lis/les=46/48 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:47 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 48 pg[4.1d( empty local-lis/les=46/48 n=0 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:48 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v104: 150 pgs: 32 peering, 77 unknown, 41 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:51:48 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0)
Jan 29 11:51:48 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 29 11:51:48 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0)
Jan 29 11:51:48 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 29 11:51:48 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Jan 29 11:51:48 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Jan 29 11:51:48 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} : dispatch
Jan 29 11:51:48 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Jan 29 11:51:48 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Jan 29 11:51:48 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Jan 29 11:51:48 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Jan 29 11:51:48 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Jan 29 11:51:48 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Jan 29 11:51:48 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Jan 29 11:51:48 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Jan 29 11:51:48 np0005601226 ceph-mgr[75527]: [progress INFO root] update: starting ev a611f962-e959-4d05-bcb6-1ddc07ba9e53 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Jan 29 11:51:48 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0)
Jan 29 11:51:48 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} : dispatch
Jan 29 11:51:48 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 49 pg[8.0( v 34'6 (0'0,34'6] local-lis/les=33/34 n=6 ec=33/33 lis/c=33/33 les/c/f=34/34/0 sis=49 pruub=10.740422249s) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 lcod 34'5 mlcod 34'5 active pruub 87.916488647s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:51:48 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 49 pg[7.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=49 pruub=14.619352341s) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active pruub 91.795837402s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:51:48 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 49 pg[7.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=49 pruub=14.619352341s) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown pruub 91.795837402s@ mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:48 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 49 pg[8.0( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=33/33 lis/c=33/33 les/c/f=34/34/0 sis=49 pruub=10.740422249s) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 lcod 34'5 mlcod 0'0 unknown pruub 87.916488647s@ mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:48 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 49 pg[6.5( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=22/23 n=2 ec=48/22 lis/c=22/22 les/c/f=23/23/0 sis=48) [0] r=0 lpr=48 pi=[22,48)/1 crt=36'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:48 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 49 pg[6.a( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=22/23 n=1 ec=48/22 lis/c=22/22 les/c/f=23/23/0 sis=48) [0] r=0 lpr=48 pi=[22,48)/1 crt=36'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:48 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 49 pg[6.9( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=22/23 n=1 ec=48/22 lis/c=22/22 les/c/f=23/23/0 sis=48) [0] r=0 lpr=48 pi=[22,48)/1 crt=36'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:48 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 49 pg[6.4( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=22/23 n=2 ec=48/22 lis/c=22/22 les/c/f=23/23/0 sis=48) [0] r=0 lpr=48 pi=[22,48)/1 crt=36'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:48 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 49 pg[6.8( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=22/23 n=1 ec=48/22 lis/c=22/22 les/c/f=23/23/0 sis=48) [0] r=0 lpr=48 pi=[22,48)/1 crt=36'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:48 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 49 pg[6.7( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=22/23 n=1 ec=48/22 lis/c=22/22 les/c/f=23/23/0 sis=48) [0] r=0 lpr=48 pi=[22,48)/1 crt=36'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:48 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 49 pg[6.6( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=22/23 n=2 ec=48/22 lis/c=22/22 les/c/f=23/23/0 sis=48) [0] r=0 lpr=48 pi=[22,48)/1 crt=36'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:48 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 49 pg[6.b( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=22/23 n=1 ec=48/22 lis/c=22/22 les/c/f=23/23/0 sis=48) [0] r=0 lpr=48 pi=[22,48)/1 crt=36'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:48 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 49 pg[6.3( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=22/23 n=2 ec=48/22 lis/c=22/22 les/c/f=23/23/0 sis=48) [0] r=0 lpr=48 pi=[22,48)/1 crt=36'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:48 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 49 pg[6.1( v 36'39 (0'0,36'39] local-lis/les=22/23 n=2 ec=48/22 lis/c=22/22 les/c/f=23/23/0 sis=48) [0] r=0 lpr=48 pi=[22,48)/1 crt=36'39 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:48 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 49 pg[6.2( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=22/23 n=2 ec=48/22 lis/c=22/22 les/c/f=23/23/0 sis=48) [0] r=0 lpr=48 pi=[22,48)/1 crt=36'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:48 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 49 pg[6.e( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=22/23 n=1 ec=48/22 lis/c=22/22 les/c/f=23/23/0 sis=48) [0] r=0 lpr=48 pi=[22,48)/1 crt=36'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:48 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 49 pg[6.f( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=22/23 n=1 ec=48/22 lis/c=22/22 les/c/f=23/23/0 sis=48) [0] r=0 lpr=48 pi=[22,48)/1 crt=36'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:48 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 49 pg[6.c( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=22/23 n=1 ec=48/22 lis/c=22/22 les/c/f=23/23/0 sis=48) [0] r=0 lpr=48 pi=[22,48)/1 crt=36'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:48 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 49 pg[6.d( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=22/23 n=1 ec=48/22 lis/c=22/22 les/c/f=23/23/0 sis=48) [0] r=0 lpr=48 pi=[22,48)/1 crt=36'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:48 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 49 pg[6.5( v 36'39 (0'0,36'39] local-lis/les=48/49 n=2 ec=48/22 lis/c=22/22 les/c/f=23/23/0 sis=48) [0] r=0 lpr=48 pi=[22,48)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:48 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 49 pg[6.a( v 36'39 (0'0,36'39] local-lis/les=48/49 n=1 ec=48/22 lis/c=22/22 les/c/f=23/23/0 sis=48) [0] r=0 lpr=48 pi=[22,48)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:48 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 49 pg[6.9( v 36'39 (0'0,36'39] local-lis/les=48/49 n=1 ec=48/22 lis/c=22/22 les/c/f=23/23/0 sis=48) [0] r=0 lpr=48 pi=[22,48)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:49 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 49 pg[5.1f( empty local-lis/les=46/49 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [2] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:49 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 49 pg[6.4( v 36'39 (0'0,36'39] local-lis/les=48/49 n=2 ec=48/22 lis/c=22/22 les/c/f=23/23/0 sis=48) [0] r=0 lpr=48 pi=[22,48)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:49 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 49 pg[6.6( v 36'39 (0'0,36'39] local-lis/les=48/49 n=2 ec=48/22 lis/c=22/22 les/c/f=23/23/0 sis=48) [0] r=0 lpr=48 pi=[22,48)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:49 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 49 pg[6.8( v 36'39 (0'0,36'39] local-lis/les=48/49 n=1 ec=48/22 lis/c=22/22 les/c/f=23/23/0 sis=48) [0] r=0 lpr=48 pi=[22,48)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:49 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 49 pg[6.7( v 36'39 (0'0,36'39] local-lis/les=48/49 n=1 ec=48/22 lis/c=22/22 les/c/f=23/23/0 sis=48) [0] r=0 lpr=48 pi=[22,48)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:49 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 49 pg[6.0( v 36'39 (0'0,36'39] local-lis/les=48/49 n=1 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=48) [0] r=0 lpr=48 pi=[22,48)/1 crt=36'39 lcod 35'38 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:49 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 49 pg[6.3( v 36'39 (0'0,36'39] local-lis/les=48/49 n=2 ec=48/22 lis/c=22/22 les/c/f=23/23/0 sis=48) [0] r=0 lpr=48 pi=[22,48)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:49 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 49 pg[6.b( v 36'39 (0'0,36'39] local-lis/les=48/49 n=1 ec=48/22 lis/c=22/22 les/c/f=23/23/0 sis=48) [0] r=0 lpr=48 pi=[22,48)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:49 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 49 pg[6.f( v 36'39 (0'0,36'39] local-lis/les=48/49 n=1 ec=48/22 lis/c=22/22 les/c/f=23/23/0 sis=48) [0] r=0 lpr=48 pi=[22,48)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:49 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 49 pg[6.2( v 36'39 (0'0,36'39] local-lis/les=48/49 n=2 ec=48/22 lis/c=22/22 les/c/f=23/23/0 sis=48) [0] r=0 lpr=48 pi=[22,48)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:49 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 49 pg[6.e( v 36'39 (0'0,36'39] local-lis/les=48/49 n=1 ec=48/22 lis/c=22/22 les/c/f=23/23/0 sis=48) [0] r=0 lpr=48 pi=[22,48)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:49 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 49 pg[6.c( v 36'39 (0'0,36'39] local-lis/les=48/49 n=1 ec=48/22 lis/c=22/22 les/c/f=23/23/0 sis=48) [0] r=0 lpr=48 pi=[22,48)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:49 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 49 pg[6.d( v 36'39 (0'0,36'39] local-lis/les=48/49 n=1 ec=48/22 lis/c=22/22 les/c/f=23/23/0 sis=48) [0] r=0 lpr=48 pi=[22,48)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:49 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 49 pg[6.1( v 36'39 (0'0,36'39] local-lis/les=48/49 n=2 ec=48/22 lis/c=22/22 les/c/f=23/23/0 sis=48) [0] r=0 lpr=48 pi=[22,48)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:49 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 49 pg[5.1e( empty local-lis/les=46/49 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [2] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:49 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 49 pg[5.1d( empty local-lis/les=46/49 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [2] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:49 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 49 pg[5.10( empty local-lis/les=46/49 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [2] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:49 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 49 pg[5.13( empty local-lis/les=46/49 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [2] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:49 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 49 pg[5.15( empty local-lis/les=46/49 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [2] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:49 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 49 pg[5.14( empty local-lis/les=46/49 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [2] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:49 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 49 pg[5.12( empty local-lis/les=46/49 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [2] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:49 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 49 pg[5.17( empty local-lis/les=46/49 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [2] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:49 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 49 pg[5.16( empty local-lis/les=46/49 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [2] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:49 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 49 pg[5.8( empty local-lis/les=46/49 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [2] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:49 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 49 pg[5.b( empty local-lis/les=46/49 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [2] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:49 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 49 pg[5.a( empty local-lis/les=46/49 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [2] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:49 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 49 pg[5.c( empty local-lis/les=46/49 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [2] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:49 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 49 pg[5.9( empty local-lis/les=46/49 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [2] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:49 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 49 pg[5.0( empty local-lis/les=46/49 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [2] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:49 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 49 pg[5.7( empty local-lis/les=46/49 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [2] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:49 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 49 pg[5.5( empty local-lis/les=46/49 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [2] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:49 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 49 pg[5.4( empty local-lis/les=46/49 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [2] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:49 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 49 pg[5.6( empty local-lis/les=46/49 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [2] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:49 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 49 pg[5.2( empty local-lis/les=46/49 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [2] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:49 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 49 pg[5.3( empty local-lis/les=46/49 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [2] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:49 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 49 pg[5.f( empty local-lis/les=46/49 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [2] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:49 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 49 pg[5.1( empty local-lis/les=46/49 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [2] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:49 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 49 pg[5.e( empty local-lis/les=46/49 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [2] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:49 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 49 pg[5.19( empty local-lis/les=46/49 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [2] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:49 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 49 pg[5.d( empty local-lis/les=46/49 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [2] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:49 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 49 pg[5.1b( empty local-lis/les=46/49 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [2] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:49 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 49 pg[5.1a( empty local-lis/les=46/49 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [2] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:49 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 49 pg[5.18( empty local-lis/les=46/49 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [2] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:49 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 49 pg[5.1c( empty local-lis/les=46/49 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [2] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:49 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 49 pg[5.11( empty local-lis/les=46/49 n=0 ec=46/21 lis/c=21/21 les/c/f=22/22/0 sis=46) [2] r=0 lpr=46 pi=[21,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:49 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Jan 29 11:51:49 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Jan 29 11:51:49 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 29 11:51:49 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 29 11:51:49 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Jan 29 11:51:49 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Jan 29 11:51:49 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Jan 29 11:51:49 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} : dispatch
Jan 29 11:51:49 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Jan 29 11:51:49 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Jan 29 11:51:49 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Jan 29 11:51:49 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Jan 29 11:51:49 np0005601226 ceph-mgr[75527]: [progress INFO root] update: starting ev 7b035ef6-522b-4aca-92ea-5609d8c3a9ae (PG autoscaler increasing pool 10 PGs from 1 to 32)
Jan 29 11:51:49 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0)
Jan 29 11:51:49 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} : dispatch
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[8.1d( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[7.13( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[8.1e( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[7.12( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[7.11( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[8.1c( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[8.1f( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[8.18( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[7.10( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[7.17( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[8.19( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[7.16( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[8.1a( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[7.15( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[8.1b( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[7.14( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[8.4( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=1 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[7.b( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[8.5( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=1 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[8.6( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=1 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[7.a( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[7.9( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[8.7( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[7.8( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[8.2( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=1 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[7.d( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[8.9( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[7.6( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[8.b( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[7.4( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[8.f( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[7.f( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[8.1( v 34'6 (0'0,34'6] local-lis/les=33/34 n=1 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[7.e( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[7.c( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[8.a( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[8.3( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=1 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[8.8( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[7.5( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[7.7( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[8.e( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[8.d( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[7.1( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[7.2( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[8.c( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[7.3( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[7.1c( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[8.12( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[8.13( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[7.1d( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[8.11( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[7.1e( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[8.10( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[8.17( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[7.18( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[7.1f( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[7.19( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[8.16( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[8.15( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[7.1a( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[8.14( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[7.1b( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[8.1d( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:50 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[8.1e( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[7.12( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[7.11( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[8.1f( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[7.10( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[7.17( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[8.18( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[8.19( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[8.1a( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[7.15( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[8.1b( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[8.5( v 34'6 (0'0,34'6] local-lis/les=49/50 n=1 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[7.14( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[7.16( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[7.a( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[8.4( v 34'6 (0'0,34'6] local-lis/les=49/50 n=1 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[8.6( v 34'6 (0'0,34'6] local-lis/les=49/50 n=1 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[7.b( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[7.9( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[8.7( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[7.8( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[7.d( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[8.9( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[7.13( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[8.2( v 34'6 (0'0,34'6] local-lis/les=49/50 n=1 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[7.6( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[7.4( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[8.1c( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[8.b( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[7.0( empty local-lis/les=49/50 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[8.f( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[8.0( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=33/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 lcod 34'5 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[7.f( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[7.c( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[8.a( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[8.3( v 34'6 (0'0,34'6] local-lis/les=49/50 n=1 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[7.e( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[8.8( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[7.7( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[7.5( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[8.d( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[8.e( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[7.2( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[7.3( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[8.c( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[7.1c( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[8.12( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[8.1( v 34'6 (0'0,34'6] local-lis/les=49/50 n=1 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[8.11( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[7.1( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[7.1e( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[8.13( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[8.10( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[8.17( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[7.18( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[8.15( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[7.19( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[7.1a( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[7.1b( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[8.16( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[8.14( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[7.1f( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 50 pg[7.1d( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:50 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v107: 212 pgs: 1 active+clean+scrubbing, 32 peering, 62 unknown, 117 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:51:50 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0)
Jan 29 11:51:50 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 29 11:51:50 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0)
Jan 29 11:51:50 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 29 11:51:50 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Jan 29 11:51:50 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} : dispatch
Jan 29 11:51:50 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 29 11:51:50 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Jan 29 11:51:50 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Jan 29 11:51:50 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Jan 29 11:51:50 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Jan 29 11:51:50 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Jan 29 11:51:50 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Jan 29 11:51:50 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Jan 29 11:51:51 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Jan 29 11:51:51 np0005601226 ceph-mgr[75527]: [progress INFO root] update: starting ev 2e468738-d8ce-4798-a38e-5ff2e46e3a25 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Jan 29 11:51:51 np0005601226 ceph-mgr[75527]: [progress INFO root] complete: finished ev 27c9b0d1-6c8e-4a87-b3af-188ad314394c (PG autoscaler increasing pool 2 PGs from 1 to 32)
Jan 29 11:51:51 np0005601226 ceph-mgr[75527]: [progress INFO root] Completed event 27c9b0d1-6c8e-4a87-b3af-188ad314394c (PG autoscaler increasing pool 2 PGs from 1 to 32) in 10 seconds
Jan 29 11:51:51 np0005601226 ceph-mgr[75527]: [progress INFO root] complete: finished ev 468204ee-1624-48c8-b9e8-7b8a58c08486 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Jan 29 11:51:51 np0005601226 ceph-mgr[75527]: [progress INFO root] Completed event 468204ee-1624-48c8-b9e8-7b8a58c08486 (PG autoscaler increasing pool 3 PGs from 1 to 32) in 9 seconds
Jan 29 11:51:51 np0005601226 ceph-mgr[75527]: [progress INFO root] complete: finished ev f672d449-6013-490a-b93f-77a5a4728382 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Jan 29 11:51:51 np0005601226 ceph-mgr[75527]: [progress INFO root] Completed event f672d449-6013-490a-b93f-77a5a4728382 (PG autoscaler increasing pool 4 PGs from 1 to 32) in 8 seconds
Jan 29 11:51:51 np0005601226 ceph-mgr[75527]: [progress INFO root] complete: finished ev 089b28d0-cd64-4e47-808a-d01c023786f1 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Jan 29 11:51:51 np0005601226 ceph-mgr[75527]: [progress INFO root] Completed event 089b28d0-cd64-4e47-808a-d01c023786f1 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 7 seconds
Jan 29 11:51:51 np0005601226 ceph-mgr[75527]: [progress INFO root] complete: finished ev de5fdfa2-01e1-4ae0-8075-5c2589293c62 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Jan 29 11:51:51 np0005601226 ceph-mgr[75527]: [progress INFO root] Completed event de5fdfa2-01e1-4ae0-8075-5c2589293c62 (PG autoscaler increasing pool 6 PGs from 1 to 16) in 6 seconds
Jan 29 11:51:51 np0005601226 ceph-mgr[75527]: [progress INFO root] complete: finished ev 7e2b0fef-b354-4a37-bf47-7412ce9b3aad (PG autoscaler increasing pool 7 PGs from 1 to 32)
Jan 29 11:51:51 np0005601226 ceph-mgr[75527]: [progress INFO root] Completed event 7e2b0fef-b354-4a37-bf47-7412ce9b3aad (PG autoscaler increasing pool 7 PGs from 1 to 32) in 5 seconds
Jan 29 11:51:51 np0005601226 ceph-mgr[75527]: [progress INFO root] complete: finished ev f7ab59ce-3141-429a-9e82-7bcfd5f9e5ca (PG autoscaler increasing pool 8 PGs from 1 to 32)
Jan 29 11:51:51 np0005601226 ceph-mgr[75527]: [progress INFO root] Completed event f7ab59ce-3141-429a-9e82-7bcfd5f9e5ca (PG autoscaler increasing pool 8 PGs from 1 to 32) in 4 seconds
Jan 29 11:51:51 np0005601226 ceph-mgr[75527]: [progress INFO root] complete: finished ev a611f962-e959-4d05-bcb6-1ddc07ba9e53 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Jan 29 11:51:51 np0005601226 ceph-mgr[75527]: [progress INFO root] Completed event a611f962-e959-4d05-bcb6-1ddc07ba9e53 (PG autoscaler increasing pool 9 PGs from 1 to 32) in 2 seconds
Jan 29 11:51:51 np0005601226 ceph-mgr[75527]: [progress INFO root] complete: finished ev 7b035ef6-522b-4aca-92ea-5609d8c3a9ae (PG autoscaler increasing pool 10 PGs from 1 to 32)
Jan 29 11:51:51 np0005601226 ceph-mgr[75527]: [progress INFO root] Completed event 7b035ef6-522b-4aca-92ea-5609d8c3a9ae (PG autoscaler increasing pool 10 PGs from 1 to 32) in 1 seconds
Jan 29 11:51:51 np0005601226 ceph-mgr[75527]: [progress INFO root] complete: finished ev 2e468738-d8ce-4798-a38e-5ff2e46e3a25 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Jan 29 11:51:51 np0005601226 ceph-mgr[75527]: [progress INFO root] Completed event 2e468738-d8ce-4798-a38e-5ff2e46e3a25 (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Jan 29 11:51:51 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 51 pg[9.0( v 41'483 (0'0,41'483] local-lis/les=35/36 n=210 ec=35/35 lis/c=35/35 les/c/f=36/36/0 sis=51 pruub=10.425541878s) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 lcod 41'482 mlcod 41'482 active pruub 90.204826355s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:51:51 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 51 pg[9.0( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=6 ec=35/35 lis/c=35/35 les/c/f=36/36/0 sis=51 pruub=10.425541878s) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 lcod 41'482 mlcod 0'0 unknown pruub 90.204826355s@ mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:51 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55f5131b0b40) split_cache   moving buffer(0x55f512ccaf80 space 0x55f5122c6840 0x0~9a clean)
Jan 29 11:51:51 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55f5131b0b40) split_cache   moving buffer(0x55f512cfa900 space 0x55f5122a2240 0x0~9a clean)
Jan 29 11:51:51 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55f5131b0b40) split_cache   moving buffer(0x55f512bc4400 space 0x55f511ec6540 0x0~98 clean)
Jan 29 11:51:51 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55f5131b0b40) split_cache   moving buffer(0x55f512bf8180 space 0x55f511f50e40 0x0~9a clean)
Jan 29 11:51:51 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55f5131b0b40) split_cache   moving buffer(0x55f512cfe880 space 0x55f5122c7140 0x0~9a clean)
Jan 29 11:51:51 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55f5131b0b40) split_cache   moving buffer(0x55f512cfa500 space 0x55f5122c2b40 0x0~9a clean)
Jan 29 11:51:51 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55f5131b0b40) split_cache   moving buffer(0x55f512ccaa80 space 0x55f512bf0240 0x0~6e clean)
Jan 29 11:51:51 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55f5131b0b40) split_cache   moving buffer(0x55f512bc9800 space 0x55f5123ac840 0x0~6e clean)
Jan 29 11:51:51 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55f5131b0b40) split_cache   moving buffer(0x55f512cb4600 space 0x55f51239c240 0x0~6e clean)
Jan 29 11:51:51 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55f5131b0b40) split_cache   moving buffer(0x55f512bc9a00 space 0x55f5123a1d40 0x0~6e clean)
Jan 29 11:51:51 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55f5131b0b40) split_cache   moving buffer(0x55f512ce0800 space 0x55f5124ceb40 0x0~6e clean)
Jan 29 11:51:51 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55f5131b0b40) split_cache   moving buffer(0x55f512ccb100 space 0x55f51313e540 0x0~6e clean)
Jan 29 11:51:51 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55f5131b0b40) split_cache   moving buffer(0x55f512bcd980 space 0x55f512397d40 0x0~9a clean)
Jan 29 11:51:51 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55f5131b0b40) split_cache   moving buffer(0x55f512bc9380 space 0x55f5124cd140 0x0~6e clean)
Jan 29 11:51:51 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55f5131b0b40) split_cache   moving buffer(0x55f512bc8d80 space 0x55f5123ada40 0x0~6e clean)
Jan 29 11:51:51 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55f5131b0b40) split_cache   moving buffer(0x55f512bbdc80 space 0x55f511f50540 0x0~98 clean)
Jan 29 11:51:51 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55f5131b0b40) split_cache   moving buffer(0x55f512cfab00 space 0x55f5122a3a40 0x0~9a clean)
Jan 29 11:51:51 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55f5131b0b40) split_cache   moving buffer(0x55f512cb4a00 space 0x55f5124cfd40 0x0~6e clean)
Jan 29 11:51:51 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55f5131b0b40) split_cache   moving buffer(0x55f512bc8980 space 0x55f5123e4b40 0x0~6e clean)
Jan 29 11:51:51 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55f5131b0b40) split_cache   moving buffer(0x55f512ce1180 space 0x55f512bf1d40 0x0~6e clean)
Jan 29 11:51:51 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55f5131b0b40) split_cache   moving buffer(0x55f512cfbd00 space 0x55f5123e5d40 0x0~6e clean)
Jan 29 11:51:51 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55f5131b0b40) split_cache   moving buffer(0x55f512ba2080 space 0x55f51239cb40 0x0~6e clean)
Jan 29 11:51:51 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55f5131b0b40) split_cache   moving buffer(0x55f512ce8800 space 0x55f5130d5740 0x0~6e clean)
Jan 29 11:51:51 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55f5131b0b40) split_cache   moving buffer(0x55f512bc8f80 space 0x55f5123ad140 0x0~6e clean)
Jan 29 11:51:51 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55f5131b0b40) split_cache   moving buffer(0x55f512bf8f80 space 0x55f511ec6e40 0x0~98 clean)
Jan 29 11:51:51 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55f5131b0b40) split_cache   moving buffer(0x55f512bae180 space 0x55f5124cc840 0x0~6e clean)
Jan 29 11:51:51 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55f5131b0b40) split_cache   moving buffer(0x55f512bcd380 space 0x55f5122a2b40 0x0~9a clean)
Jan 29 11:51:51 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55f5131b0b40) split_cache   moving buffer(0x55f512bc4080 space 0x55f512428e40 0x0~98 clean)
Jan 29 11:51:51 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55f5131b0b40) split_cache   moving buffer(0x55f512bbc500 space 0x55f512464540 0x0~6e clean)
Jan 29 11:51:51 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55f5131b0b40) split_cache   moving buffer(0x55f512bc8f00 space 0x55f5123a8b40 0x0~9a clean)
Jan 29 11:51:51 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55f5131b0b40) split_cache   moving buffer(0x55f512cfbd80 space 0x55f512494840 0x0~9a clean)
Jan 29 11:51:51 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55f5131b0b40) split_cache   moving buffer(0x55f512bc8b80 space 0x55f5123e4240 0x0~6e clean)
Jan 29 11:51:51 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55f5131b0b40) split_cache   moving buffer(0x55f51222a080 space 0x55f5122b9140 0x0~9a clean)
Jan 29 11:51:51 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55f5131b0b40) split_cache   moving buffer(0x55f512bcd300 space 0x55f512396240 0x0~9a clean)
Jan 29 11:51:51 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55f5131b0b40) split_cache   moving buffer(0x55f512cd3780 space 0x55f512bf0b40 0x0~6e clean)
Jan 29 11:51:51 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55f5131b0b40) split_cache   moving buffer(0x55f512bcc000 space 0x55f51241c840 0x0~98 clean)
Jan 29 11:51:51 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55f5131b0b40) split_cache   moving buffer(0x55f512b9c880 space 0x55f5123a0240 0x0~6e clean)
Jan 29 11:51:51 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55f5131b0b40) split_cache   moving buffer(0x55f512bcda80 space 0x55f512429740 0x0~98 clean)
Jan 29 11:51:51 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55f5131b0b40) split_cache   moving buffer(0x55f51222a180 space 0x55f5122c6240 0x0~9a clean)
Jan 29 11:51:51 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55f5131b0b40) split_cache   moving buffer(0x55f512cfac00 space 0x55f5122c7d40 0x0~9a clean)
Jan 29 11:51:51 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55f5131b0b40) split_cache   moving buffer(0x55f512bae380 space 0x55f5123a0b40 0x0~6e clean)
Jan 29 11:51:51 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55f5131b0b40) split_cache   moving buffer(0x55f512bc9180 space 0x55f5123ae240 0x0~6e clean)
Jan 29 11:51:51 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55f5131b0b40) split_cache   moving buffer(0x55f512bbda00 space 0x55f5122af740 0x0~9a clean)
Jan 29 11:51:51 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55f5131b0b40) split_cache   moving buffer(0x55f512cc3b00 space 0x55f5130d4e40 0x0~6e clean)
Jan 29 11:51:51 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55f5131b0b40) split_cache   moving buffer(0x55f512cca880 space 0x55f5123abd40 0x0~9a clean)
Jan 29 11:51:51 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55f5131b0b40) split_cache   moving buffer(0x55f512cca480 space 0x55f5122b8b40 0x0~9a clean)
Jan 29 11:51:51 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55f5131b0b40) split_cache   moving buffer(0x55f512bf9e80 space 0x55f51241da40 0x0~98 clean)
Jan 29 11:51:51 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55f5131b0b40) split_cache   moving buffer(0x55f512ccaa00 space 0x55f51313f740 0x0~6e clean)
Jan 29 11:51:51 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55f5131b0b40) split_cache   moving buffer(0x55f512cfe380 space 0x55f5122b8240 0x0~9a clean)
Jan 29 11:51:51 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55f5131b0b40) split_cache   moving buffer(0x55f512ba6700 space 0x55f51239d440 0x0~6e clean)
Jan 29 11:51:51 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55f5131b0b40) split_cache   moving buffer(0x55f512bbc300 space 0x55f5122b9d40 0x0~9a clean)
Jan 29 11:51:51 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55f5131b0b40) split_cache   moving buffer(0x55f512cfbe80 space 0x55f5123e5440 0x0~6e clean)
Jan 29 11:51:51 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55f5131b0b40) split_cache   moving buffer(0x55f512bf9f00 space 0x55f5123ab140 0x0~9a clean)
Jan 29 11:51:51 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55f5131b0b40) split_cache   moving buffer(0x55f512bbdb80 space 0x55f5122aeb40 0x0~9a clean)
Jan 29 11:51:51 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55f5131b0b40) split_cache   moving buffer(0x55f512bc9e80 space 0x55f512bf1440 0x0~6e clean)
Jan 29 11:51:51 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55f5131b0b40) split_cache   moving buffer(0x55f512ce1000 space 0x55f5124cf440 0x0~6e clean)
Jan 29 11:51:51 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55f5131b0b40) split_cache   moving buffer(0x55f512bae280 space 0x55f5124ce240 0x0~6e clean)
Jan 29 11:51:51 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55f5131b0b40) split_cache   moving buffer(0x55f512ce1480 space 0x55f5123d0e40 0x0~98 clean)
Jan 29 11:51:51 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55f5131b0b40) split_cache   moving buffer(0x55f512cfa380 space 0x55f5123aeb40 0x0~6e clean)
Jan 29 11:51:51 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Jan 29 11:51:51 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Jan 29 11:51:51 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Jan 29 11:51:52 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Jan 29 11:51:52 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Jan 29 11:51:52 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Jan 29 11:51:52 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v110: 274 pgs: 1 active+clean+scrubbing, 32 peering, 124 unknown, 117 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:51:52 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0)
Jan 29 11:51:52 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 29 11:51:52 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 52 pg[9.15( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=6 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:52 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 52 pg[9.14( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=6 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:52 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 52 pg[9.17( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=6 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:52 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 52 pg[9.16( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=6 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:52 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 52 pg[9.11( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=7 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:52 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 52 pg[9.10( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=7 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:52 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 52 pg[9.13( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=6 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:52 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 52 pg[9.12( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=7 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:52 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 52 pg[9.d( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=7 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:52 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 51 pg[10.0( v 41'18 (0'0,41'18] local-lis/les=37/38 n=9 ec=37/37 lis/c=37/37 les/c/f=38/38/0 sis=51 pruub=11.650066376s) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 lcod 41'17 mlcod 41'17 active pruub 87.870643616s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:51:52 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 52 pg[9.c( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=7 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:52 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 52 pg[9.f( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=7 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:52 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 52 pg[9.9( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=7 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:52 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 52 pg[9.b( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=7 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:52 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 52 pg[9.2( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=7 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:52 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 52 pg[9.1( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=7 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:52 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 52 pg[9.a( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=7 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:52 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 52 pg[9.e( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=7 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:52 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 52 pg[9.8( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=7 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:52 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 52 pg[9.3( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=7 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:52 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 52 pg[9.7( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=7 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:52 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 52 pg[9.6( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=7 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:52 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 52 pg[9.4( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=7 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:52 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 52 pg[9.5( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=7 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:52 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 52 pg[9.1a( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=6 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:52 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 52 pg[9.1b( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=6 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:52 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 52 pg[9.18( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=6 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:52 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 52 pg[9.19( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=6 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:52 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 52 pg[9.1f( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=6 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:52 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 52 pg[9.1c( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=6 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:52 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 52 pg[9.1d( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=6 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:52 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 52 pg[9.1e( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=6 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:52 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 51 pg[10.0( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=37/37 lis/c=37/37 les/c/f=38/38/0 sis=51 pruub=11.650066376s) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 lcod 41'17 mlcod 0'0 unknown pruub 87.870643616s@ mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:52 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 52 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:52 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 52 pg[10.11( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:52 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 52 pg[10.1f( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:52 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 52 pg[10.10( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:52 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 52 pg[10.1e( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:52 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 52 pg[10.1d( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:52 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 52 pg[10.1c( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:52 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 52 pg[10.1b( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:52 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 52 pg[10.12( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:52 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 52 pg[10.1a( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:52 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 52 pg[10.19( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:52 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 52 pg[10.18( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:52 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 52 pg[10.6( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=1 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:52 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 52 pg[10.7( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=1 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:52 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 52 pg[10.5( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=1 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:52 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 52 pg[10.4( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=1 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:52 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 52 pg[10.8( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=1 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:52 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 52 pg[10.3( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=1 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:52 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 52 pg[10.9( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=1 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:52 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 52 pg[10.f( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:52 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 52 pg[10.b( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:52 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 52 pg[10.d( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:52 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 52 pg[10.e( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:52 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 52 pg[10.a( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:52 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 52 pg[10.1( v 41'18 (0'0,41'18] local-lis/les=37/38 n=1 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:52 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 52 pg[10.c( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:52 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 52 pg[10.2( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=1 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:52 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 52 pg[10.13( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:52 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 52 pg[10.14( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:52 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 52 pg[10.15( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:52 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 52 pg[10.16( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:52 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 52 pg[10.17( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:52 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 52 pg[9.11( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:52 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 52 pg[9.17( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:52 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 52 pg[9.10( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:52 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 52 pg[9.d( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:52 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 52 pg[9.14( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:52 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 52 pg[9.12( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:52 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 52 pg[9.c( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:52 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 52 pg[9.f( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:52 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 52 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:52 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 52 pg[9.13( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:52 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 52 pg[9.9( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:52 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 52 pg[9.b( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:52 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 52 pg[9.2( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:52 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 52 pg[9.0( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=35/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 lcod 41'482 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:52 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 52 pg[9.a( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:52 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 52 pg[9.e( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:52 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 52 pg[9.8( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:52 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 52 pg[9.3( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:52 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 52 pg[9.4( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:52 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 52 pg[9.6( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:52 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 52 pg[9.7( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:52 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 52 pg[9.5( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:52 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 52 pg[9.1a( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:52 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 52 pg[9.1( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:52 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 52 pg[9.18( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:52 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 52 pg[9.1b( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:52 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 52 pg[9.19( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:52 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 52 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:52 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 52 pg[9.1c( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:52 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 52 pg[9.1d( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:52 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 52 pg[9.1e( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:53 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Jan 29 11:51:53 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Jan 29 11:51:53 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Jan 29 11:51:53 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Jan 29 11:51:53 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 29 11:51:53 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 53 pg[10.12( v 41'18 (0'0,41'18] local-lis/les=51/53 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:53 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 53 pg[10.11( v 41'18 (0'0,41'18] local-lis/les=51/53 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:53 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 53 pg[10.10( v 41'18 (0'0,41'18] local-lis/les=51/53 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:53 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 53 pg[10.1f( v 41'18 (0'0,41'18] local-lis/les=51/53 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:53 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 53 pg[10.1d( v 41'18 (0'0,41'18] local-lis/les=51/53 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:53 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 53 pg[10.1e( v 41'18 (0'0,41'18] local-lis/les=51/53 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:53 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 53 pg[10.18( v 41'18 (0'0,41'18] local-lis/les=51/53 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:53 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 53 pg[10.1a( v 41'18 (0'0,41'18] local-lis/les=51/53 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:53 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 53 pg[10.1c( v 41'18 (0'0,41'18] local-lis/les=51/53 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:53 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 53 pg[10.1b( v 41'18 (0'0,41'18] local-lis/les=51/53 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:53 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 53 pg[10.19( v 41'18 (0'0,41'18] local-lis/les=51/53 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:53 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 53 pg[10.6( v 41'18 (0'0,41'18] local-lis/les=51/53 n=1 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:53 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 53 pg[10.7( v 41'18 (0'0,41'18] local-lis/les=51/53 n=1 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:53 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 53 pg[10.4( v 41'18 (0'0,41'18] local-lis/les=51/53 n=1 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:53 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 53 pg[10.3( v 41'18 (0'0,41'18] local-lis/les=51/53 n=1 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:53 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 53 pg[10.0( v 41'18 (0'0,41'18] local-lis/les=51/53 n=0 ec=37/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 lcod 41'17 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:53 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 53 pg[10.5( v 41'18 (0'0,41'18] local-lis/les=51/53 n=1 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:53 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 53 pg[10.8( v 41'18 (0'0,41'18] local-lis/les=51/53 n=1 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:53 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 53 pg[10.a( v 41'18 (0'0,41'18] local-lis/les=51/53 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:53 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 53 pg[10.c( v 41'18 (0'0,41'18] local-lis/les=51/53 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:53 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 53 pg[10.b( v 41'18 (0'0,41'18] local-lis/les=51/53 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:53 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 53 pg[10.9( v 41'18 (0'0,41'18] local-lis/les=51/53 n=1 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:53 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 53 pg[10.e( v 41'18 (0'0,41'18] local-lis/les=51/53 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:53 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 53 pg[10.d( v 41'18 (0'0,41'18] local-lis/les=51/53 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:53 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 53 pg[10.13( v 41'18 (0'0,41'18] local-lis/les=51/53 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:53 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 53 pg[10.1( v 41'18 (0'0,41'18] local-lis/les=51/53 n=1 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:53 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 53 pg[10.14( v 41'18 (0'0,41'18] local-lis/les=51/53 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:53 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 53 pg[10.17( v 41'18 (0'0,41'18] local-lis/les=51/53 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:53 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 53 pg[10.2( v 41'18 (0'0,41'18] local-lis/les=51/53 n=1 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:53 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 53 pg[10.15( v 41'18 (0'0,41'18] local-lis/les=51/53 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:53 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 53 pg[10.16( v 41'18 (0'0,41'18] local-lis/les=51/53 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:53 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 53 pg[10.f( v 41'18 (0'0,41'18] local-lis/les=51/53 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:53 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 53 pg[11.0( empty local-lis/les=39/40 n=0 ec=39/39 lis/c=39/39 les/c/f=40/40/0 sis=53 pruub=12.371321678s) [1] r=0 lpr=53 pi=[39,53)/1 crt=0'0 mlcod 0'0 active pruub 94.323585510s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:51:53 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 53 pg[11.0( empty local-lis/les=39/40 n=0 ec=39/39 lis/c=39/39 les/c/f=40/40/0 sis=53 pruub=12.371321678s) [1] r=0 lpr=53 pi=[39,53)/1 crt=0'0 mlcod 0'0 unknown pruub 94.323585510s@ mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:54 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Jan 29 11:51:54 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Jan 29 11:51:54 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Jan 29 11:51:54 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Jan 29 11:51:54 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v112: 305 pgs: 32 peering, 31 unknown, 242 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:51:54 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Jan 29 11:51:54 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Jan 29 11:51:54 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Jan 29 11:51:54 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Jan 29 11:51:54 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 54 pg[11.17( empty local-lis/les=39/40 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [1] r=0 lpr=53 pi=[39,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:54 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 54 pg[11.16( empty local-lis/les=39/40 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [1] r=0 lpr=53 pi=[39,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:54 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 54 pg[11.15( empty local-lis/les=39/40 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [1] r=0 lpr=53 pi=[39,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:54 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 54 pg[11.14( empty local-lis/les=39/40 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [1] r=0 lpr=53 pi=[39,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:54 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 54 pg[11.13( empty local-lis/les=39/40 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [1] r=0 lpr=53 pi=[39,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:54 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 54 pg[11.12( empty local-lis/les=39/40 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [1] r=0 lpr=53 pi=[39,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:54 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 54 pg[11.11( empty local-lis/les=39/40 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [1] r=0 lpr=53 pi=[39,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:54 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 54 pg[11.10( empty local-lis/les=39/40 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [1] r=0 lpr=53 pi=[39,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:54 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 54 pg[11.e( empty local-lis/les=39/40 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [1] r=0 lpr=53 pi=[39,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:54 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 54 pg[11.f( empty local-lis/les=39/40 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [1] r=0 lpr=53 pi=[39,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:54 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 54 pg[11.d( empty local-lis/les=39/40 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [1] r=0 lpr=53 pi=[39,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:54 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 54 pg[11.b( empty local-lis/les=39/40 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [1] r=0 lpr=53 pi=[39,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:54 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 54 pg[11.9( empty local-lis/les=39/40 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [1] r=0 lpr=53 pi=[39,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:54 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 54 pg[11.2( empty local-lis/les=39/40 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [1] r=0 lpr=53 pi=[39,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:54 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 54 pg[11.3( empty local-lis/les=39/40 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [1] r=0 lpr=53 pi=[39,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:54 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 54 pg[11.8( empty local-lis/les=39/40 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [1] r=0 lpr=53 pi=[39,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:54 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 54 pg[11.c( empty local-lis/les=39/40 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [1] r=0 lpr=53 pi=[39,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:54 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 54 pg[11.a( empty local-lis/les=39/40 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [1] r=0 lpr=53 pi=[39,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:54 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 54 pg[11.4( empty local-lis/les=39/40 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [1] r=0 lpr=53 pi=[39,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:54 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 54 pg[11.6( empty local-lis/les=39/40 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [1] r=0 lpr=53 pi=[39,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:54 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 54 pg[11.5( empty local-lis/les=39/40 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [1] r=0 lpr=53 pi=[39,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:54 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 54 pg[11.1( empty local-lis/les=39/40 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [1] r=0 lpr=53 pi=[39,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:54 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 54 pg[11.7( empty local-lis/les=39/40 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [1] r=0 lpr=53 pi=[39,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:54 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 54 pg[11.18( empty local-lis/les=39/40 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [1] r=0 lpr=53 pi=[39,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:54 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 54 pg[11.19( empty local-lis/les=39/40 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [1] r=0 lpr=53 pi=[39,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:54 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 54 pg[11.1a( empty local-lis/les=39/40 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [1] r=0 lpr=53 pi=[39,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:54 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 54 pg[11.1b( empty local-lis/les=39/40 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [1] r=0 lpr=53 pi=[39,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:54 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 54 pg[11.1d( empty local-lis/les=39/40 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [1] r=0 lpr=53 pi=[39,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:54 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 54 pg[11.1c( empty local-lis/les=39/40 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [1] r=0 lpr=53 pi=[39,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:54 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 54 pg[11.1e( empty local-lis/les=39/40 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [1] r=0 lpr=53 pi=[39,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:54 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 54 pg[11.1f( empty local-lis/les=39/40 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [1] r=0 lpr=53 pi=[39,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:51:54 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 54 pg[11.16( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [1] r=0 lpr=53 pi=[39,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:54 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 54 pg[11.15( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [1] r=0 lpr=53 pi=[39,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:54 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 54 pg[11.14( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [1] r=0 lpr=53 pi=[39,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:54 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 54 pg[11.17( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [1] r=0 lpr=53 pi=[39,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:54 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Jan 29 11:51:54 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 54 pg[11.13( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [1] r=0 lpr=53 pi=[39,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:54 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 54 pg[11.11( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [1] r=0 lpr=53 pi=[39,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:54 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 54 pg[11.12( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [1] r=0 lpr=53 pi=[39,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:54 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 54 pg[11.10( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [1] r=0 lpr=53 pi=[39,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:54 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 54 pg[11.e( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [1] r=0 lpr=53 pi=[39,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:54 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 54 pg[11.d( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [1] r=0 lpr=53 pi=[39,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:54 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 54 pg[11.f( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [1] r=0 lpr=53 pi=[39,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:54 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 54 pg[11.9( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [1] r=0 lpr=53 pi=[39,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:54 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 54 pg[11.b( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [1] r=0 lpr=53 pi=[39,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:54 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 54 pg[11.0( empty local-lis/les=53/54 n=0 ec=39/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [1] r=0 lpr=53 pi=[39,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:54 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 54 pg[11.3( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [1] r=0 lpr=53 pi=[39,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:54 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 54 pg[11.2( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [1] r=0 lpr=53 pi=[39,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:54 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 54 pg[11.8( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [1] r=0 lpr=53 pi=[39,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:54 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 54 pg[11.c( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [1] r=0 lpr=53 pi=[39,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:54 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 54 pg[11.a( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [1] r=0 lpr=53 pi=[39,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:54 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 54 pg[11.6( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [1] r=0 lpr=53 pi=[39,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:54 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 54 pg[11.4( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [1] r=0 lpr=53 pi=[39,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:54 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 54 pg[11.5( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [1] r=0 lpr=53 pi=[39,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:54 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 54 pg[11.7( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [1] r=0 lpr=53 pi=[39,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:54 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 54 pg[11.1( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [1] r=0 lpr=53 pi=[39,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:54 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 54 pg[11.18( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [1] r=0 lpr=53 pi=[39,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:54 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 54 pg[11.19( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [1] r=0 lpr=53 pi=[39,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:54 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 54 pg[11.1b( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [1] r=0 lpr=53 pi=[39,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:54 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 54 pg[11.1d( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [1] r=0 lpr=53 pi=[39,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:54 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 54 pg[11.1a( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [1] r=0 lpr=53 pi=[39,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:54 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 54 pg[11.1e( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [1] r=0 lpr=53 pi=[39,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:54 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 54 pg[11.1f( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [1] r=0 lpr=53 pi=[39,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:54 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 54 pg[11.1c( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=39/39 les/c/f=40/40/0 sis=53) [1] r=0 lpr=53 pi=[39,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:51:54 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Jan 29 11:51:55 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:51:55 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Jan 29 11:51:55 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Jan 29 11:51:55 np0005601226 ceph-mgr[75527]: [progress INFO root] Writing back 15 completed events
Jan 29 11:51:55 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 29 11:51:55 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:51:56 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Jan 29 11:51:56 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Jan 29 11:51:56 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v114: 305 pgs: 32 peering, 31 unknown, 242 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:51:56 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:51:56 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Jan 29 11:51:56 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Jan 29 11:51:57 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Jan 29 11:51:57 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Jan 29 11:51:58 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Jan 29 11:51:58 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v115: 305 pgs: 31 unknown, 274 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:51:59 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Jan 29 11:51:59 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Jan 29 11:52:00 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Jan 29 11:52:00 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v116: 305 pgs: 305 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:52:00 np0005601226 ceph-mgr[75527]: [progress INFO root] Completed event 6fd37ce6-4b78-483b-bea5-d6c54e62e0f0 (Global Recovery Event) in 15 seconds
Jan 29 11:52:01 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:52:01 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 29 11:52:01 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 29 11:52:01 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 29 11:52:01 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 29 11:52:01 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 29 11:52:01 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 29 11:52:01 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} v 0)
Jan 29 11:52:01 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} : dispatch
Jan 29 11:52:01 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 29 11:52:01 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 29 11:52:01 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0)
Jan 29 11:52:01 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} : dispatch
Jan 29 11:52:01 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 29 11:52:01 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 29 11:52:01 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 29 11:52:01 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 29 11:52:01 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 29 11:52:01 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 29 11:52:01 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 29 11:52:01 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 29 11:52:01 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Jan 29 11:52:01 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Jan 29 11:52:01 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Jan 29 11:52:01 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Jan 29 11:52:01 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Jan 29 11:52:01 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 29 11:52:01 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 29 11:52:01 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 29 11:52:01 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 29 11:52:01 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 29 11:52:01 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 29 11:52:01 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 29 11:52:01 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 29 11:52:01 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 29 11:52:01 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 29 11:52:01 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[6.5( v 36'39 (0'0,36'39] local-lis/les=48/49 n=2 ec=48/22 lis/c=48/48 les/c/f=49/49/0 sis=55 pruub=10.916220665s) [1] r=-1 lpr=55 pi=[48,55)/1 crt=36'39 lcod 0'0 active pruub 105.678810120s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[6.5( v 36'39 (0'0,36'39] local-lis/les=48/49 n=2 ec=48/22 lis/c=48/48 les/c/f=49/49/0 sis=55 pruub=10.916193008s) [1] r=-1 lpr=55 pi=[48,55)/1 crt=36'39 lcod 0'0 unknown NOTIFY pruub 105.678810120s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[6.9( v 36'39 (0'0,36'39] local-lis/les=48/49 n=1 ec=48/22 lis/c=48/48 les/c/f=49/49/0 sis=55 pruub=10.916092873s) [1] r=-1 lpr=55 pi=[48,55)/1 crt=36'39 lcod 0'0 active pruub 105.678848267s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[6.9( v 36'39 (0'0,36'39] local-lis/les=48/49 n=1 ec=48/22 lis/c=48/48 les/c/f=49/49/0 sis=55 pruub=10.916006088s) [1] r=-1 lpr=55 pi=[48,55)/1 crt=36'39 lcod 0'0 unknown NOTIFY pruub 105.678848267s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[4.8( empty local-lis/les=46/48 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55 pruub=9.492326736s) [1] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 active pruub 104.255226135s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[4.8( empty local-lis/les=46/48 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55 pruub=9.492300034s) [1] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 unknown NOTIFY pruub 104.255226135s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[6.7( v 36'39 (0'0,36'39] local-lis/les=48/49 n=1 ec=48/22 lis/c=48/48 les/c/f=49/49/0 sis=55 pruub=11.045379639s) [1] r=-1 lpr=55 pi=[48,55)/1 crt=36'39 lcod 0'0 active pruub 105.808578491s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[6.7( v 36'39 (0'0,36'39] local-lis/les=48/49 n=1 ec=48/22 lis/c=48/48 les/c/f=49/49/0 sis=55 pruub=11.045363426s) [1] r=-1 lpr=55 pi=[48,55)/1 crt=36'39 lcod 0'0 unknown NOTIFY pruub 105.808578491s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[4.a( empty local-lis/les=46/48 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55 pruub=9.492019653s) [2] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 active pruub 104.255271912s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[4.5( empty local-lis/les=46/48 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55 pruub=9.491914749s) [1] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 active pruub 104.255279541s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[11.17( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=8.766471863s) [0] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 active pruub 99.065742493s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[11.17( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=8.766438484s) [0] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 unknown NOTIFY pruub 99.065742493s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[4.a( empty local-lis/les=46/48 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55 pruub=9.491948128s) [2] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 unknown NOTIFY pruub 104.255271912s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[3.1f( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.351317406s) [0] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 active pruub 104.650627136s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[4.5( empty local-lis/les=46/48 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55 pruub=9.491889000s) [1] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 unknown NOTIFY pruub 104.255279541s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[8.14( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.030610085s) [0] r=-1 lpr=55 pi=[49,55)/1 crt=34'6 lcod 0'0 active pruub 102.330032349s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[8.14( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.030599594s) [0] r=-1 lpr=55 pi=[49,55)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 102.330032349s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=14.323246956s) [0] r=-1 lpr=55 pi=[51,55)/1 crt=41'483 lcod 0'0 active pruub 104.622718811s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[3.1e( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.341037750s) [2] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 active pruub 104.640617371s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[4.1c( empty local-lis/les=46/48 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55 pruub=9.491834641s) [2] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 active pruub 104.255210876s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=14.323125839s) [0] r=-1 lpr=55 pi=[51,55)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 104.622718811s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[3.1e( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.341024399s) [2] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 unknown NOTIFY pruub 104.640617371s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[7.1b( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.030372620s) [0] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 active pruub 102.329994202s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[7.1b( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.030353546s) [0] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 unknown NOTIFY pruub 102.329994202s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[7.1a( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.030254364s) [2] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 active pruub 102.329948425s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[7.1a( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.030243874s) [2] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 unknown NOTIFY pruub 102.329948425s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[8.15( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.030189514s) [2] r=-1 lpr=55 pi=[49,55)/1 crt=34'6 lcod 0'0 active pruub 102.329940796s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[8.15( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.030171394s) [2] r=-1 lpr=55 pi=[49,55)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 102.329940796s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[11.15( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=8.765864372s) [2] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 active pruub 99.065658569s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[4.1a( empty local-lis/les=46/48 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55 pruub=9.491723061s) [2] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 active pruub 104.255279541s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[11.15( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=8.765853882s) [2] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 unknown NOTIFY pruub 99.065658569s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[3.1f( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.351267815s) [0] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 unknown NOTIFY pruub 104.650627136s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[3.1d( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.350317955s) [2] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 active pruub 104.650840759s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[3.1d( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.350297928s) [2] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 unknown NOTIFY pruub 104.650840759s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[9.17( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=14.447017670s) [0] r=-1 lpr=55 pi=[51,55)/1 crt=41'483 lcod 0'0 active pruub 104.747680664s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[9.17( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=14.447000504s) [0] r=-1 lpr=55 pi=[51,55)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 104.747680664s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[11.14( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=8.764885902s) [0] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 active pruub 99.065666199s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[11.14( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=8.764867783s) [0] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 unknown NOTIFY pruub 99.065666199s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[7.1f( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.029128075s) [0] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 active pruub 102.330032349s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[3.1b( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.349709511s) [0] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 active pruub 104.650627136s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[7.18( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.029014587s) [0] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 active pruub 102.329856873s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[8.10( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.028840065s) [0] r=-1 lpr=55 pi=[49,55)/1 crt=34'6 lcod 0'0 active pruub 102.329772949s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[7.1f( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.029109955s) [0] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 unknown NOTIFY pruub 102.330032349s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[8.10( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.028805733s) [0] r=-1 lpr=55 pi=[49,55)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 102.329772949s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[7.18( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.028880119s) [0] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 unknown NOTIFY pruub 102.329856873s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[9.11( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=14.446568489s) [0] r=-1 lpr=55 pi=[51,55)/1 crt=41'483 lcod 0'0 active pruub 104.747650146s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[11.12( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=8.787922859s) [2] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 active pruub 99.089019775s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[3.1b( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.349545479s) [0] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 unknown NOTIFY pruub 104.650627136s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[9.11( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=14.446550369s) [0] r=-1 lpr=55 pi=[51,55)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 104.747650146s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[11.12( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=8.787905693s) [2] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 unknown NOTIFY pruub 99.089019775s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[8.11( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.028450012s) [2] r=-1 lpr=55 pi=[49,55)/1 crt=34'6 lcod 0'0 active pruub 102.329635620s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[8.11( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.028429985s) [2] r=-1 lpr=55 pi=[49,55)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 102.329635620s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[11.11( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=8.787765503s) [2] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 active pruub 99.089004517s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[11.11( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=8.787749290s) [2] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 unknown NOTIFY pruub 99.089004517s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[8.12( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.028357506s) [2] r=-1 lpr=55 pi=[49,55)/1 crt=34'6 lcod 0'0 active pruub 102.329627991s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[8.12( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.028346062s) [2] r=-1 lpr=55 pi=[49,55)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 102.329627991s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[11.10( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=8.787738800s) [0] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 active pruub 99.089111328s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[11.10( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=8.787726402s) [0] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 unknown NOTIFY pruub 99.089111328s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[9.13( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=14.446738243s) [0] r=-1 lpr=55 pi=[51,55)/1 crt=41'483 lcod 0'0 active pruub 104.748130798s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[9.13( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=14.446722031s) [0] r=-1 lpr=55 pi=[51,55)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 104.748130798s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[11.f( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=8.787691116s) [0] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 active pruub 99.089179993s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[11.f( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=8.787682533s) [0] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 unknown NOTIFY pruub 99.089179993s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[7.3( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.028066635s) [0] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 active pruub 102.329589844s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[7.3( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.028051376s) [0] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 unknown NOTIFY pruub 102.329589844s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[3.18( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.349697113s) [2] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 active pruub 104.651260376s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[3.7( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.349704742s) [2] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 active pruub 104.651275635s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[3.7( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.349695206s) [2] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 unknown NOTIFY pruub 104.651275635s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[3.18( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.349671364s) [2] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 unknown NOTIFY pruub 104.651260376s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[8.c( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.027953148s) [0] r=-1 lpr=55 pi=[49,55)/1 crt=34'6 lcod 0'0 active pruub 102.329605103s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[9.d( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=14.446058273s) [0] r=-1 lpr=55 pi=[51,55)/1 crt=41'483 lcod 0'0 active pruub 104.747734070s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[8.c( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.027937889s) [0] r=-1 lpr=55 pi=[49,55)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 102.329605103s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[7.1c( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.027944565s) [2] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 active pruub 102.329620361s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[9.d( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=14.446047783s) [0] r=-1 lpr=55 pi=[51,55)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 104.747734070s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[7.1c( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.027921677s) [2] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 unknown NOTIFY pruub 102.329620361s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[11.e( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=8.787247658s) [0] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 active pruub 99.089111328s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[7.2( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.027681351s) [2] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 active pruub 102.329566956s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[7.2( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.027667046s) [2] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 unknown NOTIFY pruub 102.329566956s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[11.e( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=8.787226677s) [0] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 unknown NOTIFY pruub 99.089111328s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[3.6( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.348827362s) [0] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 active pruub 104.650726318s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[3.6( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.348806381s) [0] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 unknown NOTIFY pruub 104.650726318s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[11.d( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=8.787127495s) [2] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 active pruub 99.089164734s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[7.1( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.027507782s) [2] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 active pruub 102.329574585s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[11.d( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=8.787111282s) [2] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 unknown NOTIFY pruub 99.089164734s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[7.1( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.027475357s) [2] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 unknown NOTIFY pruub 102.329574585s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[3.5( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.348536491s) [2] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 active pruub 104.650718689s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[8.d( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.027275085s) [2] r=-1 lpr=55 pi=[49,55)/1 crt=34'6 lcod 0'0 active pruub 102.329483032s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[3.5( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.348518372s) [2] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 unknown NOTIFY pruub 104.650718689s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[11.b( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=8.787007332s) [2] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 active pruub 99.089241028s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[8.d( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.027256012s) [2] r=-1 lpr=55 pi=[49,55)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 102.329483032s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[11.b( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=8.786977768s) [2] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 unknown NOTIFY pruub 99.089241028s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[3.3( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.349187851s) [0] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 active pruub 104.651565552s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[9.9( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=14.445783615s) [0] r=-1 lpr=55 pi=[51,55)/1 crt=41'483 lcod 0'0 active pruub 104.748176575s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[3.3( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.349171638s) [0] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 unknown NOTIFY pruub 104.651565552s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[9.9( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=14.445768356s) [0] r=-1 lpr=55 pi=[51,55)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 104.748176575s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[8.e( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.027058601s) [0] r=-1 lpr=55 pi=[49,55)/1 crt=34'6 lcod 0'0 active pruub 102.329505920s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[11.9( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=8.786737442s) [2] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 active pruub 99.089218140s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[8.e( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.027036667s) [0] r=-1 lpr=55 pi=[49,55)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 102.329505920s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[7.5( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.026978493s) [2] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 active pruub 102.329475403s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[11.9( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=8.786719322s) [2] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 unknown NOTIFY pruub 99.089218140s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[7.5( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.026963234s) [2] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 unknown NOTIFY pruub 102.329475403s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[3.1( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.348195076s) [0] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 active pruub 104.650726318s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[3.1( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.348176956s) [0] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 unknown NOTIFY pruub 104.650726318s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[3.8( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.348391533s) [2] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 active pruub 104.651054382s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[3.8( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.348376274s) [2] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 unknown NOTIFY pruub 104.651054382s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[11.2( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=8.786565781s) [2] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 active pruub 99.089286804s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[11.2( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=8.786548615s) [2] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 unknown NOTIFY pruub 99.089286804s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[7.c( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.026670456s) [2] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 active pruub 102.329444885s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[7.c( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.026648521s) [2] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 unknown NOTIFY pruub 102.329444885s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[7.e( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.026659966s) [2] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 active pruub 102.329467773s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[7.e( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.026642799s) [2] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 unknown NOTIFY pruub 102.329467773s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[3.a( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.348179817s) [0] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 active pruub 104.651046753s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[9.b( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=14.445553780s) [0] r=-1 lpr=55 pi=[51,55)/1 crt=41'483 lcod 0'0 active pruub 104.748161316s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[3.a( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.348164558s) [0] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 unknown NOTIFY pruub 104.651046753s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[9.b( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=14.445255280s) [0] r=-1 lpr=55 pi=[51,55)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 104.748161316s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[11.3( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=8.786304474s) [2] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 active pruub 99.089271545s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[7.f( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.026462555s) [0] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 active pruub 102.329452515s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[11.3( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=8.786289215s) [2] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 unknown NOTIFY pruub 99.089271545s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[7.f( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.026445389s) [0] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 unknown NOTIFY pruub 102.329452515s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[9.1( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=14.445241928s) [0] r=-1 lpr=55 pi=[51,55)/1 crt=41'483 lcod 0'0 active pruub 104.748489380s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[9.f( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=14.444622993s) [0] r=-1 lpr=55 pi=[51,55)/1 crt=41'483 lcod 0'0 active pruub 104.747894287s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[9.1( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=14.445222855s) [0] r=-1 lpr=55 pi=[51,55)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 104.748489380s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[9.f( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=14.444604874s) [0] r=-1 lpr=55 pi=[51,55)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 104.747894287s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[8.f( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.026040077s) [0] r=-1 lpr=55 pi=[49,55)/1 crt=34'6 lcod 0'0 active pruub 102.329368591s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[8.f( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.026020050s) [0] r=-1 lpr=55 pi=[49,55)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 102.329368591s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[11.8( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=8.785892487s) [2] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 active pruub 99.089302063s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[7.4( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.025909424s) [0] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 active pruub 102.329338074s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[11.8( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=8.785874367s) [2] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 unknown NOTIFY pruub 99.089302063s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[7.4( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.025892258s) [0] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 unknown NOTIFY pruub 102.329338074s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[8.b( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.025881767s) [0] r=-1 lpr=55 pi=[49,55)/1 crt=34'6 lcod 0'0 active pruub 102.329383850s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[8.b( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.025859833s) [0] r=-1 lpr=55 pi=[49,55)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 102.329383850s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[7.6( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.025665283s) [0] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 active pruub 102.329330444s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[7.6( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.025639534s) [0] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 unknown NOTIFY pruub 102.329330444s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[11.1( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=8.785730362s) [0] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 active pruub 99.089447021s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[11.1( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=8.785711288s) [0] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 unknown NOTIFY pruub 99.089447021s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[3.9( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.347513199s) [0] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 active pruub 104.651298523s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[8.2( v 34'6 (0'0,34'6] local-lis/les=49/50 n=1 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.025486946s) [2] r=-1 lpr=55 pi=[49,55)/1 crt=34'6 lcod 0'0 active pruub 102.329284668s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[9.3( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=14.444504738s) [0] r=-1 lpr=55 pi=[51,55)/1 crt=41'483 lcod 0'0 active pruub 104.748321533s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[3.9( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.347495079s) [0] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 unknown NOTIFY pruub 104.651298523s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[8.2( v 34'6 (0'0,34'6] local-lis/les=49/50 n=1 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.025471687s) [2] r=-1 lpr=55 pi=[49,55)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 102.329284668s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[9.3( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=14.444483757s) [0] r=-1 lpr=55 pi=[51,55)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 104.748321533s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[11.4( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=8.785509109s) [0] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 active pruub 99.089416504s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[3.c( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.347348213s) [0] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 active pruub 104.651260376s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[7.8( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.025277138s) [2] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 active pruub 102.329185486s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[11.4( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=8.785487175s) [0] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 unknown NOTIFY pruub 99.089416504s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[4.1a( empty local-lis/les=46/48 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55 pruub=9.491678238s) [2] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 unknown NOTIFY pruub 104.255279541s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[3.c( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.347334862s) [0] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 unknown NOTIFY pruub 104.651260376s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[7.9( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.025133133s) [0] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 active pruub 102.329154968s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[7.8( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.025261879s) [2] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 unknown NOTIFY pruub 102.329185486s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[6.b( v 36'39 (0'0,36'39] local-lis/les=48/49 n=1 ec=48/22 lis/c=48/48 les/c/f=49/49/0 sis=55 pruub=11.044651031s) [1] r=-1 lpr=55 pi=[48,55)/1 crt=36'39 lcod 0'0 active pruub 105.808631897s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[7.9( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.025115967s) [0] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 unknown NOTIFY pruub 102.329154968s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[6.b( v 36'39 (0'0,36'39] local-lis/les=48/49 n=1 ec=48/22 lis/c=48/48 les/c/f=49/49/0 sis=55 pruub=11.044608116s) [1] r=-1 lpr=55 pi=[48,55)/1 crt=36'39 lcod 0'0 unknown NOTIFY pruub 105.808631897s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[8.6( v 34'6 (0'0,34'6] local-lis/les=49/50 n=1 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.024990082s) [0] r=-1 lpr=55 pi=[49,55)/1 crt=34'6 lcod 0'0 active pruub 102.329109192s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[4.9( empty local-lis/les=46/48 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55 pruub=9.508935928s) [1] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 active pruub 104.273017883s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[8.6( v 34'6 (0'0,34'6] local-lis/les=49/50 n=1 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.024975777s) [0] r=-1 lpr=55 pi=[49,55)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 102.329109192s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[4.9( empty local-lis/les=46/48 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55 pruub=9.508914948s) [1] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 unknown NOTIFY pruub 104.273017883s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[9.7( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=14.444246292s) [0] r=-1 lpr=55 pi=[51,55)/1 crt=41'483 lcod 0'0 active pruub 104.748428345s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[7.a( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.024868965s) [2] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 active pruub 102.329101562s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[7.a( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.024851799s) [2] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 unknown NOTIFY pruub 102.329101562s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[4.1b( empty local-lis/les=46/48 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55 pruub=9.491619110s) [2] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 active pruub 104.255279541s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[9.7( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=14.444176674s) [0] r=-1 lpr=55 pi=[51,55)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 104.748428345s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[4.1b( empty local-lis/les=46/48 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55 pruub=9.490960121s) [2] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 unknown NOTIFY pruub 104.255279541s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[11.6( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=8.785136223s) [0] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 active pruub 99.089408875s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[11.6( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=8.785115242s) [0] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 unknown NOTIFY pruub 99.089408875s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[4.4( empty local-lis/les=46/48 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55 pruub=9.508632660s) [1] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 active pruub 104.273117065s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[3.e( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.346904755s) [2] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 active pruub 104.651268005s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[6.3( v 36'39 (0'0,36'39] local-lis/les=48/49 n=2 ec=48/22 lis/c=48/48 les/c/f=49/49/0 sis=55 pruub=11.043913841s) [1] r=-1 lpr=55 pi=[48,55)/1 crt=36'39 lcod 0'0 active pruub 105.808609009s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[3.e( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.346888542s) [2] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 unknown NOTIFY pruub 104.651268005s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[6.3( v 36'39 (0'0,36'39] local-lis/les=48/49 n=2 ec=48/22 lis/c=48/48 les/c/f=49/49/0 sis=55 pruub=11.043841362s) [1] r=-1 lpr=55 pi=[48,55)/1 crt=36'39 lcod 0'0 unknown NOTIFY pruub 105.808609009s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[3.f( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.346998215s) [0] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 active pruub 104.651489258s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[6.1( v 36'39 (0'0,36'39] local-lis/les=48/49 n=2 ec=48/22 lis/c=48/48 les/c/f=49/49/0 sis=55 pruub=11.043983459s) [1] r=-1 lpr=55 pi=[48,55)/1 crt=36'39 lcod 0'0 active pruub 105.808784485s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[8.4( v 34'6 (0'0,34'6] local-lis/les=49/50 n=1 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.024621010s) [2] r=-1 lpr=55 pi=[49,55)/1 crt=34'6 lcod 0'0 active pruub 102.329116821s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[3.f( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.346980095s) [0] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 unknown NOTIFY pruub 104.651489258s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[6.1( v 36'39 (0'0,36'39] local-lis/les=48/49 n=2 ec=48/22 lis/c=48/48 les/c/f=49/49/0 sis=55 pruub=11.043936729s) [1] r=-1 lpr=55 pi=[48,55)/1 crt=36'39 lcod 0'0 unknown NOTIFY pruub 105.808784485s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[8.4( v 34'6 (0'0,34'6] local-lis/les=49/50 n=1 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.024604797s) [2] r=-1 lpr=55 pi=[49,55)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 102.329116821s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[4.1c( empty local-lis/les=46/48 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55 pruub=9.491599083s) [2] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 unknown NOTIFY pruub 104.255210876s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[4.1( empty local-lis/les=46/48 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55 pruub=9.508105278s) [2] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 active pruub 104.273147583s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[9.5( v 53'484 (0'0,53'484] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=14.443816185s) [0] r=-1 lpr=55 pi=[51,55)/1 crt=41'483 lcod 41'483 active pruub 104.748428345s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[11.18( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=8.784821510s) [2] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 active pruub 99.089469910s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[4.1( empty local-lis/les=46/48 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55 pruub=9.508063316s) [2] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 unknown NOTIFY pruub 104.273147583s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[11.18( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=8.784806252s) [2] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 unknown NOTIFY pruub 99.089469910s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[4.2( empty local-lis/les=46/48 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55 pruub=9.507976532s) [1] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 active pruub 104.273124695s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[8.1b( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.024385452s) [2] r=-1 lpr=55 pi=[49,55)/1 crt=34'6 lcod 0'0 active pruub 102.329086304s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[4.4( empty local-lis/les=46/48 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55 pruub=9.508614540s) [1] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 unknown NOTIFY pruub 104.273117065s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[9.5( v 53'484 (0'0,53'484] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=14.443782806s) [0] r=-1 lpr=55 pi=[51,55)/1 crt=41'483 lcod 41'483 unknown NOTIFY pruub 104.748428345s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[4.2( empty local-lis/les=46/48 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55 pruub=9.507940292s) [1] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 unknown NOTIFY pruub 104.273124695s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[8.1b( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.024325371s) [2] r=-1 lpr=55 pi=[49,55)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 102.329086304s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[4.7( empty local-lis/les=46/48 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55 pruub=9.489747047s) [1] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 active pruub 104.255241394s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[11.19( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=8.784655571s) [0] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 active pruub 99.089462280s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[4.7( empty local-lis/les=46/48 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55 pruub=9.489706993s) [1] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 unknown NOTIFY pruub 104.255241394s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[11.19( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=8.784640312s) [0] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 unknown NOTIFY pruub 99.089462280s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[6.f( v 36'39 (0'0,36'39] local-lis/les=48/49 n=1 ec=48/22 lis/c=48/48 les/c/f=49/49/0 sis=55 pruub=11.041988373s) [1] r=-1 lpr=55 pi=[48,55)/1 crt=36'39 lcod 0'0 active pruub 105.808624268s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[6.f( v 36'39 (0'0,36'39] local-lis/les=48/49 n=1 ec=48/22 lis/c=48/48 les/c/f=49/49/0 sis=55 pruub=11.041958809s) [1] r=-1 lpr=55 pi=[48,55)/1 crt=36'39 lcod 0'0 unknown NOTIFY pruub 105.808624268s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[4.d( empty local-lis/les=46/48 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55 pruub=9.506177902s) [1] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 active pruub 104.273231506s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[4.d( empty local-lis/les=46/48 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55 pruub=9.506046295s) [1] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 unknown NOTIFY pruub 104.273231506s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[3.11( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.346447945s) [2] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 active pruub 104.651496887s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[8.1a( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.023853302s) [0] r=-1 lpr=55 pi=[49,55)/1 crt=34'6 lcod 0'0 active pruub 102.328933716s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[7.15( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.023894310s) [2] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 active pruub 102.328948975s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[3.11( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.346411705s) [2] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 unknown NOTIFY pruub 104.651496887s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[8.1a( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.023833275s) [0] r=-1 lpr=55 pi=[49,55)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 102.328933716s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[7.15( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.023818970s) [2] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 unknown NOTIFY pruub 102.328948975s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[9.1b( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=14.443367004s) [0] r=-1 lpr=55 pi=[51,55)/1 crt=41'483 lcod 0'0 active pruub 104.748588562s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[11.1a( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=8.784352303s) [2] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 active pruub 99.089576721s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[11.1a( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=8.784336090s) [2] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 unknown NOTIFY pruub 99.089576721s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[9.1b( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=14.443348885s) [0] r=-1 lpr=55 pi=[51,55)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 104.748588562s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[8.9( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.023791313s) [0] r=-1 lpr=55 pi=[49,55)/1 crt=34'6 lcod 0'0 active pruub 102.329261780s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[8.9( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.023761749s) [0] r=-1 lpr=55 pi=[49,55)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 102.329261780s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[11.1b( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=8.784029961s) [2] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 active pruub 99.089546204s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[3.12( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.345961571s) [0] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 active pruub 104.651489258s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[11.1b( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=8.783987999s) [2] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 unknown NOTIFY pruub 99.089546204s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[3.12( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.345922470s) [0] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 unknown NOTIFY pruub 104.651489258s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[8.18( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.023117065s) [0] r=-1 lpr=55 pi=[49,55)/1 crt=34'6 lcod 0'0 active pruub 102.328788757s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[9.19( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=14.442831039s) [0] r=-1 lpr=55 pi=[51,55)/1 crt=41'483 lcod 0'0 active pruub 104.748588562s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[8.18( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.022996902s) [0] r=-1 lpr=55 pi=[49,55)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 102.328788757s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[9.19( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=14.442807198s) [0] r=-1 lpr=55 pi=[51,55)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 104.748588562s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[4.e( empty local-lis/les=46/48 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55 pruub=9.504496574s) [2] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 active pruub 104.273277283s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[4.e( empty local-lis/les=46/48 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55 pruub=9.504448891s) [2] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 unknown NOTIFY pruub 104.273277283s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[11.1c( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=8.783625603s) [2] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 active pruub 99.089591980s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[11.1c( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=8.783585548s) [2] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 unknown NOTIFY pruub 99.089591980s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=14.442543983s) [0] r=-1 lpr=55 pi=[51,55)/1 crt=41'483 lcod 0'0 active pruub 104.748573303s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=14.442523003s) [0] r=-1 lpr=55 pi=[51,55)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 104.748573303s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[3.15( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.345585823s) [0] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 active pruub 104.651748657s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[3.15( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.345569611s) [0] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 unknown NOTIFY pruub 104.651748657s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[11.1e( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=8.783377647s) [2] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 active pruub 99.089591980s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[11.1e( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=8.783361435s) [2] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 unknown NOTIFY pruub 99.089591980s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[8.1f( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.022420883s) [0] r=-1 lpr=55 pi=[49,55)/1 crt=34'6 lcod 0'0 active pruub 102.328796387s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[3.16( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.345199585s) [2] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 active pruub 104.651596069s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[8.1f( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.022402763s) [0] r=-1 lpr=55 pi=[49,55)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 102.328796387s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[7.11( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.022214890s) [2] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 active pruub 102.328628540s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[3.16( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.345151901s) [2] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 unknown NOTIFY pruub 104.651596069s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[11.1f( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=8.783073425s) [2] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 active pruub 99.089599609s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[11.1f( empty local-lis/les=53/54 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55 pruub=8.783057213s) [2] r=-1 lpr=55 pi=[53,55)/1 crt=0'0 unknown NOTIFY pruub 99.089599609s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[9.1d( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=14.442059517s) [0] r=-1 lpr=55 pi=[51,55)/1 crt=41'483 lcod 0'0 active pruub 104.748626709s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[9.1d( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55 pruub=14.442043304s) [0] r=-1 lpr=55 pi=[51,55)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 104.748626709s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[7.11( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.022079468s) [2] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 unknown NOTIFY pruub 102.328628540s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[6.d( v 36'39 (0'0,36'39] local-lis/les=48/49 n=1 ec=48/22 lis/c=48/48 les/c/f=49/49/0 sis=55 pruub=11.039016724s) [1] r=-1 lpr=55 pi=[48,55)/1 crt=36'39 lcod 0'0 active pruub 105.808723450s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[6.d( v 36'39 (0'0,36'39] local-lis/les=48/49 n=1 ec=48/22 lis/c=48/48 les/c/f=49/49/0 sis=55 pruub=11.038716316s) [1] r=-1 lpr=55 pi=[48,55)/1 crt=36'39 lcod 0'0 unknown NOTIFY pruub 105.808723450s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[7.13( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.022274017s) [0] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 active pruub 102.328933716s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[7.13( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.022144318s) [0] r=-1 lpr=55 pi=[49,55)/1 crt=0'0 unknown NOTIFY pruub 102.328933716s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[6.5( empty local-lis/les=0/0 n=0 ec=48/22 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[8.1d( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=11.959327698s) [0] r=-1 lpr=55 pi=[49,55)/1 crt=34'6 lcod 0'0 active pruub 102.266479492s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[3.17( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.344461441s) [0] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 active pruub 104.651641846s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[8.1d( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=11.959308624s) [0] r=-1 lpr=55 pi=[49,55)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 102.266479492s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[3.17( empty local-lis/les=44/45 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.344441414s) [0] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 unknown NOTIFY pruub 104.651641846s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[8.1c( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.022048950s) [2] r=-1 lpr=55 pi=[49,55)/1 crt=34'6 lcod 0'0 active pruub 102.329338074s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[8.1c( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55 pruub=12.022020340s) [2] r=-1 lpr=55 pi=[49,55)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 102.329338074s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[6.9( empty local-lis/les=0/0 n=0 ec=48/22 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[4.f( empty local-lis/les=46/48 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55 pruub=9.502463341s) [1] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 active pruub 104.273330688s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[4.f( empty local-lis/les=46/48 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55 pruub=9.502410889s) [1] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 unknown NOTIFY pruub 104.273330688s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[4.10( empty local-lis/les=46/48 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55 pruub=9.502144814s) [1] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 active pruub 104.273353577s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[4.10( empty local-lis/les=46/48 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55 pruub=9.502110481s) [1] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 unknown NOTIFY pruub 104.273353577s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[4.11( empty local-lis/les=46/48 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55 pruub=9.501863480s) [2] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 active pruub 104.273338318s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[4.12( empty local-lis/les=46/48 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55 pruub=9.501894951s) [1] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 active pruub 104.273361206s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[4.12( empty local-lis/les=46/48 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55 pruub=9.501846313s) [1] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 unknown NOTIFY pruub 104.273361206s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[4.11( empty local-lis/les=46/48 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55 pruub=9.501830101s) [2] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 unknown NOTIFY pruub 104.273338318s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[4.18( empty local-lis/les=46/48 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55 pruub=9.501581192s) [2] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 active pruub 104.273490906s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[4.8( empty local-lis/les=0/0 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[4.18( empty local-lis/les=46/48 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55 pruub=9.501543999s) [2] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 unknown NOTIFY pruub 104.273490906s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[4.13( empty local-lis/les=46/48 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55 pruub=9.501355171s) [2] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 active pruub 104.273361206s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[8.14( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55) [0] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[6.7( empty local-lis/les=0/0 n=0 ec=48/22 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[4.13( empty local-lis/les=46/48 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55 pruub=9.501201630s) [2] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 unknown NOTIFY pruub 104.273361206s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[4.14( empty local-lis/les=46/48 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55 pruub=9.500931740s) [1] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 active pruub 104.273361206s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[7.1b( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55) [0] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[4.14( empty local-lis/les=46/48 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55 pruub=9.500837326s) [1] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 unknown NOTIFY pruub 104.273361206s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[9.15( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[11.17( empty local-lis/les=0/0 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55) [0] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[3.1f( empty local-lis/les=0/0 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[9.17( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[11.14( empty local-lis/les=0/0 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55) [0] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[7.1f( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55) [0] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[8.10( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55) [0] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[7.18( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55) [0] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[9.11( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[3.1b( empty local-lis/les=0/0 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[11.10( empty local-lis/les=0/0 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55) [0] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[9.13( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[11.f( empty local-lis/les=0/0 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55) [0] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[7.3( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55) [0] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[9.d( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[8.c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55) [0] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[11.e( empty local-lis/les=0/0 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55) [0] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[3.6( empty local-lis/les=0/0 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[3.3( empty local-lis/les=0/0 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[9.9( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[8.e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55) [0] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[3.1( empty local-lis/les=0/0 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[3.a( empty local-lis/les=0/0 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[7.f( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55) [0] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[9.b( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[9.1( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[9.f( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[8.f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55) [0] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[7.4( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55) [0] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[8.b( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55) [0] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[7.6( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55) [0] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[11.1( empty local-lis/les=0/0 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55) [0] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[3.9( empty local-lis/les=0/0 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[9.3( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[11.4( empty local-lis/les=0/0 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55) [0] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[3.c( empty local-lis/les=0/0 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[7.9( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55) [0] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[8.6( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55) [0] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[9.7( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[11.6( empty local-lis/les=0/0 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55) [0] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[3.f( empty local-lis/les=0/0 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[9.5( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[11.19( empty local-lis/les=0/0 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55) [0] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[8.1a( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55) [0] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[9.1b( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[3.12( empty local-lis/les=0/0 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[8.18( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55) [0] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[8.9( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55) [0] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[3.15( empty local-lis/les=0/0 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[8.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55) [0] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[7.13( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55) [0] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[8.1d( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55) [0] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[3.17( empty local-lis/les=0/0 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[4.5( empty local-lis/les=0/0 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[6.b( empty local-lis/les=0/0 n=0 ec=48/22 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[4.9( empty local-lis/les=0/0 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[6.3( empty local-lis/les=0/0 n=0 ec=48/22 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[6.1( empty local-lis/les=0/0 n=0 ec=48/22 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[4.4( empty local-lis/les=0/0 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[4.2( empty local-lis/les=0/0 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[4.7( empty local-lis/les=0/0 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[6.f( empty local-lis/les=0/0 n=0 ec=48/22 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[4.d( empty local-lis/les=0/0 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[4.f( empty local-lis/les=0/0 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[4.10( empty local-lis/les=0/0 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[6.d( empty local-lis/les=0/0 n=0 ec=48/22 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[4.12( empty local-lis/les=0/0 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[4.14( empty local-lis/les=0/0 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[4.1c( empty local-lis/les=0/0 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[7.1c( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55) [2] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[3.18( empty local-lis/les=0/0 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55) [2] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[8.12( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55) [2] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[11.11( empty local-lis/les=0/0 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[11.1e( empty local-lis/les=0/0 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[3.16( empty local-lis/les=0/0 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55) [2] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[4.11( empty local-lis/les=0/0 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[11.1f( empty local-lis/les=0/0 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[8.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55) [2] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[11.1c( empty local-lis/les=0/0 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[4.13( empty local-lis/les=0/0 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[7.11( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55) [2] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[11.1a( empty local-lis/les=0/0 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[11.1b( empty local-lis/les=0/0 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[11.18( empty local-lis/les=0/0 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[8.1b( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55) [2] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[3.11( empty local-lis/les=0/0 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55) [2] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[7.15( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55) [2] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[7.a( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55) [2] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[3.e( empty local-lis/les=0/0 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55) [2] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[8.4( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55) [2] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[7.8( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55) [2] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[4.a( empty local-lis/les=0/0 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[7.e( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55) [2] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[11.2( empty local-lis/les=0/0 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[7.5( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55) [2] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[11.9( empty local-lis/les=0/0 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[4.1( empty local-lis/les=0/0 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[8.d( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55) [2] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[7.2( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55) [2] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[8.2( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55) [2] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[4.e( empty local-lis/les=0/0 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[11.8( empty local-lis/les=0/0 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[11.b( empty local-lis/les=0/0 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[3.5( empty local-lis/les=0/0 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55) [2] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[7.1( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55) [2] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[11.d( empty local-lis/les=0/0 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[3.7( empty local-lis/les=0/0 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55) [2] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[7.c( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55) [2] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[3.8( empty local-lis/les=0/0 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55) [2] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[11.3( empty local-lis/les=0/0 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[8.11( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55) [2] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[11.12( empty local-lis/les=0/0 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[4.1a( empty local-lis/les=0/0 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[3.1d( empty local-lis/les=0/0 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55) [2] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[11.15( empty local-lis/les=0/0 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[4.1b( empty local-lis/les=0/0 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[4.18( empty local-lis/les=0/0 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[8.15( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55) [2] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[7.1a( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55) [2] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[3.1e( empty local-lis/les=0/0 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55) [2] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[5.18( empty local-lis/les=46/49 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55 pruub=10.878851891s) [1] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 active pruub 96.991806030s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[5.18( empty local-lis/les=46/49 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55 pruub=10.878832817s) [1] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 unknown NOTIFY pruub 96.991806030s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[2.1f( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.167314529s) [0] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 active pruub 100.280296326s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[2.1f( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.167291641s) [0] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 unknown NOTIFY pruub 100.280296326s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[10.17( v 41'18 (0'0,41'18] local-lis/les=51/53 n=0 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55 pruub=15.104083061s) [0] r=-1 lpr=55 pi=[51,55)/1 crt=41'18 lcod 0'0 active pruub 101.217109680s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[5.19( empty local-lis/les=46/49 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55 pruub=10.878594398s) [1] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 active pruub 96.991630554s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[10.17( v 41'18 (0'0,41'18] local-lis/les=51/53 n=0 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55 pruub=15.104042053s) [0] r=-1 lpr=55 pi=[51,55)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 101.217109680s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[5.19( empty local-lis/les=46/49 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55 pruub=10.878555298s) [1] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 unknown NOTIFY pruub 96.991630554s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[10.16( v 41'18 (0'0,41'18] local-lis/les=51/53 n=0 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55 pruub=15.104226112s) [0] r=-1 lpr=55 pi=[51,55)/1 crt=41'18 lcod 0'0 active pruub 101.217346191s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[2.1d( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.189161301s) [0] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 active pruub 100.302291870s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[10.16( v 41'18 (0'0,41'18] local-lis/les=51/53 n=0 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55 pruub=15.104210854s) [0] r=-1 lpr=55 pi=[51,55)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 101.217346191s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[2.1d( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.189147949s) [0] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 unknown NOTIFY pruub 100.302291870s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[5.1a( empty local-lis/les=46/49 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55 pruub=10.878617287s) [1] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 active pruub 96.991798401s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[5.1a( empty local-lis/les=46/49 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55 pruub=10.878606796s) [1] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 unknown NOTIFY pruub 96.991798401s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[10.15( v 54'19 (0'0,54'19] local-lis/les=51/53 n=0 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55 pruub=15.104051590s) [0] r=-1 lpr=55 pi=[51,55)/1 crt=41'18 lcod 41'18 active pruub 101.217292786s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[2.1c( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.189167023s) [0] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 active pruub 100.302429199s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[10.15( v 54'19 (0'0,54'19] local-lis/les=51/53 n=0 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55 pruub=15.104031563s) [0] r=-1 lpr=55 pi=[51,55)/1 crt=41'18 lcod 41'18 unknown NOTIFY pruub 101.217292786s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[2.1c( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.189154625s) [0] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 unknown NOTIFY pruub 100.302429199s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[2.1b( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.189062119s) [1] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 active pruub 100.302429199s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[2.1b( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.189049721s) [1] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 unknown NOTIFY pruub 100.302429199s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[10.13( v 41'18 (0'0,41'18] local-lis/les=51/53 n=0 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55 pruub=15.103601456s) [1] r=-1 lpr=55 pi=[51,55)/1 crt=41'18 lcod 0'0 active pruub 101.217010498s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[10.2( v 41'18 (0'0,41'18] local-lis/les=51/53 n=1 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55 pruub=15.103825569s) [1] r=-1 lpr=55 pi=[51,55)/1 crt=41'18 lcod 0'0 active pruub 101.217239380s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[2.a( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.189213753s) [1] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 active pruub 100.302635193s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[10.13( v 41'18 (0'0,41'18] local-lis/les=51/53 n=0 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55 pruub=15.103587151s) [1] r=-1 lpr=55 pi=[51,55)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 101.217010498s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[10.2( v 41'18 (0'0,41'18] local-lis/les=51/53 n=1 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55 pruub=15.103811264s) [1] r=-1 lpr=55 pi=[51,55)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 101.217239380s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[2.a( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.189191818s) [1] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 unknown NOTIFY pruub 100.302635193s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[2.9( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.189371109s) [1] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 active pruub 100.302864075s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[2.9( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.189361572s) [1] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 unknown NOTIFY pruub 100.302864075s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[10.1( v 41'18 (0'0,41'18] local-lis/les=51/53 n=1 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55 pruub=15.103503227s) [0] r=-1 lpr=55 pi=[51,55)/1 crt=41'18 lcod 0'0 active pruub 101.217071533s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[5.1( empty local-lis/les=46/49 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55 pruub=10.877893448s) [1] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 active pruub 96.991485596s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[10.1( v 41'18 (0'0,41'18] local-lis/les=51/53 n=1 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55 pruub=15.103488922s) [0] r=-1 lpr=55 pi=[51,55)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 101.217071533s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[5.1( empty local-lis/les=46/49 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55 pruub=10.877881050s) [1] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 unknown NOTIFY pruub 96.991485596s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[10.e( v 54'19 (0'0,54'19] local-lis/les=51/53 n=0 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55 pruub=15.103183746s) [0] r=-1 lpr=55 pi=[51,55)/1 crt=41'18 lcod 41'18 active pruub 101.216827393s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[10.e( v 54'19 (0'0,54'19] local-lis/les=51/53 n=0 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55 pruub=15.103166580s) [0] r=-1 lpr=55 pi=[51,55)/1 crt=41'18 lcod 41'18 unknown NOTIFY pruub 101.216827393s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[5.2( empty local-lis/les=46/49 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55 pruub=10.877678871s) [0] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 active pruub 96.991363525s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[2.5( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.189297676s) [1] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 active pruub 100.302986145s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[5.2( empty local-lis/les=46/49 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55 pruub=10.877664566s) [0] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 unknown NOTIFY pruub 96.991363525s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[2.5( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.189286232s) [1] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 unknown NOTIFY pruub 100.302986145s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[5.3( empty local-lis/les=46/49 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55 pruub=10.877624512s) [0] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 active pruub 96.991409302s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[2.4( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.188730240s) [1] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 active pruub 100.302528381s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[2.6( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.188723564s) [1] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 active pruub 100.302520752s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[10.d( v 54'19 (0'0,54'19] local-lis/les=51/53 n=0 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55 pruub=15.103091240s) [0] r=-1 lpr=55 pi=[51,55)/1 crt=41'18 lcod 41'18 active pruub 101.216842651s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[5.3( empty local-lis/les=46/49 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55 pruub=10.877612114s) [0] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 unknown NOTIFY pruub 96.991409302s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[2.4( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.188717842s) [1] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 unknown NOTIFY pruub 100.302528381s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[2.6( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.188699722s) [1] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 unknown NOTIFY pruub 100.302520752s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[10.d( v 54'19 (0'0,54'19] local-lis/les=51/53 n=0 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55 pruub=15.103023529s) [0] r=-1 lpr=55 pi=[51,55)/1 crt=41'18 lcod 41'18 unknown NOTIFY pruub 101.216842651s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[2.3( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.188564301s) [1] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 active pruub 100.302513123s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[5.4( empty local-lis/les=46/49 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55 pruub=10.877379417s) [0] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 active pruub 96.991325378s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[10.b( v 41'18 (0'0,41'18] local-lis/les=51/53 n=0 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55 pruub=15.102876663s) [1] r=-1 lpr=55 pi=[51,55)/1 crt=41'18 lcod 0'0 active pruub 101.216835022s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[2.3( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.188551903s) [1] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 unknown NOTIFY pruub 100.302513123s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[10.b( v 41'18 (0'0,41'18] local-lis/les=51/53 n=0 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55 pruub=15.102863312s) [1] r=-1 lpr=55 pi=[51,55)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 101.216835022s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[5.4( empty local-lis/les=46/49 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55 pruub=10.877358437s) [0] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 unknown NOTIFY pruub 96.991325378s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[5.5( empty local-lis/les=46/49 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55 pruub=10.877271652s) [0] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 active pruub 96.991294861s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[2.2( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.188654900s) [0] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 active pruub 100.302703857s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[5.5( empty local-lis/les=46/49 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55 pruub=10.877191544s) [0] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 unknown NOTIFY pruub 96.991294861s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[10.9( v 54'19 (0'0,54'19] local-lis/les=51/53 n=1 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55 pruub=15.102583885s) [0] r=-1 lpr=55 pi=[51,55)/1 crt=41'18 lcod 41'18 active pruub 101.216781616s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[2.2( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.188507080s) [0] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 unknown NOTIFY pruub 100.302703857s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[5.f( empty local-lis/les=46/49 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55 pruub=10.877210617s) [1] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 active pruub 96.991432190s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[5.f( empty local-lis/les=46/49 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55 pruub=10.877181053s) [1] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 unknown NOTIFY pruub 96.991432190s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[10.9( v 54'19 (0'0,54'19] local-lis/les=51/53 n=1 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55 pruub=15.102542877s) [0] r=-1 lpr=55 pi=[51,55)/1 crt=41'18 lcod 41'18 unknown NOTIFY pruub 101.216781616s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[2.1f( empty local-lis/les=0/0 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[2.8( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.188353539s) [0] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 active pruub 100.302711487s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[2.7( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.188275337s) [1] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 active pruub 100.302642822s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[2.8( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.188333511s) [0] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 unknown NOTIFY pruub 100.302711487s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[10.f( v 41'18 (0'0,41'18] local-lis/les=51/53 n=0 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55 pruub=15.102209091s) [1] r=-1 lpr=55 pi=[51,55)/1 crt=41'18 lcod 0'0 active pruub 101.216598511s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[5.7( empty local-lis/les=46/49 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55 pruub=10.876705170s) [0] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 active pruub 96.991111755s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[10.14( v 54'19 (0'0,54'19] local-lis/les=51/53 n=0 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55 pruub=15.102696419s) [1] r=-1 lpr=55 pi=[51,55)/1 crt=41'18 lcod 41'18 active pruub 101.217102051s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[2.7( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.188256264s) [1] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 unknown NOTIFY pruub 100.302642822s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[5.7( empty local-lis/les=46/49 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55 pruub=10.876684189s) [0] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 unknown NOTIFY pruub 96.991111755s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[10.f( v 41'18 (0'0,41'18] local-lis/les=51/53 n=0 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55 pruub=15.102190971s) [1] r=-1 lpr=55 pi=[51,55)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 101.216598511s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[5.c( empty local-lis/les=46/49 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55 pruub=10.876526833s) [1] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 active pruub 96.991104126s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[10.8( v 41'18 (0'0,41'18] local-lis/les=51/53 n=1 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55 pruub=15.101990700s) [0] r=-1 lpr=55 pi=[51,55)/1 crt=41'18 lcod 0'0 active pruub 101.216606140s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[5.c( empty local-lis/les=46/49 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55 pruub=10.876506805s) [1] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 unknown NOTIFY pruub 96.991104126s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[2.b( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.188168526s) [0] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 active pruub 100.302795410s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[10.8( v 41'18 (0'0,41'18] local-lis/les=51/53 n=1 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55 pruub=15.101971626s) [0] r=-1 lpr=55 pi=[51,55)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 101.216606140s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[2.b( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.188147545s) [0] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 unknown NOTIFY pruub 100.302795410s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[10.4( v 41'18 (0'0,41'18] local-lis/les=51/53 n=1 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55 pruub=15.101770401s) [0] r=-1 lpr=55 pi=[51,55)/1 crt=41'18 lcod 0'0 active pruub 101.216514587s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[2.d( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.188048363s) [1] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 active pruub 100.302810669s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[2.d( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.188033104s) [1] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 unknown NOTIFY pruub 100.302810669s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[10.4( v 41'18 (0'0,41'18] local-lis/les=51/53 n=1 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55 pruub=15.101749420s) [0] r=-1 lpr=55 pi=[51,55)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 101.216514587s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[5.9( empty local-lis/les=46/49 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55 pruub=10.876270294s) [1] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 active pruub 96.991073608s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[5.9( empty local-lis/les=46/49 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55 pruub=10.876251221s) [1] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 unknown NOTIFY pruub 96.991073608s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[10.6( v 41'18 (0'0,41'18] local-lis/les=51/53 n=1 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55 pruub=15.101617813s) [1] r=-1 lpr=55 pi=[51,55)/1 crt=41'18 lcod 0'0 active pruub 101.216506958s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[2.f( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.187872887s) [0] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 active pruub 100.302810669s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[10.7( v 41'18 (0'0,41'18] local-lis/les=51/53 n=1 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55 pruub=15.101516724s) [0] r=-1 lpr=55 pi=[51,55)/1 crt=41'18 lcod 0'0 active pruub 101.216468811s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[10.6( v 41'18 (0'0,41'18] local-lis/les=51/53 n=1 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55 pruub=15.101568222s) [1] r=-1 lpr=55 pi=[51,55)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 101.216506958s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[2.f( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.187851906s) [0] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 unknown NOTIFY pruub 100.302810669s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[10.7( v 41'18 (0'0,41'18] local-lis/les=51/53 n=1 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55 pruub=15.101493835s) [0] r=-1 lpr=55 pi=[51,55)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 101.216468811s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[5.16( empty local-lis/les=46/49 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55 pruub=10.875943184s) [1] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 active pruub 96.990989685s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[5.16( empty local-lis/les=46/49 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55 pruub=10.875925064s) [1] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 unknown NOTIFY pruub 96.990989685s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[2.11( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.187897682s) [0] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 active pruub 100.303001404s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[10.19( v 41'18 (0'0,41'18] local-lis/les=51/53 n=0 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55 pruub=15.101291656s) [1] r=-1 lpr=55 pi=[51,55)/1 crt=41'18 lcod 0'0 active pruub 101.216415405s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[10.17( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55) [0] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[5.15( empty local-lis/les=46/49 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55 pruub=10.875750542s) [0] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 active pruub 96.990882874s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[2.11( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.187875748s) [0] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 unknown NOTIFY pruub 100.303001404s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[10.19( v 41'18 (0'0,41'18] local-lis/les=51/53 n=0 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55 pruub=15.101273537s) [1] r=-1 lpr=55 pi=[51,55)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 101.216415405s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[5.15( empty local-lis/les=46/49 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55 pruub=10.875728607s) [0] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 unknown NOTIFY pruub 96.990882874s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[10.1a( v 41'18 (0'0,41'18] local-lis/les=51/53 n=0 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55 pruub=15.101116180s) [1] r=-1 lpr=55 pi=[51,55)/1 crt=41'18 lcod 0'0 active pruub 101.216339111s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[5.14( empty local-lis/les=46/49 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55 pruub=10.875366211s) [0] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 active pruub 96.990951538s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[10.1a( v 41'18 (0'0,41'18] local-lis/les=51/53 n=0 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55 pruub=15.101061821s) [1] r=-1 lpr=55 pi=[51,55)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 101.216339111s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[2.13( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.187408447s) [0] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 active pruub 100.303009033s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[5.14( empty local-lis/les=46/49 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55 pruub=10.875344276s) [0] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 unknown NOTIFY pruub 96.990951538s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[2.13( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.187376976s) [0] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 unknown NOTIFY pruub 100.303009033s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[10.14( v 54'19 (0'0,54'19] local-lis/les=51/53 n=0 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55 pruub=15.102664948s) [1] r=-1 lpr=55 pi=[51,55)/1 crt=41'18 lcod 41'18 unknown NOTIFY pruub 101.217102051s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[2.1d( empty local-lis/les=0/0 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[10.16( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55) [0] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[10.15( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55) [0] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[2.1c( empty local-lis/les=0/0 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[10.1( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55) [0] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[10.e( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55) [0] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[5.2( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55) [0] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[5.3( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55) [0] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[10.d( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55) [0] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[5.4( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55) [0] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[5.5( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55) [0] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[2.2( empty local-lis/les=0/0 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[10.9( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55) [0] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[2.8( empty local-lis/les=0/0 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[5.7( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55) [0] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[10.8( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55) [0] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[2.b( empty local-lis/les=0/0 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[10.4( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55) [0] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[2.f( empty local-lis/les=0/0 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[10.7( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55) [0] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[2.11( empty local-lis/les=0/0 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[5.15( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55) [0] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[5.14( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55) [0] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[2.13( empty local-lis/les=0/0 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[5.18( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[5.19( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[5.1a( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[10.2( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55) [1] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[10.13( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55) [1] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[2.a( empty local-lis/les=0/0 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[2.9( empty local-lis/les=0/0 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[5.1( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[2.5( empty local-lis/les=0/0 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[2.4( empty local-lis/les=0/0 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[2.6( empty local-lis/les=0/0 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[2.3( empty local-lis/les=0/0 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[10.b( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55) [1] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[5.f( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[2.7( empty local-lis/les=0/0 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[10.f( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55) [1] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[5.c( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[2.d( empty local-lis/les=0/0 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[5.9( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[10.6( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55) [1] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[5.16( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[10.19( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55) [1] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[10.1a( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55) [1] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[10.14( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55) [1] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v118: 305 pgs: 2 active+clean+scrubbing, 303 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:52:02 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} v 0)
Jan 29 11:52:02 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} : dispatch
Jan 29 11:52:02 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0)
Jan 29 11:52:02 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} : dispatch
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[5.13( empty local-lis/les=46/49 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55 pruub=10.754523277s) [1] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 active pruub 96.990905762s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[2.15( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.066597939s) [1] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 active pruub 100.303001404s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[5.13( empty local-lis/les=46/49 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55 pruub=10.754481316s) [1] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 unknown NOTIFY pruub 96.990905762s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[5.12( empty local-lis/les=46/49 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55 pruub=10.754443169s) [1] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 active pruub 96.990882874s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[2.15( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.066564560s) [1] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 unknown NOTIFY pruub 100.303001404s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[2.16( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.066731453s) [0] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 active pruub 100.303184509s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[5.12( empty local-lis/les=46/49 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55 pruub=10.754391670s) [1] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 unknown NOTIFY pruub 96.990882874s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[2.16( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.066683769s) [0] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 unknown NOTIFY pruub 100.303184509s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[2.17( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.066503525s) [1] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 active pruub 100.303260803s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[2.17( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.066485405s) [1] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 unknown NOTIFY pruub 100.303260803s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[10.1e( v 41'18 (0'0,41'18] local-lis/les=51/53 n=0 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55 pruub=14.979468346s) [0] r=-1 lpr=55 pi=[51,55)/1 crt=41'18 lcod 0'0 active pruub 101.216255188s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[2.18( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.066575050s) [0] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 active pruub 100.303375244s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[10.10( v 41'18 (0'0,41'18] local-lis/les=51/53 n=0 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55 pruub=14.979405403s) [1] r=-1 lpr=55 pi=[51,55)/1 crt=41'18 lcod 0'0 active pruub 101.216232300s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[10.1e( v 41'18 (0'0,41'18] local-lis/les=51/53 n=0 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55 pruub=14.979445457s) [0] r=-1 lpr=55 pi=[51,55)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 101.216255188s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[2.18( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.066556931s) [0] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 unknown NOTIFY pruub 100.303375244s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[10.10( v 41'18 (0'0,41'18] local-lis/les=51/53 n=0 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55 pruub=14.979379654s) [1] r=-1 lpr=55 pi=[51,55)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 101.216232300s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[5.11( empty local-lis/les=46/49 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55 pruub=10.753991127s) [1] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 active pruub 96.990959167s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[2.19( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.066205978s) [0] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 active pruub 100.303184509s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[2.19( empty local-lis/les=44/45 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55 pruub=14.066187859s) [0] r=-1 lpr=55 pi=[44,55)/1 crt=0'0 unknown NOTIFY pruub 100.303184509s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[5.11( empty local-lis/les=46/49 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55 pruub=10.753958702s) [1] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 unknown NOTIFY pruub 96.990959167s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[5.1e( empty local-lis/les=46/49 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55 pruub=10.753617287s) [0] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 active pruub 96.990699768s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[5.1e( empty local-lis/les=46/49 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55 pruub=10.753589630s) [0] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 unknown NOTIFY pruub 96.990699768s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[2.16( empty local-lis/les=0/0 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[10.11( v 41'18 (0'0,41'18] local-lis/les=51/53 n=0 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55 pruub=14.978800774s) [1] r=-1 lpr=55 pi=[51,55)/1 crt=41'18 lcod 0'0 active pruub 101.216217041s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[10.11( v 41'18 (0'0,41'18] local-lis/les=51/53 n=0 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55 pruub=14.978782654s) [1] r=-1 lpr=55 pi=[51,55)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 101.216217041s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[2.15( empty local-lis/les=0/0 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[5.13( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[10.1e( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55) [0] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[5.1d( empty local-lis/les=46/49 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55 pruub=10.752411842s) [1] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 active pruub 96.990852356s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[5.1d( empty local-lis/les=46/49 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55 pruub=10.752389908s) [1] r=-1 lpr=55 pi=[46,55)/1 crt=0'0 unknown NOTIFY pruub 96.990852356s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[10.12( v 54'19 (0'0,54'19] local-lis/les=51/53 n=0 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55 pruub=14.969855309s) [1] r=-1 lpr=55 pi=[51,55)/1 crt=41'18 lcod 41'18 active pruub 101.208358765s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 55 pg[10.12( v 54'19 (0'0,54'19] local-lis/les=51/53 n=0 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55 pruub=14.969812393s) [1] r=-1 lpr=55 pi=[51,55)/1 crt=41'18 lcod 41'18 unknown NOTIFY pruub 101.208358765s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[5.12( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[2.18( empty local-lis/les=0/0 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[2.17( empty local-lis/les=0/0 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[10.10( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55) [1] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[2.19( empty local-lis/les=0/0 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[5.11( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[10.11( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55) [1] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 55 pg[5.1e( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55) [0] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[5.1d( empty local-lis/les=0/0 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[10.12( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55) [1] r=0 lpr=55 pi=[51,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Jan 29 11:52:02 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Jan 29 11:52:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 55 pg[2.1b( empty local-lis/les=0/0 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:02 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Jan 29 11:52:03 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 29 11:52:03 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 29 11:52:03 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Jan 29 11:52:03 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 29 11:52:03 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 29 11:52:03 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 29 11:52:03 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} : dispatch
Jan 29 11:52:03 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 29 11:52:03 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} : dispatch
Jan 29 11:52:03 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 29 11:52:03 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 29 11:52:03 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 29 11:52:03 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 29 11:52:03 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 29 11:52:03 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 29 11:52:03 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 29 11:52:03 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 29 11:52:03 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 29 11:52:03 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 29 11:52:03 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 29 11:52:03 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 29 11:52:03 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 29 11:52:03 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 29 11:52:03 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} : dispatch
Jan 29 11:52:03 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} : dispatch
Jan 29 11:52:03 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 6.a scrub starts
Jan 29 11:52:03 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 56 pg[8.15( v 34'6 (0'0,34'6] local-lis/les=55/56 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55) [2] r=0 lpr=55 pi=[49,55)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:03 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 56 pg[7.1a( empty local-lis/les=55/56 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55) [2] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:03 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 56 pg[4.18( empty local-lis/les=55/56 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:03 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] r=0 lpr=56 pi=[51,56)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:03 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] r=0 lpr=56 pi=[51,56)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:03 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Jan 29 11:52:03 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[9.11( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] r=0 lpr=56 pi=[51,56)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:03 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[9.11( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] r=0 lpr=56 pi=[51,56)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:03 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[9.17( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] r=0 lpr=56 pi=[51,56)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:03 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[9.13( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] r=0 lpr=56 pi=[51,56)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:03 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[9.13( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] r=0 lpr=56 pi=[51,56)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:03 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[9.d( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] r=0 lpr=56 pi=[51,56)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:03 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[9.17( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] r=0 lpr=56 pi=[51,56)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:03 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[9.d( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] r=0 lpr=56 pi=[51,56)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:03 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[9.b( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] r=0 lpr=56 pi=[51,56)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:03 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[9.b( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] r=0 lpr=56 pi=[51,56)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:03 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[9.9( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] r=0 lpr=56 pi=[51,56)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:03 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[9.f( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] r=0 lpr=56 pi=[51,56)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:03 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[9.f( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] r=0 lpr=56 pi=[51,56)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:03 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[9.9( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] r=0 lpr=56 pi=[51,56)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:03 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[9.1( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] r=0 lpr=56 pi=[51,56)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:03 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[9.1( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] r=0 lpr=56 pi=[51,56)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:03 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[9.3( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] r=0 lpr=56 pi=[51,56)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:03 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[9.3( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] r=0 lpr=56 pi=[51,56)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:03 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[9.7( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] r=0 lpr=56 pi=[51,56)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:03 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[9.7( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] r=0 lpr=56 pi=[51,56)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:03 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[9.5( v 53'484 (0'0,53'484] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] r=0 lpr=56 pi=[51,56)/1 crt=41'483 lcod 41'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:03 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[9.5( v 53'484 (0'0,53'484] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] r=0 lpr=56 pi=[51,56)/1 crt=41'483 lcod 41'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:03 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[9.1b( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] r=0 lpr=56 pi=[51,56)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:03 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[9.1b( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] r=0 lpr=56 pi=[51,56)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:03 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[9.19( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] r=0 lpr=56 pi=[51,56)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:03 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[9.19( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] r=0 lpr=56 pi=[51,56)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:03 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] r=0 lpr=56 pi=[51,56)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:03 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] r=0 lpr=56 pi=[51,56)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:03 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[9.1d( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] r=0 lpr=56 pi=[51,56)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:03 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[9.1d( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] r=0 lpr=56 pi=[51,56)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:03 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[9.1b( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] r=-1 lpr=56 pi=[51,56)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:03 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[9.1b( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] r=-1 lpr=56 pi=[51,56)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:03 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] r=-1 lpr=56 pi=[51,56)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:03 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] r=-1 lpr=56 pi=[51,56)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:03 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] r=-1 lpr=56 pi=[51,56)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:03 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] r=-1 lpr=56 pi=[51,56)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:03 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] r=-1 lpr=56 pi=[51,56)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:03 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] r=-1 lpr=56 pi=[51,56)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:03 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[9.3( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] r=-1 lpr=56 pi=[51,56)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:03 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[9.3( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] r=-1 lpr=56 pi=[51,56)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:03 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[9.1( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] r=-1 lpr=56 pi=[51,56)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:03 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[9.1( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] r=-1 lpr=56 pi=[51,56)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:03 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[9.d( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] r=-1 lpr=56 pi=[51,56)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:03 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[9.d( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] r=-1 lpr=56 pi=[51,56)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:03 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[9.f( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] r=-1 lpr=56 pi=[51,56)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:03 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[9.f( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] r=-1 lpr=56 pi=[51,56)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:03 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[9.15( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] r=-1 lpr=56 pi=[51,56)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:03 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[9.9( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] r=-1 lpr=56 pi=[51,56)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:03 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[9.9( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] r=-1 lpr=56 pi=[51,56)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:03 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[9.15( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] r=-1 lpr=56 pi=[51,56)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:03 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[9.17( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] r=-1 lpr=56 pi=[51,56)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:03 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[9.17( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] r=-1 lpr=56 pi=[51,56)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:03 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[9.7( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] r=-1 lpr=56 pi=[51,56)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:03 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[9.7( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] r=-1 lpr=56 pi=[51,56)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:03 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[9.b( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] r=-1 lpr=56 pi=[51,56)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:03 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[9.b( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] r=-1 lpr=56 pi=[51,56)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:03 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[9.5( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] r=-1 lpr=56 pi=[51,56)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:03 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[9.11( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] r=-1 lpr=56 pi=[51,56)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:03 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[9.5( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] r=-1 lpr=56 pi=[51,56)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:03 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[9.11( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] r=-1 lpr=56 pi=[51,56)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:03 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[9.13( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] r=-1 lpr=56 pi=[51,56)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:03 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[9.13( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] r=-1 lpr=56 pi=[51,56)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:03 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[6.a( v 36'39 (0'0,36'39] local-lis/les=48/49 n=1 ec=48/22 lis/c=48/48 les/c/f=49/49/0 sis=56 pruub=9.230501175s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=36'39 lcod 0'0 active pruub 105.678817749s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:03 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[6.a( v 36'39 (0'0,36'39] local-lis/les=48/49 n=1 ec=48/22 lis/c=48/48 les/c/f=49/49/0 sis=56 pruub=9.230481148s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=36'39 lcod 0'0 unknown NOTIFY pruub 105.678817749s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:03 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[6.6( v 36'39 (0'0,36'39] local-lis/les=48/49 n=2 ec=48/22 lis/c=48/48 les/c/f=49/49/0 sis=56 pruub=9.359882355s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=36'39 lcod 0'0 active pruub 105.808448792s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:03 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[6.2( v 36'39 (0'0,36'39] local-lis/les=48/49 n=2 ec=48/22 lis/c=48/48 les/c/f=49/49/0 sis=56 pruub=9.359993935s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=36'39 lcod 0'0 active pruub 105.808631897s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:03 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[6.e( v 36'39 (0'0,36'39] local-lis/les=48/49 n=1 ec=48/22 lis/c=48/48 les/c/f=49/49/0 sis=56 pruub=9.360019684s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=36'39 lcod 0'0 active pruub 105.808677673s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:03 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[6.6( v 36'39 (0'0,36'39] local-lis/les=48/49 n=2 ec=48/22 lis/c=48/48 les/c/f=49/49/0 sis=56 pruub=9.359807014s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=36'39 lcod 0'0 unknown NOTIFY pruub 105.808448792s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:03 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[6.e( v 36'39 (0'0,36'39] local-lis/les=48/49 n=1 ec=48/22 lis/c=48/48 les/c/f=49/49/0 sis=56 pruub=9.360005379s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=36'39 lcod 0'0 unknown NOTIFY pruub 105.808677673s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:03 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[6.2( v 36'39 (0'0,36'39] local-lis/les=48/49 n=2 ec=48/22 lis/c=48/48 les/c/f=49/49/0 sis=56 pruub=9.359974861s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=36'39 lcod 0'0 unknown NOTIFY pruub 105.808631897s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:03 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[6.e( empty local-lis/les=0/0 n=0 ec=48/22 lis/c=48/48 les/c/f=49/49/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:03 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[6.6( empty local-lis/les=0/0 n=0 ec=48/22 lis/c=48/48 les/c/f=49/49/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:03 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[6.2( empty local-lis/les=0/0 n=0 ec=48/22 lis/c=48/48 les/c/f=49/49/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:03 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Jan 29 11:52:03 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 56 pg[3.1d( empty local-lis/les=55/56 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55) [2] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:03 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 56 pg[4.1b( empty local-lis/les=55/56 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:03 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 56 pg[4.1a( empty local-lis/les=55/56 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:03 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 56 pg[8.11( v 34'6 (0'0,34'6] local-lis/les=55/56 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55) [2] r=0 lpr=55 pi=[49,55)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:03 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 56 pg[3.1e( empty local-lis/les=55/56 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55) [2] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:03 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 56 pg[11.3( empty local-lis/les=55/56 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:03 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 56 pg[3.8( empty local-lis/les=55/56 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55) [2] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:03 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 56 pg[7.c( empty local-lis/les=55/56 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55) [2] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:03 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 56 pg[11.12( empty local-lis/les=55/56 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:03 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 56 pg[3.7( empty local-lis/les=55/56 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55) [2] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:03 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 56 pg[11.15( empty local-lis/les=55/56 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:03 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 56 pg[7.1( empty local-lis/les=55/56 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55) [2] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:03 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 56 pg[3.5( empty local-lis/les=55/56 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55) [2] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:03 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 56 pg[11.d( empty local-lis/les=55/56 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:03 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 56 pg[11.b( empty local-lis/les=55/56 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:03 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 56 pg[11.8( empty local-lis/les=55/56 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:03 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 56 pg[4.e( empty local-lis/les=55/56 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:03 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 56 pg[8.2( v 34'6 (0'0,34'6] local-lis/les=55/56 n=1 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55) [2] r=0 lpr=55 pi=[49,55)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:03 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 56 pg[7.2( empty local-lis/les=55/56 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55) [2] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:03 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 56 pg[4.1( empty local-lis/les=55/56 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:03 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 56 pg[7.5( empty local-lis/les=55/56 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55) [2] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:03 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 56 pg[8.d( v 34'6 (0'0,34'6] local-lis/les=55/56 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55) [2] r=0 lpr=55 pi=[49,55)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:03 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 56 pg[11.9( empty local-lis/les=55/56 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:03 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 56 pg[7.e( empty local-lis/les=55/56 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55) [2] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:03 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 56 pg[11.2( empty local-lis/les=55/56 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:03 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 56 pg[3.e( empty local-lis/les=55/56 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55) [2] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:03 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 56 pg[7.a( empty local-lis/les=55/56 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55) [2] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:03 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 56 pg[8.4( v 34'6 (0'0,34'6] local-lis/les=55/56 n=1 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55) [2] r=0 lpr=55 pi=[49,55)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:03 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 56 pg[7.15( empty local-lis/les=55/56 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55) [2] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:03 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 56 pg[3.11( empty local-lis/les=55/56 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55) [2] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:03 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 56 pg[8.1b( v 34'6 (0'0,34'6] local-lis/les=55/56 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55) [2] r=0 lpr=55 pi=[49,55)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:03 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 56 pg[11.18( empty local-lis/les=55/56 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:03 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 56 pg[11.1b( empty local-lis/les=55/56 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:03 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 56 pg[7.11( empty local-lis/les=55/56 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55) [2] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:03 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 56 pg[11.1a( empty local-lis/les=55/56 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:03 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 56 pg[4.13( empty local-lis/les=55/56 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:03 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 56 pg[4.a( empty local-lis/les=55/56 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:03 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 56 pg[7.8( empty local-lis/les=55/56 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55) [2] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:03 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 56 pg[11.1c( empty local-lis/les=55/56 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:03 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 56 pg[8.1c( v 34'6 (0'0,34'6] local-lis/les=55/56 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55) [2] r=0 lpr=55 pi=[49,55)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:03 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 56 pg[11.1f( empty local-lis/les=55/56 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:03 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 56 pg[4.11( empty local-lis/les=55/56 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:03 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 56 pg[11.1e( empty local-lis/les=55/56 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:03 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 56 pg[8.12( v 34'6 (0'0,34'6] local-lis/les=55/56 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55) [2] r=0 lpr=55 pi=[49,55)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:03 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 56 pg[3.16( empty local-lis/les=55/56 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55) [2] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:03 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 56 pg[11.11( empty local-lis/les=55/56 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55) [2] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:03 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 56 pg[7.1c( empty local-lis/les=55/56 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55) [2] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:03 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 56 pg[3.18( empty local-lis/les=55/56 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55) [2] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:03 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 56 pg[4.1c( empty local-lis/les=55/56 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:03 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[6.a( empty local-lis/les=0/0 n=0 ec=48/22 lis/c=48/48 les/c/f=49/49/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:04 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[5.11( empty local-lis/les=55/56 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Jan 29 11:52:04 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Jan 29 11:52:04 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v120: 305 pgs: 105 peering, 1 active+clean+scrubbing, 199 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:52:04 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[11.17( empty local-lis/les=55/56 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55) [0] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[10.16( v 41'18 (0'0,41'18] local-lis/les=55/56 n=0 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55) [0] r=0 lpr=55 pi=[51,55)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[8.14( v 34'6 (0'0,34'6] local-lis/les=55/56 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55) [0] r=0 lpr=55 pi=[49,55)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[3.1f( empty local-lis/les=55/56 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[2.11( empty local-lis/les=55/56 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[8.1a( v 34'6 (0'0,34'6] local-lis/les=55/56 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55) [0] r=0 lpr=55 pi=[49,55)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[3.12( empty local-lis/les=55/56 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[5.14( empty local-lis/les=55/56 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55) [0] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[11.19( empty local-lis/les=55/56 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55) [0] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[7.1b( empty local-lis/les=55/56 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55) [0] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[2.13( empty local-lis/les=55/56 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[8.18( v 34'6 (0'0,34'6] local-lis/les=55/56 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55) [0] r=0 lpr=55 pi=[49,55)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[8.1f( v 34'6 (0'0,34'6] local-lis/les=55/56 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55) [0] r=0 lpr=55 pi=[49,55)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[5.15( empty local-lis/les=55/56 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55) [0] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[3.15( empty local-lis/les=55/56 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[3.17( empty local-lis/les=55/56 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[8.1d( v 34'6 (0'0,34'6] local-lis/les=55/56 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55) [0] r=0 lpr=55 pi=[49,55)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[2.16( empty local-lis/les=55/56 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[7.13( empty local-lis/les=55/56 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55) [0] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[10.1( v 41'18 (0'0,41'18] local-lis/les=55/56 n=1 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55) [0] r=0 lpr=55 pi=[51,55)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[10.1e( v 41'18 (0'0,41'18] local-lis/les=55/56 n=0 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55) [0] r=0 lpr=55 pi=[51,55)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[2.8( empty local-lis/les=55/56 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[3.9( empty local-lis/les=55/56 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[3.a( empty local-lis/les=55/56 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[11.1( empty local-lis/les=55/56 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55) [0] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[2.b( empty local-lis/les=55/56 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[7.f( empty local-lis/les=55/56 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55) [0] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[8.c( v 34'6 (0'0,34'6] local-lis/les=55/56 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55) [0] r=0 lpr=55 pi=[49,55)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[11.f( empty local-lis/les=55/56 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55) [0] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[7.3( empty local-lis/les=55/56 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55) [0] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[10.e( v 54'19 lc 38'4 (0'0,54'19] local-lis/les=55/56 n=0 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55) [0] r=0 lpr=55 pi=[51,55)/1 crt=54'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[8.e( v 34'6 (0'0,34'6] local-lis/les=55/56 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55) [0] r=0 lpr=55 pi=[49,55)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[5.3( empty local-lis/les=55/56 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55) [0] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[3.6( empty local-lis/les=55/56 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[11.e( empty local-lis/les=55/56 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55) [0] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[10.d( v 54'19 lc 38'5 (0'0,54'19] local-lis/les=55/56 n=0 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55) [0] r=0 lpr=55 pi=[51,55)/1 crt=54'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[10.17( v 41'18 (0'0,41'18] local-lis/les=55/56 n=0 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55) [0] r=0 lpr=55 pi=[51,55)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[2.1f( empty local-lis/les=55/56 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[2.2( empty local-lis/les=55/56 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[5.5( empty local-lis/les=55/56 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55) [0] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[3.3( empty local-lis/les=55/56 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[8.f( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=55/56 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55) [0] r=0 lpr=55 pi=[49,55)/1 crt=34'6 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[2.f( empty local-lis/les=55/56 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[11.6( empty local-lis/les=55/56 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55) [0] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[10.7( v 41'18 (0'0,41'18] local-lis/les=55/56 n=1 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55) [0] r=0 lpr=55 pi=[51,55)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[2.1c( empty local-lis/les=55/56 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[5.4( empty local-lis/les=55/56 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55) [0] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[7.6( empty local-lis/les=55/56 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55) [0] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[8.9( v 34'6 (0'0,34'6] local-lis/les=55/56 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55) [0] r=0 lpr=55 pi=[49,55)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[10.4( v 41'18 (0'0,41'18] local-lis/les=55/56 n=1 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55) [0] r=0 lpr=55 pi=[51,55)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[7.9( empty local-lis/les=55/56 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55) [0] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[2.1d( empty local-lis/les=55/56 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[11.14( empty local-lis/les=55/56 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55) [0] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[7.18( empty local-lis/les=55/56 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55) [0] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[8.6( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=55/56 n=1 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55) [0] r=0 lpr=55 pi=[49,55)/1 crt=34'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[10.8( v 41'18 (0'0,41'18] local-lis/les=55/56 n=1 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55) [0] r=0 lpr=55 pi=[51,55)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[10.15( v 54'19 lc 38'3 (0'0,54'19] local-lis/les=55/56 n=0 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55) [0] r=0 lpr=55 pi=[51,55)/1 crt=54'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[5.7( empty local-lis/les=55/56 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55) [0] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[3.c( empty local-lis/les=55/56 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[5.2( empty local-lis/les=55/56 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55) [0] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[3.1( empty local-lis/les=55/56 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[8.b( v 34'6 (0'0,34'6] local-lis/les=55/56 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55) [0] r=0 lpr=55 pi=[49,55)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[10.9( v 54'19 lc 38'8 (0'0,54'19] local-lis/les=55/56 n=1 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55) [0] r=0 lpr=55 pi=[51,55)/1 crt=54'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[11.4( empty local-lis/les=55/56 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55) [0] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[3.f( empty local-lis/les=55/56 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[7.1f( empty local-lis/les=55/56 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55) [0] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[3.1b( empty local-lis/les=55/56 n=0 ec=44/19 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[8.10( v 34'6 (0'0,34'6] local-lis/les=55/56 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=55) [0] r=0 lpr=55 pi=[49,55)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[11.10( empty local-lis/les=55/56 n=0 ec=53/39 lis/c=53/53 les/c/f=54/54/0 sis=55) [0] r=0 lpr=55 pi=[53,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[5.1e( empty local-lis/les=55/56 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55) [0] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[2.19( empty local-lis/les=55/56 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[2.18( empty local-lis/les=55/56 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55) [0] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 56 pg[7.4( empty local-lis/les=55/56 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=55) [0] r=0 lpr=55 pi=[49,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[5.12( empty local-lis/les=55/56 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[10.1a( v 41'18 (0'0,41'18] local-lis/les=55/56 n=0 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55) [1] r=0 lpr=55 pi=[51,55)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[5.13( empty local-lis/les=55/56 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[5.16( empty local-lis/les=55/56 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[2.15( empty local-lis/les=55/56 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[5.9( empty local-lis/les=55/56 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[10.6( v 41'18 (0'0,41'18] local-lis/les=55/56 n=1 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55) [1] r=0 lpr=55 pi=[51,55)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[5.f( empty local-lis/les=55/56 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[2.d( empty local-lis/les=55/56 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[10.19( v 41'18 (0'0,41'18] local-lis/les=55/56 n=0 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55) [1] r=0 lpr=55 pi=[51,55)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[10.b( v 41'18 (0'0,41'18] local-lis/les=55/56 n=0 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55) [1] r=0 lpr=55 pi=[51,55)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[2.a( empty local-lis/les=55/56 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[2.17( empty local-lis/les=55/56 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[2.3( empty local-lis/les=55/56 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[5.c( empty local-lis/les=55/56 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[2.4( empty local-lis/les=55/56 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[2.5( empty local-lis/les=55/56 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[2.7( empty local-lis/les=55/56 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[10.f( v 41'18 (0'0,41'18] local-lis/les=55/56 n=0 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55) [1] r=0 lpr=55 pi=[51,55)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[2.9( empty local-lis/les=55/56 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[2.6( empty local-lis/les=55/56 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[10.10( v 41'18 (0'0,41'18] local-lis/les=55/56 n=0 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55) [1] r=0 lpr=55 pi=[51,55)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[10.11( v 41'18 (0'0,41'18] local-lis/les=55/56 n=0 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55) [1] r=0 lpr=55 pi=[51,55)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[2.1b( empty local-lis/les=55/56 n=0 ec=44/18 lis/c=44/44 les/c/f=45/45/0 sis=55) [1] r=0 lpr=55 pi=[44,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[10.13( v 41'18 (0'0,41'18] local-lis/les=55/56 n=0 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55) [1] r=0 lpr=55 pi=[51,55)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[5.1( empty local-lis/les=55/56 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[5.1d( empty local-lis/les=55/56 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[5.19( empty local-lis/les=55/56 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[10.12( v 54'19 lc 41'17 (0'0,54'19] local-lis/les=55/56 n=0 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55) [1] r=0 lpr=55 pi=[51,55)/1 crt=54'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[10.2( v 41'18 (0'0,41'18] local-lis/les=55/56 n=1 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55) [1] r=0 lpr=55 pi=[51,55)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[5.1a( empty local-lis/les=55/56 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[10.14( v 54'19 lc 38'7 (0'0,54'19] local-lis/les=55/56 n=0 ec=51/37 lis/c=51/51 les/c/f=53/53/0 sis=55) [1] r=0 lpr=55 pi=[51,55)/1 crt=54'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[4.2( empty local-lis/les=55/56 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[6.3( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=55/56 n=2 ec=48/22 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=36'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[4.4( empty local-lis/les=55/56 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[5.18( empty local-lis/les=55/56 n=0 ec=46/21 lis/c=46/46 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[4.f( empty local-lis/les=55/56 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[6.d( v 36'39 lc 35'13 (0'0,36'39] local-lis/les=55/56 n=1 ec=48/22 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=36'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[6.f( v 36'39 lc 35'1 (0'0,36'39] local-lis/les=55/56 n=1 ec=48/22 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=36'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[6.1( v 36'39 (0'0,36'39] local-lis/les=55/56 n=2 ec=48/22 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[6.5( v 36'39 lc 35'7 (0'0,36'39] local-lis/les=55/56 n=2 ec=48/22 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=36'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[4.7( empty local-lis/les=55/56 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[4.5( empty local-lis/les=55/56 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[4.d( empty local-lis/les=55/56 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[6.9( v 36'39 (0'0,36'39] local-lis/les=55/56 n=1 ec=48/22 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[6.b( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=55/56 n=1 ec=48/22 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=36'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[4.9( empty local-lis/les=55/56 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[4.8( empty local-lis/les=55/56 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[4.14( empty local-lis/les=55/56 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[4.12( empty local-lis/les=55/56 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[4.10( empty local-lis/les=55/56 n=0 ec=46/20 lis/c=46/46 les/c/f=48/48/0 sis=55) [1] r=0 lpr=55 pi=[46,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 56 pg[6.7( v 36'39 lc 35'20 (0'0,36'39] local-lis/les=55/56 n=1 ec=48/22 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=36'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:04 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Jan 29 11:52:04 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Jan 29 11:52:05 np0005601226 ceph-mgr[75527]: [progress INFO root] Writing back 16 completed events
Jan 29 11:52:06 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Jan 29 11:52:06 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v122: 305 pgs: 105 peering, 1 active+clean+scrubbing, 199 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:52:07 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Jan 29 11:52:08 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 29 11:52:08 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e57 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:52:08 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Jan 29 11:52:08 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v123: 305 pgs: 3 active+recovery_wait, 2 active+recovery_wait+degraded, 53 peering, 2 active+clean+scrubbing, 1 active+recovering, 244 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 3/249 objects degraded (1.205%)
Jan 29 11:52:08 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 29 11:52:08 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 29 11:52:08 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 57 pg[6.2( v 36'39 (0'0,36'39] local-lis/les=56/57 n=2 ec=48/22 lis/c=48/48 les/c/f=49/49/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:08 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 57 pg[6.6( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=56/57 n=2 ec=48/22 lis/c=48/48 les/c/f=49/49/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=36'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:08 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 57 pg[6.e( v 36'39 lc 35'17 (0'0,36'39] local-lis/les=56/57 n=1 ec=48/22 lis/c=48/48 les/c/f=49/49/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=36'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:08 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 57 pg[6.a( v 36'39 (0'0,36'39] local-lis/les=56/57 n=1 ec=48/22 lis/c=48/48 les/c/f=49/49/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:09 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 57 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=56/57 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] async=[0] r=0 lpr=56 pi=[51,56)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:09 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 57 pg[9.1d( v 41'483 (0'0,41'483] local-lis/les=56/57 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] async=[0] r=0 lpr=56 pi=[51,56)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:09 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 57 pg[9.1b( v 41'483 (0'0,41'483] local-lis/les=56/57 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] async=[0] r=0 lpr=56 pi=[51,56)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:09 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 57 pg[9.1( v 41'483 (0'0,41'483] local-lis/les=56/57 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] async=[0] r=0 lpr=56 pi=[51,56)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:09 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 57 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=56/57 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] async=[0] r=0 lpr=56 pi=[51,56)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:09 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 57 pg[9.3( v 41'483 (0'0,41'483] local-lis/les=56/57 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] async=[0] r=0 lpr=56 pi=[51,56)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:09 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 57 pg[9.d( v 41'483 (0'0,41'483] local-lis/les=56/57 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] async=[0] r=0 lpr=56 pi=[51,56)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:09 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 57 pg[9.f( v 41'483 (0'0,41'483] local-lis/les=56/57 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] async=[0] r=0 lpr=56 pi=[51,56)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:09 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 57 pg[9.9( v 41'483 (0'0,41'483] local-lis/les=56/57 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] async=[0] r=0 lpr=56 pi=[51,56)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:09 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 57 pg[9.19( v 41'483 (0'0,41'483] local-lis/les=56/57 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] async=[0] r=0 lpr=56 pi=[51,56)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:09 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 57 pg[9.17( v 41'483 (0'0,41'483] local-lis/les=56/57 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] async=[0] r=0 lpr=56 pi=[51,56)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:09 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 57 pg[9.7( v 41'483 (0'0,41'483] local-lis/les=56/57 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] async=[0] r=0 lpr=56 pi=[51,56)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:09 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 57 pg[9.11( v 41'483 (0'0,41'483] local-lis/les=56/57 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] async=[0] r=0 lpr=56 pi=[51,56)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:09 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 57 pg[9.13( v 41'483 (0'0,41'483] local-lis/les=56/57 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] async=[0] r=0 lpr=56 pi=[51,56)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:09 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 57 pg[9.5( v 53'484 (0'0,53'484] local-lis/les=56/57 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] async=[0] r=0 lpr=56 pi=[51,56)/1 crt=53'484 lcod 41'483 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:09 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 57 pg[9.b( v 41'483 (0'0,41'483] local-lis/les=56/57 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=56) [0]/[1] async=[0] r=0 lpr=56 pi=[51,56)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:09 np0005601226 ceph-mon[75233]: log_channel(cluster) log [WRN] : Health check failed: Degraded data redundancy: 3/249 objects degraded (1.205%), 2 pgs degraded (PG_DEGRADED)
Jan 29 11:52:10 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:52:10 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v124: 305 pgs: 16 activating+remapped, 2 activating+degraded, 5 active+recovery_wait, 7 active+recovery_wait+degraded, 2 activating, 2 active+clean+scrubbing, 2 active+recovering, 269 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 14/249 objects degraded (5.622%); 101/249 objects misplaced (40.562%); 0 B/s, 1 keys/s, 0 objects/s recovering
Jan 29 11:52:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 11:52:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 11:52:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 11:52:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 11:52:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 11:52:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 11:52:10 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Jan 29 11:52:11 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Jan 29 11:52:11 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 4.b scrub starts
Jan 29 11:52:11 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Jan 29 11:52:12 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Jan 29 11:52:12 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 5.d scrub starts
Jan 29 11:52:12 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 4.b scrub ok
Jan 29 11:52:12 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Jan 29 11:52:12 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v125: 305 pgs: 16 activating+remapped, 2 activating+degraded, 5 active+recovery_wait, 7 active+recovery_wait+degraded, 2 activating, 2 active+clean+scrubbing, 2 active+recovering, 269 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 14/249 objects degraded (5.622%); 101/249 objects misplaced (40.562%); 0 B/s, 1 keys/s, 0 objects/s recovering
Jan 29 11:52:12 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 5.d scrub ok
Jan 29 11:52:13 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e57 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:52:13 np0005601226 ceph-mon[75233]: Health check failed: Degraded data redundancy: 3/249 objects degraded (1.205%), 2 pgs degraded (PG_DEGRADED)
Jan 29 11:52:13 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:52:14 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Jan 29 11:52:14 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 5.e scrub starts
Jan 29 11:52:14 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 5.e scrub ok
Jan 29 11:52:14 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Jan 29 11:52:14 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v126: 305 pgs: 16 active+recovery_wait+remapped, 2 active+recovery_wait, 4 active+recovery_wait+degraded, 1 active+recovering, 282 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 4/249 objects degraded (1.606%); 101/249 objects misplaced (40.562%); 85 B/s, 0 keys/s, 1 objects/s recovering
Jan 29 11:52:14 np0005601226 systemd[76621]: Starting Mark boot as successful...
Jan 29 11:52:14 np0005601226 systemd[76621]: Finished Mark boot as successful.
Jan 29 11:52:15 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 4.0 scrub starts
Jan 29 11:52:15 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 4.0 scrub ok
Jan 29 11:52:15 np0005601226 ceph-mon[75233]: log_channel(cluster) log [WRN] : Health check update: Degraded data redundancy: 4/249 objects degraded (1.606%), 4 pgs degraded (PG_DEGRADED)
Jan 29 11:52:16 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v127: 305 pgs: 16 active+recovery_wait+remapped, 2 active+recovery_wait, 4 active+recovery_wait+degraded, 1 active+recovering, 282 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 4/249 objects degraded (1.606%); 101/249 objects misplaced (40.562%); 73 B/s, 0 keys/s, 1 objects/s recovering
Jan 29 11:52:16 np0005601226 ceph-mon[75233]: Health check update: Degraded data redundancy: 4/249 objects degraded (1.606%), 4 pgs degraded (PG_DEGRADED)
Jan 29 11:52:18 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e57 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:52:18 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Jan 29 11:52:18 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v128: 305 pgs: 16 active+recovery_wait+remapped, 2 active+recovery_wait, 4 active+recovery_wait+degraded, 1 active+recovering, 282 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 4/249 objects degraded (1.606%); 101/249 objects misplaced (40.562%); 71 B/s, 0 keys/s, 1 objects/s recovering
Jan 29 11:52:19 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Jan 29 11:52:19 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Jan 29 11:52:20 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 58 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=56/57 n=6 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=58 pruub=13.240468025s) [0] async=[0] r=-1 lpr=58 pi=[51,58)/1 crt=41'483 lcod 0'0 active pruub 121.554420471s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:20 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 58 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=56/57 n=6 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=58 pruub=13.240275383s) [0] r=-1 lpr=58 pi=[51,58)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 121.554420471s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:20 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 58 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:20 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 58 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:20 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v130: 305 pgs: 1 active+recovering+remapped, 1 active+remapped, 14 active+recovery_wait+remapped, 289 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 95/249 objects misplaced (38.153%); 127 B/s, 2 objects/s recovering
Jan 29 11:52:20 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Jan 29 11:52:20 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Jan 29 11:52:20 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Jan 29 11:52:20 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 59 pg[9.1( v 41'483 (0'0,41'483] local-lis/les=56/57 n=7 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=59 pruub=12.558321953s) [0] async=[0] r=-1 lpr=59 pi=[51,59)/1 crt=41'483 lcod 0'0 active pruub 121.554611206s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:20 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 59 pg[9.1( v 41'483 (0'0,41'483] local-lis/les=56/57 n=7 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=59 pruub=12.558142662s) [0] r=-1 lpr=59 pi=[51,59)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 121.554611206s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:20 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 59 pg[9.1b( v 41'483 (0'0,41'483] local-lis/les=56/57 n=6 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=59 pruub=12.557742119s) [0] async=[0] r=-1 lpr=59 pi=[51,59)/1 crt=41'483 lcod 0'0 active pruub 121.554527283s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:20 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 59 pg[9.1b( v 41'483 (0'0,41'483] local-lis/les=56/57 n=6 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=59 pruub=12.557682037s) [0] r=-1 lpr=59 pi=[51,59)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 121.554527283s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:20 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 59 pg[9.1d( v 41'483 (0'0,41'483] local-lis/les=56/57 n=6 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=59 pruub=12.557107925s) [0] async=[0] r=-1 lpr=59 pi=[51,59)/1 crt=41'483 lcod 0'0 active pruub 121.554519653s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:20 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 59 pg[9.1d( v 41'483 (0'0,41'483] local-lis/les=56/57 n=6 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=59 pruub=12.557073593s) [0] r=-1 lpr=59 pi=[51,59)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 121.554519653s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:20 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 59 pg[9.1( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:20 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 59 pg[9.1b( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:20 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 59 pg[9.1b( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:20 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 59 pg[9.1d( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:20 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 59 pg[9.1( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:20 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 59 pg[9.1d( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:20 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 59 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=58/59 n=6 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:21 np0005601226 ceph-mon[75233]: log_channel(cluster) log [INF] : Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 4/249 objects degraded (1.606%), 4 pgs degraded)
Jan 29 11:52:21 np0005601226 ceph-mon[75233]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 29 11:52:21 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Jan 29 11:52:21 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Jan 29 11:52:22 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Jan 29 11:52:22 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 60 pg[9.d( v 41'483 (0'0,41'483] local-lis/les=56/57 n=7 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=60 pruub=11.066575050s) [0] async=[0] r=-1 lpr=60 pi=[51,60)/1 crt=41'483 lcod 0'0 active pruub 121.554756165s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:22 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 60 pg[9.d( v 41'483 (0'0,41'483] local-lis/les=56/57 n=7 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=60 pruub=11.066499710s) [0] r=-1 lpr=60 pi=[51,60)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 121.554756165s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:22 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 60 pg[9.9( v 41'483 (0'0,41'483] local-lis/les=56/57 n=7 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=60 pruub=11.066304207s) [0] async=[0] r=-1 lpr=60 pi=[51,60)/1 crt=41'483 lcod 0'0 active pruub 121.554885864s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:22 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 60 pg[9.9( v 41'483 (0'0,41'483] local-lis/les=56/57 n=7 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=60 pruub=11.066223145s) [0] r=-1 lpr=60 pi=[51,60)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 121.554885864s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:22 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 60 pg[9.3( v 41'483 (0'0,41'483] local-lis/les=56/57 n=7 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=60 pruub=11.065324783s) [0] async=[0] r=-1 lpr=60 pi=[51,60)/1 crt=41'483 lcod 0'0 active pruub 121.554687500s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:22 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 60 pg[9.3( v 41'483 (0'0,41'483] local-lis/les=56/57 n=7 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=60 pruub=11.065266609s) [0] r=-1 lpr=60 pi=[51,60)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 121.554687500s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:22 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 60 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=56/57 n=6 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=60 pruub=11.064219475s) [0] async=[0] r=-1 lpr=60 pi=[51,60)/1 crt=41'483 lcod 0'0 active pruub 121.554695129s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:22 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 60 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=56/57 n=6 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=60 pruub=11.064158440s) [0] r=-1 lpr=60 pi=[51,60)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 121.554695129s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:22 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 60 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=60) [0] r=0 lpr=60 pi=[51,60)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:22 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 60 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=60) [0] r=0 lpr=60 pi=[51,60)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:22 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 60 pg[9.3( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=60) [0] r=0 lpr=60 pi=[51,60)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:22 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 60 pg[9.3( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=60) [0] r=0 lpr=60 pi=[51,60)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:22 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 60 pg[9.d( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=60) [0] r=0 lpr=60 pi=[51,60)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:22 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 60 pg[9.d( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=60) [0] r=0 lpr=60 pi=[51,60)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:22 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 60 pg[9.9( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=60) [0] r=0 lpr=60 pi=[51,60)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:22 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 60 pg[9.9( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=60) [0] r=0 lpr=60 pi=[51,60)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:22 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v133: 305 pgs: 1 active+recovering+remapped, 1 active+remapped, 14 active+recovery_wait+remapped, 289 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 95/249 objects misplaced (38.153%); 69 B/s, 1 objects/s recovering
Jan 29 11:52:22 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 60 pg[9.1d( v 41'483 (0'0,41'483] local-lis/les=59/60 n=6 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:22 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 60 pg[9.1( v 41'483 (0'0,41'483] local-lis/les=59/60 n=7 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:22 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 60 pg[9.1b( v 41'483 (0'0,41'483] local-lis/les=59/60 n=6 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:22 np0005601226 ceph-mon[75233]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 4/249 objects degraded (1.606%), 4 pgs degraded)
Jan 29 11:52:22 np0005601226 ceph-mon[75233]: Cluster is now healthy
Jan 29 11:52:22 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Jan 29 11:52:23 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Jan 29 11:52:23 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 10.c scrub starts
Jan 29 11:52:23 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 10.c scrub ok
Jan 29 11:52:23 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Jan 29 11:52:23 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 61 pg[9.f( v 41'483 (0'0,41'483] local-lis/les=56/57 n=7 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=61 pruub=9.901328087s) [0] async=[0] r=-1 lpr=61 pi=[51,61)/1 crt=41'483 lcod 0'0 active pruub 121.554878235s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:23 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 61 pg[9.f( v 41'483 (0'0,41'483] local-lis/les=56/57 n=7 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=61 pruub=9.901224136s) [0] r=-1 lpr=61 pi=[51,61)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 121.554878235s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:23 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 61 pg[9.f( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=61) [0] r=0 lpr=61 pi=[51,61)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:23 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 61 pg[9.f( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=61) [0] r=0 lpr=61 pi=[51,61)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:23 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 61 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=60/61 n=6 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=60) [0] r=0 lpr=60 pi=[51,60)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:23 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 61 pg[9.9( v 41'483 (0'0,41'483] local-lis/les=60/61 n=7 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=60) [0] r=0 lpr=60 pi=[51,60)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:23 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 61 pg[9.d( v 41'483 (0'0,41'483] local-lis/les=60/61 n=7 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=60) [0] r=0 lpr=60 pi=[51,60)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:23 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 61 pg[9.3( v 41'483 (0'0,41'483] local-lis/les=60/61 n=7 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=60) [0] r=0 lpr=60 pi=[51,60)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:24 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Jan 29 11:52:24 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 10.a scrub starts
Jan 29 11:52:24 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 10.a scrub ok
Jan 29 11:52:24 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v135: 305 pgs: 1 active+recovering+remapped, 2 active+remapped, 4 peering, 5 active+recovery_wait+remapped, 293 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 37/249 objects misplaced (14.859%); 451 B/s, 12 objects/s recovering
Jan 29 11:52:24 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Jan 29 11:52:24 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Jan 29 11:52:24 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 62 pg[9.11( v 41'483 (0'0,41'483] local-lis/les=56/57 n=7 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=62 pruub=8.822751045s) [0] async=[0] r=-1 lpr=62 pi=[51,62)/1 crt=41'483 lcod 0'0 active pruub 121.555290222s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:24 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 62 pg[9.11( v 41'483 (0'0,41'483] local-lis/les=56/57 n=7 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=62 pruub=8.822627068s) [0] r=-1 lpr=62 pi=[51,62)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 121.555290222s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:24 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 62 pg[9.11( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=62) [0] r=0 lpr=62 pi=[51,62)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:24 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 62 pg[9.11( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=62) [0] r=0 lpr=62 pi=[51,62)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:24 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 62 pg[9.19( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=62) [0] r=0 lpr=62 pi=[51,62)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:24 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 62 pg[9.19( v 41'483 (0'0,41'483] local-lis/les=56/57 n=6 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=62 pruub=8.819762230s) [0] async=[0] r=-1 lpr=62 pi=[51,62)/1 crt=41'483 lcod 0'0 active pruub 121.555061340s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:24 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 62 pg[9.19( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=62) [0] r=0 lpr=62 pi=[51,62)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:24 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 62 pg[9.19( v 41'483 (0'0,41'483] local-lis/les=56/57 n=6 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=62 pruub=8.819684982s) [0] r=-1 lpr=62 pi=[51,62)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 121.555061340s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:24 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 62 pg[9.f( v 41'483 (0'0,41'483] local-lis/les=61/62 n=7 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=61) [0] r=0 lpr=61 pi=[51,61)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:25 np0005601226 python3[98899]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v20 --fsid cc5c72e3-31e0-58b9-8731-456117d38f4a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:52:25 np0005601226 podman[98900]: 2026-01-29 16:52:25.127860999 +0000 UTC m=+0.032867108 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:52:25 np0005601226 podman[98900]: 2026-01-29 16:52:25.352482395 +0000 UTC m=+0.257488524 container create 940c76a754cd714e33f8bfc5cd6a6d7920fce840bd807dd22057682c72022871 (image=quay.io/ceph/ceph:v20, name=lucid_germain, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 11:52:25 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Jan 29 11:52:25 np0005601226 systemd[1]: Started libpod-conmon-940c76a754cd714e33f8bfc5cd6a6d7920fce840bd807dd22057682c72022871.scope.
Jan 29 11:52:25 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:52:25 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a96382e6cad633993916cbb01368e43d01d4d96569888e482bad2249fa65f8b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:52:25 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a96382e6cad633993916cbb01368e43d01d4d96569888e482bad2249fa65f8b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:52:25 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Jan 29 11:52:25 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Jan 29 11:52:25 np0005601226 podman[98900]: 2026-01-29 16:52:25.906690342 +0000 UTC m=+0.811696461 container init 940c76a754cd714e33f8bfc5cd6a6d7920fce840bd807dd22057682c72022871 (image=quay.io/ceph/ceph:v20, name=lucid_germain, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 11:52:25 np0005601226 podman[98900]: 2026-01-29 16:52:25.915906705 +0000 UTC m=+0.820912834 container start 940c76a754cd714e33f8bfc5cd6a6d7920fce840bd807dd22057682c72022871 (image=quay.io/ceph/ceph:v20, name=lucid_germain, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 11:52:25 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 63 pg[9.17( v 41'483 (0'0,41'483] local-lis/les=56/57 n=6 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=63 pruub=15.350671768s) [0] async=[0] r=-1 lpr=63 pi=[51,63)/1 crt=41'483 lcod 0'0 active pruub 129.555236816s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:25 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 63 pg[9.17( v 41'483 (0'0,41'483] local-lis/les=56/57 n=6 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=63 pruub=15.350444794s) [0] r=-1 lpr=63 pi=[51,63)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 129.555236816s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:25 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 63 pg[9.13( v 41'483 (0'0,41'483] local-lis/les=56/57 n=6 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=63 pruub=15.349716187s) [0] async=[0] r=-1 lpr=63 pi=[51,63)/1 crt=41'483 lcod 0'0 active pruub 129.555435181s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:25 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 63 pg[9.13( v 41'483 (0'0,41'483] local-lis/les=56/57 n=6 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=63 pruub=15.349605560s) [0] r=-1 lpr=63 pi=[51,63)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 129.555435181s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:25 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 63 pg[9.17( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=63) [0] r=0 lpr=63 pi=[51,63)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:25 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 63 pg[9.17( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=63) [0] r=0 lpr=63 pi=[51,63)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:25 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 63 pg[9.13( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=63) [0] r=0 lpr=63 pi=[51,63)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:25 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 63 pg[9.13( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=63) [0] r=0 lpr=63 pi=[51,63)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:26 np0005601226 podman[98900]: 2026-01-29 16:52:26.001598308 +0000 UTC m=+0.906604477 container attach 940c76a754cd714e33f8bfc5cd6a6d7920fce840bd807dd22057682c72022871 (image=quay.io/ceph/ceph:v20, name=lucid_germain, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 11:52:26 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 63 pg[9.19( v 41'483 (0'0,41'483] local-lis/les=62/63 n=6 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=62) [0] r=0 lpr=62 pi=[51,62)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:26 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 63 pg[9.11( v 41'483 (0'0,41'483] local-lis/les=62/63 n=7 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=62) [0] r=0 lpr=62 pi=[51,62)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:26 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v138: 305 pgs: 1 active+recovering+remapped, 2 active+remapped, 4 peering, 5 active+recovery_wait+remapped, 293 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 37/249 objects misplaced (14.859%); 473 B/s, 13 objects/s recovering
Jan 29 11:52:26 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Jan 29 11:52:27 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Jan 29 11:52:27 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Jan 29 11:52:27 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 64 pg[9.7( v 41'483 (0'0,41'483] local-lis/les=56/57 n=7 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=64 pruub=14.113471031s) [0] async=[0] r=-1 lpr=64 pi=[51,64)/1 crt=41'483 lcod 0'0 active pruub 129.555252075s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:27 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 64 pg[9.7( v 41'483 (0'0,41'483] local-lis/les=56/57 n=7 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=64 pruub=14.112942696s) [0] r=-1 lpr=64 pi=[51,64)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 129.555252075s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:27 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 64 pg[9.7( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=64) [0] r=0 lpr=64 pi=[51,64)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:27 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 64 pg[9.7( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=64) [0] r=0 lpr=64 pi=[51,64)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:27 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 64 pg[9.17( v 41'483 (0'0,41'483] local-lis/les=63/64 n=6 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=63) [0] r=0 lpr=63 pi=[51,63)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:27 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 64 pg[9.13( v 41'483 (0'0,41'483] local-lis/les=63/64 n=6 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=63) [0] r=0 lpr=63 pi=[51,63)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:28 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Jan 29 11:52:28 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Jan 29 11:52:28 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Jan 29 11:52:28 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Jan 29 11:52:28 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Jan 29 11:52:28 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 65 pg[9.5( v 57'485 (0'0,57'485] local-lis/les=0/0 n=7 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=65) [0] r=0 lpr=65 pi=[51,65)/1 pct=0'0 crt=53'484 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:28 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 65 pg[9.5( v 57'485 (0'0,57'485] local-lis/les=0/0 n=7 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=65) [0] r=0 lpr=65 pi=[51,65)/1 crt=53'484 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:28 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 65 pg[9.b( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=65) [0] r=0 lpr=65 pi=[51,65)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:28 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 65 pg[9.b( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=65) [0] r=0 lpr=65 pi=[51,65)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:28 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v141: 305 pgs: 1 peering, 2 active+recovery_wait+remapped, 302 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s; 13/249 objects misplaced (5.221%); 367 B/s, 5 objects/s recovering
Jan 29 11:52:28 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 65 pg[9.b( v 41'483 (0'0,41'483] local-lis/les=56/57 n=7 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=65 pruub=12.961039543s) [0] async=[0] r=-1 lpr=65 pi=[51,65)/1 crt=41'483 lcod 0'0 active pruub 129.555389404s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:28 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 65 pg[9.5( v 57'485 (0'0,57'485] local-lis/les=56/57 n=7 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=65 pruub=12.960963249s) [0] async=[0] r=-1 lpr=65 pi=[51,65)/1 crt=53'484 lcod 53'484 active pruub 129.555389404s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:28 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 65 pg[9.b( v 41'483 (0'0,41'483] local-lis/les=56/57 n=7 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=65 pruub=12.960954666s) [0] r=-1 lpr=65 pi=[51,65)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 129.555389404s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:28 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 65 pg[9.5( v 57'485 (0'0,57'485] local-lis/les=56/57 n=7 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=65 pruub=12.960895538s) [0] r=-1 lpr=65 pi=[51,65)/1 crt=53'484 lcod 53'484 unknown NOTIFY pruub 129.555389404s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:28 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 65 pg[9.7( v 41'483 (0'0,41'483] local-lis/les=64/65 n=7 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=64) [0] r=0 lpr=64 pi=[51,64)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:28 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Jan 29 11:52:28 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Jan 29 11:52:29 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Jan 29 11:52:29 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Jan 29 11:52:29 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Jan 29 11:52:29 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Jan 29 11:52:29 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Jan 29 11:52:29 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Jan 29 11:52:29 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Jan 29 11:52:29 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 66 pg[9.5( v 57'485 (0'0,57'485] local-lis/les=65/66 n=7 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=65) [0] r=0 lpr=65 pi=[51,65)/1 crt=57'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:29 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 66 pg[9.b( v 41'483 (0'0,41'483] local-lis/les=65/66 n=7 ec=51/35 lis/c=56/51 les/c/f=57/52/0 sis=65) [0] r=0 lpr=65 pi=[51,65)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:30 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 10.0 scrub starts
Jan 29 11:52:30 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 10.0 scrub ok
Jan 29 11:52:30 np0005601226 lucid_germain[98915]: could not fetch user info: no user info saved
Jan 29 11:52:30 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v143: 305 pgs: 1 active+clean+scrubbing, 1 peering, 2 active+recovery_wait+remapped, 301 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 5.9 KiB/s rd, 8 op/s; 13/249 objects misplaced (5.221%); 320 B/s, 5 objects/s recovering
Jan 29 11:52:30 np0005601226 systemd[1]: libpod-940c76a754cd714e33f8bfc5cd6a6d7920fce840bd807dd22057682c72022871.scope: Deactivated successfully.
Jan 29 11:52:30 np0005601226 podman[98900]: 2026-01-29 16:52:30.580268694 +0000 UTC m=+5.485274793 container died 940c76a754cd714e33f8bfc5cd6a6d7920fce840bd807dd22057682c72022871 (image=quay.io/ceph/ceph:v20, name=lucid_germain, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 29 11:52:30 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Jan 29 11:52:30 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Jan 29 11:52:31 np0005601226 systemd[1]: var-lib-containers-storage-overlay-8a96382e6cad633993916cbb01368e43d01d4d96569888e482bad2249fa65f8b-merged.mount: Deactivated successfully.
Jan 29 11:52:31 np0005601226 podman[98900]: 2026-01-29 16:52:31.277574302 +0000 UTC m=+6.182580381 container remove 940c76a754cd714e33f8bfc5cd6a6d7920fce840bd807dd22057682c72022871 (image=quay.io/ceph/ceph:v20, name=lucid_germain, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 11:52:31 np0005601226 systemd[1]: libpod-conmon-940c76a754cd714e33f8bfc5cd6a6d7920fce840bd807dd22057682c72022871.scope: Deactivated successfully.
Jan 29 11:52:31 np0005601226 python3[99037]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v20 --fsid cc5c72e3-31e0-58b9-8731-456117d38f4a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:52:31 np0005601226 podman[99038]: 2026-01-29 16:52:31.768499284 +0000 UTC m=+0.102733052 container create 6f2224e20382013d5c1d187f0735537c432051428b4331c77862830f4d6a71a8 (image=quay.io/ceph/ceph:v20, name=pensive_jang, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 29 11:52:31 np0005601226 podman[99038]: 2026-01-29 16:52:31.69999476 +0000 UTC m=+0.034228548 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 29 11:52:31 np0005601226 systemd[1]: Started libpod-conmon-6f2224e20382013d5c1d187f0735537c432051428b4331c77862830f4d6a71a8.scope.
Jan 29 11:52:31 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:52:31 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/711457fdd22d4472c5104de7de057cd031e334238c9b622ed6f4d6c8bba3fc76/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:52:31 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/711457fdd22d4472c5104de7de057cd031e334238c9b622ed6f4d6c8bba3fc76/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:52:32 np0005601226 podman[99038]: 2026-01-29 16:52:32.070778125 +0000 UTC m=+0.405011943 container init 6f2224e20382013d5c1d187f0735537c432051428b4331c77862830f4d6a71a8 (image=quay.io/ceph/ceph:v20, name=pensive_jang, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 29 11:52:32 np0005601226 podman[99038]: 2026-01-29 16:52:32.075120189 +0000 UTC m=+0.409353927 container start 6f2224e20382013d5c1d187f0735537c432051428b4331c77862830f4d6a71a8 (image=quay.io/ceph/ceph:v20, name=pensive_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 29 11:52:32 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 5.0 scrub starts
Jan 29 11:52:32 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 5.0 scrub ok
Jan 29 11:52:32 np0005601226 podman[99038]: 2026-01-29 16:52:32.123785907 +0000 UTC m=+0.458019735 container attach 6f2224e20382013d5c1d187f0735537c432051428b4331c77862830f4d6a71a8 (image=quay.io/ceph/ceph:v20, name=pensive_jang, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 29 11:52:32 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 4.c scrub starts
Jan 29 11:52:32 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 4.c scrub ok
Jan 29 11:52:32 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v144: 305 pgs: 1 active+clean+scrubbing, 1 peering, 2 active+recovery_wait+remapped, 301 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 4.5 KiB/s rd, 6 op/s; 13/249 objects misplaced (5.221%); 244 B/s, 3 objects/s recovering
Jan 29 11:52:32 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 11.13 scrub starts
Jan 29 11:52:32 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 11.13 scrub ok
Jan 29 11:52:32 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 11:52:32 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 11:52:32 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 11:52:32 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 11:52:33 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 11:52:33 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:52:33 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 11:52:33 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 11:52:33 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 11:52:33 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 11:52:33 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 11:52:33 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 11:52:33 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e66 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:52:33 np0005601226 podman[99280]: 2026-01-29 16:52:33.422492936 +0000 UTC m=+0.021876334 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:52:33 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 11:52:33 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:52:33 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 11:52:33 np0005601226 podman[99280]: 2026-01-29 16:52:33.747814554 +0000 UTC m=+0.347197902 container create 43af249c6d722059c57356aa0824ef91e33f7a44e890552e7de004ca3e211589 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_stonebraker, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 11:52:33 np0005601226 systemd[1]: Started libpod-conmon-43af249c6d722059c57356aa0824ef91e33f7a44e890552e7de004ca3e211589.scope.
Jan 29 11:52:33 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:52:33 np0005601226 pensive_jang[99054]: {
Jan 29 11:52:33 np0005601226 pensive_jang[99054]:    "user_id": "openstack",
Jan 29 11:52:33 np0005601226 pensive_jang[99054]:    "display_name": "openstack",
Jan 29 11:52:33 np0005601226 pensive_jang[99054]:    "email": "",
Jan 29 11:52:33 np0005601226 pensive_jang[99054]:    "suspended": 0,
Jan 29 11:52:33 np0005601226 pensive_jang[99054]:    "max_buckets": 1000,
Jan 29 11:52:33 np0005601226 pensive_jang[99054]:    "subusers": [],
Jan 29 11:52:33 np0005601226 pensive_jang[99054]:    "keys": [
Jan 29 11:52:33 np0005601226 pensive_jang[99054]:        {
Jan 29 11:52:33 np0005601226 pensive_jang[99054]:            "user": "openstack",
Jan 29 11:52:33 np0005601226 pensive_jang[99054]:            "access_key": "ZV7Q8T3K5BIPU0L4ZA61",
Jan 29 11:52:33 np0005601226 pensive_jang[99054]:            "secret_key": "HDsyQ1tJZ8haJufYduVb88Sua2dD8NAAhpQM8vBQ",
Jan 29 11:52:33 np0005601226 pensive_jang[99054]:            "active": true,
Jan 29 11:52:33 np0005601226 pensive_jang[99054]:            "create_date": "2026-01-29T16:52:33.762048Z"
Jan 29 11:52:33 np0005601226 pensive_jang[99054]:        }
Jan 29 11:52:33 np0005601226 pensive_jang[99054]:    ],
Jan 29 11:52:33 np0005601226 pensive_jang[99054]:    "swift_keys": [],
Jan 29 11:52:33 np0005601226 pensive_jang[99054]:    "caps": [],
Jan 29 11:52:33 np0005601226 pensive_jang[99054]:    "op_mask": "read, write, delete",
Jan 29 11:52:33 np0005601226 pensive_jang[99054]:    "default_placement": "",
Jan 29 11:52:33 np0005601226 pensive_jang[99054]:    "default_storage_class": "",
Jan 29 11:52:33 np0005601226 pensive_jang[99054]:    "placement_tags": [],
Jan 29 11:52:33 np0005601226 pensive_jang[99054]:    "bucket_quota": {
Jan 29 11:52:33 np0005601226 pensive_jang[99054]:        "enabled": false,
Jan 29 11:52:33 np0005601226 pensive_jang[99054]:        "check_on_raw": false,
Jan 29 11:52:33 np0005601226 pensive_jang[99054]:        "max_size": -1,
Jan 29 11:52:33 np0005601226 pensive_jang[99054]:        "max_size_kb": 0,
Jan 29 11:52:33 np0005601226 pensive_jang[99054]:        "max_objects": -1
Jan 29 11:52:33 np0005601226 pensive_jang[99054]:    },
Jan 29 11:52:33 np0005601226 pensive_jang[99054]:    "user_quota": {
Jan 29 11:52:33 np0005601226 pensive_jang[99054]:        "enabled": false,
Jan 29 11:52:33 np0005601226 pensive_jang[99054]:        "check_on_raw": false,
Jan 29 11:52:33 np0005601226 pensive_jang[99054]:        "max_size": -1,
Jan 29 11:52:33 np0005601226 pensive_jang[99054]:        "max_size_kb": 0,
Jan 29 11:52:33 np0005601226 pensive_jang[99054]:        "max_objects": -1
Jan 29 11:52:33 np0005601226 pensive_jang[99054]:    },
Jan 29 11:52:33 np0005601226 pensive_jang[99054]:    "temp_url_keys": [],
Jan 29 11:52:33 np0005601226 pensive_jang[99054]:    "type": "rgw",
Jan 29 11:52:33 np0005601226 pensive_jang[99054]:    "mfa_ids": [],
Jan 29 11:52:33 np0005601226 pensive_jang[99054]:    "account_id": "",
Jan 29 11:52:33 np0005601226 pensive_jang[99054]:    "path": "/",
Jan 29 11:52:33 np0005601226 pensive_jang[99054]:    "create_date": "2026-01-29T16:52:33.761595Z",
Jan 29 11:52:33 np0005601226 pensive_jang[99054]:    "tags": [],
Jan 29 11:52:33 np0005601226 pensive_jang[99054]:    "group_ids": []
Jan 29 11:52:33 np0005601226 pensive_jang[99054]: }
Jan 29 11:52:33 np0005601226 pensive_jang[99054]: 
Jan 29 11:52:33 np0005601226 podman[99280]: 2026-01-29 16:52:33.971515784 +0000 UTC m=+0.570899192 container init 43af249c6d722059c57356aa0824ef91e33f7a44e890552e7de004ca3e211589 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 29 11:52:33 np0005601226 podman[99280]: 2026-01-29 16:52:33.98085267 +0000 UTC m=+0.580236008 container start 43af249c6d722059c57356aa0824ef91e33f7a44e890552e7de004ca3e211589 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_stonebraker, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 29 11:52:33 np0005601226 sharp_stonebraker[99299]: 167 167
Jan 29 11:52:33 np0005601226 systemd[1]: libpod-43af249c6d722059c57356aa0824ef91e33f7a44e890552e7de004ca3e211589.scope: Deactivated successfully.
Jan 29 11:52:34 np0005601226 podman[99280]: 2026-01-29 16:52:34.028150109 +0000 UTC m=+0.627533517 container attach 43af249c6d722059c57356aa0824ef91e33f7a44e890552e7de004ca3e211589 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_stonebraker, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:52:34 np0005601226 podman[99280]: 2026-01-29 16:52:34.029187518 +0000 UTC m=+0.628570896 container died 43af249c6d722059c57356aa0824ef91e33f7a44e890552e7de004ca3e211589 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_stonebraker, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Jan 29 11:52:34 np0005601226 systemd[1]: libpod-6f2224e20382013d5c1d187f0735537c432051428b4331c77862830f4d6a71a8.scope: Deactivated successfully.
Jan 29 11:52:34 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v145: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 36 op/s; 259 B/s, 5 objects/s recovering
Jan 29 11:52:34 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} v 0)
Jan 29 11:52:34 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} : dispatch
Jan 29 11:52:34 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0)
Jan 29 11:52:34 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} : dispatch
Jan 29 11:52:34 np0005601226 systemd[1]: var-lib-containers-storage-overlay-cf5fb206a9045b909482bffda860f7a8ffaa79172f2a9ba677a074f99a5ff1a1-merged.mount: Deactivated successfully.
Jan 29 11:52:34 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Jan 29 11:52:34 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 29 11:52:34 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 29 11:52:34 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Jan 29 11:52:34 np0005601226 podman[99280]: 2026-01-29 16:52:34.890861484 +0000 UTC m=+1.490244862 container remove 43af249c6d722059c57356aa0824ef91e33f7a44e890552e7de004ca3e211589 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_stonebraker, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:52:34 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Jan 29 11:52:34 np0005601226 podman[99038]: 2026-01-29 16:52:34.938476392 +0000 UTC m=+3.272710130 container died 6f2224e20382013d5c1d187f0735537c432051428b4331c77862830f4d6a71a8 (image=quay.io/ceph/ceph:v20, name=pensive_jang, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:52:35 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} : dispatch
Jan 29 11:52:35 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} : dispatch
Jan 29 11:52:35 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Jan 29 11:52:35 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Jan 29 11:52:35 np0005601226 systemd[1]: var-lib-containers-storage-overlay-711457fdd22d4472c5104de7de057cd031e334238c9b622ed6f4d6c8bba3fc76-merged.mount: Deactivated successfully.
Jan 29 11:52:35 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Jan 29 11:52:35 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Jan 29 11:52:36 np0005601226 podman[99317]: 2026-01-29 16:52:36.184277796 +0000 UTC m=+2.063329091 container remove 6f2224e20382013d5c1d187f0735537c432051428b4331c77862830f4d6a71a8 (image=quay.io/ceph/ceph:v20, name=pensive_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 29 11:52:36 np0005601226 systemd[1]: libpod-conmon-6f2224e20382013d5c1d187f0735537c432051428b4331c77862830f4d6a71a8.scope: Deactivated successfully.
Jan 29 11:52:36 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 29 11:52:36 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 29 11:52:36 np0005601226 systemd[1]: libpod-conmon-43af249c6d722059c57356aa0824ef91e33f7a44e890552e7de004ca3e211589.scope: Deactivated successfully.
Jan 29 11:52:36 np0005601226 podman[99335]: 2026-01-29 16:52:36.307974907 +0000 UTC m=+1.300651933 container create 079182c8c781eebfb37a2e04b86d0962553402e38b550689ec7151ed53ee439f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Jan 29 11:52:36 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v147: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 31 op/s; 52 B/s, 1 objects/s recovering
Jan 29 11:52:36 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} v 0)
Jan 29 11:52:36 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} : dispatch
Jan 29 11:52:36 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0)
Jan 29 11:52:36 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} : dispatch
Jan 29 11:52:36 np0005601226 podman[99335]: 2026-01-29 16:52:36.271303098 +0000 UTC m=+1.263980144 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:52:36 np0005601226 systemd[1]: Started libpod-conmon-079182c8c781eebfb37a2e04b86d0962553402e38b550689ec7151ed53ee439f.scope.
Jan 29 11:52:36 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:52:36 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2eeaf639ddbc03adbcfe6b9a0524e38884c15e41473ebb81e8669218dd15884f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 11:52:36 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2eeaf639ddbc03adbcfe6b9a0524e38884c15e41473ebb81e8669218dd15884f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:52:36 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2eeaf639ddbc03adbcfe6b9a0524e38884c15e41473ebb81e8669218dd15884f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:52:36 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2eeaf639ddbc03adbcfe6b9a0524e38884c15e41473ebb81e8669218dd15884f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 11:52:36 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2eeaf639ddbc03adbcfe6b9a0524e38884c15e41473ebb81e8669218dd15884f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 11:52:36 np0005601226 podman[99335]: 2026-01-29 16:52:36.645338631 +0000 UTC m=+1.638015667 container init 079182c8c781eebfb37a2e04b86d0962553402e38b550689ec7151ed53ee439f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_panini, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:52:36 np0005601226 podman[99335]: 2026-01-29 16:52:36.651806968 +0000 UTC m=+1.644483984 container start 079182c8c781eebfb37a2e04b86d0962553402e38b550689ec7151ed53ee439f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_panini, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 29 11:52:36 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Jan 29 11:52:36 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Jan 29 11:52:36 np0005601226 podman[99335]: 2026-01-29 16:52:36.817694469 +0000 UTC m=+1.810371495 container attach 079182c8c781eebfb37a2e04b86d0962553402e38b550689ec7151ed53ee439f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_panini, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 11:52:37 np0005601226 practical_panini[99353]: --> passed data devices: 0 physical, 3 LVM
Jan 29 11:52:37 np0005601226 practical_panini[99353]: --> All data devices are unavailable
Jan 29 11:52:37 np0005601226 systemd[1]: libpod-079182c8c781eebfb37a2e04b86d0962553402e38b550689ec7151ed53ee439f.scope: Deactivated successfully.
Jan 29 11:52:37 np0005601226 podman[99373]: 2026-01-29 16:52:37.129397975 +0000 UTC m=+0.025045774 container died 079182c8c781eebfb37a2e04b86d0962553402e38b550689ec7151ed53ee439f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_panini, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:52:37 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 2.0 scrub starts
Jan 29 11:52:37 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 2.0 scrub ok
Jan 29 11:52:37 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Jan 29 11:52:37 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Jan 29 11:52:37 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} : dispatch
Jan 29 11:52:37 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} : dispatch
Jan 29 11:52:37 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Jan 29 11:52:38 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Jan 29 11:52:38 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Jan 29 11:52:38 np0005601226 systemd[1]: var-lib-containers-storage-overlay-2eeaf639ddbc03adbcfe6b9a0524e38884c15e41473ebb81e8669218dd15884f-merged.mount: Deactivated successfully.
Jan 29 11:52:38 np0005601226 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2026-01-29_16:52:38
Jan 29 11:52:38 np0005601226 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 11:52:38 np0005601226 ceph-mgr[75527]: [balancer INFO root] do_upmap
Jan 29 11:52:38 np0005601226 ceph-mgr[75527]: [balancer INFO root] pools ['images', 'vms', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.meta', 'default.rgw.control', 'volumes', '.rgw.root', '.mgr', 'backups', 'cephfs.cephfs.data']
Jan 29 11:52:38 np0005601226 ceph-mgr[75527]: [balancer INFO root] prepared 7/10 upmap changes
Jan 29 11:52:38 np0005601226 ceph-mgr[75527]: [balancer INFO root] Executing plan auto_2026-01-29_16:52:38
Jan 29 11:52:38 np0005601226 ceph-mgr[75527]: [balancer INFO root] ceph osd pg-upmap-items 6.5 mappings [{'from': 1, 'to': 2}]
Jan 29 11:52:38 np0005601226 ceph-mgr[75527]: [balancer INFO root] ceph osd pg-upmap-items 9.6 mappings [{'from': 1, 'to': 2}]
Jan 29 11:52:38 np0005601226 ceph-mgr[75527]: [balancer INFO root] ceph osd pg-upmap-items 9.e mappings [{'from': 1, 'to': 2}]
Jan 29 11:52:38 np0005601226 ceph-mgr[75527]: [balancer INFO root] ceph osd pg-upmap-items 9.11 mappings [{'from': 0, 'to': 2}]
Jan 29 11:52:38 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.5", "id": [1, 2]} v 0)
Jan 29 11:52:38 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.5", "id": [1, 2]} : dispatch
Jan 29 11:52:38 np0005601226 ceph-mgr[75527]: [balancer INFO root] ceph osd pg-upmap-items 9.16 mappings [{'from': 1, 'to': 2}]
Jan 29 11:52:38 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.6", "id": [1, 2]} v 0)
Jan 29 11:52:38 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.6", "id": [1, 2]} : dispatch
Jan 29 11:52:38 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.e", "id": [1, 2]} v 0)
Jan 29 11:52:38 np0005601226 ceph-mgr[75527]: [balancer INFO root] ceph osd pg-upmap-items 9.1b mappings [{'from': 0, 'to': 2}]
Jan 29 11:52:38 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.e", "id": [1, 2]} : dispatch
Jan 29 11:52:38 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.11", "id": [0, 2]} v 0)
Jan 29 11:52:38 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.11", "id": [0, 2]} : dispatch
Jan 29 11:52:38 np0005601226 ceph-mgr[75527]: [balancer INFO root] ceph osd pg-upmap-items 9.1d mappings [{'from': 0, 'to': 2}]
Jan 29 11:52:38 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.16", "id": [1, 2]} v 0)
Jan 29 11:52:38 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.16", "id": [1, 2]} : dispatch
Jan 29 11:52:38 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.1b", "id": [0, 2]} v 0)
Jan 29 11:52:38 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.1b", "id": [0, 2]} : dispatch
Jan 29 11:52:38 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.1d", "id": [0, 2]} v 0)
Jan 29 11:52:38 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.1d", "id": [0, 2]} : dispatch
Jan 29 11:52:38 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v148: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 116 B/s wr, 27 op/s; 47 B/s, 1 objects/s recovering
Jan 29 11:52:38 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} v 0)
Jan 29 11:52:38 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} : dispatch
Jan 29 11:52:38 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0)
Jan 29 11:52:38 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} : dispatch
Jan 29 11:52:38 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 67 pg[6.3( v 36'39 (0'0,36'39] local-lis/les=55/56 n=2 ec=48/22 lis/c=55/55 les/c/f=56/57/0 sis=67 pruub=13.929677010s) [0] r=-1 lpr=67 pi=[55,67)/1 crt=36'39 active pruub 140.624389648s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:38 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 67 pg[6.3( v 36'39 (0'0,36'39] local-lis/les=55/56 n=2 ec=48/22 lis/c=55/55 les/c/f=56/57/0 sis=67 pruub=13.929532051s) [0] r=-1 lpr=67 pi=[55,67)/1 crt=36'39 unknown NOTIFY pruub 140.624389648s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:38 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 67 pg[6.7( v 36'39 (0'0,36'39] local-lis/les=55/56 n=1 ec=48/22 lis/c=55/55 les/c/f=56/57/0 sis=67 pruub=13.929541588s) [0] r=-1 lpr=67 pi=[55,67)/1 crt=36'39 active pruub 140.624755859s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:38 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 67 pg[6.7( v 36'39 (0'0,36'39] local-lis/les=55/56 n=1 ec=48/22 lis/c=55/55 les/c/f=56/57/0 sis=67 pruub=13.929487228s) [0] r=-1 lpr=67 pi=[55,67)/1 crt=36'39 unknown NOTIFY pruub 140.624755859s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:38 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 67 pg[6.f( v 36'39 (0'0,36'39] local-lis/les=55/56 n=1 ec=48/22 lis/c=55/55 les/c/f=56/57/0 sis=67 pruub=13.928957939s) [0] r=-1 lpr=67 pi=[55,67)/1 crt=36'39 active pruub 140.624374390s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:38 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 67 pg[6.b( v 36'39 (0'0,36'39] local-lis/les=55/56 n=1 ec=48/22 lis/c=55/55 les/c/f=56/57/0 sis=67 pruub=13.929037094s) [0] r=-1 lpr=67 pi=[55,67)/1 crt=36'39 active pruub 140.624511719s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:38 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 67 pg[6.b( v 36'39 (0'0,36'39] local-lis/les=55/56 n=1 ec=48/22 lis/c=55/55 les/c/f=56/57/0 sis=67 pruub=13.928857803s) [0] r=-1 lpr=67 pi=[55,67)/1 crt=36'39 unknown NOTIFY pruub 140.624511719s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:38 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 67 pg[6.f( v 36'39 (0'0,36'39] local-lis/les=55/56 n=1 ec=48/22 lis/c=55/55 les/c/f=56/57/0 sis=67 pruub=13.928776741s) [0] r=-1 lpr=67 pi=[55,67)/1 crt=36'39 unknown NOTIFY pruub 140.624374390s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:38 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 29 11:52:38 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 29 11:52:38 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Jan 29 11:52:38 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 67 pg[6.f( empty local-lis/les=0/0 n=0 ec=48/22 lis/c=55/55 les/c/f=56/57/0 sis=67) [0] r=0 lpr=67 pi=[55,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:38 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 67 pg[6.3( empty local-lis/les=0/0 n=0 ec=48/22 lis/c=55/55 les/c/f=56/57/0 sis=67) [0] r=0 lpr=67 pi=[55,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:38 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 67 pg[6.b( empty local-lis/les=0/0 n=0 ec=48/22 lis/c=55/55 les/c/f=56/57/0 sis=67) [0] r=0 lpr=67 pi=[55,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:38 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 67 pg[6.7( empty local-lis/les=0/0 n=0 ec=48/22 lis/c=55/55 les/c/f=56/57/0 sis=67) [0] r=0 lpr=67 pi=[55,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:38 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Jan 29 11:52:39 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Jan 29 11:52:39 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Jan 29 11:52:39 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 10.3 scrub starts
Jan 29 11:52:39 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 10.3 scrub ok
Jan 29 11:52:39 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.5", "id": [1, 2]} : dispatch
Jan 29 11:52:39 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.6", "id": [1, 2]} : dispatch
Jan 29 11:52:39 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.e", "id": [1, 2]} : dispatch
Jan 29 11:52:39 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.11", "id": [0, 2]} : dispatch
Jan 29 11:52:39 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.16", "id": [1, 2]} : dispatch
Jan 29 11:52:39 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.1b", "id": [0, 2]} : dispatch
Jan 29 11:52:39 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.1d", "id": [0, 2]} : dispatch
Jan 29 11:52:39 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} : dispatch
Jan 29 11:52:39 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} : dispatch
Jan 29 11:52:39 np0005601226 podman[99373]: 2026-01-29 16:52:39.505195315 +0000 UTC m=+2.400843094 container remove 079182c8c781eebfb37a2e04b86d0962553402e38b550689ec7151ed53ee439f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_panini, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 29 11:52:39 np0005601226 systemd[1]: libpod-conmon-079182c8c781eebfb37a2e04b86d0962553402e38b550689ec7151ed53ee439f.scope: Deactivated successfully.
Jan 29 11:52:39 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 68 pg[6.4( v 36'39 (0'0,36'39] local-lis/les=48/49 n=2 ec=48/22 lis/c=48/48 les/c/f=49/49/0 sis=68 pruub=13.507520676s) [1] r=-1 lpr=68 pi=[48,68)/1 crt=36'39 lcod 0'0 active pruub 145.809921265s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:39 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 68 pg[6.4( v 36'39 (0'0,36'39] local-lis/les=48/49 n=2 ec=48/22 lis/c=48/48 les/c/f=49/49/0 sis=68 pruub=13.507474899s) [1] r=-1 lpr=68 pi=[48,68)/1 crt=36'39 lcod 0'0 unknown NOTIFY pruub 145.809921265s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:39 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 68 pg[6.c( v 36'39 (0'0,36'39] local-lis/les=48/49 n=1 ec=48/22 lis/c=48/48 les/c/f=49/49/0 sis=68 pruub=13.506407738s) [1] r=-1 lpr=68 pi=[48,68)/1 crt=36'39 lcod 0'0 active pruub 145.809585571s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:39 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 68 pg[6.c( v 36'39 (0'0,36'39] local-lis/les=48/49 n=1 ec=48/22 lis/c=48/48 les/c/f=49/49/0 sis=68 pruub=13.506183624s) [1] r=-1 lpr=68 pi=[48,68)/1 crt=36'39 lcod 0'0 unknown NOTIFY pruub 145.809585571s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:39 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 68 pg[6.4( empty local-lis/les=0/0 n=0 ec=48/22 lis/c=48/48 les/c/f=49/49/0 sis=68) [1] r=0 lpr=68 pi=[48,68)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:39 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 68 pg[6.c( empty local-lis/les=0/0 n=0 ec=48/22 lis/c=48/48 les/c/f=49/49/0 sis=68) [1] r=0 lpr=68 pi=[48,68)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:39 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Jan 29 11:52:39 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Jan 29 11:52:39 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Jan 29 11:52:39 np0005601226 podman[99450]: 2026-01-29 16:52:39.898488475 +0000 UTC m=+0.018185787 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:52:40 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Jan 29 11:52:40 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Jan 29 11:52:40 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 5.b scrub starts
Jan 29 11:52:40 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 5.b scrub ok
Jan 29 11:52:40 np0005601226 podman[99450]: 2026-01-29 16:52:40.231584123 +0000 UTC m=+0.351281405 container create ba04948a2e2ffa6480dc54273c724ce9b5a3b202326b0625f19ccfa52129666c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_hermann, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 11:52:40 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v150: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 255 B/s wr, 30 op/s; 52 B/s, 1 objects/s recovering
Jan 29 11:52:40 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} v 0)
Jan 29 11:52:40 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} : dispatch
Jan 29 11:52:40 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0)
Jan 29 11:52:40 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} : dispatch
Jan 29 11:52:40 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.5", "id": [1, 2]}]': finished
Jan 29 11:52:40 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.6", "id": [1, 2]}]': finished
Jan 29 11:52:40 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.e", "id": [1, 2]}]': finished
Jan 29 11:52:40 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.11", "id": [0, 2]}]': finished
Jan 29 11:52:40 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.16", "id": [1, 2]}]': finished
Jan 29 11:52:40 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.1b", "id": [0, 2]}]': finished
Jan 29 11:52:40 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.1d", "id": [0, 2]}]': finished
Jan 29 11:52:40 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 29 11:52:40 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 29 11:52:40 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Jan 29 11:52:40 np0005601226 systemd[1]: Started libpod-conmon-ba04948a2e2ffa6480dc54273c724ce9b5a3b202326b0625f19ccfa52129666c.scope.
Jan 29 11:52:40 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e69 crush map has features 3314933000854323200, adjusting msgr requires
Jan 29 11:52:40 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e69 crush map has features 432629239337189376, adjusting msgr requires
Jan 29 11:52:40 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e69 crush map has features 432629239337189376, adjusting msgr requires
Jan 29 11:52:40 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e69 crush map has features 432629239337189376, adjusting msgr requires
Jan 29 11:52:40 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Jan 29 11:52:40 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:52:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 11:52:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 11:52:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 11:52:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 11:52:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 11:52:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 11:52:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 11:52:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 11:52:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 11:52:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 11:52:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 11:52:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 11:52:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 11:52:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 11:52:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 11:52:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 11:52:40 np0005601226 ceph-osd[86917]: osd.1 69 crush map has features 432629239337189376, adjusting msgr requires for clients
Jan 29 11:52:40 np0005601226 ceph-osd[86917]: osd.1 69 crush map has features 432629239337189376 was 288514051259245057, adjusting msgr requires for mons
Jan 29 11:52:40 np0005601226 ceph-osd[86917]: osd.1 69 crush map has features 3314933000854323200, adjusting msgr requires for osds
Jan 29 11:52:40 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 69 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=69 pruub=15.825350761s) [2] r=-1 lpr=69 pi=[51,69)/1 crt=41'483 lcod 0'0 active pruub 144.749130249s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:40 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 69 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=69 pruub=15.825286865s) [2] r=-1 lpr=69 pi=[51,69)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 144.749130249s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:40 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 69 pg[6.5( v 36'39 (0'0,36'39] local-lis/les=55/56 n=2 ec=48/22 lis/c=55/55 les/c/f=56/57/0 sis=69 pruub=11.699378967s) [2] r=-1 lpr=69 pi=[55,69)/1 crt=36'39 active pruub 140.624359131s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:40 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 69 pg[6.5( v 36'39 (0'0,36'39] local-lis/les=55/56 n=2 ec=48/22 lis/c=55/55 les/c/f=56/57/0 sis=69 pruub=11.699324608s) [2] r=-1 lpr=69 pi=[55,69)/1 crt=36'39 unknown NOTIFY pruub 140.624359131s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:40 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 69 pg[9.6( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=69 pruub=15.823897362s) [2] r=-1 lpr=69 pi=[51,69)/1 crt=41'483 lcod 0'0 active pruub 144.749206543s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:40 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 69 pg[9.6( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=69 pruub=15.823875427s) [2] r=-1 lpr=69 pi=[51,69)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 144.749206543s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:40 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 69 pg[9.e( v 66'489 (0'0,66'489] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=69 pruub=15.823007584s) [2] r=-1 lpr=69 pi=[51,69)/1 crt=66'488 lcod 66'488 active pruub 144.749114990s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:40 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 69 pg[9.e( v 66'489 (0'0,66'489] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=69 pruub=15.822856903s) [2] r=-1 lpr=69 pi=[51,69)/1 crt=66'488 lcod 66'488 unknown NOTIFY pruub 144.749114990s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:40 np0005601226 ceph-osd[85858]: osd.0 69 crush map has features 432629239337189376, adjusting msgr requires for clients
Jan 29 11:52:40 np0005601226 ceph-osd[85858]: osd.0 69 crush map has features 432629239337189376 was 288514051259245057, adjusting msgr requires for mons
Jan 29 11:52:40 np0005601226 ceph-osd[85858]: osd.0 69 crush map has features 3314933000854323200, adjusting msgr requires for osds
Jan 29 11:52:40 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 69 pg[9.11( v 66'487 (0'0,66'487] local-lis/les=62/63 n=7 ec=51/35 lis/c=62/62 les/c/f=63/63/0 sis=69 pruub=9.279077530s) [2] r=-1 lpr=69 pi=[62,69)/1 crt=66'486 lcod 66'486 active pruub 142.751907349s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:40 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 69 pg[9.11( v 66'487 (0'0,66'487] local-lis/les=62/63 n=7 ec=51/35 lis/c=62/62 les/c/f=63/63/0 sis=69 pruub=9.279002190s) [2] r=-1 lpr=69 pi=[62,69)/1 crt=66'486 lcod 66'486 unknown NOTIFY pruub 142.751907349s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:40 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 69 pg[9.1d( v 41'483 (0'0,41'483] local-lis/les=59/60 n=6 ec=51/35 lis/c=59/59 les/c/f=60/60/0 sis=69 pruub=13.639170647s) [2] r=-1 lpr=69 pi=[59,69)/1 crt=41'483 active pruub 147.113464355s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:40 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 69 pg[9.1d( v 41'483 (0'0,41'483] local-lis/les=59/60 n=6 ec=51/35 lis/c=59/59 les/c/f=60/60/0 sis=69 pruub=13.639035225s) [2] r=-1 lpr=69 pi=[59,69)/1 crt=41'483 unknown NOTIFY pruub 147.113464355s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:40 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 69 pg[9.1b( v 66'485 (0'0,66'485] local-lis/les=59/60 n=6 ec=51/35 lis/c=59/59 les/c/f=60/60/0 sis=69 pruub=13.638564110s) [2] r=-1 lpr=69 pi=[59,69)/1 crt=66'484 lcod 66'484 active pruub 147.113433838s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:40 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 69 pg[9.1b( v 66'485 (0'0,66'485] local-lis/les=59/60 n=6 ec=51/35 lis/c=59/59 les/c/f=60/60/0 sis=69 pruub=13.638495445s) [2] r=-1 lpr=69 pi=[59,69)/1 crt=66'484 lcod 66'484 unknown NOTIFY pruub 147.113433838s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:40 np0005601226 ceph-osd[87958]: osd.2 69 crush map has features 432629239337189376, adjusting msgr requires for clients
Jan 29 11:52:40 np0005601226 ceph-osd[87958]: osd.2 69 crush map has features 432629239337189376 was 288514051259245057, adjusting msgr requires for mons
Jan 29 11:52:40 np0005601226 ceph-osd[87958]: osd.2 69 crush map has features 3314933000854323200, adjusting msgr requires for osds
Jan 29 11:52:40 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 69 pg[9.11( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=62/62 les/c/f=63/63/0 sis=69) [2] r=0 lpr=69 pi=[62,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:40 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 69 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=59/59 les/c/f=60/60/0 sis=69) [2] r=0 lpr=69 pi=[59,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:40 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 69 pg[9.1b( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=59/59 les/c/f=60/60/0 sis=69) [2] r=0 lpr=69 pi=[59,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:40 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 69 pg[9.6( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=69) [2] r=0 lpr=69 pi=[51,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:40 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 69 pg[6.5( empty local-lis/les=0/0 n=0 ec=48/22 lis/c=55/55 les/c/f=56/57/0 sis=69) [2] r=0 lpr=69 pi=[55,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:40 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 69 pg[9.16( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=69) [2] r=0 lpr=69 pi=[51,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:40 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 69 pg[9.e( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=69) [2] r=0 lpr=69 pi=[51,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:40 np0005601226 podman[99450]: 2026-01-29 16:52:40.920322294 +0000 UTC m=+1.040019656 container init ba04948a2e2ffa6480dc54273c724ce9b5a3b202326b0625f19ccfa52129666c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_hermann, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:52:40 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 29 11:52:40 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 29 11:52:40 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} : dispatch
Jan 29 11:52:40 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} : dispatch
Jan 29 11:52:40 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.5", "id": [1, 2]}]': finished
Jan 29 11:52:40 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.6", "id": [1, 2]}]': finished
Jan 29 11:52:40 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.e", "id": [1, 2]}]': finished
Jan 29 11:52:40 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.11", "id": [0, 2]}]': finished
Jan 29 11:52:40 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.16", "id": [1, 2]}]': finished
Jan 29 11:52:40 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.1b", "id": [0, 2]}]': finished
Jan 29 11:52:40 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.1d", "id": [0, 2]}]': finished
Jan 29 11:52:40 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 29 11:52:40 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 29 11:52:40 np0005601226 podman[99450]: 2026-01-29 16:52:40.92823845 +0000 UTC m=+1.047935712 container start ba04948a2e2ffa6480dc54273c724ce9b5a3b202326b0625f19ccfa52129666c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 11:52:40 np0005601226 cool_hermann[99467]: 167 167
Jan 29 11:52:40 np0005601226 systemd[1]: libpod-ba04948a2e2ffa6480dc54273c724ce9b5a3b202326b0625f19ccfa52129666c.scope: Deactivated successfully.
Jan 29 11:52:40 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 69 pg[6.c( v 36'39 lc 35'16 (0'0,36'39] local-lis/les=68/69 n=1 ec=48/22 lis/c=48/48 les/c/f=49/49/0 sis=68) [1] r=0 lpr=68 pi=[48,68)/1 crt=36'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:40 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 69 pg[6.4( v 36'39 lc 35'15 (0'0,36'39] local-lis/les=68/69 n=2 ec=48/22 lis/c=48/48 les/c/f=49/49/0 sis=68) [1] r=0 lpr=68 pi=[48,68)/1 crt=36'39 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:41 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 69 pg[6.f( v 36'39 lc 35'1 (0'0,36'39] local-lis/les=67/69 n=1 ec=48/22 lis/c=55/55 les/c/f=56/57/0 sis=67) [0] r=0 lpr=67 pi=[55,67)/1 crt=36'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:41 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 69 pg[6.7( v 36'39 lc 35'20 (0'0,36'39] local-lis/les=67/69 n=1 ec=48/22 lis/c=55/55 les/c/f=56/57/0 sis=67) [0] r=0 lpr=67 pi=[55,67)/1 crt=36'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:41 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 69 pg[6.3( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=67/69 n=2 ec=48/22 lis/c=55/55 les/c/f=56/57/0 sis=67) [0] r=0 lpr=67 pi=[55,67)/1 crt=36'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:41 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 69 pg[6.b( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=67/69 n=1 ec=48/22 lis/c=55/55 les/c/f=56/57/0 sis=67) [0] r=0 lpr=67 pi=[55,67)/1 crt=36'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:41 np0005601226 podman[99450]: 2026-01-29 16:52:41.226631443 +0000 UTC m=+1.346328705 container attach ba04948a2e2ffa6480dc54273c724ce9b5a3b202326b0625f19ccfa52129666c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_hermann, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 11:52:41 np0005601226 podman[99450]: 2026-01-29 16:52:41.226988582 +0000 UTC m=+1.346685844 container died ba04948a2e2ffa6480dc54273c724ce9b5a3b202326b0625f19ccfa52129666c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_hermann, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 29 11:52:41 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Jan 29 11:52:42 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 29 11:52:42 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 29 11:52:42 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Jan 29 11:52:42 np0005601226 systemd[1]: var-lib-containers-storage-overlay-a4de6f3dc900c639a94261bd6abf24730add08f81e2dee6280f47dcecf4b1d50-merged.mount: Deactivated successfully.
Jan 29 11:52:42 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Jan 29 11:52:42 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v153: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 341 B/s wr, 3 op/s
Jan 29 11:52:42 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 70 pg[9.16( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=70) [2]/[1] r=-1 lpr=70 pi=[51,70)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:42 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 70 pg[9.16( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=70) [2]/[1] r=-1 lpr=70 pi=[51,70)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:42 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} v 0)
Jan 29 11:52:42 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 70 pg[9.e( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=70) [2]/[1] r=-1 lpr=70 pi=[51,70)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:42 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} : dispatch
Jan 29 11:52:42 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0)
Jan 29 11:52:42 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} : dispatch
Jan 29 11:52:42 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 70 pg[9.e( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=70) [2]/[1] r=-1 lpr=70 pi=[51,70)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:42 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 70 pg[6.5( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=55/56 n=2 ec=48/22 lis/c=55/55 les/c/f=56/57/0 sis=70) [0] r=-1 lpr=70 pi=[55,70)/1 crt=36'39 unknown m=2 mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:42 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 70 pg[9.6( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=70) [2]/[1] r=-1 lpr=70 pi=[51,70)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:42 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 70 pg[6.5( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=55/56 n=2 ec=48/22 lis/c=55/55 les/c/f=56/57/0 sis=70) [0] r=-1 lpr=70 pi=[55,70)/1 crt=36'39 unknown NOTIFY m=2 mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:42 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 70 pg[9.6( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=70) [2]/[1] r=-1 lpr=70 pi=[51,70)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:42 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 70 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=59/59 les/c/f=60/60/0 sis=70) [2]/[0] r=-1 lpr=70 pi=[59,70)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:42 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 70 pg[9.1b( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=59/59 les/c/f=60/60/0 sis=70) [2]/[0] r=-1 lpr=70 pi=[59,70)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:42 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 70 pg[9.1b( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=59/59 les/c/f=60/60/0 sis=70) [2]/[0] r=-1 lpr=70 pi=[59,70)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:42 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 70 pg[9.11( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=62/62 les/c/f=63/63/0 sis=70) [2]/[0] r=-1 lpr=70 pi=[62,70)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:42 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 70 pg[9.11( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=62/62 les/c/f=63/63/0 sis=70) [2]/[0] r=-1 lpr=70 pi=[62,70)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:42 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 70 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=59/59 les/c/f=60/60/0 sis=70) [2]/[0] r=-1 lpr=70 pi=[59,70)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:42 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 70 pg[9.1b( v 66'485 (0'0,66'485] local-lis/les=59/60 n=6 ec=51/35 lis/c=59/59 les/c/f=60/60/0 sis=70) [2]/[0] r=0 lpr=70 pi=[59,70)/1 crt=66'484 lcod 66'484 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:42 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 70 pg[9.1b( v 66'485 (0'0,66'485] local-lis/les=59/60 n=6 ec=51/35 lis/c=59/59 les/c/f=60/60/0 sis=70) [2]/[0] r=0 lpr=70 pi=[59,70)/1 crt=66'484 lcod 66'484 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:42 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 70 pg[6.5( empty local-lis/les=0/0 n=0 ec=48/22 lis/c=55/55 les/c/f=56/57/0 sis=70) [0] r=0 lpr=70 pi=[55,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:42 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 70 pg[9.1d( v 41'483 (0'0,41'483] local-lis/les=59/60 n=6 ec=51/35 lis/c=59/59 les/c/f=60/60/0 sis=70) [2]/[0] r=0 lpr=70 pi=[59,70)/1 crt=41'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:42 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 70 pg[9.11( v 66'487 (0'0,66'487] local-lis/les=62/63 n=7 ec=51/35 lis/c=62/62 les/c/f=63/63/0 sis=70) [2]/[0] r=0 lpr=70 pi=[62,70)/1 crt=66'486 lcod 66'486 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:42 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 70 pg[9.1d( v 41'483 (0'0,41'483] local-lis/les=59/60 n=6 ec=51/35 lis/c=59/59 les/c/f=60/60/0 sis=70) [2]/[0] r=0 lpr=70 pi=[59,70)/1 crt=41'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:42 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 70 pg[9.11( v 66'487 (0'0,66'487] local-lis/les=62/63 n=7 ec=51/35 lis/c=62/62 les/c/f=63/63/0 sis=70) [2]/[0] r=0 lpr=70 pi=[62,70)/1 crt=66'486 lcod 66'486 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:42 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Jan 29 11:52:42 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 70 pg[9.6( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=70) [2]/[1] r=0 lpr=70 pi=[51,70)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:42 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 70 pg[9.6( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=70) [2]/[1] r=0 lpr=70 pi=[51,70)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:42 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 70 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=70) [2]/[1] r=0 lpr=70 pi=[51,70)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:42 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 70 pg[9.e( v 66'489 (0'0,66'489] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=70) [2]/[1] r=0 lpr=70 pi=[51,70)/1 crt=66'488 lcod 66'488 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:42 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 70 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=70) [2]/[1] r=0 lpr=70 pi=[51,70)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:42 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 70 pg[6.5( v 36'39 (0'0,36'39] local-lis/les=55/56 n=2 ec=48/22 lis/c=55/55 les/c/f=56/57/0 sis=70) [0] r=-1 lpr=70 pi=[55,70)/1 crt=36'39 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role -1 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:42 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 70 pg[6.5( v 36'39 (0'0,36'39] local-lis/les=55/56 n=2 ec=48/22 lis/c=55/55 les/c/f=56/57/0 sis=70) [0] r=-1 lpr=70 pi=[55,70)/1 crt=36'39 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:42 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 70 pg[9.e( v 66'489 (0'0,66'489] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=70) [2]/[1] r=0 lpr=70 pi=[51,70)/1 crt=66'488 lcod 66'488 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:42 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 70 pg[6.d( v 36'39 (0'0,36'39] local-lis/les=55/56 n=1 ec=48/22 lis/c=55/55 les/c/f=56/57/0 sis=70 pruub=9.461351395s) [0] r=-1 lpr=70 pi=[55,70)/1 crt=36'39 active pruub 140.624343872s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:42 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 70 pg[6.d( v 36'39 (0'0,36'39] local-lis/les=55/56 n=1 ec=48/22 lis/c=55/55 les/c/f=56/57/0 sis=70 pruub=9.461076736s) [0] r=-1 lpr=70 pi=[55,70)/1 crt=36'39 unknown NOTIFY pruub 140.624343872s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:42 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 70 pg[6.d( empty local-lis/les=0/0 n=0 ec=48/22 lis/c=55/55 les/c/f=56/57/0 sis=70) [0] r=0 lpr=70 pi=[55,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:42 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Jan 29 11:52:43 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Jan 29 11:52:43 np0005601226 podman[99450]: 2026-01-29 16:52:43.692348314 +0000 UTC m=+3.812045566 container remove ba04948a2e2ffa6480dc54273c724ce9b5a3b202326b0625f19ccfa52129666c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 29 11:52:43 np0005601226 systemd[1]: libpod-conmon-ba04948a2e2ffa6480dc54273c724ce9b5a3b202326b0625f19ccfa52129666c.scope: Deactivated successfully.
Jan 29 11:52:43 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 29 11:52:43 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 29 11:52:43 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Jan 29 11:52:43 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Jan 29 11:52:43 np0005601226 podman[99492]: 2026-01-29 16:52:43.830385827 +0000 UTC m=+0.027342277 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:52:44 np0005601226 podman[99492]: 2026-01-29 16:52:44.166029444 +0000 UTC m=+0.362985814 container create 837772c76af3e9b83be397e33a4e3da66ad3dd79469cdcdc8ffc6890b673a531 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_germain, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:52:44 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 29 11:52:44 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 29 11:52:44 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} : dispatch
Jan 29 11:52:44 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} : dispatch
Jan 29 11:52:44 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 71 pg[9.1e( v 66'485 (0'0,66'485] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=71 pruub=12.291930199s) [2] r=-1 lpr=71 pi=[51,71)/1 crt=65'484 lcod 65'484 active pruub 144.749359131s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:44 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 71 pg[9.1e( v 66'485 (0'0,66'485] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=71 pruub=12.291897774s) [2] r=-1 lpr=71 pi=[51,71)/1 crt=65'484 lcod 65'484 unknown NOTIFY pruub 144.749359131s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:44 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 71 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=71) [2] r=0 lpr=71 pi=[51,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:44 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 71 pg[6.5( v 36'39 lc 35'7 (0'0,36'39] local-lis/les=70/71 n=2 ec=48/22 lis/c=55/55 les/c/f=56/57/0 sis=70) [0] r=0 lpr=70 pi=[55,70)/1 crt=36'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:44 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 71 pg[6.d( v 36'39 lc 35'13 (0'0,36'39] local-lis/les=70/71 n=1 ec=48/22 lis/c=55/55 les/c/f=56/57/0 sis=70) [0] r=0 lpr=70 pi=[55,70)/1 crt=36'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:44 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 71 pg[9.1b( v 66'485 (0'0,66'485] local-lis/les=70/71 n=6 ec=51/35 lis/c=59/59 les/c/f=60/60/0 sis=70) [2]/[0] async=[2] r=0 lpr=70 pi=[59,70)/1 crt=66'485 lcod 66'484 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:44 np0005601226 systemd[1]: Started libpod-conmon-837772c76af3e9b83be397e33a4e3da66ad3dd79469cdcdc8ffc6890b673a531.scope.
Jan 29 11:52:44 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:52:44 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6476ac63aee95a8e17d8b7d5df952ddb13b5059dd8344d46767a07fe7271be5a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 11:52:44 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6476ac63aee95a8e17d8b7d5df952ddb13b5059dd8344d46767a07fe7271be5a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:52:44 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6476ac63aee95a8e17d8b7d5df952ddb13b5059dd8344d46767a07fe7271be5a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:52:44 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6476ac63aee95a8e17d8b7d5df952ddb13b5059dd8344d46767a07fe7271be5a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 11:52:44 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v155: 305 pgs: 6 remapped+peering, 2 peering, 1 active+recovering, 296 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 1/252 objects misplaced (0.397%); 411 B/s, 1 keys/s, 2 objects/s recovering
Jan 29 11:52:44 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 71 pg[9.1d( v 41'483 (0'0,41'483] local-lis/les=70/71 n=6 ec=51/35 lis/c=59/59 les/c/f=60/60/0 sis=70) [2]/[0] async=[2] r=0 lpr=70 pi=[59,70)/1 crt=41'483 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:44 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 71 pg[9.11( v 66'487 (0'0,66'487] local-lis/les=70/71 n=7 ec=51/35 lis/c=62/62 les/c/f=63/63/0 sis=70) [2]/[0] async=[2] r=0 lpr=70 pi=[62,70)/1 crt=66'487 lcod 66'486 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:44 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 71 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=70/71 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=70) [2]/[1] async=[2] r=0 lpr=70 pi=[51,70)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:44 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 71 pg[9.6( v 41'483 (0'0,41'483] local-lis/les=70/71 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=70) [2]/[1] async=[2] r=0 lpr=70 pi=[51,70)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:44 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 71 pg[9.e( v 66'489 (0'0,66'489] local-lis/les=70/71 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=70) [2]/[1] async=[2] r=0 lpr=70 pi=[51,70)/1 crt=66'489 lcod 66'488 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:44 np0005601226 podman[99492]: 2026-01-29 16:52:44.586087543 +0000 UTC m=+0.783043973 container init 837772c76af3e9b83be397e33a4e3da66ad3dd79469cdcdc8ffc6890b673a531 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_germain, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 29 11:52:44 np0005601226 podman[99492]: 2026-01-29 16:52:44.590859604 +0000 UTC m=+0.787815964 container start 837772c76af3e9b83be397e33a4e3da66ad3dd79469cdcdc8ffc6890b673a531 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_germain, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]: {
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:    "0": [
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:        {
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:            "devices": [
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:                "/dev/loop3"
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:            ],
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:            "lv_name": "ceph_lv0",
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:            "lv_size": "21470642176",
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:            "lv_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:            "name": "ceph_lv0",
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:            "tags": {
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:                "ceph.block_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:                "ceph.cephx_lockbox_secret": "",
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:                "ceph.cluster_name": "ceph",
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:                "ceph.crush_device_class": "",
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:                "ceph.encrypted": "0",
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:                "ceph.objectstore": "bluestore",
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:                "ceph.osd_fsid": "0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1",
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:                "ceph.osd_id": "0",
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:                "ceph.type": "block",
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:                "ceph.vdo": "0",
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:                "ceph.with_tpm": "0"
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:            },
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:            "type": "block",
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:            "vg_name": "ceph_vg0"
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:        }
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:    ],
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:    "1": [
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:        {
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:            "devices": [
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:                "/dev/loop4"
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:            ],
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:            "lv_name": "ceph_lv1",
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:            "lv_size": "21470642176",
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b59b9ee3-7bef-4274-a8bf-0f9cce011ae7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:            "lv_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:            "name": "ceph_lv1",
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:            "tags": {
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:                "ceph.block_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:                "ceph.cephx_lockbox_secret": "",
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:                "ceph.cluster_name": "ceph",
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:                "ceph.crush_device_class": "",
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:                "ceph.encrypted": "0",
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:                "ceph.objectstore": "bluestore",
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:                "ceph.osd_fsid": "b59b9ee3-7bef-4274-a8bf-0f9cce011ae7",
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:                "ceph.osd_id": "1",
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:                "ceph.type": "block",
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:                "ceph.vdo": "0",
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:                "ceph.with_tpm": "0"
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:            },
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:            "type": "block",
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:            "vg_name": "ceph_vg1"
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:        }
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:    ],
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:    "2": [
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:        {
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:            "devices": [
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:                "/dev/loop5"
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:            ],
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:            "lv_name": "ceph_lv2",
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:            "lv_size": "21470642176",
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=791d3808-828d-4f85-a3de-28df49f6a6ef,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:            "lv_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:            "name": "ceph_lv2",
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:            "tags": {
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:                "ceph.block_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:                "ceph.cephx_lockbox_secret": "",
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:                "ceph.cluster_name": "ceph",
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:                "ceph.crush_device_class": "",
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:                "ceph.encrypted": "0",
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:                "ceph.objectstore": "bluestore",
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:                "ceph.osd_fsid": "791d3808-828d-4f85-a3de-28df49f6a6ef",
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:                "ceph.osd_id": "2",
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:                "ceph.type": "block",
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:                "ceph.vdo": "0",
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:                "ceph.with_tpm": "0"
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:            },
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:            "type": "block",
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:            "vg_name": "ceph_vg2"
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:        }
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]:    ]
Jan 29 11:52:44 np0005601226 compassionate_germain[99508]: }
Jan 29 11:52:44 np0005601226 systemd[1]: libpod-837772c76af3e9b83be397e33a4e3da66ad3dd79469cdcdc8ffc6890b673a531.scope: Deactivated successfully.
Jan 29 11:52:44 np0005601226 podman[99492]: 2026-01-29 16:52:44.852225066 +0000 UTC m=+1.049181476 container attach 837772c76af3e9b83be397e33a4e3da66ad3dd79469cdcdc8ffc6890b673a531 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_germain, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 29 11:52:44 np0005601226 podman[99492]: 2026-01-29 16:52:44.853271575 +0000 UTC m=+1.050227945 container died 837772c76af3e9b83be397e33a4e3da66ad3dd79469cdcdc8ffc6890b673a531 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:52:45 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Jan 29 11:52:45 np0005601226 systemd[1]: var-lib-containers-storage-overlay-6476ac63aee95a8e17d8b7d5df952ddb13b5059dd8344d46767a07fe7271be5a-merged.mount: Deactivated successfully.
Jan 29 11:52:46 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Jan 29 11:52:46 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 29 11:52:46 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 29 11:52:46 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 72 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[51,72)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:46 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 72 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[51,72)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:46 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Jan 29 11:52:46 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 72 pg[9.1e( v 66'485 (0'0,66'485] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=72) [2]/[1] r=0 lpr=72 pi=[51,72)/1 crt=65'484 lcod 65'484 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:46 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 72 pg[9.1e( v 66'485 (0'0,66'485] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=72) [2]/[1] r=0 lpr=72 pi=[51,72)/1 crt=65'484 lcod 65'484 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:46 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v157: 305 pgs: 6 remapped+peering, 2 peering, 1 active+recovering, 296 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 1/252 objects misplaced (0.397%); 380 B/s, 1 keys/s, 1 objects/s recovering
Jan 29 11:52:46 np0005601226 podman[99492]: 2026-01-29 16:52:46.602398327 +0000 UTC m=+2.799354687 container remove 837772c76af3e9b83be397e33a4e3da66ad3dd79469cdcdc8ffc6890b673a531 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_germain, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 29 11:52:46 np0005601226 systemd[1]: libpod-conmon-837772c76af3e9b83be397e33a4e3da66ad3dd79469cdcdc8ffc6890b673a531.scope: Deactivated successfully.
Jan 29 11:52:47 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Jan 29 11:52:47 np0005601226 podman[99591]: 2026-01-29 16:52:47.048547097 +0000 UTC m=+0.030968325 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:52:47 np0005601226 podman[99591]: 2026-01-29 16:52:47.326829781 +0000 UTC m=+0.309250949 container create 21513be779a3d24f271c6b703acb34a2eff5720afb34d11dc6b1a659d55cbe2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 11:52:47 np0005601226 systemd[1]: Started libpod-conmon-21513be779a3d24f271c6b703acb34a2eff5720afb34d11dc6b1a659d55cbe2f.scope.
Jan 29 11:52:47 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:52:47 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Jan 29 11:52:48 np0005601226 podman[99591]: 2026-01-29 16:52:48.112050502 +0000 UTC m=+1.094471690 container init 21513be779a3d24f271c6b703acb34a2eff5720afb34d11dc6b1a659d55cbe2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_payne, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 29 11:52:48 np0005601226 podman[99591]: 2026-01-29 16:52:48.121612402 +0000 UTC m=+1.104033570 container start 21513be779a3d24f271c6b703acb34a2eff5720afb34d11dc6b1a659d55cbe2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_payne, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 29 11:52:48 np0005601226 eloquent_payne[99607]: 167 167
Jan 29 11:52:48 np0005601226 systemd[1]: libpod-21513be779a3d24f271c6b703acb34a2eff5720afb34d11dc6b1a659d55cbe2f.scope: Deactivated successfully.
Jan 29 11:52:48 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Jan 29 11:52:48 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v159: 305 pgs: 1 active+remapped, 2 active+recovery_wait+remapped, 3 remapped+peering, 299 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 12/252 objects misplaced (4.762%); 448 B/s, 1 keys/s, 3 objects/s recovering
Jan 29 11:52:48 np0005601226 podman[99591]: 2026-01-29 16:52:48.544771066 +0000 UTC m=+1.527192214 container attach 21513be779a3d24f271c6b703acb34a2eff5720afb34d11dc6b1a659d55cbe2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_payne, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Jan 29 11:52:48 np0005601226 podman[99591]: 2026-01-29 16:52:48.545903576 +0000 UTC m=+1.528324744 container died 21513be779a3d24f271c6b703acb34a2eff5720afb34d11dc6b1a659d55cbe2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_payne, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 29 11:52:48 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 73 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=70/71 n=6 ec=51/35 lis/c=70/51 les/c/f=71/52/0 sis=73 pruub=11.970460892s) [2] async=[2] r=-1 lpr=73 pi=[51,73)/1 crt=41'483 lcod 0'0 active pruub 148.761489868s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:48 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 73 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=70/71 n=6 ec=51/35 lis/c=70/51 les/c/f=71/52/0 sis=73 pruub=11.970274925s) [2] r=-1 lpr=73 pi=[51,73)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 148.761489868s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:48 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 73 pg[9.6( v 41'483 (0'0,41'483] local-lis/les=70/71 n=7 ec=51/35 lis/c=70/51 les/c/f=71/52/0 sis=73 pruub=11.969849586s) [2] async=[2] r=-1 lpr=73 pi=[51,73)/1 crt=41'483 lcod 0'0 active pruub 148.761611938s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:48 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 73 pg[9.6( v 41'483 (0'0,41'483] local-lis/les=70/71 n=7 ec=51/35 lis/c=70/51 les/c/f=71/52/0 sis=73 pruub=11.969731331s) [2] r=-1 lpr=73 pi=[51,73)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 148.761611938s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:48 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:52:48 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Jan 29 11:52:48 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 73 pg[9.6( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=51/35 lis/c=70/51 les/c/f=71/52/0 sis=73) [2] r=0 lpr=73 pi=[51,73)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:48 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 73 pg[9.6( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=51/35 lis/c=70/51 les/c/f=71/52/0 sis=73) [2] r=0 lpr=73 pi=[51,73)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:48 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 73 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=51/35 lis/c=70/51 les/c/f=71/52/0 sis=73) [2] r=0 lpr=73 pi=[51,73)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:48 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 73 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=51/35 lis/c=70/51 les/c/f=71/52/0 sis=73) [2] r=0 lpr=73 pi=[51,73)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:49 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 73 pg[9.1e( v 66'485 (0'0,66'485] local-lis/les=72/73 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[51,72)/1 crt=66'485 lcod 65'484 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:49 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Jan 29 11:52:49 np0005601226 systemd[1]: var-lib-containers-storage-overlay-2456d2728c56135dc1cfaf347bb8d67c731aef868abde400d792424c8f8ec723-merged.mount: Deactivated successfully.
Jan 29 11:52:49 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Jan 29 11:52:49 np0005601226 systemd-logind[823]: New session 34 of user zuul.
Jan 29 11:52:49 np0005601226 systemd[1]: Started Session 34 of User zuul.
Jan 29 11:52:49 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 74 pg[9.1b( v 66'485 (0'0,66'485] local-lis/les=70/71 n=6 ec=51/35 lis/c=70/59 les/c/f=71/60/0 sis=74 pruub=10.646280289s) [2] async=[2] r=-1 lpr=74 pi=[59,74)/1 crt=66'485 lcod 66'484 active pruub 152.984039307s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:49 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 74 pg[9.1b( v 66'485 (0'0,66'485] local-lis/les=70/71 n=6 ec=51/35 lis/c=70/59 les/c/f=71/60/0 sis=74 pruub=10.646234512s) [2] r=-1 lpr=74 pi=[59,74)/1 crt=66'485 lcod 66'484 unknown NOTIFY pruub 152.984039307s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:49 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 74 pg[9.1b( v 66'485 (0'0,66'485] local-lis/les=0/0 n=6 ec=51/35 lis/c=70/59 les/c/f=71/60/0 sis=74) [2] r=0 lpr=74 pi=[59,74)/1 pct=0'0 crt=66'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:49 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 74 pg[9.1b( v 66'485 (0'0,66'485] local-lis/les=0/0 n=6 ec=51/35 lis/c=70/59 les/c/f=71/60/0 sis=74) [2] r=0 lpr=74 pi=[59,74)/1 crt=66'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:49 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 74 pg[9.e( v 66'489 (0'0,66'489] local-lis/les=70/71 n=7 ec=51/35 lis/c=70/51 les/c/f=71/52/0 sis=74 pruub=10.769055367s) [2] async=[2] r=-1 lpr=74 pi=[51,74)/1 crt=66'489 lcod 66'488 active pruub 148.761917114s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:49 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 74 pg[9.e( v 66'489 (0'0,66'489] local-lis/les=70/71 n=7 ec=51/35 lis/c=70/51 les/c/f=71/52/0 sis=74 pruub=10.768926620s) [2] r=-1 lpr=74 pi=[51,74)/1 crt=66'489 lcod 66'488 unknown NOTIFY pruub 148.761917114s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:49 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 74 pg[9.e( v 66'489 (0'0,66'489] local-lis/les=0/0 n=7 ec=51/35 lis/c=70/51 les/c/f=71/52/0 sis=74) [2] r=0 lpr=74 pi=[51,74)/1 pct=0'0 crt=66'489 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:49 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 74 pg[9.e( v 66'489 (0'0,66'489] local-lis/les=0/0 n=7 ec=51/35 lis/c=70/51 les/c/f=71/52/0 sis=74) [2] r=0 lpr=74 pi=[51,74)/1 crt=66'489 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:50 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Jan 29 11:52:50 np0005601226 podman[99591]: 2026-01-29 16:52:50.254723579 +0000 UTC m=+3.237144717 container remove 21513be779a3d24f271c6b703acb34a2eff5720afb34d11dc6b1a659d55cbe2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_payne, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 29 11:52:50 np0005601226 systemd[1]: libpod-conmon-21513be779a3d24f271c6b703acb34a2eff5720afb34d11dc6b1a659d55cbe2f.scope: Deactivated successfully.
Jan 29 11:52:50 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v161: 305 pgs: 2 active+remapped, 2 active+recovery_wait+remapped, 1 activating+remapped, 2 remapped+peering, 298 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 18/252 objects misplaced (7.143%); 170 B/s, 2 objects/s recovering
Jan 29 11:52:50 np0005601226 podman[99787]: 2026-01-29 16:52:50.36594396 +0000 UTC m=+0.017826916 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:52:50 np0005601226 python3.9[99779]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 11:52:50 np0005601226 podman[99787]: 2026-01-29 16:52:50.812783769 +0000 UTC m=+0.464666735 container create 013943f4c1617b7c7e524a4328fa75b781a46e95b2ef28342deaf7bc0cd7a2d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_hugle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:52:50 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Jan 29 11:52:50 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Jan 29 11:52:51 np0005601226 systemd[1]: Started libpod-conmon-013943f4c1617b7c7e524a4328fa75b781a46e95b2ef28342deaf7bc0cd7a2d1.scope.
Jan 29 11:52:51 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:52:51 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb4add4219d5e2f8232b978db2fd037dfe29f4215db34da9cb397e2bc1546a77/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 11:52:51 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb4add4219d5e2f8232b978db2fd037dfe29f4215db34da9cb397e2bc1546a77/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:52:51 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb4add4219d5e2f8232b978db2fd037dfe29f4215db34da9cb397e2bc1546a77/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:52:51 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb4add4219d5e2f8232b978db2fd037dfe29f4215db34da9cb397e2bc1546a77/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 11:52:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 11:52:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:52:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 11:52:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:52:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 11:52:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:52:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 11:52:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:52:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 11:52:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:52:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 11:52:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:52:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.923137634241574e-06 of space, bias 4.0, pg target 0.0023077651610898886 quantized to 16 (current 16)
Jan 29 11:52:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:52:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 11:52:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:52:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 29 11:52:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:52:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.642121670899264e-06 of space, bias 1.0, pg target 0.0013926365012697794 quantized to 32 (current 32)
Jan 29 11:52:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:52:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 11:52:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:52:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 29 11:52:51 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 75 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=73/75 n=6 ec=51/35 lis/c=70/51 les/c/f=71/52/0 sis=73) [2] r=0 lpr=73 pi=[51,73)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:51 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 75 pg[9.1b( v 66'485 (0'0,66'485] local-lis/les=74/75 n=6 ec=51/35 lis/c=70/59 les/c/f=71/60/0 sis=74) [2] r=0 lpr=74 pi=[59,74)/1 crt=66'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:51 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 75 pg[9.e( v 66'489 (0'0,66'489] local-lis/les=74/75 n=7 ec=51/35 lis/c=70/51 les/c/f=71/52/0 sis=74) [2] r=0 lpr=74 pi=[51,74)/1 crt=66'489 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:51 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 75 pg[9.6( v 41'483 (0'0,41'483] local-lis/les=73/75 n=7 ec=51/35 lis/c=70/51 les/c/f=71/52/0 sis=73) [2] r=0 lpr=73 pi=[51,73)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:51 np0005601226 podman[99787]: 2026-01-29 16:52:51.527311724 +0000 UTC m=+1.179194700 container init 013943f4c1617b7c7e524a4328fa75b781a46e95b2ef28342deaf7bc0cd7a2d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_hugle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:52:51 np0005601226 podman[99787]: 2026-01-29 16:52:51.535753464 +0000 UTC m=+1.187636400 container start 013943f4c1617b7c7e524a4328fa75b781a46e95b2ef28342deaf7bc0cd7a2d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_hugle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:52:51 np0005601226 podman[99787]: 2026-01-29 16:52:51.60792258 +0000 UTC m=+1.259805506 container attach 013943f4c1617b7c7e524a4328fa75b781a46e95b2ef28342deaf7bc0cd7a2d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_hugle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 11:52:51 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Jan 29 11:52:52 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 2.c scrub starts
Jan 29 11:52:52 np0005601226 lvm[99947]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 11:52:52 np0005601226 lvm[99947]: VG ceph_vg0 finished
Jan 29 11:52:52 np0005601226 lvm[99949]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 11:52:52 np0005601226 lvm[99949]: VG ceph_vg1 finished
Jan 29 11:52:52 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 2.c scrub ok
Jan 29 11:52:52 np0005601226 lvm[99951]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 11:52:52 np0005601226 lvm[99951]: VG ceph_vg2 finished
Jan 29 11:52:52 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v163: 305 pgs: 2 active+remapped, 2 active+recovery_wait+remapped, 1 activating+remapped, 2 remapped+peering, 298 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 18/252 objects misplaced (7.143%); 170 B/s, 2 objects/s recovering
Jan 29 11:52:52 np0005601226 frosty_hugle[99814]: {}
Jan 29 11:52:52 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Jan 29 11:52:52 np0005601226 systemd[1]: libpod-013943f4c1617b7c7e524a4328fa75b781a46e95b2ef28342deaf7bc0cd7a2d1.scope: Deactivated successfully.
Jan 29 11:52:52 np0005601226 systemd[1]: libpod-013943f4c1617b7c7e524a4328fa75b781a46e95b2ef28342deaf7bc0cd7a2d1.scope: Consumed 1.204s CPU time.
Jan 29 11:52:52 np0005601226 podman[99787]: 2026-01-29 16:52:52.508870266 +0000 UTC m=+2.160753202 container died 013943f4c1617b7c7e524a4328fa75b781a46e95b2ef28342deaf7bc0cd7a2d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_hugle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 11:52:52 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Jan 29 11:52:52 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 76 pg[9.11( v 66'487 (0'0,66'487] local-lis/les=70/71 n=7 ec=51/35 lis/c=70/62 les/c/f=71/63/0 sis=76 pruub=8.008725166s) [2] async=[2] r=-1 lpr=76 pi=[62,76)/1 crt=66'487 lcod 66'486 active pruub 153.224472046s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:52 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 76 pg[9.11( v 66'487 (0'0,66'487] local-lis/les=70/71 n=7 ec=51/35 lis/c=70/62 les/c/f=71/63/0 sis=76 pruub=8.008638382s) [2] r=-1 lpr=76 pi=[62,76)/1 crt=66'487 lcod 66'486 unknown NOTIFY pruub 153.224472046s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:52 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 76 pg[9.1d( v 41'483 (0'0,41'483] local-lis/les=70/71 n=6 ec=51/35 lis/c=70/59 les/c/f=71/60/0 sis=76 pruub=8.007658005s) [2] async=[2] r=-1 lpr=76 pi=[59,76)/1 crt=41'483 active pruub 153.224441528s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:52 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 76 pg[9.1d( v 41'483 (0'0,41'483] local-lis/les=70/71 n=6 ec=51/35 lis/c=70/59 les/c/f=71/60/0 sis=76 pruub=8.007556915s) [2] r=-1 lpr=76 pi=[59,76)/1 crt=41'483 unknown NOTIFY pruub 153.224441528s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:52 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 76 pg[9.1e( v 66'485 (0'0,66'485] local-lis/les=72/73 n=6 ec=51/35 lis/c=72/51 les/c/f=73/52/0 sis=76 pruub=12.548052788s) [2] async=[2] r=-1 lpr=76 pi=[51,76)/1 crt=66'485 lcod 65'484 active pruub 153.310546875s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:52 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 76 pg[9.1e( v 66'485 (0'0,66'485] local-lis/les=72/73 n=6 ec=51/35 lis/c=72/51 les/c/f=73/52/0 sis=76 pruub=12.547932625s) [2] r=-1 lpr=76 pi=[51,76)/1 crt=66'485 lcod 65'484 unknown NOTIFY pruub 153.310546875s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:52 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 76 pg[9.11( v 66'487 (0'0,66'487] local-lis/les=0/0 n=7 ec=51/35 lis/c=70/62 les/c/f=71/63/0 sis=76) [2] r=0 lpr=76 pi=[62,76)/1 pct=0'0 crt=66'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:52 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 76 pg[9.11( v 66'487 (0'0,66'487] local-lis/les=0/0 n=7 ec=51/35 lis/c=70/62 les/c/f=71/63/0 sis=76) [2] r=0 lpr=76 pi=[62,76)/1 crt=66'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:52 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 76 pg[9.1d( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=51/35 lis/c=70/59 les/c/f=71/60/0 sis=76) [2] r=0 lpr=76 pi=[59,76)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:52 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 76 pg[9.1d( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=51/35 lis/c=70/59 les/c/f=71/60/0 sis=76) [2] r=0 lpr=76 pi=[59,76)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:52 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 76 pg[9.1e( v 66'485 (0'0,66'485] local-lis/les=0/0 n=6 ec=51/35 lis/c=72/51 les/c/f=73/52/0 sis=76) [2] r=0 lpr=76 pi=[51,76)/1 pct=0'0 crt=66'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:52 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 76 pg[9.1e( v 66'485 (0'0,66'485] local-lis/les=0/0 n=6 ec=51/35 lis/c=72/51 les/c/f=73/52/0 sis=76) [2] r=0 lpr=76 pi=[51,76)/1 crt=66'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:52 np0005601226 systemd[1]: var-lib-containers-storage-overlay-fb4add4219d5e2f8232b978db2fd037dfe29f4215db34da9cb397e2bc1546a77-merged.mount: Deactivated successfully.
Jan 29 11:52:52 np0005601226 podman[99787]: 2026-01-29 16:52:52.857785365 +0000 UTC m=+2.509668301 container remove 013943f4c1617b7c7e524a4328fa75b781a46e95b2ef28342deaf7bc0cd7a2d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_hugle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 11:52:52 np0005601226 systemd[1]: libpod-conmon-013943f4c1617b7c7e524a4328fa75b781a46e95b2ef28342deaf7bc0cd7a2d1.scope: Deactivated successfully.
Jan 29 11:52:52 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 11:52:52 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:52:52 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 11:52:53 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:52:53 np0005601226 python3.9[100117]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:52:53 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Jan 29 11:52:53 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Jan 29 11:52:53 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 5.a scrub starts
Jan 29 11:52:53 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 5.a scrub ok
Jan 29 11:52:53 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Jan 29 11:52:53 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Jan 29 11:52:53 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Jan 29 11:52:53 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:52:53 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:52:54 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Jan 29 11:52:54 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Jan 29 11:52:54 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 77 pg[9.11( v 66'487 (0'0,66'487] local-lis/les=76/77 n=7 ec=51/35 lis/c=70/62 les/c/f=71/63/0 sis=76) [2] r=0 lpr=76 pi=[62,76)/1 crt=66'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:54 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 77 pg[9.1e( v 66'485 (0'0,66'485] local-lis/les=76/77 n=6 ec=51/35 lis/c=72/51 les/c/f=73/52/0 sis=76) [2] r=0 lpr=76 pi=[51,76)/1 crt=66'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:54 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 77 pg[9.1d( v 41'483 (0'0,41'483] local-lis/les=76/77 n=6 ec=51/35 lis/c=70/59 les/c/f=71/60/0 sis=76) [2] r=0 lpr=76 pi=[59,76)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:52:54 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v166: 305 pgs: 3 peering, 302 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 179 B/s, 5 objects/s recovering
Jan 29 11:52:56 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Jan 29 11:52:56 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Jan 29 11:52:56 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v167: 305 pgs: 3 peering, 302 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 149 B/s, 4 objects/s recovering
Jan 29 11:52:56 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 8.a scrub starts
Jan 29 11:52:56 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 8.a scrub ok
Jan 29 11:52:58 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v168: 305 pgs: 305 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 120 B/s, 3 objects/s recovering
Jan 29 11:52:58 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} v 0)
Jan 29 11:52:58 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} : dispatch
Jan 29 11:52:58 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0)
Jan 29 11:52:58 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} : dispatch
Jan 29 11:52:58 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Jan 29 11:52:58 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 29 11:52:58 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 29 11:52:58 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Jan 29 11:52:59 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Jan 29 11:52:59 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Jan 29 11:52:59 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 78 pg[9.7( v 66'487 (0'0,66'487] local-lis/les=64/65 n=7 ec=51/35 lis/c=64/64 les/c/f=65/65/0 sis=78 pruub=9.073786736s) [2] r=-1 lpr=78 pi=[64,78)/1 crt=66'486 lcod 66'486 active pruub 161.093887329s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:59 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 78 pg[9.7( v 66'487 (0'0,66'487] local-lis/les=64/65 n=7 ec=51/35 lis/c=64/64 les/c/f=65/65/0 sis=78 pruub=9.073732376s) [2] r=-1 lpr=78 pi=[64,78)/1 crt=66'486 lcod 66'486 unknown NOTIFY pruub 161.093887329s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:59 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 78 pg[9.f( v 66'485 (0'0,66'485] local-lis/les=61/62 n=7 ec=51/35 lis/c=61/61 les/c/f=62/62/0 sis=78 pruub=13.438308716s) [2] r=-1 lpr=78 pi=[61,78)/1 crt=66'484 lcod 66'484 active pruub 165.458984375s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:59 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 78 pg[9.f( v 66'485 (0'0,66'485] local-lis/les=61/62 n=7 ec=51/35 lis/c=61/61 les/c/f=62/62/0 sis=78 pruub=13.438265800s) [2] r=-1 lpr=78 pi=[61,78)/1 crt=66'484 lcod 66'484 unknown NOTIFY pruub 165.458984375s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:59 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 78 pg[9.17( v 66'485 (0'0,66'485] local-lis/les=63/64 n=6 ec=51/35 lis/c=63/63 les/c/f=64/64/0 sis=78 pruub=8.263761520s) [2] r=-1 lpr=78 pi=[63,78)/1 crt=66'484 lcod 66'484 active pruub 160.284652710s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:59 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 78 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=60/61 n=6 ec=51/35 lis/c=60/60 les/c/f=61/61/0 sis=78 pruub=12.313957214s) [2] r=-1 lpr=78 pi=[60,78)/1 crt=41'483 active pruub 164.334899902s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:52:59 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 78 pg[9.17( v 66'485 (0'0,66'485] local-lis/les=63/64 n=6 ec=51/35 lis/c=63/63 les/c/f=64/64/0 sis=78 pruub=8.263699532s) [2] r=-1 lpr=78 pi=[63,78)/1 crt=66'484 lcod 66'484 unknown NOTIFY pruub 160.284652710s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:59 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 78 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=60/61 n=6 ec=51/35 lis/c=60/60 les/c/f=61/61/0 sis=78 pruub=12.313810349s) [2] r=-1 lpr=78 pi=[60,78)/1 crt=41'483 unknown NOTIFY pruub 164.334899902s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:52:59 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 78 pg[9.7( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=64/64 les/c/f=65/65/0 sis=78) [2] r=0 lpr=78 pi=[64,78)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:59 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 78 pg[9.f( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=61/61 les/c/f=62/62/0 sis=78) [2] r=0 lpr=78 pi=[61,78)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:59 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 78 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=60/60 les/c/f=61/61/0 sis=78) [2] r=0 lpr=78 pi=[60,78)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:59 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 78 pg[9.17( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=63/63 les/c/f=64/64/0 sis=78) [2] r=0 lpr=78 pi=[63,78)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:52:59 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Jan 29 11:52:59 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} : dispatch
Jan 29 11:52:59 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} : dispatch
Jan 29 11:52:59 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 29 11:52:59 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 29 11:52:59 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Jan 29 11:53:00 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Jan 29 11:53:00 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Jan 29 11:53:00 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 79 pg[9.17( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=63/63 les/c/f=64/64/0 sis=79) [2]/[0] r=-1 lpr=79 pi=[63,79)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:53:00 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 79 pg[9.17( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=63/63 les/c/f=64/64/0 sis=79) [2]/[0] r=-1 lpr=79 pi=[63,79)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 29 11:53:00 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 79 pg[9.f( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=61/61 les/c/f=62/62/0 sis=79) [2]/[0] r=-1 lpr=79 pi=[61,79)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:53:00 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 79 pg[9.f( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=61/61 les/c/f=62/62/0 sis=79) [2]/[0] r=-1 lpr=79 pi=[61,79)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 29 11:53:00 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 79 pg[9.7( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=64/64 les/c/f=65/65/0 sis=79) [2]/[0] r=-1 lpr=79 pi=[64,79)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:53:00 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 79 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=60/60 les/c/f=61/61/0 sis=79) [2]/[0] r=-1 lpr=79 pi=[60,79)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:53:00 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 79 pg[9.7( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=64/64 les/c/f=65/65/0 sis=79) [2]/[0] r=-1 lpr=79 pi=[64,79)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 29 11:53:00 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 79 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=60/60 les/c/f=61/61/0 sis=79) [2]/[0] r=-1 lpr=79 pi=[60,79)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 29 11:53:00 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 79 pg[9.7( v 66'487 (0'0,66'487] local-lis/les=64/65 n=7 ec=51/35 lis/c=64/64 les/c/f=65/65/0 sis=79) [2]/[0] r=0 lpr=79 pi=[64,79)/1 crt=66'486 lcod 66'486 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:53:00 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 79 pg[9.7( v 66'487 (0'0,66'487] local-lis/les=64/65 n=7 ec=51/35 lis/c=64/64 les/c/f=65/65/0 sis=79) [2]/[0] r=0 lpr=79 pi=[64,79)/1 crt=66'486 lcod 66'486 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 29 11:53:00 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 79 pg[9.f( v 66'485 (0'0,66'485] local-lis/les=61/62 n=7 ec=51/35 lis/c=61/61 les/c/f=62/62/0 sis=79) [2]/[0] r=0 lpr=79 pi=[61,79)/1 crt=66'484 lcod 66'484 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:53:00 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 79 pg[9.17( v 66'485 (0'0,66'485] local-lis/les=63/64 n=6 ec=51/35 lis/c=63/63 les/c/f=64/64/0 sis=79) [2]/[0] r=0 lpr=79 pi=[63,79)/1 crt=66'484 lcod 66'484 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:53:00 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 79 pg[9.f( v 66'485 (0'0,66'485] local-lis/les=61/62 n=7 ec=51/35 lis/c=61/61 les/c/f=62/62/0 sis=79) [2]/[0] r=0 lpr=79 pi=[61,79)/1 crt=66'484 lcod 66'484 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 29 11:53:00 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 79 pg[9.17( v 66'485 (0'0,66'485] local-lis/les=63/64 n=6 ec=51/35 lis/c=63/63 les/c/f=64/64/0 sis=79) [2]/[0] r=0 lpr=79 pi=[63,79)/1 crt=66'484 lcod 66'484 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 29 11:53:00 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 79 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=60/61 n=6 ec=51/35 lis/c=60/60 les/c/f=61/61/0 sis=79) [2]/[0] r=0 lpr=79 pi=[60,79)/1 crt=41'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:53:00 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 79 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=60/61 n=6 ec=51/35 lis/c=60/60 les/c/f=61/61/0 sis=79) [2]/[0] r=0 lpr=79 pi=[60,79)/1 crt=41'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 29 11:53:00 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v171: 305 pgs: 305 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:53:00 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} v 0)
Jan 29 11:53:00 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} : dispatch
Jan 29 11:53:00 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0)
Jan 29 11:53:00 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} : dispatch
Jan 29 11:53:01 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Jan 29 11:53:01 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 29 11:53:01 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 29 11:53:01 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Jan 29 11:53:01 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} : dispatch
Jan 29 11:53:01 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} : dispatch
Jan 29 11:53:01 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 80 pg[6.8( v 36'39 (0'0,36'39] local-lis/les=48/49 n=1 ec=48/22 lis/c=48/48 les/c/f=49/49/0 sis=80 pruub=15.524225235s) [2] r=-1 lpr=80 pi=[48,80)/1 crt=36'39 lcod 0'0 active pruub 169.810394287s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:53:01 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 80 pg[6.8( v 36'39 (0'0,36'39] local-lis/les=48/49 n=1 ec=48/22 lis/c=48/48 les/c/f=49/49/0 sis=80 pruub=15.524071693s) [2] r=-1 lpr=80 pi=[48,80)/1 crt=36'39 lcod 0'0 unknown NOTIFY pruub 169.810394287s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:53:01 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Jan 29 11:53:01 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 80 pg[6.8( empty local-lis/les=0/0 n=0 ec=48/22 lis/c=48/48 les/c/f=49/49/0 sis=80) [2] r=0 lpr=80 pi=[48,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:53:01 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 80 pg[9.7( v 66'487 (0'0,66'487] local-lis/les=79/80 n=7 ec=51/35 lis/c=64/64 les/c/f=65/65/0 sis=79) [2]/[0] async=[2] r=0 lpr=79 pi=[64,79)/1 crt=66'487 lcod 66'486 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:53:01 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 80 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=79/80 n=6 ec=51/35 lis/c=60/60 les/c/f=61/61/0 sis=79) [2]/[0] async=[2] r=0 lpr=79 pi=[60,79)/1 crt=41'483 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:53:01 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 80 pg[9.17( v 66'485 (0'0,66'485] local-lis/les=79/80 n=6 ec=51/35 lis/c=63/63 les/c/f=64/64/0 sis=79) [2]/[0] async=[2] r=0 lpr=79 pi=[63,79)/1 crt=66'485 lcod 66'484 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:53:01 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 80 pg[9.f( v 66'485 (0'0,66'485] local-lis/les=79/80 n=7 ec=51/35 lis/c=61/61 les/c/f=62/62/0 sis=79) [2]/[0] async=[2] r=0 lpr=79 pi=[61,79)/1 crt=66'485 lcod 66'484 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:53:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 80 pg[9.8( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=80 pruub=10.495311737s) [2] r=-1 lpr=80 pi=[51,80)/1 crt=41'483 lcod 0'0 active pruub 160.749893188s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:53:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 80 pg[9.8( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=80 pruub=10.495089531s) [2] r=-1 lpr=80 pi=[51,80)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 160.749893188s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:53:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 80 pg[9.18( v 66'487 (0'0,66'487] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=80 pruub=10.494915962s) [2] r=-1 lpr=80 pi=[51,80)/1 crt=66'486 lcod 66'486 active pruub 160.749908447s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:53:02 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 80 pg[9.18( v 66'487 (0'0,66'487] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=80 pruub=10.494859695s) [2] r=-1 lpr=80 pi=[51,80)/1 crt=66'486 lcod 66'486 unknown NOTIFY pruub 160.749908447s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:53:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 80 pg[9.18( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=80) [2] r=0 lpr=80 pi=[51,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:53:02 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 80 pg[9.8( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=80) [2] r=0 lpr=80 pi=[51,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:53:02 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v173: 305 pgs: 305 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:53:02 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Jan 29 11:53:02 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} v 0)
Jan 29 11:53:02 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} : dispatch
Jan 29 11:53:02 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0)
Jan 29 11:53:02 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} : dispatch
Jan 29 11:53:02 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 29 11:53:02 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 29 11:53:03 np0005601226 systemd[1]: session-34.scope: Deactivated successfully.
Jan 29 11:53:03 np0005601226 systemd[1]: session-34.scope: Consumed 7.975s CPU time.
Jan 29 11:53:03 np0005601226 systemd-logind[823]: Session 34 logged out. Waiting for processes to exit.
Jan 29 11:53:03 np0005601226 systemd-logind[823]: Removed session 34.
Jan 29 11:53:03 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Jan 29 11:53:03 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 11.0 scrub starts
Jan 29 11:53:03 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 11.0 scrub ok
Jan 29 11:53:03 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Jan 29 11:53:04 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 81 pg[9.17( v 66'485 (0'0,66'485] local-lis/les=79/80 n=6 ec=51/35 lis/c=79/63 les/c/f=80/64/0 sis=81 pruub=13.772690773s) [2] async=[2] r=-1 lpr=81 pi=[63,81)/1 crt=66'485 lcod 66'484 active pruub 170.574905396s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:53:04 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 81 pg[9.17( v 66'485 (0'0,66'485] local-lis/les=79/80 n=6 ec=51/35 lis/c=79/63 les/c/f=80/64/0 sis=81 pruub=13.772568703s) [2] r=-1 lpr=81 pi=[63,81)/1 crt=66'485 lcod 66'484 unknown NOTIFY pruub 170.574905396s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:53:04 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e81 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:53:04 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 81 pg[9.8( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[51,81)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:53:04 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 81 pg[9.17( v 66'485 (0'0,66'485] local-lis/les=0/0 n=6 ec=51/35 lis/c=79/63 les/c/f=80/64/0 sis=81) [2] r=0 lpr=81 pi=[63,81)/1 pct=0'0 crt=66'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:53:04 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 81 pg[9.17( v 66'485 (0'0,66'485] local-lis/les=0/0 n=6 ec=51/35 lis/c=79/63 les/c/f=80/64/0 sis=81) [2] r=0 lpr=81 pi=[63,81)/1 crt=66'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:53:04 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 81 pg[9.8( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[51,81)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 29 11:53:04 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 81 pg[9.18( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[51,81)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:53:04 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 81 pg[9.18( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[51,81)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 29 11:53:04 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} : dispatch
Jan 29 11:53:04 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} : dispatch
Jan 29 11:53:04 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v175: 305 pgs: 1 active+recovering+remapped, 2 unknown, 2 active+recovery_wait+remapped, 1 active+remapped, 1 peering, 298 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 16/238 objects misplaced (6.723%); 78 B/s, 1 objects/s recovering
Jan 29 11:53:04 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 81 pg[9.8( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=81) [2]/[1] r=0 lpr=81 pi=[51,81)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:53:04 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 81 pg[9.18( v 66'487 (0'0,66'487] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=81) [2]/[1] r=0 lpr=81 pi=[51,81)/1 crt=66'486 lcod 66'486 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:53:04 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 81 pg[9.8( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=81) [2]/[1] r=0 lpr=81 pi=[51,81)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 29 11:53:04 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 81 pg[9.18( v 66'487 (0'0,66'487] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=81) [2]/[1] r=0 lpr=81 pi=[51,81)/1 crt=66'486 lcod 66'486 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 29 11:53:04 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 81 pg[6.8( v 36'39 (0'0,36'39] local-lis/les=80/81 n=1 ec=48/22 lis/c=48/48 les/c/f=49/49/0 sis=80) [2] r=0 lpr=80 pi=[48,80)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:53:04 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Jan 29 11:53:04 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 29 11:53:04 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 29 11:53:04 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Jan 29 11:53:04 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Jan 29 11:53:05 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 82 pg[9.f( v 66'485 (0'0,66'485] local-lis/les=79/80 n=7 ec=51/35 lis/c=79/61 les/c/f=80/62/0 sis=82 pruub=12.870154381s) [2] async=[2] r=-1 lpr=82 pi=[61,82)/1 crt=66'485 lcod 66'484 active pruub 170.575057983s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:53:05 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 82 pg[9.f( v 66'485 (0'0,66'485] local-lis/les=79/80 n=7 ec=51/35 lis/c=79/61 les/c/f=80/62/0 sis=82 pruub=12.869868279s) [2] r=-1 lpr=82 pi=[61,82)/1 crt=66'485 lcod 66'484 unknown NOTIFY pruub 170.575057983s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:53:05 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 82 pg[9.f( v 66'485 (0'0,66'485] local-lis/les=0/0 n=7 ec=51/35 lis/c=79/61 les/c/f=80/62/0 sis=82) [2] r=0 lpr=82 pi=[61,82)/1 pct=0'0 crt=66'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:53:05 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 82 pg[9.f( v 66'485 (0'0,66'485] local-lis/les=0/0 n=7 ec=51/35 lis/c=79/61 les/c/f=80/62/0 sis=82) [2] r=0 lpr=82 pi=[61,82)/1 crt=66'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:53:05 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 82 pg[6.9( v 36'39 (0'0,36'39] local-lis/les=55/56 n=1 ec=48/22 lis/c=55/55 les/c/f=56/56/0 sis=82 pruub=11.164827347s) [0] r=-1 lpr=82 pi=[55,82)/1 crt=36'39 lcod 0'0 active pruub 164.624755859s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:53:05 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 82 pg[6.9( v 36'39 (0'0,36'39] local-lis/les=55/56 n=1 ec=48/22 lis/c=55/55 les/c/f=56/56/0 sis=82 pruub=11.164792061s) [0] r=-1 lpr=82 pi=[55,82)/1 crt=36'39 lcod 0'0 unknown NOTIFY pruub 164.624755859s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:53:05 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 82 pg[6.9( empty local-lis/les=0/0 n=0 ec=48/22 lis/c=55/55 les/c/f=56/56/0 sis=82) [0] r=0 lpr=82 pi=[55,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:53:05 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 82 pg[9.17( v 66'485 (0'0,66'485] local-lis/les=81/82 n=6 ec=51/35 lis/c=79/63 les/c/f=80/64/0 sis=81) [2] r=0 lpr=81 pi=[63,81)/1 crt=66'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:53:05 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 82 pg[9.8( v 41'483 (0'0,41'483] local-lis/les=81/82 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[51,81)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:53:05 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 82 pg[9.18( v 66'487 (0'0,66'487] local-lis/les=81/82 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[51,81)/1 crt=66'487 lcod 66'486 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:53:05 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 29 11:53:05 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 29 11:53:06 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v177: 305 pgs: 1 active+recovering+remapped, 2 unknown, 2 active+recovery_wait+remapped, 1 active+remapped, 1 peering, 298 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 16/238 objects misplaced (6.723%); 69 B/s, 1 objects/s recovering
Jan 29 11:53:06 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Jan 29 11:53:06 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Jan 29 11:53:06 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Jan 29 11:53:06 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 83 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=51/35 lis/c=79/60 les/c/f=80/61/0 sis=83) [2] r=0 lpr=83 pi=[60,83)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:53:06 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 83 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=51/35 lis/c=79/60 les/c/f=80/61/0 sis=83) [2] r=0 lpr=83 pi=[60,83)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:53:06 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 83 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=79/80 n=6 ec=51/35 lis/c=79/60 les/c/f=80/61/0 sis=83 pruub=11.252933502s) [2] async=[2] r=-1 lpr=83 pi=[60,83)/1 crt=41'483 active pruub 170.574829102s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:53:06 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 83 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=79/80 n=6 ec=51/35 lis/c=79/60 les/c/f=80/61/0 sis=83 pruub=11.252748489s) [2] r=-1 lpr=83 pi=[60,83)/1 crt=41'483 unknown NOTIFY pruub 170.574829102s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:53:06 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 83 pg[9.f( v 66'485 (0'0,66'485] local-lis/les=82/83 n=7 ec=51/35 lis/c=79/61 les/c/f=80/62/0 sis=82) [2] r=0 lpr=82 pi=[61,82)/1 crt=66'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:53:06 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 83 pg[6.9( v 36'39 (0'0,36'39] local-lis/les=82/83 n=1 ec=48/22 lis/c=55/55 les/c/f=56/56/0 sis=82) [0] r=0 lpr=82 pi=[55,82)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:53:07 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Jan 29 11:53:07 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Jan 29 11:53:07 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Jan 29 11:53:07 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 84 pg[9.8( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=51/35 lis/c=81/51 les/c/f=82/52/0 sis=84) [2] r=0 lpr=84 pi=[51,84)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:53:07 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 84 pg[9.8( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=51/35 lis/c=81/51 les/c/f=82/52/0 sis=84) [2] r=0 lpr=84 pi=[51,84)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:53:07 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 84 pg[9.7( v 66'487 (0'0,66'487] local-lis/les=79/80 n=7 ec=51/35 lis/c=79/64 les/c/f=80/65/0 sis=84 pruub=9.964828491s) [2] async=[2] r=-1 lpr=84 pi=[64,84)/1 crt=66'487 lcod 66'486 active pruub 170.574722290s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:53:07 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 84 pg[9.7( v 66'487 (0'0,66'487] local-lis/les=79/80 n=7 ec=51/35 lis/c=79/64 les/c/f=80/65/0 sis=84 pruub=9.964656830s) [2] r=-1 lpr=84 pi=[64,84)/1 crt=66'487 lcod 66'486 unknown NOTIFY pruub 170.574722290s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:53:07 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 84 pg[9.8( v 41'483 (0'0,41'483] local-lis/les=81/82 n=7 ec=51/35 lis/c=81/51 les/c/f=82/52/0 sis=84 pruub=13.633255005s) [2] async=[2] r=-1 lpr=84 pi=[51,84)/1 crt=41'483 lcod 0'0 active pruub 169.780746460s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:53:07 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 84 pg[9.8( v 41'483 (0'0,41'483] local-lis/les=81/82 n=7 ec=51/35 lis/c=81/51 les/c/f=82/52/0 sis=84 pruub=13.633161545s) [2] r=-1 lpr=84 pi=[51,84)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 169.780746460s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:53:07 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 84 pg[9.7( v 66'487 (0'0,66'487] local-lis/les=0/0 n=7 ec=51/35 lis/c=79/64 les/c/f=80/65/0 sis=84) [2] r=0 lpr=84 pi=[64,84)/1 pct=0'0 crt=66'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:53:07 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 84 pg[9.7( v 66'487 (0'0,66'487] local-lis/les=0/0 n=7 ec=51/35 lis/c=79/64 les/c/f=80/65/0 sis=84) [2] r=0 lpr=84 pi=[64,84)/1 crt=66'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:53:08 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 84 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=83/84 n=6 ec=51/35 lis/c=79/60 les/c/f=80/61/0 sis=83) [2] r=0 lpr=83 pi=[60,83)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:53:08 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v180: 305 pgs: 1 unknown, 2 peering, 302 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 3.8 KiB/s rd, 454 B/s wr, 9 op/s; 165 B/s, 3 objects/s recovering
Jan 29 11:53:08 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Jan 29 11:53:08 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Jan 29 11:53:08 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Jan 29 11:53:08 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 85 pg[9.18( v 66'487 (0'0,66'487] local-lis/les=0/0 n=6 ec=51/35 lis/c=81/51 les/c/f=82/52/0 sis=85) [2] r=0 lpr=85 pi=[51,85)/1 pct=0'0 crt=66'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:53:08 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 85 pg[9.18( v 66'487 (0'0,66'487] local-lis/les=0/0 n=6 ec=51/35 lis/c=81/51 les/c/f=82/52/0 sis=85) [2] r=0 lpr=85 pi=[51,85)/1 crt=66'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:53:08 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 85 pg[9.18( v 66'487 (0'0,66'487] local-lis/les=81/82 n=6 ec=51/35 lis/c=81/51 les/c/f=82/52/0 sis=85 pruub=12.621656418s) [2] async=[2] r=-1 lpr=85 pi=[51,85)/1 crt=66'487 lcod 66'486 active pruub 169.780822754s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:53:08 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 85 pg[9.18( v 66'487 (0'0,66'487] local-lis/les=81/82 n=6 ec=51/35 lis/c=81/51 les/c/f=82/52/0 sis=85 pruub=12.621582031s) [2] r=-1 lpr=85 pi=[51,85)/1 crt=66'487 lcod 66'486 unknown NOTIFY pruub 169.780822754s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:53:08 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 85 pg[9.7( v 66'487 (0'0,66'487] local-lis/les=84/85 n=7 ec=51/35 lis/c=79/64 les/c/f=80/65/0 sis=84) [2] r=0 lpr=84 pi=[64,84)/1 crt=66'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:53:08 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 85 pg[9.8( v 41'483 (0'0,41'483] local-lis/les=84/85 n=7 ec=51/35 lis/c=81/51 les/c/f=82/52/0 sis=84) [2] r=0 lpr=84 pi=[51,84)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:53:09 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:53:09 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Jan 29 11:53:09 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Jan 29 11:53:09 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Jan 29 11:53:09 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Jan 29 11:53:09 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Jan 29 11:53:10 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 86 pg[9.18( v 66'487 (0'0,66'487] local-lis/les=85/86 n=6 ec=51/35 lis/c=81/51 les/c/f=82/52/0 sis=85) [2] r=0 lpr=85 pi=[51,85)/1 crt=66'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:53:10 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v183: 305 pgs: 1 active+remapped, 2 peering, 302 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1023 B/s wr, 26 op/s; 290 B/s, 5 objects/s recovering
Jan 29 11:53:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 11:53:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 11:53:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 11:53:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 11:53:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 11:53:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 11:53:11 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 2.e scrub starts
Jan 29 11:53:11 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 2.e scrub ok
Jan 29 11:53:11 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Jan 29 11:53:11 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Jan 29 11:53:12 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Jan 29 11:53:12 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Jan 29 11:53:12 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v184: 305 pgs: 1 active+remapped, 2 peering, 302 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 7.2 KiB/s rd, 699 B/s wr, 17 op/s; 198 B/s, 3 objects/s recovering
Jan 29 11:53:14 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Jan 29 11:53:14 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Jan 29 11:53:14 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:53:14 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v185: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 6.3 KiB/s rd, 473 B/s wr, 15 op/s; 196 B/s, 4 objects/s recovering
Jan 29 11:53:14 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} v 0)
Jan 29 11:53:14 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} : dispatch
Jan 29 11:53:14 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0)
Jan 29 11:53:14 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} : dispatch
Jan 29 11:53:14 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Jan 29 11:53:14 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} : dispatch
Jan 29 11:53:14 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} : dispatch
Jan 29 11:53:14 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 29 11:53:14 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 29 11:53:14 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Jan 29 11:53:14 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Jan 29 11:53:15 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Jan 29 11:53:15 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Jan 29 11:53:15 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 29 11:53:15 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 29 11:53:15 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 3.b scrub starts
Jan 29 11:53:15 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 3.b scrub ok
Jan 29 11:53:16 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 87 pg[6.a( v 36'39 (0'0,36'39] local-lis/les=56/57 n=1 ec=48/22 lis/c=56/56 les/c/f=57/57/0 sis=87 pruub=12.641985893s) [0] r=-1 lpr=87 pi=[56,87)/1 crt=36'39 lcod 0'0 active pruub 177.177490234s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:53:16 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 87 pg[6.a( v 36'39 (0'0,36'39] local-lis/les=56/57 n=1 ec=48/22 lis/c=56/56 les/c/f=57/57/0 sis=87 pruub=12.641846657s) [0] r=-1 lpr=87 pi=[56,87)/1 crt=36'39 lcod 0'0 unknown NOTIFY pruub 177.177490234s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:53:16 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 87 pg[6.a( empty local-lis/les=0/0 n=0 ec=48/22 lis/c=56/56 les/c/f=57/57/0 sis=87) [0] r=0 lpr=87 pi=[56,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:53:16 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v187: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 14 B/s, 0 objects/s recovering
Jan 29 11:53:16 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} v 0)
Jan 29 11:53:16 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} : dispatch
Jan 29 11:53:16 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0)
Jan 29 11:53:16 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} : dispatch
Jan 29 11:53:16 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Jan 29 11:53:16 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} : dispatch
Jan 29 11:53:16 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} : dispatch
Jan 29 11:53:16 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 29 11:53:16 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 29 11:53:16 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Jan 29 11:53:16 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Jan 29 11:53:16 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 88 pg[6.b( v 36'39 (0'0,36'39] local-lis/les=67/69 n=1 ec=48/22 lis/c=67/67 les/c/f=69/69/0 sis=88 pruub=12.256381989s) [1] r=-1 lpr=88 pi=[67,88)/1 crt=36'39 active pruub 181.855255127s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:53:16 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 88 pg[6.b( v 36'39 (0'0,36'39] local-lis/les=67/69 n=1 ec=48/22 lis/c=67/67 les/c/f=69/69/0 sis=88 pruub=12.256324768s) [1] r=-1 lpr=88 pi=[67,88)/1 crt=36'39 unknown NOTIFY pruub 181.855255127s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:53:16 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 88 pg[6.a( v 36'39 (0'0,36'39] local-lis/les=87/88 n=1 ec=48/22 lis/c=56/56 les/c/f=57/57/0 sis=87) [0] r=0 lpr=87 pi=[56,87)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:53:16 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 88 pg[6.b( empty local-lis/les=0/0 n=0 ec=48/22 lis/c=67/67 les/c/f=69/69/0 sis=88) [1] r=0 lpr=88 pi=[67,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:53:17 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Jan 29 11:53:17 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 29 11:53:17 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 29 11:53:17 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Jan 29 11:53:17 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Jan 29 11:53:17 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 89 pg[6.b( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=88/89 n=1 ec=48/22 lis/c=67/67 les/c/f=69/69/0 sis=88) [1] r=0 lpr=88 pi=[67,88)/1 crt=36'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:53:18 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Jan 29 11:53:18 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Jan 29 11:53:18 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v190: 305 pgs: 305 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Jan 29 11:53:18 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} v 0)
Jan 29 11:53:18 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} : dispatch
Jan 29 11:53:18 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0)
Jan 29 11:53:18 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} : dispatch
Jan 29 11:53:18 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Jan 29 11:53:18 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 29 11:53:18 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 29 11:53:18 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Jan 29 11:53:19 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Jan 29 11:53:19 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 90 pg[9.c( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=90 pruub=9.455309868s) [2] r=-1 lpr=90 pi=[51,90)/1 crt=41'483 lcod 0'0 active pruub 176.750213623s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:53:19 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 90 pg[9.c( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=90 pruub=9.455004692s) [2] r=-1 lpr=90 pi=[51,90)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 176.750213623s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:53:19 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 90 pg[9.1c( v 66'487 (0'0,66'487] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=90 pruub=9.454128265s) [2] r=-1 lpr=90 pi=[51,90)/1 crt=66'486 lcod 66'486 active pruub 176.750381470s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:53:19 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 90 pg[9.1c( v 66'487 (0'0,66'487] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=90 pruub=9.454052925s) [2] r=-1 lpr=90 pi=[51,90)/1 crt=66'486 lcod 66'486 unknown NOTIFY pruub 176.750381470s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:53:19 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:53:19 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 90 pg[9.1c( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=90) [2] r=0 lpr=90 pi=[51,90)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:53:19 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 90 pg[9.c( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=90) [2] r=0 lpr=90 pi=[51,90)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:53:19 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} : dispatch
Jan 29 11:53:19 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} : dispatch
Jan 29 11:53:19 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Jan 29 11:53:19 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Jan 29 11:53:19 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 8.0 scrub starts
Jan 29 11:53:19 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Jan 29 11:53:19 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 8.0 scrub ok
Jan 29 11:53:20 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Jan 29 11:53:20 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Jan 29 11:53:20 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Jan 29 11:53:20 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v192: 305 pgs: 305 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 29 11:53:20 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Jan 29 11:53:20 np0005601226 systemd-logind[823]: New session 35 of user zuul.
Jan 29 11:53:20 np0005601226 systemd[1]: Started Session 35 of User zuul.
Jan 29 11:53:20 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} v 0)
Jan 29 11:53:20 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} : dispatch
Jan 29 11:53:20 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0)
Jan 29 11:53:20 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} : dispatch
Jan 29 11:53:20 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 91 pg[9.c( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=91) [2]/[1] r=-1 lpr=91 pi=[51,91)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:53:20 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 91 pg[9.c( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=91) [2]/[1] r=-1 lpr=91 pi=[51,91)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 29 11:53:20 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 91 pg[9.1c( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=91) [2]/[1] r=-1 lpr=91 pi=[51,91)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:53:20 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 91 pg[9.1c( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=91) [2]/[1] r=-1 lpr=91 pi=[51,91)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 29 11:53:20 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 29 11:53:20 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 29 11:53:20 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 91 pg[9.1c( v 66'487 (0'0,66'487] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=91) [2]/[1] r=0 lpr=91 pi=[51,91)/1 crt=66'486 lcod 66'486 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:53:20 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 91 pg[9.c( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=91) [2]/[1] r=0 lpr=91 pi=[51,91)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:53:20 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 91 pg[9.1c( v 66'487 (0'0,66'487] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=91) [2]/[1] r=0 lpr=91 pi=[51,91)/1 crt=66'486 lcod 66'486 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 29 11:53:20 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 91 pg[9.c( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=91) [2]/[1] r=0 lpr=91 pi=[51,91)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 29 11:53:21 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Jan 29 11:53:21 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Jan 29 11:53:21 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Jan 29 11:53:21 np0005601226 python3.9[100355]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 29 11:53:21 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 29 11:53:21 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 29 11:53:21 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Jan 29 11:53:21 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Jan 29 11:53:21 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 92 pg[6.d( v 36'39 (0'0,36'39] local-lis/les=70/71 n=1 ec=48/22 lis/c=70/70 les/c/f=71/71/0 sis=92 pruub=10.774728775s) [1] r=-1 lpr=92 pi=[70,92)/1 crt=36'39 active pruub 184.932174683s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:53:21 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 92 pg[6.d( v 36'39 (0'0,36'39] local-lis/les=70/71 n=1 ec=48/22 lis/c=70/70 les/c/f=71/71/0 sis=92 pruub=10.774679184s) [1] r=-1 lpr=92 pi=[70,92)/1 crt=36'39 unknown NOTIFY pruub 184.932174683s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:53:21 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 92 pg[6.d( empty local-lis/les=0/0 n=0 ec=48/22 lis/c=70/70 les/c/f=71/71/0 sis=92) [1] r=0 lpr=92 pi=[70,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:53:21 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} : dispatch
Jan 29 11:53:21 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} : dispatch
Jan 29 11:53:21 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 29 11:53:21 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 29 11:53:21 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Jan 29 11:53:21 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Jan 29 11:53:22 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Jan 29 11:53:22 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Jan 29 11:53:22 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 92 pg[9.1c( v 66'487 (0'0,66'487] local-lis/les=91/92 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=91) [2]/[1] async=[2] r=0 lpr=91 pi=[51,91)/1 crt=66'487 lcod 66'486 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:53:22 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 92 pg[9.c( v 41'483 (0'0,41'483] local-lis/les=91/92 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=91) [2]/[1] async=[2] r=0 lpr=91 pi=[51,91)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:53:22 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Jan 29 11:53:22 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Jan 29 11:53:22 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v195: 305 pgs: 305 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 29 11:53:22 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} v 0)
Jan 29 11:53:22 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} : dispatch
Jan 29 11:53:22 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0)
Jan 29 11:53:22 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} : dispatch
Jan 29 11:53:22 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Jan 29 11:53:22 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 29 11:53:22 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 29 11:53:22 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Jan 29 11:53:22 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Jan 29 11:53:22 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 93 pg[9.1c( v 66'487 (0'0,66'487] local-lis/les=0/0 n=6 ec=51/35 lis/c=91/51 les/c/f=92/52/0 sis=93) [2] r=0 lpr=93 pi=[51,93)/1 pct=0'0 crt=66'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:53:22 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 93 pg[9.1c( v 66'487 (0'0,66'487] local-lis/les=0/0 n=6 ec=51/35 lis/c=91/51 les/c/f=92/52/0 sis=93) [2] r=0 lpr=93 pi=[51,93)/1 crt=66'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:53:22 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 93 pg[9.1c( v 66'487 (0'0,66'487] local-lis/les=91/92 n=6 ec=51/35 lis/c=91/51 les/c/f=92/52/0 sis=93 pruub=15.866858482s) [2] async=[2] r=-1 lpr=93 pi=[51,93)/1 crt=66'487 lcod 66'486 active pruub 186.528533936s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:53:22 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 93 pg[9.1c( v 66'487 (0'0,66'487] local-lis/les=91/92 n=6 ec=51/35 lis/c=91/51 les/c/f=92/52/0 sis=93 pruub=15.866689682s) [2] r=-1 lpr=93 pi=[51,93)/1 crt=66'487 lcod 66'486 unknown NOTIFY pruub 186.528533936s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:53:22 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 93 pg[6.d( v 36'39 lc 35'13 (0'0,36'39] local-lis/les=92/93 n=1 ec=48/22 lis/c=70/70 les/c/f=71/71/0 sis=92) [1] r=0 lpr=92 pi=[70,92)/1 crt=36'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:53:22 np0005601226 python3.9[100529]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 11:53:22 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} : dispatch
Jan 29 11:53:22 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} : dispatch
Jan 29 11:53:22 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 29 11:53:22 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 29 11:53:22 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 7.0 scrub starts
Jan 29 11:53:22 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 7.0 scrub ok
Jan 29 11:53:23 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Jan 29 11:53:23 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Jan 29 11:53:23 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Jan 29 11:53:23 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 94 pg[9.c( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=51/35 lis/c=91/51 les/c/f=92/52/0 sis=94) [2] r=0 lpr=94 pi=[51,94)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:53:23 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 94 pg[9.c( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=51/35 lis/c=91/51 les/c/f=92/52/0 sis=94) [2] r=0 lpr=94 pi=[51,94)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:53:23 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 94 pg[9.c( v 41'483 (0'0,41'483] local-lis/les=91/92 n=7 ec=51/35 lis/c=91/51 les/c/f=92/52/0 sis=94 pruub=14.817914963s) [2] async=[2] r=-1 lpr=94 pi=[51,94)/1 crt=41'483 lcod 0'0 active pruub 186.528762817s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:53:23 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 94 pg[9.c( v 41'483 (0'0,41'483] local-lis/les=91/92 n=7 ec=51/35 lis/c=91/51 les/c/f=92/52/0 sis=94 pruub=14.817788124s) [2] r=-1 lpr=94 pi=[51,94)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 186.528762817s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:53:23 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 94 pg[9.1c( v 66'487 (0'0,66'487] local-lis/les=93/94 n=6 ec=51/35 lis/c=91/51 les/c/f=92/52/0 sis=93) [2] r=0 lpr=93 pi=[51,93)/1 crt=66'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:53:23 np0005601226 python3.9[100685]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:53:24 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:53:24 np0005601226 python3.9[100838]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 11:53:24 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v198: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 121 B/s, 2 objects/s recovering
Jan 29 11:53:24 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Jan 29 11:53:24 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Jan 29 11:53:24 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Jan 29 11:53:24 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 95 pg[9.c( v 41'483 (0'0,41'483] local-lis/les=94/95 n=7 ec=51/35 lis/c=91/51 les/c/f=92/52/0 sis=94) [2] r=0 lpr=94 pi=[51,94)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:53:24 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 3.0 scrub starts
Jan 29 11:53:24 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 3.0 scrub ok
Jan 29 11:53:25 np0005601226 python3.9[100992]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 11:53:25 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Jan 29 11:53:25 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Jan 29 11:53:25 np0005601226 python3.9[101144]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 11:53:25 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 11.a scrub starts
Jan 29 11:53:25 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 11.a scrub ok
Jan 29 11:53:26 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Jan 29 11:53:26 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v200: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 98 B/s, 2 objects/s recovering
Jan 29 11:53:26 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Jan 29 11:53:26 np0005601226 python3.9[101294]: ansible-ansible.builtin.service_facts Invoked
Jan 29 11:53:26 np0005601226 network[101311]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 29 11:53:26 np0005601226 network[101312]: 'network-scripts' will be removed from distribution in near future.
Jan 29 11:53:26 np0005601226 network[101313]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 29 11:53:26 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 11.c scrub starts
Jan 29 11:53:26 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 11.c scrub ok
Jan 29 11:53:28 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Jan 29 11:53:28 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Jan 29 11:53:28 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v201: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 81 B/s, 2 objects/s recovering
Jan 29 11:53:28 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} v 0)
Jan 29 11:53:28 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} : dispatch
Jan 29 11:53:28 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0)
Jan 29 11:53:28 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} : dispatch
Jan 29 11:53:28 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Jan 29 11:53:28 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Jan 29 11:53:29 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:53:29 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Jan 29 11:53:29 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Jan 29 11:53:29 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Jan 29 11:53:29 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} : dispatch
Jan 29 11:53:29 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} : dispatch
Jan 29 11:53:29 np0005601226 python3.9[101573]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:53:29 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 29 11:53:29 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 29 11:53:29 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Jan 29 11:53:29 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Jan 29 11:53:29 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 96 pg[6.f( v 36'39 (0'0,36'39] local-lis/les=67/69 n=1 ec=48/22 lis/c=67/67 les/c/f=69/69/0 sis=96 pruub=15.661940575s) [2] r=-1 lpr=96 pi=[67,96)/1 crt=36'39 active pruub 197.855422974s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:53:29 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 96 pg[6.f( v 36'39 (0'0,36'39] local-lis/les=67/69 n=1 ec=48/22 lis/c=67/67 les/c/f=69/69/0 sis=96 pruub=15.661829948s) [2] r=-1 lpr=96 pi=[67,96)/1 crt=36'39 unknown NOTIFY pruub 197.855422974s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:53:29 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 96 pg[6.f( empty local-lis/les=0/0 n=0 ec=48/22 lis/c=67/67 les/c/f=69/69/0 sis=96) [2] r=0 lpr=96 pi=[67,96)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:53:30 np0005601226 python3.9[101723]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 11:53:30 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Jan 29 11:53:30 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Jan 29 11:53:30 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v203: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 10 B/s, 1 objects/s recovering
Jan 29 11:53:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0)
Jan 29 11:53:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} : dispatch
Jan 29 11:53:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Jan 29 11:53:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Jan 29 11:53:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Jan 29 11:53:30 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Jan 29 11:53:30 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 97 pg[6.f( v 36'39 lc 35'1 (0'0,36'39] local-lis/les=96/97 n=1 ec=48/22 lis/c=67/67 les/c/f=69/69/0 sis=96) [2] r=0 lpr=96 pi=[67,96)/1 crt=36'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:53:30 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 29 11:53:30 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 29 11:53:30 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} : dispatch
Jan 29 11:53:31 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 7.d scrub starts
Jan 29 11:53:31 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 7.d scrub ok
Jan 29 11:53:31 np0005601226 python3.9[101877]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 11:53:31 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Jan 29 11:53:31 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Jan 29 11:53:31 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Jan 29 11:53:32 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Jan 29 11:53:32 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Jan 29 11:53:32 np0005601226 python3.9[102035]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 29 11:53:32 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v205: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 29 11:53:32 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0)
Jan 29 11:53:32 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} : dispatch
Jan 29 11:53:32 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Jan 29 11:53:32 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Jan 29 11:53:32 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Jan 29 11:53:32 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Jan 29 11:53:32 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} : dispatch
Jan 29 11:53:33 np0005601226 python3.9[102119]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 29 11:53:33 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Jan 29 11:53:33 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Jan 29 11:53:33 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Jan 29 11:53:34 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e98 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:53:34 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Jan 29 11:53:34 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Jan 29 11:53:34 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Jan 29 11:53:34 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Jan 29 11:53:34 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v207: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 102 B/s, 0 objects/s recovering
Jan 29 11:53:34 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0)
Jan 29 11:53:34 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} : dispatch
Jan 29 11:53:34 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Jan 29 11:53:34 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Jan 29 11:53:34 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Jan 29 11:53:34 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Jan 29 11:53:34 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} : dispatch
Jan 29 11:53:35 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Jan 29 11:53:35 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Jan 29 11:53:35 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Jan 29 11:53:35 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Jan 29 11:53:35 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Jan 29 11:53:36 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Jan 29 11:53:36 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Jan 29 11:53:36 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v209: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 102 B/s, 0 objects/s recovering
Jan 29 11:53:36 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0)
Jan 29 11:53:36 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} : dispatch
Jan 29 11:53:36 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Jan 29 11:53:36 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Jan 29 11:53:36 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Jan 29 11:53:36 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Jan 29 11:53:36 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 100 pg[9.13( v 66'485 (0'0,66'485] local-lis/les=63/64 n=6 ec=51/35 lis/c=63/63 les/c/f=64/64/0 sis=100 pruub=10.946701050s) [2] r=-1 lpr=100 pi=[63,100)/1 crt=64'484 lcod 64'484 active pruub 200.285812378s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:53:36 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 100 pg[9.13( v 66'485 (0'0,66'485] local-lis/les=63/64 n=6 ec=51/35 lis/c=63/63 les/c/f=64/64/0 sis=100 pruub=10.946658134s) [2] r=-1 lpr=100 pi=[63,100)/1 crt=64'484 lcod 64'484 unknown NOTIFY pruub 200.285812378s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:53:36 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 100 pg[9.13( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=63/63 les/c/f=64/64/0 sis=100) [2] r=0 lpr=100 pi=[63,100)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:53:36 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} : dispatch
Jan 29 11:53:37 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 3.d scrub starts
Jan 29 11:53:37 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 3.d scrub ok
Jan 29 11:53:37 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Jan 29 11:53:37 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Jan 29 11:53:37 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 8.1f scrub starts
Jan 29 11:53:37 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 8.1f scrub ok
Jan 29 11:53:37 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Jan 29 11:53:37 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Jan 29 11:53:37 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Jan 29 11:53:37 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 101 pg[9.13( v 66'485 (0'0,66'485] local-lis/les=63/64 n=6 ec=51/35 lis/c=63/63 les/c/f=64/64/0 sis=101) [2]/[0] r=0 lpr=101 pi=[63,101)/1 crt=64'484 lcod 64'484 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:53:37 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 101 pg[9.13( v 66'485 (0'0,66'485] local-lis/les=63/64 n=6 ec=51/35 lis/c=63/63 les/c/f=64/64/0 sis=101) [2]/[0] r=0 lpr=101 pi=[63,101)/1 crt=64'484 lcod 64'484 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 29 11:53:37 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 101 pg[9.13( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=63/63 les/c/f=64/64/0 sis=101) [2]/[0] r=-1 lpr=101 pi=[63,101)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:53:37 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 101 pg[9.13( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=63/63 les/c/f=64/64/0 sis=101) [2]/[0] r=-1 lpr=101 pi=[63,101)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 29 11:53:37 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Jan 29 11:53:38 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v212: 305 pgs: 1 remapped+peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 105 B/s, 0 objects/s recovering
Jan 29 11:53:38 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Jan 29 11:53:38 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Jan 29 11:53:38 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Jan 29 11:53:38 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Jan 29 11:53:38 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Jan 29 11:53:38 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 102 pg[9.13( v 66'485 (0'0,66'485] local-lis/les=101/102 n=6 ec=51/35 lis/c=63/63 les/c/f=64/64/0 sis=101) [2]/[0] async=[2] r=0 lpr=101 pi=[63,101)/1 crt=66'485 lcod 64'484 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:53:39 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:53:39 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Jan 29 11:53:39 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Jan 29 11:53:39 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Jan 29 11:53:39 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Jan 29 11:53:39 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Jan 29 11:53:39 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 103 pg[9.13( v 66'485 (0'0,66'485] local-lis/les=101/102 n=6 ec=51/35 lis/c=101/63 les/c/f=102/64/0 sis=103 pruub=15.341964722s) [2] async=[2] r=-1 lpr=103 pi=[63,103)/1 crt=66'485 lcod 64'484 active pruub 207.381805420s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:53:39 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 103 pg[9.13( v 66'485 (0'0,66'485] local-lis/les=101/102 n=6 ec=51/35 lis/c=101/63 les/c/f=102/64/0 sis=103 pruub=15.341871262s) [2] r=-1 lpr=103 pi=[63,103)/1 crt=66'485 lcod 64'484 unknown NOTIFY pruub 207.381805420s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:53:39 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 103 pg[9.13( v 66'485 (0'0,66'485] local-lis/les=0/0 n=6 ec=51/35 lis/c=101/63 les/c/f=102/64/0 sis=103) [2] r=0 lpr=103 pi=[63,103)/1 pct=0'0 crt=66'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:53:39 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 103 pg[9.13( v 66'485 (0'0,66'485] local-lis/les=0/0 n=6 ec=51/35 lis/c=101/63 les/c/f=102/64/0 sis=103) [2] r=0 lpr=103 pi=[63,103)/1 crt=66'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:53:40 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Jan 29 11:53:40 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Jan 29 11:53:40 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Jan 29 11:53:40 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 104 pg[9.13( v 66'485 (0'0,66'485] local-lis/les=103/104 n=6 ec=51/35 lis/c=101/63 les/c/f=102/64/0 sis=103) [2] r=0 lpr=103 pi=[63,103)/1 crt=66'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:53:40 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v216: 305 pgs: 1 remapped+peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:53:40 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Jan 29 11:53:40 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Jan 29 11:53:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2026-01-29_16:53:40
Jan 29 11:53:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 11:53:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Some PGs (0.003279) are inactive; try again later
Jan 29 11:53:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 11:53:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 11:53:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 11:53:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 11:53:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 11:53:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 11:53:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 11:53:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 11:53:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 11:53:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 11:53:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 11:53:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 11:53:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 11:53:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 11:53:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 11:53:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 11:53:41 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Jan 29 11:53:41 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Jan 29 11:53:42 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Jan 29 11:53:42 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Jan 29 11:53:42 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v217: 305 pgs: 1 remapped+peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:53:43 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Jan 29 11:53:43 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Jan 29 11:53:44 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e104 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:53:44 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Jan 29 11:53:44 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Jan 29 11:53:44 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v218: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 43 B/s, 0 objects/s recovering
Jan 29 11:53:44 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0)
Jan 29 11:53:44 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} : dispatch
Jan 29 11:53:44 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Jan 29 11:53:44 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Jan 29 11:53:44 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Jan 29 11:53:44 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Jan 29 11:53:44 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Jan 29 11:53:44 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Jan 29 11:53:44 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} : dispatch
Jan 29 11:53:45 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Jan 29 11:53:45 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Jan 29 11:53:45 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Jan 29 11:53:45 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Jan 29 11:53:45 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Jan 29 11:53:45 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Jan 29 11:53:46 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Jan 29 11:53:46 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v220: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 37 B/s, 0 objects/s recovering
Jan 29 11:53:46 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0)
Jan 29 11:53:46 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} : dispatch
Jan 29 11:53:46 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Jan 29 11:53:46 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Jan 29 11:53:46 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Jan 29 11:53:46 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} : dispatch
Jan 29 11:53:46 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Jan 29 11:53:46 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Jan 29 11:53:46 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Jan 29 11:53:47 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 10.1 scrub starts
Jan 29 11:53:47 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 10.1 scrub ok
Jan 29 11:53:47 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Jan 29 11:53:48 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 106 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=58/59 n=6 ec=51/35 lis/c=58/58 les/c/f=59/59/0 sis=106 pruub=8.617977142s) [1] r=-1 lpr=106 pi=[58,106)/1 crt=41'483 active pruub 209.541168213s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:53:48 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 106 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=58/59 n=6 ec=51/35 lis/c=58/58 les/c/f=59/59/0 sis=106 pruub=8.617943764s) [1] r=-1 lpr=106 pi=[58,106)/1 crt=41'483 unknown NOTIFY pruub 209.541168213s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:53:48 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 106 pg[9.15( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=58/58 les/c/f=59/59/0 sis=106) [1] r=0 lpr=106 pi=[58,106)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:53:48 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v222: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 32 B/s, 0 objects/s recovering
Jan 29 11:53:48 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0)
Jan 29 11:53:48 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} : dispatch
Jan 29 11:53:48 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Jan 29 11:53:48 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} : dispatch
Jan 29 11:53:48 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Jan 29 11:53:48 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Jan 29 11:53:48 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Jan 29 11:53:48 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 107 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=58/59 n=6 ec=51/35 lis/c=58/58 les/c/f=59/59/0 sis=107) [1]/[0] r=0 lpr=107 pi=[58,107)/1 crt=41'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:53:48 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 107 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=58/59 n=6 ec=51/35 lis/c=58/58 les/c/f=59/59/0 sis=107) [1]/[0] r=0 lpr=107 pi=[58,107)/1 crt=41'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 29 11:53:48 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 107 pg[9.15( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=58/58 les/c/f=59/59/0 sis=107) [1]/[0] r=-1 lpr=107 pi=[58,107)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:53:48 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 107 pg[9.15( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=58/58 les/c/f=59/59/0 sis=107) [1]/[0] r=-1 lpr=107 pi=[58,107)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 29 11:53:49 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:53:49 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Jan 29 11:53:49 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Jan 29 11:53:49 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Jan 29 11:53:49 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Jan 29 11:53:49 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Jan 29 11:53:49 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Jan 29 11:53:50 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Jan 29 11:53:50 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Jan 29 11:53:50 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 108 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=107/108 n=6 ec=51/35 lis/c=58/58 les/c/f=59/59/0 sis=107) [1]/[0] async=[1] r=0 lpr=107 pi=[58,107)/1 crt=41'483 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:53:50 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 107 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=73/75 n=6 ec=51/35 lis/c=73/73 les/c/f=75/75/0 sis=107 pruub=13.365458488s) [0] r=-1 lpr=107 pi=[73,107)/1 crt=41'483 active pruub 207.311126709s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:53:50 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 108 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=73/75 n=6 ec=51/35 lis/c=73/73 les/c/f=75/75/0 sis=107 pruub=13.365414619s) [0] r=-1 lpr=107 pi=[73,107)/1 crt=41'483 unknown NOTIFY pruub 207.311126709s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:53:50 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 108 pg[9.16( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=73/73 les/c/f=75/75/0 sis=107) [0] r=0 lpr=108 pi=[73,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:53:50 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Jan 29 11:53:50 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Jan 29 11:53:50 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v225: 305 pgs: 1 unknown, 304 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:53:50 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Jan 29 11:53:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 11:53:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:53:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 11:53:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:53:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 11:53:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:53:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 11:53:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:53:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 11:53:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:53:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 11:53:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:53:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.7713644995605247e-06 of space, bias 4.0, pg target 0.0021256373994726296 quantized to 16 (current 16)
Jan 29 11:53:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:53:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 11:53:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:53:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 29 11:53:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:53:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 29 11:53:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:53:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 11:53:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:53:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 29 11:53:51 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Jan 29 11:53:51 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Jan 29 11:53:51 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Jan 29 11:53:51 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 109 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=73/75 n=6 ec=51/35 lis/c=73/73 les/c/f=75/75/0 sis=109) [0]/[2] r=0 lpr=109 pi=[73,109)/2 crt=41'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:53:51 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 109 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=73/75 n=6 ec=51/35 lis/c=73/73 les/c/f=75/75/0 sis=109) [0]/[2] r=0 lpr=109 pi=[73,109)/2 crt=41'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 29 11:53:51 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Jan 29 11:53:51 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 109 pg[9.16( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=73/73 les/c/f=75/75/0 sis=109) [0]/[2] r=-1 lpr=109 pi=[73,109)/2 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:53:51 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 109 pg[9.16( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=73/73 les/c/f=75/75/0 sis=109) [0]/[2] r=-1 lpr=109 pi=[73,109)/2 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 29 11:53:51 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 109 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=107/108 n=6 ec=51/35 lis/c=107/58 les/c/f=108/59/0 sis=109 pruub=14.683645248s) [1] async=[1] r=-1 lpr=109 pi=[58,109)/1 crt=41'483 active pruub 218.744110107s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:53:51 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 109 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=107/108 n=6 ec=51/35 lis/c=107/58 les/c/f=108/59/0 sis=109 pruub=14.683535576s) [1] r=-1 lpr=109 pi=[58,109)/1 crt=41'483 unknown NOTIFY pruub 218.744110107s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:53:51 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 109 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=51/35 lis/c=107/58 les/c/f=108/59/0 sis=109) [1] r=0 lpr=109 pi=[58,109)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:53:51 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 109 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=51/35 lis/c=107/58 les/c/f=108/59/0 sis=109) [1] r=0 lpr=109 pi=[58,109)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:53:52 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Jan 29 11:53:52 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Jan 29 11:53:52 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Jan 29 11:53:52 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 110 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=109/110 n=6 ec=51/35 lis/c=107/58 les/c/f=108/59/0 sis=109) [1] r=0 lpr=109 pi=[58,109)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:53:52 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 110 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=109/110 n=6 ec=51/35 lis/c=73/73 les/c/f=75/75/0 sis=109) [0]/[2] async=[0] r=0 lpr=109 pi=[73,109)/2 crt=41'483 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:53:52 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v228: 305 pgs: 1 unknown, 304 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:53:53 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Jan 29 11:53:53 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Jan 29 11:53:53 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Jan 29 11:53:53 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 111 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=109/110 n=6 ec=51/35 lis/c=109/73 les/c/f=110/75/0 sis=111 pruub=14.936233521s) [0] async=[0] r=-1 lpr=111 pi=[73,111)/2 crt=41'483 active pruub 212.199020386s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:53:53 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 111 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=109/110 n=6 ec=51/35 lis/c=109/73 les/c/f=110/75/0 sis=111 pruub=14.936170578s) [0] r=-1 lpr=111 pi=[73,111)/2 crt=41'483 unknown NOTIFY pruub 212.199020386s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:53:53 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 111 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=51/35 lis/c=109/73 les/c/f=110/75/0 sis=111) [0] r=0 lpr=111 pi=[73,111)/2 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:53:53 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 111 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=51/35 lis/c=109/73 les/c/f=110/75/0 sis=111) [0] r=0 lpr=111 pi=[73,111)/2 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:53:53 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 11:53:53 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 11:53:53 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 11:53:53 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 11:53:53 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 11:53:53 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:53:53 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 11:53:53 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 11:53:53 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 11:53:53 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 11:53:53 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 11:53:53 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 11:53:54 np0005601226 podman[102382]: 2026-01-29 16:53:54.00770735 +0000 UTC m=+0.037385004 container create 7fb611c78f045298cfaaee8eec65af7ac4ee89f7fcf98404fb4e27b80f663758 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_easley, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 11:53:54 np0005601226 systemd[1]: Started libpod-conmon-7fb611c78f045298cfaaee8eec65af7ac4ee89f7fcf98404fb4e27b80f663758.scope.
Jan 29 11:53:54 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:53:54 np0005601226 podman[102382]: 2026-01-29 16:53:54.078396235 +0000 UTC m=+0.108073889 container init 7fb611c78f045298cfaaee8eec65af7ac4ee89f7fcf98404fb4e27b80f663758 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_easley, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 11:53:54 np0005601226 podman[102382]: 2026-01-29 16:53:54.084991945 +0000 UTC m=+0.114669579 container start 7fb611c78f045298cfaaee8eec65af7ac4ee89f7fcf98404fb4e27b80f663758 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 29 11:53:54 np0005601226 podman[102382]: 2026-01-29 16:53:53.99050943 +0000 UTC m=+0.020187084 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:53:54 np0005601226 frosty_easley[102399]: 167 167
Jan 29 11:53:54 np0005601226 systemd[1]: libpod-7fb611c78f045298cfaaee8eec65af7ac4ee89f7fcf98404fb4e27b80f663758.scope: Deactivated successfully.
Jan 29 11:53:54 np0005601226 podman[102382]: 2026-01-29 16:53:54.095908614 +0000 UTC m=+0.125586268 container attach 7fb611c78f045298cfaaee8eec65af7ac4ee89f7fcf98404fb4e27b80f663758 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 11:53:54 np0005601226 podman[102382]: 2026-01-29 16:53:54.096462349 +0000 UTC m=+0.126139983 container died 7fb611c78f045298cfaaee8eec65af7ac4ee89f7fcf98404fb4e27b80f663758 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_easley, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 11:53:54 np0005601226 systemd[1]: var-lib-containers-storage-overlay-d4f4a9757f91b7987cc214715a150671eb88e8da54780c1935e4361406297836-merged.mount: Deactivated successfully.
Jan 29 11:53:54 np0005601226 podman[102382]: 2026-01-29 16:53:54.153176651 +0000 UTC m=+0.182854285 container remove 7fb611c78f045298cfaaee8eec65af7ac4ee89f7fcf98404fb4e27b80f663758 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_easley, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 29 11:53:54 np0005601226 systemd[1]: libpod-conmon-7fb611c78f045298cfaaee8eec65af7ac4ee89f7fcf98404fb4e27b80f663758.scope: Deactivated successfully.
Jan 29 11:53:54 np0005601226 podman[102422]: 2026-01-29 16:53:54.276621579 +0000 UTC m=+0.054296667 container create df9c317b07406d74402d8f95eba56585a9c068551b14d1266f7b88d86ac430fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_sammet, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3)
Jan 29 11:53:54 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:53:54 np0005601226 systemd[1]: Started libpod-conmon-df9c317b07406d74402d8f95eba56585a9c068551b14d1266f7b88d86ac430fa.scope.
Jan 29 11:53:54 np0005601226 podman[102422]: 2026-01-29 16:53:54.249324882 +0000 UTC m=+0.026999990 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:53:54 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:53:54 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7630a33947b34ce187ed1765b0d588a25a8a8083c8b100a7da341dbca6854b84/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 11:53:54 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7630a33947b34ce187ed1765b0d588a25a8a8083c8b100a7da341dbca6854b84/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:53:54 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7630a33947b34ce187ed1765b0d588a25a8a8083c8b100a7da341dbca6854b84/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:53:54 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7630a33947b34ce187ed1765b0d588a25a8a8083c8b100a7da341dbca6854b84/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 11:53:54 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7630a33947b34ce187ed1765b0d588a25a8a8083c8b100a7da341dbca6854b84/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 11:53:54 np0005601226 podman[102422]: 2026-01-29 16:53:54.362739946 +0000 UTC m=+0.140415054 container init df9c317b07406d74402d8f95eba56585a9c068551b14d1266f7b88d86ac430fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 11:53:54 np0005601226 podman[102422]: 2026-01-29 16:53:54.367396793 +0000 UTC m=+0.145071871 container start df9c317b07406d74402d8f95eba56585a9c068551b14d1266f7b88d86ac430fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 29 11:53:54 np0005601226 podman[102422]: 2026-01-29 16:53:54.375229688 +0000 UTC m=+0.152904806 container attach df9c317b07406d74402d8f95eba56585a9c068551b14d1266f7b88d86ac430fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 29 11:53:54 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v230: 305 pgs: 1 active+remapped, 304 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 3.8 KiB/s rd, 228 B/s wr, 9 op/s; 73 B/s, 2 objects/s recovering
Jan 29 11:53:54 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0)
Jan 29 11:53:54 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} : dispatch
Jan 29 11:53:54 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Jan 29 11:53:54 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Jan 29 11:53:54 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Jan 29 11:53:54 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Jan 29 11:53:54 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 11:53:54 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:53:54 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 11:53:54 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} : dispatch
Jan 29 11:53:54 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 112 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=111/112 n=6 ec=51/35 lis/c=109/73 les/c/f=110/75/0 sis=111) [0] r=0 lpr=111 pi=[73,111)/2 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:53:54 np0005601226 distracted_sammet[102439]: --> passed data devices: 0 physical, 3 LVM
Jan 29 11:53:54 np0005601226 distracted_sammet[102439]: --> All data devices are unavailable
Jan 29 11:53:54 np0005601226 systemd[1]: libpod-df9c317b07406d74402d8f95eba56585a9c068551b14d1266f7b88d86ac430fa.scope: Deactivated successfully.
Jan 29 11:53:54 np0005601226 podman[102422]: 2026-01-29 16:53:54.796810783 +0000 UTC m=+0.574485891 container died df9c317b07406d74402d8f95eba56585a9c068551b14d1266f7b88d86ac430fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_sammet, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 11:53:54 np0005601226 systemd[1]: var-lib-containers-storage-overlay-7630a33947b34ce187ed1765b0d588a25a8a8083c8b100a7da341dbca6854b84-merged.mount: Deactivated successfully.
Jan 29 11:53:54 np0005601226 podman[102422]: 2026-01-29 16:53:54.844101227 +0000 UTC m=+0.621776325 container remove df9c317b07406d74402d8f95eba56585a9c068551b14d1266f7b88d86ac430fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_sammet, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:53:54 np0005601226 systemd[1]: libpod-conmon-df9c317b07406d74402d8f95eba56585a9c068551b14d1266f7b88d86ac430fa.scope: Deactivated successfully.
Jan 29 11:53:55 np0005601226 podman[102532]: 2026-01-29 16:53:55.229826452 +0000 UTC m=+0.043005408 container create c3ac09d4586847c0a5f2048909fa70256b095679ee3adaedfae7a92d419be58a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_wright, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 29 11:53:55 np0005601226 systemd[1]: Started libpod-conmon-c3ac09d4586847c0a5f2048909fa70256b095679ee3adaedfae7a92d419be58a.scope.
Jan 29 11:53:55 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Jan 29 11:53:55 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Jan 29 11:53:55 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:53:55 np0005601226 podman[102532]: 2026-01-29 16:53:55.205806715 +0000 UTC m=+0.018985721 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:53:55 np0005601226 podman[102532]: 2026-01-29 16:53:55.310474949 +0000 UTC m=+0.123653925 container init c3ac09d4586847c0a5f2048909fa70256b095679ee3adaedfae7a92d419be58a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_wright, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:53:55 np0005601226 podman[102532]: 2026-01-29 16:53:55.316057801 +0000 UTC m=+0.129236757 container start c3ac09d4586847c0a5f2048909fa70256b095679ee3adaedfae7a92d419be58a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 29 11:53:55 np0005601226 quizzical_wright[102549]: 167 167
Jan 29 11:53:55 np0005601226 systemd[1]: libpod-c3ac09d4586847c0a5f2048909fa70256b095679ee3adaedfae7a92d419be58a.scope: Deactivated successfully.
Jan 29 11:53:55 np0005601226 podman[102532]: 2026-01-29 16:53:55.320811552 +0000 UTC m=+0.133990528 container attach c3ac09d4586847c0a5f2048909fa70256b095679ee3adaedfae7a92d419be58a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 29 11:53:55 np0005601226 podman[102532]: 2026-01-29 16:53:55.320968496 +0000 UTC m=+0.134147452 container died c3ac09d4586847c0a5f2048909fa70256b095679ee3adaedfae7a92d419be58a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_wright, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 29 11:53:55 np0005601226 systemd[1]: var-lib-containers-storage-overlay-56377b2d6cdf9171cdeaf751a45aa2883b4f05ce5af83228f0f39e8a6274a329-merged.mount: Deactivated successfully.
Jan 29 11:53:55 np0005601226 podman[102532]: 2026-01-29 16:53:55.387122646 +0000 UTC m=+0.200301612 container remove c3ac09d4586847c0a5f2048909fa70256b095679ee3adaedfae7a92d419be58a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 11:53:55 np0005601226 systemd[1]: libpod-conmon-c3ac09d4586847c0a5f2048909fa70256b095679ee3adaedfae7a92d419be58a.scope: Deactivated successfully.
Jan 29 11:53:55 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Jan 29 11:53:55 np0005601226 podman[102575]: 2026-01-29 16:53:55.531213829 +0000 UTC m=+0.052810816 container create b156365581b08ebfc634447143309000782bc0a5ea829aba8da2d892c3513cce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_rubin, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 11:53:55 np0005601226 systemd[1]: Started libpod-conmon-b156365581b08ebfc634447143309000782bc0a5ea829aba8da2d892c3513cce.scope.
Jan 29 11:53:55 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:53:55 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ee0695dee42fd8b1da2acbec35b1bca08fbd7f58989b457b5d5649463114956/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 11:53:55 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ee0695dee42fd8b1da2acbec35b1bca08fbd7f58989b457b5d5649463114956/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:53:55 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ee0695dee42fd8b1da2acbec35b1bca08fbd7f58989b457b5d5649463114956/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:53:55 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ee0695dee42fd8b1da2acbec35b1bca08fbd7f58989b457b5d5649463114956/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 11:53:55 np0005601226 podman[102575]: 2026-01-29 16:53:55.50348448 +0000 UTC m=+0.025081507 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:53:55 np0005601226 podman[102575]: 2026-01-29 16:53:55.616719099 +0000 UTC m=+0.138316146 container init b156365581b08ebfc634447143309000782bc0a5ea829aba8da2d892c3513cce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_rubin, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 11:53:55 np0005601226 podman[102575]: 2026-01-29 16:53:55.62114962 +0000 UTC m=+0.142746657 container start b156365581b08ebfc634447143309000782bc0a5ea829aba8da2d892c3513cce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_rubin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 29 11:53:55 np0005601226 podman[102575]: 2026-01-29 16:53:55.625798178 +0000 UTC m=+0.147395215 container attach b156365581b08ebfc634447143309000782bc0a5ea829aba8da2d892c3513cce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]: {
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:    "0": [
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:        {
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:            "devices": [
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:                "/dev/loop3"
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:            ],
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:            "lv_name": "ceph_lv0",
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:            "lv_size": "21470642176",
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:            "lv_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:            "name": "ceph_lv0",
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:            "tags": {
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:                "ceph.block_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:                "ceph.cephx_lockbox_secret": "",
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:                "ceph.cluster_name": "ceph",
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:                "ceph.crush_device_class": "",
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:                "ceph.encrypted": "0",
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:                "ceph.objectstore": "bluestore",
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:                "ceph.osd_fsid": "0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1",
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:                "ceph.osd_id": "0",
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:                "ceph.type": "block",
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:                "ceph.vdo": "0",
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:                "ceph.with_tpm": "0"
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:            },
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:            "type": "block",
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:            "vg_name": "ceph_vg0"
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:        }
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:    ],
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:    "1": [
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:        {
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:            "devices": [
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:                "/dev/loop4"
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:            ],
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:            "lv_name": "ceph_lv1",
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:            "lv_size": "21470642176",
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b59b9ee3-7bef-4274-a8bf-0f9cce011ae7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:            "lv_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:            "name": "ceph_lv1",
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:            "tags": {
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:                "ceph.block_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:                "ceph.cephx_lockbox_secret": "",
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:                "ceph.cluster_name": "ceph",
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:                "ceph.crush_device_class": "",
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:                "ceph.encrypted": "0",
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:                "ceph.objectstore": "bluestore",
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:                "ceph.osd_fsid": "b59b9ee3-7bef-4274-a8bf-0f9cce011ae7",
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:                "ceph.osd_id": "1",
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:                "ceph.type": "block",
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:                "ceph.vdo": "0",
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:                "ceph.with_tpm": "0"
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:            },
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:            "type": "block",
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:            "vg_name": "ceph_vg1"
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:        }
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:    ],
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:    "2": [
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:        {
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:            "devices": [
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:                "/dev/loop5"
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:            ],
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:            "lv_name": "ceph_lv2",
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:            "lv_size": "21470642176",
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=791d3808-828d-4f85-a3de-28df49f6a6ef,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:            "lv_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:            "name": "ceph_lv2",
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:            "tags": {
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:                "ceph.block_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:                "ceph.cephx_lockbox_secret": "",
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:                "ceph.cluster_name": "ceph",
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:                "ceph.crush_device_class": "",
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:                "ceph.encrypted": "0",
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:                "ceph.objectstore": "bluestore",
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:                "ceph.osd_fsid": "791d3808-828d-4f85-a3de-28df49f6a6ef",
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:                "ceph.osd_id": "2",
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:                "ceph.type": "block",
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:                "ceph.vdo": "0",
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:                "ceph.with_tpm": "0"
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:            },
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:            "type": "block",
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:            "vg_name": "ceph_vg2"
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:        }
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]:    ]
Jan 29 11:53:55 np0005601226 hungry_rubin[102591]: }
Jan 29 11:53:55 np0005601226 systemd[1]: libpod-b156365581b08ebfc634447143309000782bc0a5ea829aba8da2d892c3513cce.scope: Deactivated successfully.
Jan 29 11:53:55 np0005601226 podman[102600]: 2026-01-29 16:53:55.915647559 +0000 UTC m=+0.021495300 container died b156365581b08ebfc634447143309000782bc0a5ea829aba8da2d892c3513cce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 29 11:53:55 np0005601226 systemd[1]: var-lib-containers-storage-overlay-9ee0695dee42fd8b1da2acbec35b1bca08fbd7f58989b457b5d5649463114956-merged.mount: Deactivated successfully.
Jan 29 11:53:55 np0005601226 podman[102600]: 2026-01-29 16:53:55.964440554 +0000 UTC m=+0.070288275 container remove b156365581b08ebfc634447143309000782bc0a5ea829aba8da2d892c3513cce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_rubin, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 29 11:53:55 np0005601226 systemd[1]: libpod-conmon-b156365581b08ebfc634447143309000782bc0a5ea829aba8da2d892c3513cce.scope: Deactivated successfully.
Jan 29 11:53:56 np0005601226 podman[102677]: 2026-01-29 16:53:56.380772516 +0000 UTC m=+0.050764490 container create 69cfaf46c317ffe05f6541457a8ec1c5068f4c62637691d84330ebf2ad82fd83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 11:53:56 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v232: 305 pgs: 1 active+remapped, 304 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 201 B/s wr, 8 op/s; 65 B/s, 2 objects/s recovering
Jan 29 11:53:56 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0)
Jan 29 11:53:56 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} : dispatch
Jan 29 11:53:56 np0005601226 systemd[1]: Started libpod-conmon-69cfaf46c317ffe05f6541457a8ec1c5068f4c62637691d84330ebf2ad82fd83.scope.
Jan 29 11:53:56 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Jan 29 11:53:56 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Jan 29 11:53:56 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Jan 29 11:53:56 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Jan 29 11:53:56 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:53:56 np0005601226 podman[102677]: 2026-01-29 16:53:56.349383488 +0000 UTC m=+0.019375502 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:53:56 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} : dispatch
Jan 29 11:53:56 np0005601226 podman[102677]: 2026-01-29 16:53:56.46057201 +0000 UTC m=+0.130564014 container init 69cfaf46c317ffe05f6541457a8ec1c5068f4c62637691d84330ebf2ad82fd83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_dhawan, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 29 11:53:56 np0005601226 podman[102677]: 2026-01-29 16:53:56.466002429 +0000 UTC m=+0.135994413 container start 69cfaf46c317ffe05f6541457a8ec1c5068f4c62637691d84330ebf2ad82fd83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_dhawan, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 11:53:56 np0005601226 systemd[1]: libpod-69cfaf46c317ffe05f6541457a8ec1c5068f4c62637691d84330ebf2ad82fd83.scope: Deactivated successfully.
Jan 29 11:53:56 np0005601226 funny_dhawan[102694]: 167 167
Jan 29 11:53:56 np0005601226 conmon[102694]: conmon 69cfaf46c317ffe05f65 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-69cfaf46c317ffe05f6541457a8ec1c5068f4c62637691d84330ebf2ad82fd83.scope/container/memory.events
Jan 29 11:53:56 np0005601226 podman[102677]: 2026-01-29 16:53:56.470355008 +0000 UTC m=+0.140347022 container attach 69cfaf46c317ffe05f6541457a8ec1c5068f4c62637691d84330ebf2ad82fd83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 29 11:53:56 np0005601226 podman[102677]: 2026-01-29 16:53:56.471037336 +0000 UTC m=+0.141029320 container died 69cfaf46c317ffe05f6541457a8ec1c5068f4c62637691d84330ebf2ad82fd83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_dhawan, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 29 11:53:56 np0005601226 systemd[1]: var-lib-containers-storage-overlay-32a1b31f9f84f93dca8c3864ba9d6afc83bad7ca4977c73e06114691ff51dec0-merged.mount: Deactivated successfully.
Jan 29 11:53:56 np0005601226 podman[102677]: 2026-01-29 16:53:56.517135867 +0000 UTC m=+0.187127851 container remove 69cfaf46c317ffe05f6541457a8ec1c5068f4c62637691d84330ebf2ad82fd83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_dhawan, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 29 11:53:56 np0005601226 systemd[1]: libpod-conmon-69cfaf46c317ffe05f6541457a8ec1c5068f4c62637691d84330ebf2ad82fd83.scope: Deactivated successfully.
Jan 29 11:53:56 np0005601226 podman[102717]: 2026-01-29 16:53:56.636163074 +0000 UTC m=+0.050142432 container create 5c09b2b5d6bf7db9ff14d476ba698ee904f9ff91b001efa8cb6db151ad208e37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_dhawan, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 29 11:53:56 np0005601226 systemd[1]: Started libpod-conmon-5c09b2b5d6bf7db9ff14d476ba698ee904f9ff91b001efa8cb6db151ad208e37.scope.
Jan 29 11:53:56 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:53:56 np0005601226 podman[102717]: 2026-01-29 16:53:56.603034588 +0000 UTC m=+0.017013966 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:53:56 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70283996270d91a925a1024b9397e24d2273a34aded1b4aa635e81aa5ad28e51/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 11:53:56 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70283996270d91a925a1024b9397e24d2273a34aded1b4aa635e81aa5ad28e51/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:53:56 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70283996270d91a925a1024b9397e24d2273a34aded1b4aa635e81aa5ad28e51/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:53:56 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70283996270d91a925a1024b9397e24d2273a34aded1b4aa635e81aa5ad28e51/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 11:53:56 np0005601226 podman[102717]: 2026-01-29 16:53:56.724274745 +0000 UTC m=+0.138254123 container init 5c09b2b5d6bf7db9ff14d476ba698ee904f9ff91b001efa8cb6db151ad208e37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_dhawan, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 11:53:56 np0005601226 podman[102717]: 2026-01-29 16:53:56.728980015 +0000 UTC m=+0.142959373 container start 5c09b2b5d6bf7db9ff14d476ba698ee904f9ff91b001efa8cb6db151ad208e37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_dhawan, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0)
Jan 29 11:53:56 np0005601226 podman[102717]: 2026-01-29 16:53:56.747121751 +0000 UTC m=+0.161101109 container attach 5c09b2b5d6bf7db9ff14d476ba698ee904f9ff91b001efa8cb6db151ad208e37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_dhawan, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 29 11:53:57 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 7.b scrub starts
Jan 29 11:53:57 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 7.b scrub ok
Jan 29 11:53:57 np0005601226 lvm[102811]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 11:53:57 np0005601226 lvm[102811]: VG ceph_vg0 finished
Jan 29 11:53:57 np0005601226 lvm[102814]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 11:53:57 np0005601226 lvm[102814]: VG ceph_vg1 finished
Jan 29 11:53:57 np0005601226 lvm[102816]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 11:53:57 np0005601226 lvm[102816]: VG ceph_vg2 finished
Jan 29 11:53:57 np0005601226 lvm[102817]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 11:53:57 np0005601226 lvm[102817]: VG ceph_vg0 finished
Jan 29 11:53:57 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Jan 29 11:53:57 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Jan 29 11:53:57 np0005601226 relaxed_dhawan[102734]: {}
Jan 29 11:53:57 np0005601226 systemd[1]: libpod-5c09b2b5d6bf7db9ff14d476ba698ee904f9ff91b001efa8cb6db151ad208e37.scope: Deactivated successfully.
Jan 29 11:53:57 np0005601226 podman[102717]: 2026-01-29 16:53:57.484138538 +0000 UTC m=+0.898117916 container died 5c09b2b5d6bf7db9ff14d476ba698ee904f9ff91b001efa8cb6db151ad208e37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_dhawan, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 29 11:53:57 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Jan 29 11:53:57 np0005601226 systemd[1]: var-lib-containers-storage-overlay-70283996270d91a925a1024b9397e24d2273a34aded1b4aa635e81aa5ad28e51-merged.mount: Deactivated successfully.
Jan 29 11:53:57 np0005601226 podman[102717]: 2026-01-29 16:53:57.811310951 +0000 UTC m=+1.225290339 container remove 5c09b2b5d6bf7db9ff14d476ba698ee904f9ff91b001efa8cb6db151ad208e37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS)
Jan 29 11:53:57 np0005601226 systemd[1]: libpod-conmon-5c09b2b5d6bf7db9ff14d476ba698ee904f9ff91b001efa8cb6db151ad208e37.scope: Deactivated successfully.
Jan 29 11:53:57 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 11:53:58 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Jan 29 11:53:58 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Jan 29 11:53:58 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Jan 29 11:53:58 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Jan 29 11:53:58 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Jan 29 11:53:58 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Jan 29 11:53:58 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v234: 305 pgs: 1 active+remapped, 304 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 170 B/s wr, 7 op/s; 54 B/s, 1 objects/s recovering
Jan 29 11:53:59 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0)
Jan 29 11:53:59 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} : dispatch
Jan 29 11:53:59 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:53:59 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Jan 29 11:53:59 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:54:00 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 11:54:00 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Jan 29 11:54:00 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Jan 29 11:54:00 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} : dispatch
Jan 29 11:54:00 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 114 pg[9.19( v 66'487 (0'0,66'487] local-lis/les=62/63 n=6 ec=51/35 lis/c=62/62 les/c/f=63/63/0 sis=114 pruub=9.860988617s) [2] r=-1 lpr=114 pi=[62,114)/1 crt=66'486 lcod 66'486 active pruub 222.754257202s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:54:00 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 114 pg[9.19( v 66'487 (0'0,66'487] local-lis/les=62/63 n=6 ec=51/35 lis/c=62/62 les/c/f=63/63/0 sis=114 pruub=9.860762596s) [2] r=-1 lpr=114 pi=[62,114)/1 crt=66'486 lcod 66'486 unknown NOTIFY pruub 222.754257202s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:54:00 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 114 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=62/62 les/c/f=63/63/0 sis=114) [2] r=0 lpr=114 pi=[62,114)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:54:00 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Jan 29 11:54:00 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:54:00 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v236: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:54:00 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0)
Jan 29 11:54:00 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} : dispatch
Jan 29 11:54:01 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Jan 29 11:54:01 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:54:01 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Jan 29 11:54:01 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:54:01 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} : dispatch
Jan 29 11:54:01 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Jan 29 11:54:01 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Jan 29 11:54:01 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Jan 29 11:54:01 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 115 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=62/62 les/c/f=63/63/0 sis=115) [2]/[0] r=-1 lpr=115 pi=[62,115)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:54:01 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 115 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=62/62 les/c/f=63/63/0 sis=115) [2]/[0] r=-1 lpr=115 pi=[62,115)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 29 11:54:01 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 115 pg[9.19( v 66'487 (0'0,66'487] local-lis/les=62/63 n=6 ec=51/35 lis/c=62/62 les/c/f=63/63/0 sis=115) [2]/[0] r=0 lpr=115 pi=[62,115)/1 crt=66'486 lcod 66'486 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:54:01 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 115 pg[9.19( v 66'487 (0'0,66'487] local-lis/les=62/63 n=6 ec=51/35 lis/c=62/62 les/c/f=63/63/0 sis=115) [2]/[0] r=0 lpr=115 pi=[62,115)/1 crt=66'486 lcod 66'486 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 29 11:54:02 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Jan 29 11:54:02 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Jan 29 11:54:02 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Jan 29 11:54:02 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Jan 29 11:54:02 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Jan 29 11:54:02 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Jan 29 11:54:02 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Jan 29 11:54:02 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Jan 29 11:54:02 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v239: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:54:02 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0)
Jan 29 11:54:02 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} : dispatch
Jan 29 11:54:02 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 116 pg[9.19( v 66'487 (0'0,66'487] local-lis/les=115/116 n=6 ec=51/35 lis/c=62/62 les/c/f=63/63/0 sis=115) [2]/[0] async=[2] r=0 lpr=115 pi=[62,115)/1 crt=66'487 lcod 66'486 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:54:03 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Jan 29 11:54:03 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Jan 29 11:54:03 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} : dispatch
Jan 29 11:54:03 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Jan 29 11:54:03 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Jan 29 11:54:03 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Jan 29 11:54:03 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Jan 29 11:54:03 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 117 pg[9.19( v 66'487 (0'0,66'487] local-lis/les=0/0 n=6 ec=51/35 lis/c=115/62 les/c/f=116/63/0 sis=117) [2] r=0 lpr=117 pi=[62,117)/1 pct=0'0 crt=66'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:54:03 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 117 pg[9.19( v 66'487 (0'0,66'487] local-lis/les=0/0 n=6 ec=51/35 lis/c=115/62 les/c/f=116/63/0 sis=117) [2] r=0 lpr=117 pi=[62,117)/1 crt=66'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:54:03 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 117 pg[9.19( v 66'487 (0'0,66'487] local-lis/les=115/116 n=6 ec=51/35 lis/c=115/62 les/c/f=116/63/0 sis=117 pruub=14.988609314s) [2] async=[2] r=-1 lpr=117 pi=[62,117)/1 crt=66'487 lcod 66'486 active pruub 231.079925537s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:54:03 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 117 pg[9.19( v 66'487 (0'0,66'487] local-lis/les=115/116 n=6 ec=51/35 lis/c=115/62 les/c/f=116/63/0 sis=117 pruub=14.988403320s) [2] r=-1 lpr=117 pi=[62,117)/1 crt=66'487 lcod 66'486 unknown NOTIFY pruub 231.079925537s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:54:04 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Jan 29 11:54:04 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Jan 29 11:54:04 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Jan 29 11:54:04 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Jan 29 11:54:04 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v241: 305 pgs: 1 active+remapped, 304 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 99 B/s, 2 objects/s recovering
Jan 29 11:54:04 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0)
Jan 29 11:54:04 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} : dispatch
Jan 29 11:54:04 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Jan 29 11:54:04 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Jan 29 11:54:04 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 118 pg[9.19( v 66'487 (0'0,66'487] local-lis/les=117/118 n=6 ec=51/35 lis/c=115/62 les/c/f=116/63/0 sis=117) [2] r=0 lpr=117 pi=[62,117)/1 crt=66'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:54:04 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:54:05 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Jan 29 11:54:05 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Jan 29 11:54:05 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 3.a scrub starts
Jan 29 11:54:05 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 3.a scrub ok
Jan 29 11:54:05 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Jan 29 11:54:05 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Jan 29 11:54:05 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} : dispatch
Jan 29 11:54:05 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Jan 29 11:54:05 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Jan 29 11:54:05 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Jan 29 11:54:05 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Jan 29 11:54:06 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Jan 29 11:54:06 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Jan 29 11:54:06 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v244: 305 pgs: 1 active+remapped, 304 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 103 B/s, 2 objects/s recovering
Jan 29 11:54:06 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0)
Jan 29 11:54:06 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} : dispatch
Jan 29 11:54:06 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Jan 29 11:54:06 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 119 pg[9.1c( v 66'487 (0'0,66'487] local-lis/les=93/94 n=6 ec=51/35 lis/c=93/93 les/c/f=94/94/0 sis=119 pruub=12.656876564s) [0] r=-1 lpr=119 pi=[93,119)/1 crt=66'487 active pruub 223.365905762s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:54:06 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 119 pg[9.1c( v 66'487 (0'0,66'487] local-lis/les=93/94 n=6 ec=51/35 lis/c=93/93 les/c/f=94/94/0 sis=119 pruub=12.656805038s) [0] r=-1 lpr=119 pi=[93,119)/1 crt=66'487 unknown NOTIFY pruub 223.365905762s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:54:06 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 119 pg[9.1c( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=93/93 les/c/f=94/94/0 sis=119) [0] r=0 lpr=119 pi=[93,119)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:54:06 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Jan 29 11:54:06 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Jan 29 11:54:07 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Jan 29 11:54:07 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 7.c scrub starts
Jan 29 11:54:07 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 7.c scrub ok
Jan 29 11:54:07 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Jan 29 11:54:07 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} : dispatch
Jan 29 11:54:07 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Jan 29 11:54:08 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Jan 29 11:54:08 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Jan 29 11:54:08 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 121 pg[9.1c( v 66'487 (0'0,66'487] local-lis/les=93/94 n=6 ec=51/35 lis/c=93/93 les/c/f=94/94/0 sis=121) [0]/[2] r=0 lpr=121 pi=[93,121)/1 crt=66'487 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:54:08 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 121 pg[9.1c( v 66'487 (0'0,66'487] local-lis/les=93/94 n=6 ec=51/35 lis/c=93/93 les/c/f=94/94/0 sis=121) [0]/[2] r=0 lpr=121 pi=[93,121)/1 crt=66'487 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 29 11:54:08 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 121 pg[9.1c( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=93/93 les/c/f=94/94/0 sis=121) [0]/[2] r=-1 lpr=121 pi=[93,121)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:54:08 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 121 pg[9.1c( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=93/93 les/c/f=94/94/0 sis=121) [0]/[2] r=-1 lpr=121 pi=[93,121)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 29 11:54:08 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 2.b scrub starts
Jan 29 11:54:08 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 2.b scrub ok
Jan 29 11:54:08 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 11.d scrub starts
Jan 29 11:54:08 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 11.d scrub ok
Jan 29 11:54:08 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v247: 305 pgs: 1 unknown, 304 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:54:08 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Jan 29 11:54:09 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Jan 29 11:54:09 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Jan 29 11:54:09 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Jan 29 11:54:09 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Jan 29 11:54:09 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Jan 29 11:54:09 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 122 pg[9.1c( v 66'487 (0'0,66'487] local-lis/les=121/122 n=6 ec=51/35 lis/c=93/93 les/c/f=94/94/0 sis=121) [0]/[2] async=[0] r=0 lpr=121 pi=[93,121)/1 crt=66'487 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:54:09 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:54:10 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Jan 29 11:54:10 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Jan 29 11:54:10 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Jan 29 11:54:10 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 7.f scrub starts
Jan 29 11:54:10 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 7.f scrub ok
Jan 29 11:54:10 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Jan 29 11:54:10 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Jan 29 11:54:10 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 123 pg[9.1c( v 66'487 (0'0,66'487] local-lis/les=121/122 n=6 ec=51/35 lis/c=121/93 les/c/f=122/94/0 sis=123 pruub=15.020203590s) [0] async=[0] r=-1 lpr=123 pi=[93,123)/1 crt=66'487 active pruub 229.198150635s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:54:10 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 123 pg[9.1c( v 66'487 (0'0,66'487] local-lis/les=121/122 n=6 ec=51/35 lis/c=121/93 les/c/f=122/94/0 sis=123 pruub=15.020116806s) [0] r=-1 lpr=123 pi=[93,123)/1 crt=66'487 unknown NOTIFY pruub 229.198150635s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:54:10 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 123 pg[9.1c( v 66'487 (0'0,66'487] local-lis/les=0/0 n=6 ec=51/35 lis/c=121/93 les/c/f=122/94/0 sis=123) [0] r=0 lpr=123 pi=[93,123)/1 pct=0'0 crt=66'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:54:10 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 123 pg[9.1c( v 66'487 (0'0,66'487] local-lis/les=0/0 n=6 ec=51/35 lis/c=121/93 les/c/f=122/94/0 sis=123) [0] r=0 lpr=123 pi=[93,123)/1 crt=66'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:54:10 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 11.b scrub starts
Jan 29 11:54:10 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 11.b scrub ok
Jan 29 11:54:10 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v250: 305 pgs: 1 unknown, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:54:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 11:54:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 11:54:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 11:54:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 11:54:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 11:54:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 11:54:11 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Jan 29 11:54:11 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Jan 29 11:54:11 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Jan 29 11:54:11 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 124 pg[9.1c( v 66'487 (0'0,66'487] local-lis/les=123/124 n=6 ec=51/35 lis/c=121/93 les/c/f=122/94/0 sis=123) [0] r=0 lpr=123 pi=[93,123)/1 crt=66'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:54:12 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 8.c scrub starts
Jan 29 11:54:12 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 8.c scrub ok
Jan 29 11:54:12 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v252: 305 pgs: 1 unknown, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:54:13 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Jan 29 11:54:13 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Jan 29 11:54:14 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v253: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 341 B/s wr, 8 op/s; 139 B/s, 2 objects/s recovering
Jan 29 11:54:14 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0)
Jan 29 11:54:14 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} : dispatch
Jan 29 11:54:14 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:54:15 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Jan 29 11:54:15 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Jan 29 11:54:15 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Jan 29 11:54:15 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Jan 29 11:54:15 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Jan 29 11:54:15 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} : dispatch
Jan 29 11:54:15 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Jan 29 11:54:15 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 125 pg[9.1e( v 66'485 (0'0,66'485] local-lis/les=76/77 n=6 ec=51/35 lis/c=76/76 les/c/f=77/77/0 sis=125 pruub=14.685133934s) [0] r=-1 lpr=125 pi=[76,125)/1 crt=66'485 active pruub 234.133590698s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:54:15 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 125 pg[9.1e( v 66'485 (0'0,66'485] local-lis/les=76/77 n=6 ec=51/35 lis/c=76/76 les/c/f=77/77/0 sis=125 pruub=14.685039520s) [0] r=-1 lpr=125 pi=[76,125)/1 crt=66'485 unknown NOTIFY pruub 234.133590698s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:54:15 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 125 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=76/76 les/c/f=77/77/0 sis=125) [0] r=0 lpr=125 pi=[76,125)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:54:16 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Jan 29 11:54:16 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Jan 29 11:54:16 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v255: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 4.1 KiB/s rd, 334 B/s wr, 9 op/s; 136 B/s, 2 objects/s recovering
Jan 29 11:54:16 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 29 11:54:16 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 29 11:54:16 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Jan 29 11:54:16 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 29 11:54:16 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Jan 29 11:54:16 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Jan 29 11:54:16 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 29 11:54:17 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 126 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=76/76 les/c/f=77/77/0 sis=126) [0]/[2] r=-1 lpr=126 pi=[76,126)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:54:17 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 126 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=76/76 les/c/f=77/77/0 sis=126) [0]/[2] r=-1 lpr=126 pi=[76,126)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 29 11:54:17 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Jan 29 11:54:17 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 126 pg[9.1e( v 66'485 (0'0,66'485] local-lis/les=76/77 n=6 ec=51/35 lis/c=76/76 les/c/f=77/77/0 sis=126) [0]/[2] r=0 lpr=126 pi=[76,126)/1 crt=66'485 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:54:17 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 126 pg[9.1e( v 66'485 (0'0,66'485] local-lis/les=76/77 n=6 ec=51/35 lis/c=76/76 les/c/f=77/77/0 sis=126) [0]/[2] r=0 lpr=126 pi=[76,126)/1 crt=66'485 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 29 11:54:17 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 126 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=83/84 n=6 ec=51/35 lis/c=83/83 les/c/f=84/84/0 sis=126 pruub=10.911424637s) [1] r=-1 lpr=126 pi=[83,126)/1 crt=41'483 active pruub 231.912719727s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:54:17 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 126 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=83/84 n=6 ec=51/35 lis/c=83/83 les/c/f=84/84/0 sis=126 pruub=10.911373138s) [1] r=-1 lpr=126 pi=[83,126)/1 crt=41'483 unknown NOTIFY pruub 231.912719727s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:54:17 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 126 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=83/83 les/c/f=84/84/0 sis=126) [1] r=0 lpr=126 pi=[83,126)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:54:17 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Jan 29 11:54:17 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Jan 29 11:54:17 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Jan 29 11:54:17 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 127 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=83/84 n=6 ec=51/35 lis/c=83/83 les/c/f=84/84/0 sis=127) [1]/[2] r=0 lpr=127 pi=[83,127)/1 crt=41'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:54:17 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 127 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=83/84 n=6 ec=51/35 lis/c=83/83 les/c/f=84/84/0 sis=127) [1]/[2] r=0 lpr=127 pi=[83,127)/1 crt=41'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 29 11:54:17 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 127 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=83/83 les/c/f=84/84/0 sis=127) [1]/[2] r=-1 lpr=127 pi=[83,127)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:54:17 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 127 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=83/83 les/c/f=84/84/0 sis=127) [1]/[2] r=-1 lpr=127 pi=[83,127)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 29 11:54:17 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 29 11:54:18 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 11.f scrub starts
Jan 29 11:54:18 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 11.f scrub ok
Jan 29 11:54:18 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 127 pg[9.1e( v 66'485 (0'0,66'485] local-lis/les=126/127 n=6 ec=51/35 lis/c=76/76 les/c/f=77/77/0 sis=126) [0]/[2] async=[0] r=0 lpr=126 pi=[76,126)/1 crt=66'485 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:54:18 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v258: 305 pgs: 1 remapped+peering, 1 activating+remapped, 303 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 4.2 KiB/s rd, 341 B/s wr, 9 op/s; 6/250 objects misplaced (2.400%); 139 B/s, 2 objects/s recovering
Jan 29 11:54:18 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Jan 29 11:54:18 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Jan 29 11:54:18 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Jan 29 11:54:18 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 128 pg[9.1e( v 66'485 (0'0,66'485] local-lis/les=126/127 n=6 ec=51/35 lis/c=126/76 les/c/f=127/77/0 sis=128 pruub=15.320256233s) [0] async=[0] r=-1 lpr=128 pi=[76,128)/1 crt=66'485 active pruub 238.109619141s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:54:18 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 128 pg[9.1e( v 66'485 (0'0,66'485] local-lis/les=126/127 n=6 ec=51/35 lis/c=126/76 les/c/f=127/77/0 sis=128 pruub=15.320191383s) [0] r=-1 lpr=128 pi=[76,128)/1 crt=66'485 unknown NOTIFY pruub 238.109619141s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:54:18 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 128 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=127/128 n=6 ec=51/35 lis/c=83/83 les/c/f=84/84/0 sis=127) [1]/[2] async=[1] r=0 lpr=127 pi=[83,127)/1 crt=41'483 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:54:18 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 128 pg[9.1e( v 66'485 (0'0,66'485] local-lis/les=0/0 n=6 ec=51/35 lis/c=126/76 les/c/f=127/77/0 sis=128) [0] r=0 lpr=128 pi=[76,128)/1 pct=0'0 crt=66'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:54:18 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 128 pg[9.1e( v 66'485 (0'0,66'485] local-lis/les=0/0 n=6 ec=51/35 lis/c=126/76 les/c/f=127/77/0 sis=128) [0] r=0 lpr=128 pi=[76,128)/1 crt=66'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:54:19 np0005601226 python3.9[103038]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:54:19 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 8.e scrub starts
Jan 29 11:54:19 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 8.e scrub ok
Jan 29 11:54:19 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:54:19 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Jan 29 11:54:19 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Jan 29 11:54:19 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Jan 29 11:54:19 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 129 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=127/128 n=6 ec=51/35 lis/c=127/83 les/c/f=128/84/0 sis=129 pruub=15.409172058s) [1] async=[1] r=-1 lpr=129 pi=[83,129)/1 crt=41'483 active pruub 238.805603027s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:54:19 np0005601226 ceph-osd[87958]: osd.2 pg_epoch: 129 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=127/128 n=6 ec=51/35 lis/c=127/83 les/c/f=128/84/0 sis=129 pruub=15.409111977s) [1] r=-1 lpr=129 pi=[83,129)/1 crt=41'483 unknown NOTIFY pruub 238.805603027s@ mbc={}] state<Start>: transitioning to Stray
Jan 29 11:54:19 np0005601226 ceph-osd[85858]: osd.0 pg_epoch: 129 pg[9.1e( v 66'485 (0'0,66'485] local-lis/les=128/129 n=6 ec=51/35 lis/c=126/76 les/c/f=127/77/0 sis=128) [0] r=0 lpr=128 pi=[76,128)/1 crt=66'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:54:19 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 129 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=51/35 lis/c=127/83 les/c/f=128/84/0 sis=129) [1] r=0 lpr=129 pi=[83,129)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 29 11:54:19 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 129 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=51/35 lis/c=127/83 les/c/f=128/84/0 sis=129) [1] r=0 lpr=129 pi=[83,129)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 29 11:54:20 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Jan 29 11:54:20 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Jan 29 11:54:20 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v261: 305 pgs: 1 remapped+peering, 1 activating+remapped, 303 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 6/250 objects misplaced (2.400%)
Jan 29 11:54:20 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Jan 29 11:54:20 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Jan 29 11:54:20 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Jan 29 11:54:20 np0005601226 ceph-osd[86917]: osd.1 pg_epoch: 130 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=129/130 n=6 ec=51/35 lis/c=127/83 les/c/f=128/84/0 sis=129) [1] r=0 lpr=129 pi=[83,129)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 29 11:54:20 np0005601226 python3.9[103325]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 29 11:54:21 np0005601226 python3.9[103477]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 29 11:54:22 np0005601226 python3.9[103629]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:54:22 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Jan 29 11:54:22 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Jan 29 11:54:22 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 4.e scrub starts
Jan 29 11:54:22 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 4.e scrub ok
Jan 29 11:54:22 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v263: 305 pgs: 1 remapped+peering, 1 activating+remapped, 303 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 6/250 objects misplaced (2.400%)
Jan 29 11:54:22 np0005601226 python3.9[103781]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 29 11:54:24 np0005601226 python3.9[103933]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 11:54:24 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v264: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 43 B/s, 1 objects/s recovering
Jan 29 11:54:24 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:54:24 np0005601226 python3.9[104085]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:54:25 np0005601226 python3.9[104163]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:54:25 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Jan 29 11:54:25 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Jan 29 11:54:26 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Jan 29 11:54:26 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Jan 29 11:54:26 np0005601226 python3.9[104315]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 11:54:26 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Jan 29 11:54:26 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Jan 29 11:54:26 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v265: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 35 B/s, 1 objects/s recovering
Jan 29 11:54:27 np0005601226 python3.9[104469]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 29 11:54:27 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Jan 29 11:54:27 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Jan 29 11:54:27 np0005601226 python3.9[104622]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 29 11:54:28 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v266: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 29 B/s, 1 objects/s recovering
Jan 29 11:54:28 np0005601226 python3.9[104775]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 29 11:54:28 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Jan 29 11:54:29 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Jan 29 11:54:29 np0005601226 python3.9[104927]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 29 11:54:29 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 11.e scrub starts
Jan 29 11:54:29 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 11.e scrub ok
Jan 29 11:54:29 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:54:29 np0005601226 python3.9[105079]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 29 11:54:30 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Jan 29 11:54:30 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Jan 29 11:54:30 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Jan 29 11:54:30 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Jan 29 11:54:30 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v267: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 26 B/s, 1 objects/s recovering
Jan 29 11:54:30 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Jan 29 11:54:30 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Jan 29 11:54:31 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Jan 29 11:54:31 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Jan 29 11:54:31 np0005601226 python3.9[105232]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 11:54:32 np0005601226 python3.9[105384]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:54:32 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 8.d scrub starts
Jan 29 11:54:32 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 8.d scrub ok
Jan 29 11:54:32 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v268: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 22 B/s, 0 objects/s recovering
Jan 29 11:54:32 np0005601226 python3.9[105462]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 11:54:33 np0005601226 python3.9[105614]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:54:33 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Jan 29 11:54:33 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Jan 29 11:54:33 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Jan 29 11:54:33 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Jan 29 11:54:33 np0005601226 python3.9[105692]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 11:54:33 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Jan 29 11:54:33 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Jan 29 11:54:34 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v269: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 21 B/s, 0 objects/s recovering
Jan 29 11:54:34 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:54:34 np0005601226 python3.9[105844]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 29 11:54:35 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Jan 29 11:54:35 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Jan 29 11:54:36 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v270: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:54:36 np0005601226 python3.9[105995]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 11:54:37 np0005601226 python3.9[106147]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 29 11:54:37 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 2.f scrub starts
Jan 29 11:54:37 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 2.f scrub ok
Jan 29 11:54:37 np0005601226 python3.9[106297]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 11:54:38 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v271: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:54:38 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Jan 29 11:54:38 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Jan 29 11:54:38 np0005601226 python3.9[106449]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 11:54:38 np0005601226 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 29 11:54:39 np0005601226 systemd[1]: tuned.service: Deactivated successfully.
Jan 29 11:54:39 np0005601226 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 29 11:54:39 np0005601226 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 29 11:54:39 np0005601226 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 29 11:54:39 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Jan 29 11:54:39 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Jan 29 11:54:39 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:54:39 np0005601226 python3.9[106611]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 29 11:54:40 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 3.e scrub starts
Jan 29 11:54:40 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 3.e scrub ok
Jan 29 11:54:40 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v272: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:54:40 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Jan 29 11:54:40 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Jan 29 11:54:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2026-01-29_16:54:40
Jan 29 11:54:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 11:54:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] do_upmap
Jan 29 11:54:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] pools ['volumes', 'images', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.control', 'vms', '.mgr', 'backups', 'default.rgw.meta', '.rgw.root']
Jan 29 11:54:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 11:54:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 11:54:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 11:54:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 11:54:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 11:54:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 11:54:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 11:54:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 11:54:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 11:54:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 11:54:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 11:54:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 11:54:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 11:54:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 11:54:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 11:54:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 11:54:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 11:54:41 np0005601226 python3.9[106763]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 11:54:42 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v273: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:54:42 np0005601226 python3.9[106917]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 11:54:42 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Jan 29 11:54:42 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Jan 29 11:54:43 np0005601226 systemd[1]: session-35.scope: Deactivated successfully.
Jan 29 11:54:43 np0005601226 systemd[1]: session-35.scope: Consumed 59.959s CPU time.
Jan 29 11:54:43 np0005601226 systemd-logind[823]: Session 35 logged out. Waiting for processes to exit.
Jan 29 11:54:43 np0005601226 systemd-logind[823]: Removed session 35.
Jan 29 11:54:44 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 7.e scrub starts
Jan 29 11:54:44 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 7.e scrub ok
Jan 29 11:54:44 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v274: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:54:44 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:54:45 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Jan 29 11:54:45 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Jan 29 11:54:46 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v275: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:54:46 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Jan 29 11:54:46 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Jan 29 11:54:46 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Jan 29 11:54:46 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Jan 29 11:54:48 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v276: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:54:49 np0005601226 systemd-logind[823]: New session 36 of user zuul.
Jan 29 11:54:49 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:54:49 np0005601226 systemd[1]: Started Session 36 of User zuul.
Jan 29 11:54:50 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 7.a scrub starts
Jan 29 11:54:50 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 7.a scrub ok
Jan 29 11:54:50 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 8.9 scrub starts
Jan 29 11:54:50 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 8.9 scrub ok
Jan 29 11:54:50 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v277: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:54:50 np0005601226 python3.9[107097]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 11:54:50 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Jan 29 11:54:50 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Jan 29 11:54:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 11:54:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:54:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 11:54:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:54:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 11:54:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:54:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 11:54:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:54:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 11:54:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:54:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 11:54:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:54:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.8121643970586627e-06 of space, bias 4.0, pg target 0.002174597276470395 quantized to 16 (current 16)
Jan 29 11:54:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:54:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 11:54:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:54:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 29 11:54:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:54:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 29 11:54:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:54:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 11:54:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:54:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 29 11:54:51 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Jan 29 11:54:51 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Jan 29 11:54:51 np0005601226 python3.9[107253]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 29 11:54:52 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v278: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:54:52 np0005601226 python3.9[107406]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 29 11:54:53 np0005601226 python3.9[107490]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 29 11:54:53 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Jan 29 11:54:53 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Jan 29 11:54:54 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Jan 29 11:54:54 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Jan 29 11:54:54 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v279: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:54:54 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 10.4 scrub starts
Jan 29 11:54:54 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 10.4 scrub ok
Jan 29 11:54:54 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:54:55 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Jan 29 11:54:55 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Jan 29 11:54:56 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v280: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:54:56 np0005601226 python3.9[107643]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 29 11:54:58 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v281: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:54:58 np0005601226 python3.9[107796]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 29 11:54:59 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Jan 29 11:54:59 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Jan 29 11:54:59 np0005601226 python3.9[107949]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 11:54:59 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:54:59 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 5.f scrub starts
Jan 29 11:54:59 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 5.f scrub ok
Jan 29 11:55:00 np0005601226 python3.9[108101]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 29 11:55:00 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v282: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:55:00 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Jan 29 11:55:00 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Jan 29 11:55:00 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 11:55:00 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 11:55:00 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 11:55:00 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 11:55:00 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 11:55:01 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:55:01 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 11:55:01 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 11:55:01 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 11:55:01 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 11:55:01 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 11:55:01 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 11:55:01 np0005601226 python3.9[108332]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 11:55:01 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Jan 29 11:55:01 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Jan 29 11:55:01 np0005601226 podman[108400]: 2026-01-29 16:55:01.375260887 +0000 UTC m=+0.017790659 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:55:01 np0005601226 podman[108400]: 2026-01-29 16:55:01.580355465 +0000 UTC m=+0.222885217 container create 5a0b65321a55db1c31b397c839cff12611cfe434d23c06b19867ab623443e74f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 11:55:01 np0005601226 systemd[76621]: Created slice User Background Tasks Slice.
Jan 29 11:55:01 np0005601226 systemd[76621]: Starting Cleanup of User's Temporary Files and Directories...
Jan 29 11:55:01 np0005601226 systemd[76621]: Finished Cleanup of User's Temporary Files and Directories.
Jan 29 11:55:01 np0005601226 systemd[1]: Started libpod-conmon-5a0b65321a55db1c31b397c839cff12611cfe434d23c06b19867ab623443e74f.scope.
Jan 29 11:55:01 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:55:01 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 11:55:01 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:55:01 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 11:55:01 np0005601226 podman[108400]: 2026-01-29 16:55:01.873824804 +0000 UTC m=+0.516354596 container init 5a0b65321a55db1c31b397c839cff12611cfe434d23c06b19867ab623443e74f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_mahavira, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:55:01 np0005601226 podman[108400]: 2026-01-29 16:55:01.883485901 +0000 UTC m=+0.526015673 container start 5a0b65321a55db1c31b397c839cff12611cfe434d23c06b19867ab623443e74f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_mahavira, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:55:01 np0005601226 wonderful_mahavira[108485]: 167 167
Jan 29 11:55:01 np0005601226 systemd[1]: libpod-5a0b65321a55db1c31b397c839cff12611cfe434d23c06b19867ab623443e74f.scope: Deactivated successfully.
Jan 29 11:55:01 np0005601226 podman[108400]: 2026-01-29 16:55:01.985219916 +0000 UTC m=+0.627749698 container attach 5a0b65321a55db1c31b397c839cff12611cfe434d23c06b19867ab623443e74f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 29 11:55:01 np0005601226 podman[108400]: 2026-01-29 16:55:01.986226266 +0000 UTC m=+0.628756018 container died 5a0b65321a55db1c31b397c839cff12611cfe434d23c06b19867ab623443e74f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_mahavira, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 29 11:55:02 np0005601226 python3.9[108584]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 29 11:55:02 np0005601226 systemd[1]: var-lib-containers-storage-overlay-bdd5054fb5e0977ee098e6ddb8471a557b2a6b03e9408ebb1acf51ed6921c714-merged.mount: Deactivated successfully.
Jan 29 11:55:02 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v283: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:55:02 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Jan 29 11:55:02 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Jan 29 11:55:02 np0005601226 podman[108400]: 2026-01-29 16:55:02.759072404 +0000 UTC m=+1.401602196 container remove 5a0b65321a55db1c31b397c839cff12611cfe434d23c06b19867ab623443e74f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_mahavira, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 11:55:02 np0005601226 systemd[1]: libpod-conmon-5a0b65321a55db1c31b397c839cff12611cfe434d23c06b19867ab623443e74f.scope: Deactivated successfully.
Jan 29 11:55:03 np0005601226 podman[108594]: 2026-01-29 16:55:02.916097838 +0000 UTC m=+0.034945987 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:55:03 np0005601226 podman[108594]: 2026-01-29 16:55:03.175363992 +0000 UTC m=+0.294212091 container create 859fd7be9907b857efbb01ffce0308fd5eb04ff67c442ab03b34ef903d03655c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_elbakyan, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 11:55:03 np0005601226 systemd[1]: Started libpod-conmon-859fd7be9907b857efbb01ffce0308fd5eb04ff67c442ab03b34ef903d03655c.scope.
Jan 29 11:55:03 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:55:03 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/417ae58b7840fc6130b8d67aabc9b280a6c7be0c08617e3d16fcd7cfb118d05f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 11:55:03 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/417ae58b7840fc6130b8d67aabc9b280a6c7be0c08617e3d16fcd7cfb118d05f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:55:03 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/417ae58b7840fc6130b8d67aabc9b280a6c7be0c08617e3d16fcd7cfb118d05f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:55:03 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/417ae58b7840fc6130b8d67aabc9b280a6c7be0c08617e3d16fcd7cfb118d05f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 11:55:03 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/417ae58b7840fc6130b8d67aabc9b280a6c7be0c08617e3d16fcd7cfb118d05f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 11:55:03 np0005601226 podman[108594]: 2026-01-29 16:55:03.722704406 +0000 UTC m=+0.841552485 container init 859fd7be9907b857efbb01ffce0308fd5eb04ff67c442ab03b34ef903d03655c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:55:03 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 10.b scrub starts
Jan 29 11:55:03 np0005601226 podman[108594]: 2026-01-29 16:55:03.728721435 +0000 UTC m=+0.847569494 container start 859fd7be9907b857efbb01ffce0308fd5eb04ff67c442ab03b34ef903d03655c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 29 11:55:03 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 10.b scrub ok
Jan 29 11:55:03 np0005601226 podman[108594]: 2026-01-29 16:55:03.805945444 +0000 UTC m=+0.924793553 container attach 859fd7be9907b857efbb01ffce0308fd5eb04ff67c442ab03b34ef903d03655c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 11:55:04 np0005601226 nifty_elbakyan[108611]: --> passed data devices: 0 physical, 3 LVM
Jan 29 11:55:04 np0005601226 nifty_elbakyan[108611]: --> All data devices are unavailable
Jan 29 11:55:04 np0005601226 systemd[1]: libpod-859fd7be9907b857efbb01ffce0308fd5eb04ff67c442ab03b34ef903d03655c.scope: Deactivated successfully.
Jan 29 11:55:04 np0005601226 podman[108594]: 2026-01-29 16:55:04.156618148 +0000 UTC m=+1.275466207 container died 859fd7be9907b857efbb01ffce0308fd5eb04ff67c442ab03b34ef903d03655c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 29 11:55:04 np0005601226 python3.9[108782]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:55:04 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Jan 29 11:55:04 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Jan 29 11:55:04 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v284: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:55:04 np0005601226 systemd[1]: var-lib-containers-storage-overlay-417ae58b7840fc6130b8d67aabc9b280a6c7be0c08617e3d16fcd7cfb118d05f-merged.mount: Deactivated successfully.
Jan 29 11:55:04 np0005601226 podman[108594]: 2026-01-29 16:55:04.733054414 +0000 UTC m=+1.851902493 container remove 859fd7be9907b857efbb01ffce0308fd5eb04ff67c442ab03b34ef903d03655c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_elbakyan, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 29 11:55:04 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:55:04 np0005601226 systemd[1]: libpod-conmon-859fd7be9907b857efbb01ffce0308fd5eb04ff67c442ab03b34ef903d03655c.scope: Deactivated successfully.
Jan 29 11:55:04 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 2.d scrub starts
Jan 29 11:55:04 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 2.d scrub ok
Jan 29 11:55:05 np0005601226 podman[109014]: 2026-01-29 16:55:05.163135861 +0000 UTC m=+0.087114552 container create bc336cacf606140b311c846f3505ba9c30a93518888ce6a68202737c29ea5c9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_kepler, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 11:55:05 np0005601226 podman[109014]: 2026-01-29 16:55:05.094512918 +0000 UTC m=+0.018491639 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:55:05 np0005601226 systemd[1]: Started libpod-conmon-bc336cacf606140b311c846f3505ba9c30a93518888ce6a68202737c29ea5c9c.scope.
Jan 29 11:55:05 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:55:05 np0005601226 podman[109014]: 2026-01-29 16:55:05.413653177 +0000 UTC m=+0.337631898 container init bc336cacf606140b311c846f3505ba9c30a93518888ce6a68202737c29ea5c9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_kepler, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:55:05 np0005601226 podman[109014]: 2026-01-29 16:55:05.421138139 +0000 UTC m=+0.345116830 container start bc336cacf606140b311c846f3505ba9c30a93518888ce6a68202737c29ea5c9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_kepler, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 11:55:05 np0005601226 romantic_kepler[109083]: 167 167
Jan 29 11:55:05 np0005601226 systemd[1]: libpod-bc336cacf606140b311c846f3505ba9c30a93518888ce6a68202737c29ea5c9c.scope: Deactivated successfully.
Jan 29 11:55:05 np0005601226 podman[109014]: 2026-01-29 16:55:05.575525396 +0000 UTC m=+0.499504097 container attach bc336cacf606140b311c846f3505ba9c30a93518888ce6a68202737c29ea5c9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_kepler, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 11:55:05 np0005601226 podman[109014]: 2026-01-29 16:55:05.57601194 +0000 UTC m=+0.499990641 container died bc336cacf606140b311c846f3505ba9c30a93518888ce6a68202737c29ea5c9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True)
Jan 29 11:55:05 np0005601226 python3.9[109175]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 29 11:55:05 np0005601226 systemd[1]: var-lib-containers-storage-overlay-d042d7d0ed28d34a94e582a1c1f2a2e1dd6edd4469714e0401cd350d140725aa-merged.mount: Deactivated successfully.
Jan 29 11:55:06 np0005601226 podman[109014]: 2026-01-29 16:55:06.355708751 +0000 UTC m=+1.279687482 container remove bc336cacf606140b311c846f3505ba9c30a93518888ce6a68202737c29ea5c9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_kepler, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:55:06 np0005601226 systemd[1]: libpod-conmon-bc336cacf606140b311c846f3505ba9c30a93518888ce6a68202737c29ea5c9c.scope: Deactivated successfully.
Jan 29 11:55:06 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v285: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:55:06 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Jan 29 11:55:06 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Jan 29 11:55:06 np0005601226 podman[109334]: 2026-01-29 16:55:06.513870719 +0000 UTC m=+0.024014153 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:55:06 np0005601226 python3.9[109326]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 11:55:06 np0005601226 podman[109334]: 2026-01-29 16:55:06.62761757 +0000 UTC m=+0.137760994 container create d197a34ab85b4eb79b199d46b18510e89c791a607d9cc2fcc62b42c3b55085b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_greider, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 29 11:55:07 np0005601226 systemd[1]: Started libpod-conmon-d197a34ab85b4eb79b199d46b18510e89c791a607d9cc2fcc62b42c3b55085b8.scope.
Jan 29 11:55:07 np0005601226 python3.9[109499]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 29 11:55:07 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:55:07 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8526bc4d5de7435fdb85b54914fe6c9709570c10ec8e61ac2a46739fbf45889/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 11:55:07 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8526bc4d5de7435fdb85b54914fe6c9709570c10ec8e61ac2a46739fbf45889/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:55:07 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8526bc4d5de7435fdb85b54914fe6c9709570c10ec8e61ac2a46739fbf45889/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:55:07 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8526bc4d5de7435fdb85b54914fe6c9709570c10ec8e61ac2a46739fbf45889/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 11:55:07 np0005601226 podman[109334]: 2026-01-29 16:55:07.861454972 +0000 UTC m=+1.371598466 container init d197a34ab85b4eb79b199d46b18510e89c791a607d9cc2fcc62b42c3b55085b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_greider, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 29 11:55:07 np0005601226 podman[109334]: 2026-01-29 16:55:07.868026747 +0000 UTC m=+1.378170161 container start d197a34ab85b4eb79b199d46b18510e89c791a607d9cc2fcc62b42c3b55085b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_greider, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:55:08 np0005601226 podman[109334]: 2026-01-29 16:55:08.046559138 +0000 UTC m=+1.556702572 container attach d197a34ab85b4eb79b199d46b18510e89c791a607d9cc2fcc62b42c3b55085b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_greider, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 29 11:55:08 np0005601226 blissful_greider[109502]: {
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:    "0": [
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:        {
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:            "devices": [
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:                "/dev/loop3"
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:            ],
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:            "lv_name": "ceph_lv0",
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:            "lv_size": "21470642176",
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:            "lv_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:            "name": "ceph_lv0",
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:            "tags": {
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:                "ceph.block_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:                "ceph.cephx_lockbox_secret": "",
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:                "ceph.cluster_name": "ceph",
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:                "ceph.crush_device_class": "",
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:                "ceph.encrypted": "0",
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:                "ceph.objectstore": "bluestore",
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:                "ceph.osd_fsid": "0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1",
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:                "ceph.osd_id": "0",
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:                "ceph.type": "block",
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:                "ceph.vdo": "0",
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:                "ceph.with_tpm": "0"
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:            },
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:            "type": "block",
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:            "vg_name": "ceph_vg0"
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:        }
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:    ],
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:    "1": [
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:        {
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:            "devices": [
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:                "/dev/loop4"
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:            ],
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:            "lv_name": "ceph_lv1",
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:            "lv_size": "21470642176",
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b59b9ee3-7bef-4274-a8bf-0f9cce011ae7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:            "lv_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:            "name": "ceph_lv1",
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:            "tags": {
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:                "ceph.block_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:                "ceph.cephx_lockbox_secret": "",
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:                "ceph.cluster_name": "ceph",
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:                "ceph.crush_device_class": "",
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:                "ceph.encrypted": "0",
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:                "ceph.objectstore": "bluestore",
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:                "ceph.osd_fsid": "b59b9ee3-7bef-4274-a8bf-0f9cce011ae7",
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:                "ceph.osd_id": "1",
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:                "ceph.type": "block",
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:                "ceph.vdo": "0",
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:                "ceph.with_tpm": "0"
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:            },
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:            "type": "block",
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:            "vg_name": "ceph_vg1"
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:        }
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:    ],
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:    "2": [
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:        {
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:            "devices": [
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:                "/dev/loop5"
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:            ],
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:            "lv_name": "ceph_lv2",
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:            "lv_size": "21470642176",
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=791d3808-828d-4f85-a3de-28df49f6a6ef,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:            "lv_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:            "name": "ceph_lv2",
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:            "tags": {
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:                "ceph.block_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:                "ceph.cephx_lockbox_secret": "",
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:                "ceph.cluster_name": "ceph",
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:                "ceph.crush_device_class": "",
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:                "ceph.encrypted": "0",
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:                "ceph.objectstore": "bluestore",
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:                "ceph.osd_fsid": "791d3808-828d-4f85-a3de-28df49f6a6ef",
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:                "ceph.osd_id": "2",
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:                "ceph.type": "block",
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:                "ceph.vdo": "0",
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:                "ceph.with_tpm": "0"
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:            },
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:            "type": "block",
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:            "vg_name": "ceph_vg2"
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:        }
Jan 29 11:55:08 np0005601226 blissful_greider[109502]:    ]
Jan 29 11:55:08 np0005601226 blissful_greider[109502]: }
Jan 29 11:55:08 np0005601226 systemd[1]: libpod-d197a34ab85b4eb79b199d46b18510e89c791a607d9cc2fcc62b42c3b55085b8.scope: Deactivated successfully.
Jan 29 11:55:08 np0005601226 podman[109334]: 2026-01-29 16:55:08.160885127 +0000 UTC m=+1.671028551 container died d197a34ab85b4eb79b199d46b18510e89c791a607d9cc2fcc62b42c3b55085b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_greider, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 29 11:55:08 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v286: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:55:08 np0005601226 systemd[1]: var-lib-containers-storage-overlay-f8526bc4d5de7435fdb85b54914fe6c9709570c10ec8e61ac2a46739fbf45889-merged.mount: Deactivated successfully.
Jan 29 11:55:08 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Jan 29 11:55:08 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Jan 29 11:55:08 np0005601226 podman[109334]: 2026-01-29 16:55:08.787856211 +0000 UTC m=+2.297999625 container remove d197a34ab85b4eb79b199d46b18510e89c791a607d9cc2fcc62b42c3b55085b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_greider, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 11:55:08 np0005601226 systemd[1]: libpod-conmon-d197a34ab85b4eb79b199d46b18510e89c791a607d9cc2fcc62b42c3b55085b8.scope: Deactivated successfully.
Jan 29 11:55:09 np0005601226 podman[109587]: 2026-01-29 16:55:09.150391767 +0000 UTC m=+0.022296362 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:55:09 np0005601226 podman[109587]: 2026-01-29 16:55:09.333886136 +0000 UTC m=+0.205790681 container create 0ebc3e0d841e5daadb7f05028b4db73dd43c8f814024398fad71a2681600de15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_euler, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 29 11:55:09 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Jan 29 11:55:09 np0005601226 systemd[1]: Started libpod-conmon-0ebc3e0d841e5daadb7f05028b4db73dd43c8f814024398fad71a2681600de15.scope.
Jan 29 11:55:09 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Jan 29 11:55:09 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:55:09 np0005601226 podman[109587]: 2026-01-29 16:55:09.473662109 +0000 UTC m=+0.345566634 container init 0ebc3e0d841e5daadb7f05028b4db73dd43c8f814024398fad71a2681600de15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_euler, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 11:55:09 np0005601226 podman[109587]: 2026-01-29 16:55:09.479068909 +0000 UTC m=+0.350973444 container start 0ebc3e0d841e5daadb7f05028b4db73dd43c8f814024398fad71a2681600de15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 11:55:09 np0005601226 amazing_euler[109670]: 167 167
Jan 29 11:55:09 np0005601226 systemd[1]: libpod-0ebc3e0d841e5daadb7f05028b4db73dd43c8f814024398fad71a2681600de15.scope: Deactivated successfully.
Jan 29 11:55:09 np0005601226 podman[109587]: 2026-01-29 16:55:09.530129212 +0000 UTC m=+0.402033807 container attach 0ebc3e0d841e5daadb7f05028b4db73dd43c8f814024398fad71a2681600de15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 29 11:55:09 np0005601226 podman[109587]: 2026-01-29 16:55:09.530670958 +0000 UTC m=+0.402575493 container died 0ebc3e0d841e5daadb7f05028b4db73dd43c8f814024398fad71a2681600de15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 29 11:55:09 np0005601226 systemd[1]: var-lib-containers-storage-overlay-2295d7077ad503873404258e0191574cc96f41ce4de43d74893f2a36cd9c8996-merged.mount: Deactivated successfully.
Jan 29 11:55:09 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:55:09 np0005601226 podman[109587]: 2026-01-29 16:55:09.823280091 +0000 UTC m=+0.695184636 container remove 0ebc3e0d841e5daadb7f05028b4db73dd43c8f814024398fad71a2681600de15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_euler, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:55:09 np0005601226 systemd[1]: libpod-conmon-0ebc3e0d841e5daadb7f05028b4db73dd43c8f814024398fad71a2681600de15.scope: Deactivated successfully.
Jan 29 11:55:09 np0005601226 python3.9[109770]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 29 11:55:09 np0005601226 podman[109779]: 2026-01-29 16:55:09.953985346 +0000 UTC m=+0.050525259 container create 1e2d6701693e6abc4faafa53be8e9331289bf89a1bb0d97f629760c53085e4a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 29 11:55:10 np0005601226 podman[109779]: 2026-01-29 16:55:09.921270816 +0000 UTC m=+0.017810709 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:55:10 np0005601226 systemd[1]: Started libpod-conmon-1e2d6701693e6abc4faafa53be8e9331289bf89a1bb0d97f629760c53085e4a6.scope.
Jan 29 11:55:10 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:55:10 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e47cbc8dbb4fd0273a7cb05a7165ff27fade392fb48e76881c5638ac623ee34/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 11:55:10 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e47cbc8dbb4fd0273a7cb05a7165ff27fade392fb48e76881c5638ac623ee34/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:55:10 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e47cbc8dbb4fd0273a7cb05a7165ff27fade392fb48e76881c5638ac623ee34/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:55:10 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e47cbc8dbb4fd0273a7cb05a7165ff27fade392fb48e76881c5638ac623ee34/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 11:55:10 np0005601226 podman[109779]: 2026-01-29 16:55:10.135344871 +0000 UTC m=+0.231884744 container init 1e2d6701693e6abc4faafa53be8e9331289bf89a1bb0d97f629760c53085e4a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 11:55:10 np0005601226 podman[109779]: 2026-01-29 16:55:10.142108262 +0000 UTC m=+0.238648135 container start 1e2d6701693e6abc4faafa53be8e9331289bf89a1bb0d97f629760c53085e4a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 11:55:10 np0005601226 podman[109779]: 2026-01-29 16:55:10.216062764 +0000 UTC m=+0.312602657 container attach 1e2d6701693e6abc4faafa53be8e9331289bf89a1bb0d97f629760c53085e4a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_ritchie, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 11:55:10 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Jan 29 11:55:10 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Jan 29 11:55:10 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v287: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:55:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 11:55:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 11:55:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 11:55:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 11:55:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 11:55:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 11:55:10 np0005601226 lvm[109873]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 11:55:10 np0005601226 lvm[109873]: VG ceph_vg0 finished
Jan 29 11:55:10 np0005601226 lvm[109876]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 11:55:10 np0005601226 lvm[109876]: VG ceph_vg1 finished
Jan 29 11:55:10 np0005601226 lvm[109878]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 11:55:10 np0005601226 lvm[109878]: VG ceph_vg2 finished
Jan 29 11:55:10 np0005601226 cranky_ritchie[109795]: {}
Jan 29 11:55:10 np0005601226 systemd[1]: libpod-1e2d6701693e6abc4faafa53be8e9331289bf89a1bb0d97f629760c53085e4a6.scope: Deactivated successfully.
Jan 29 11:55:10 np0005601226 podman[109779]: 2026-01-29 16:55:10.842053789 +0000 UTC m=+0.938593662 container died 1e2d6701693e6abc4faafa53be8e9331289bf89a1bb0d97f629760c53085e4a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_ritchie, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 11:55:11 np0005601226 systemd[1]: var-lib-containers-storage-overlay-9e47cbc8dbb4fd0273a7cb05a7165ff27fade392fb48e76881c5638ac623ee34-merged.mount: Deactivated successfully.
Jan 29 11:55:11 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Jan 29 11:55:11 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Jan 29 11:55:11 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Jan 29 11:55:11 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Jan 29 11:55:11 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 2.a scrub starts
Jan 29 11:55:11 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 2.a scrub ok
Jan 29 11:55:12 np0005601226 python3.9[110044]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 11:55:12 np0005601226 podman[109779]: 2026-01-29 16:55:12.067916045 +0000 UTC m=+2.164455928 container remove 1e2d6701693e6abc4faafa53be8e9331289bf89a1bb0d97f629760c53085e4a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_ritchie, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 11:55:12 np0005601226 systemd[1]: libpod-conmon-1e2d6701693e6abc4faafa53be8e9331289bf89a1bb0d97f629760c53085e4a6.scope: Deactivated successfully.
Jan 29 11:55:12 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 11:55:12 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:55:12 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 11:55:12 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v288: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:55:12 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Jan 29 11:55:12 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Jan 29 11:55:12 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:55:12 np0005601226 python3.9[110198]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Jan 29 11:55:13 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:55:13 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:55:13 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Jan 29 11:55:13 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Jan 29 11:55:13 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Jan 29 11:55:13 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Jan 29 11:55:13 np0005601226 systemd[1]: session-36.scope: Deactivated successfully.
Jan 29 11:55:13 np0005601226 systemd[1]: session-36.scope: Consumed 16.093s CPU time.
Jan 29 11:55:13 np0005601226 systemd-logind[823]: Session 36 logged out. Waiting for processes to exit.
Jan 29 11:55:13 np0005601226 systemd-logind[823]: Removed session 36.
Jan 29 11:55:14 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 3.c scrub starts
Jan 29 11:55:14 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 3.c scrub ok
Jan 29 11:55:14 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v289: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:55:14 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:55:14 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Jan 29 11:55:14 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Jan 29 11:55:15 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Jan 29 11:55:15 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Jan 29 11:55:15 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Jan 29 11:55:15 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Jan 29 11:55:16 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Jan 29 11:55:16 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Jan 29 11:55:16 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v290: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:55:16 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Jan 29 11:55:16 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Jan 29 11:55:17 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 4.a scrub starts
Jan 29 11:55:17 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 4.a scrub ok
Jan 29 11:55:18 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Jan 29 11:55:18 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Jan 29 11:55:18 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v291: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:55:18 np0005601226 systemd-logind[823]: New session 37 of user zuul.
Jan 29 11:55:18 np0005601226 systemd[1]: Started Session 37 of User zuul.
Jan 29 11:55:19 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:55:19 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Jan 29 11:55:19 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Jan 29 11:55:19 np0005601226 python3.9[110401]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 11:55:20 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 8.b scrub starts
Jan 29 11:55:20 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 8.b scrub ok
Jan 29 11:55:20 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v292: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:55:20 np0005601226 python3.9[110555]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 29 11:55:21 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 3.f scrub starts
Jan 29 11:55:21 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 3.f scrub ok
Jan 29 11:55:21 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 5.c scrub starts
Jan 29 11:55:21 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 5.c scrub ok
Jan 29 11:55:21 np0005601226 python3.9[110748]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:55:22 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Jan 29 11:55:22 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Jan 29 11:55:22 np0005601226 systemd[1]: session-37.scope: Deactivated successfully.
Jan 29 11:55:22 np0005601226 systemd[1]: session-37.scope: Consumed 2.021s CPU time.
Jan 29 11:55:22 np0005601226 systemd-logind[823]: Session 37 logged out. Waiting for processes to exit.
Jan 29 11:55:22 np0005601226 systemd-logind[823]: Removed session 37.
Jan 29 11:55:22 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v293: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:55:22 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Jan 29 11:55:22 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Jan 29 11:55:23 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 10.f scrub starts
Jan 29 11:55:23 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 10.f scrub ok
Jan 29 11:55:24 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Jan 29 11:55:24 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Jan 29 11:55:24 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v294: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:55:24 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:55:26 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v295: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:55:26 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Jan 29 11:55:26 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Jan 29 11:55:28 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Jan 29 11:55:28 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Jan 29 11:55:28 np0005601226 systemd-logind[823]: New session 38 of user zuul.
Jan 29 11:55:28 np0005601226 systemd[1]: Started Session 38 of User zuul.
Jan 29 11:55:28 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v296: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:55:28 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Jan 29 11:55:28 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Jan 29 11:55:29 np0005601226 python3.9[110928]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 11:55:29 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Jan 29 11:55:29 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Jan 29 11:55:29 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:55:30 np0005601226 python3.9[111084]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 11:55:30 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v297: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:55:31 np0005601226 python3.9[111240]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 29 11:55:31 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Jan 29 11:55:31 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Jan 29 11:55:32 np0005601226 python3.9[111324]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 29 11:55:32 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v298: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:55:33 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Jan 29 11:55:33 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Jan 29 11:55:34 np0005601226 python3.9[111477]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 29 11:55:34 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v299: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:55:34 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Jan 29 11:55:34 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Jan 29 11:55:34 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:55:35 np0005601226 python3.9[111672]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:55:35 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Jan 29 11:55:35 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Jan 29 11:55:36 np0005601226 python3.9[111824]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:55:36 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v300: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:55:37 np0005601226 python3.9[111989]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:55:37 np0005601226 python3.9[112067]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:55:38 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v301: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:55:38 np0005601226 python3.9[112219]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:55:39 np0005601226 python3.9[112297]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 11:55:39 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Jan 29 11:55:39 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Jan 29 11:55:39 np0005601226 python3.9[112449]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 29 11:55:39 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:55:40 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v302: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:55:40 np0005601226 python3.9[112601]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 29 11:55:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2026-01-29_16:55:40
Jan 29 11:55:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 11:55:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] do_upmap
Jan 29 11:55:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', 'volumes', '.rgw.root', 'images', 'default.rgw.log', 'backups', 'default.rgw.control', 'vms', 'default.rgw.meta', 'cephfs.cephfs.meta']
Jan 29 11:55:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 11:55:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 11:55:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 11:55:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 11:55:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 11:55:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 11:55:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 11:55:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 11:55:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 11:55:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 11:55:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 11:55:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 11:55:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 11:55:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 11:55:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 11:55:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 11:55:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 11:55:40 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Jan 29 11:55:40 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Jan 29 11:55:40 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Jan 29 11:55:40 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Jan 29 11:55:41 np0005601226 python3.9[112753]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 29 11:55:41 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Jan 29 11:55:41 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Jan 29 11:55:41 np0005601226 python3.9[112905]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 29 11:55:41 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Jan 29 11:55:41 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Jan 29 11:55:42 np0005601226 python3.9[113057]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 29 11:55:42 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v303: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:55:42 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Jan 29 11:55:42 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Jan 29 11:55:43 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Jan 29 11:55:43 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Jan 29 11:55:43 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Jan 29 11:55:43 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Jan 29 11:55:44 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v304: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:55:44 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:55:45 np0005601226 python3.9[113210]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 11:55:45 np0005601226 python3.9[113364]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 11:55:45 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Jan 29 11:55:45 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Jan 29 11:55:46 np0005601226 python3.9[113516]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 11:55:46 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v305: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:55:46 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Jan 29 11:55:46 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Jan 29 11:55:47 np0005601226 python3.9[113668]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:55:48 np0005601226 python3.9[113821]: ansible-service_facts Invoked
Jan 29 11:55:48 np0005601226 network[113838]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 29 11:55:48 np0005601226 network[113839]: 'network-scripts' will be removed from distribution in near future.
Jan 29 11:55:48 np0005601226 network[113840]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 29 11:55:48 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v306: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:55:48 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Jan 29 11:55:48 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Jan 29 11:55:49 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:55:50 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Jan 29 11:55:50 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Jan 29 11:55:50 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v307: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:55:51 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Jan 29 11:55:51 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Jan 29 11:55:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 11:55:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:55:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 11:55:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:55:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 11:55:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:55:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 11:55:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:55:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 11:55:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:55:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 11:55:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:55:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.8121643970586627e-06 of space, bias 4.0, pg target 0.002174597276470395 quantized to 16 (current 16)
Jan 29 11:55:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:55:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 11:55:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:55:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 29 11:55:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:55:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 29 11:55:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:55:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 11:55:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:55:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 29 11:55:51 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Jan 29 11:55:51 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Jan 29 11:55:52 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Jan 29 11:55:52 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Jan 29 11:55:52 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v308: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:55:52 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Jan 29 11:55:52 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Jan 29 11:55:52 np0005601226 python3.9[114292]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 29 11:55:52 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Jan 29 11:55:53 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Jan 29 11:55:53 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Jan 29 11:55:54 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Jan 29 11:55:54 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v309: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:55:55 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:55:55 np0005601226 python3.9[114445]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 29 11:55:55 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Jan 29 11:55:55 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Jan 29 11:55:56 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v310: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:55:56 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Jan 29 11:55:56 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Jan 29 11:55:56 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Jan 29 11:55:57 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Jan 29 11:55:57 np0005601226 python3.9[114597]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:55:57 np0005601226 python3.9[114675]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:55:57 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Jan 29 11:55:57 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Jan 29 11:55:58 np0005601226 python3.9[114827]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:55:58 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v311: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:55:58 np0005601226 python3.9[114905]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:55:58 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 10.e scrub starts
Jan 29 11:55:58 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Jan 29 11:55:58 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 10.e scrub ok
Jan 29 11:55:58 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Jan 29 11:55:59 np0005601226 python3.9[115057]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:56:00 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:56:00 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v312: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:56:00 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Jan 29 11:56:00 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Jan 29 11:56:00 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 10.d scrub starts
Jan 29 11:56:00 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 10.d scrub ok
Jan 29 11:56:00 np0005601226 python3.9[115209]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 29 11:56:02 np0005601226 python3.9[115293]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 11:56:02 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v313: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:56:02 np0005601226 systemd[1]: session-38.scope: Deactivated successfully.
Jan 29 11:56:02 np0005601226 systemd[1]: session-38.scope: Consumed 20.653s CPU time.
Jan 29 11:56:02 np0005601226 systemd-logind[823]: Session 38 logged out. Waiting for processes to exit.
Jan 29 11:56:02 np0005601226 systemd-logind[823]: Removed session 38.
Jan 29 11:56:04 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v314: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:56:04 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Jan 29 11:56:04 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Jan 29 11:56:05 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:56:06 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v315: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:56:07 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 8.f scrub starts
Jan 29 11:56:07 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 8.f scrub ok
Jan 29 11:56:08 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v316: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:56:08 np0005601226 systemd-logind[823]: New session 39 of user zuul.
Jan 29 11:56:08 np0005601226 systemd[1]: Started Session 39 of User zuul.
Jan 29 11:56:08 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 4.f scrub starts
Jan 29 11:56:08 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 4.f scrub ok
Jan 29 11:56:09 np0005601226 python3.9[115476]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:56:10 np0005601226 python3.9[115628]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:56:10 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:56:10 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v317: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:56:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 11:56:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 11:56:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 11:56:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 11:56:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 11:56:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 11:56:10 np0005601226 python3.9[115706]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:56:10 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Jan 29 11:56:10 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Jan 29 11:56:11 np0005601226 systemd[1]: session-39.scope: Deactivated successfully.
Jan 29 11:56:11 np0005601226 systemd[1]: session-39.scope: Consumed 1.427s CPU time.
Jan 29 11:56:11 np0005601226 systemd-logind[823]: Session 39 logged out. Waiting for processes to exit.
Jan 29 11:56:11 np0005601226 systemd-logind[823]: Removed session 39.
Jan 29 11:56:11 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Jan 29 11:56:11 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Jan 29 11:56:12 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v318: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:56:13 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 11:56:13 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 11:56:13 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 11:56:13 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 11:56:13 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 11:56:13 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:56:13 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 11:56:13 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 11:56:13 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 11:56:13 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 11:56:13 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 11:56:13 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 11:56:13 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 6.f scrub starts
Jan 29 11:56:13 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 6.f scrub ok
Jan 29 11:56:13 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 4.d scrub starts
Jan 29 11:56:13 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 4.d scrub ok
Jan 29 11:56:14 np0005601226 podman[115874]: 2026-01-29 16:56:14.040517661 +0000 UTC m=+0.037238742 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:56:14 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v319: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:56:14 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 9.e scrub starts
Jan 29 11:56:14 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 9.e scrub ok
Jan 29 11:56:14 np0005601226 podman[115874]: 2026-01-29 16:56:14.884877963 +0000 UTC m=+0.881598954 container create 4e0c3cb777af5694e1e7950e8bdf90d5b46744336e75edc02b8e19d6532c7733 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_burnell, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:56:14 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Jan 29 11:56:14 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Jan 29 11:56:15 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 11:56:15 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:56:15 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 11:56:15 np0005601226 systemd[1]: Started libpod-conmon-4e0c3cb777af5694e1e7950e8bdf90d5b46744336e75edc02b8e19d6532c7733.scope.
Jan 29 11:56:15 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:56:15 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:56:15 np0005601226 podman[115874]: 2026-01-29 16:56:15.699434954 +0000 UTC m=+1.696155985 container init 4e0c3cb777af5694e1e7950e8bdf90d5b46744336e75edc02b8e19d6532c7733 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_burnell, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 29 11:56:15 np0005601226 podman[115874]: 2026-01-29 16:56:15.708597069 +0000 UTC m=+1.705318090 container start 4e0c3cb777af5694e1e7950e8bdf90d5b46744336e75edc02b8e19d6532c7733 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 11:56:15 np0005601226 sweet_burnell[115890]: 167 167
Jan 29 11:56:15 np0005601226 systemd[1]: libpod-4e0c3cb777af5694e1e7950e8bdf90d5b46744336e75edc02b8e19d6532c7733.scope: Deactivated successfully.
Jan 29 11:56:15 np0005601226 conmon[115890]: conmon 4e0c3cb777af5694e1e7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4e0c3cb777af5694e1e7950e8bdf90d5b46744336e75edc02b8e19d6532c7733.scope/container/memory.events
Jan 29 11:56:15 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Jan 29 11:56:15 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Jan 29 11:56:15 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Jan 29 11:56:15 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Jan 29 11:56:16 np0005601226 podman[115874]: 2026-01-29 16:56:16.042076272 +0000 UTC m=+2.038797283 container attach 4e0c3cb777af5694e1e7950e8bdf90d5b46744336e75edc02b8e19d6532c7733 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_burnell, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 29 11:56:16 np0005601226 podman[115874]: 2026-01-29 16:56:16.043256993 +0000 UTC m=+2.039977994 container died 4e0c3cb777af5694e1e7950e8bdf90d5b46744336e75edc02b8e19d6532c7733 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_burnell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:56:16 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Jan 29 11:56:16 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Jan 29 11:56:16 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v320: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:56:16 np0005601226 systemd-logind[823]: New session 40 of user zuul.
Jan 29 11:56:16 np0005601226 systemd[1]: Started Session 40 of User zuul.
Jan 29 11:56:17 np0005601226 systemd[1]: var-lib-containers-storage-overlay-a8a5a8b87e4149a0ca33ce415b502a0836a02c9111017ced31bdb8cb66be8b2b-merged.mount: Deactivated successfully.
Jan 29 11:56:18 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Jan 29 11:56:18 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Jan 29 11:56:18 np0005601226 python3.9[116062]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 11:56:18 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v321: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:56:18 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Jan 29 11:56:18 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Jan 29 11:56:19 np0005601226 python3.9[116218]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:56:19 np0005601226 podman[115874]: 2026-01-29 16:56:19.849128935 +0000 UTC m=+5.845849936 container remove 4e0c3cb777af5694e1e7950e8bdf90d5b46744336e75edc02b8e19d6532c7733 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_burnell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:56:19 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 9.f scrub starts
Jan 29 11:56:19 np0005601226 systemd[1]: libpod-conmon-4e0c3cb777af5694e1e7950e8bdf90d5b46744336e75edc02b8e19d6532c7733.scope: Deactivated successfully.
Jan 29 11:56:19 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 9.f scrub ok
Jan 29 11:56:20 np0005601226 podman[116325]: 2026-01-29 16:56:19.977761071 +0000 UTC m=+0.026045231 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:56:20 np0005601226 python3.9[116414]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:56:20 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v322: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:56:20 np0005601226 podman[116325]: 2026-01-29 16:56:20.64599099 +0000 UTC m=+0.694275120 container create 71fbb02648f4a95bee743a71efa6966f8f5782ef3d62ff49d3d61d8089473746 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_borg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:56:20 np0005601226 python3.9[116492]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.ntvp8gj2 recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:56:20 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 9.c scrub starts
Jan 29 11:56:20 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 9.c scrub ok
Jan 29 11:56:21 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:56:21 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Jan 29 11:56:21 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Jan 29 11:56:21 np0005601226 systemd[1]: Started libpod-conmon-71fbb02648f4a95bee743a71efa6966f8f5782ef3d62ff49d3d61d8089473746.scope.
Jan 29 11:56:21 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:56:21 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd6c1ada78d7a9e3b210736cdd918ad96e2f0829671259c8f8e36a1d3f9e82db/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 11:56:21 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd6c1ada78d7a9e3b210736cdd918ad96e2f0829671259c8f8e36a1d3f9e82db/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:56:21 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd6c1ada78d7a9e3b210736cdd918ad96e2f0829671259c8f8e36a1d3f9e82db/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:56:21 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd6c1ada78d7a9e3b210736cdd918ad96e2f0829671259c8f8e36a1d3f9e82db/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 11:56:21 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd6c1ada78d7a9e3b210736cdd918ad96e2f0829671259c8f8e36a1d3f9e82db/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 11:56:21 np0005601226 podman[116325]: 2026-01-29 16:56:21.552896512 +0000 UTC m=+1.601180692 container init 71fbb02648f4a95bee743a71efa6966f8f5782ef3d62ff49d3d61d8089473746 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_borg, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 29 11:56:21 np0005601226 podman[116325]: 2026-01-29 16:56:21.560318982 +0000 UTC m=+1.608603112 container start 71fbb02648f4a95bee743a71efa6966f8f5782ef3d62ff49d3d61d8089473746 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_borg, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:56:21 np0005601226 python3.9[116649]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:56:21 np0005601226 podman[116325]: 2026-01-29 16:56:21.903007972 +0000 UTC m=+1.951292102 container attach 71fbb02648f4a95bee743a71efa6966f8f5782ef3d62ff49d3d61d8089473746 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_borg, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True)
Jan 29 11:56:21 np0005601226 angry_borg[116549]: --> passed data devices: 0 physical, 3 LVM
Jan 29 11:56:21 np0005601226 angry_borg[116549]: --> All data devices are unavailable
Jan 29 11:56:21 np0005601226 systemd[1]: libpod-71fbb02648f4a95bee743a71efa6966f8f5782ef3d62ff49d3d61d8089473746.scope: Deactivated successfully.
Jan 29 11:56:21 np0005601226 podman[116325]: 2026-01-29 16:56:21.986403682 +0000 UTC m=+2.034687802 container died 71fbb02648f4a95bee743a71efa6966f8f5782ef3d62ff49d3d61d8089473746 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 29 11:56:22 np0005601226 python3.9[116731]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.zkoini6h recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:56:22 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v323: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:56:22 np0005601226 python3.9[116907]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 29 11:56:22 np0005601226 systemd[1]: var-lib-containers-storage-overlay-cd6c1ada78d7a9e3b210736cdd918ad96e2f0829671259c8f8e36a1d3f9e82db-merged.mount: Deactivated successfully.
Jan 29 11:56:23 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Jan 29 11:56:23 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Jan 29 11:56:23 np0005601226 python3.9[117062]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:56:23 np0005601226 python3.9[117140]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 11:56:24 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 6.a scrub starts
Jan 29 11:56:24 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 6.a scrub ok
Jan 29 11:56:24 np0005601226 podman[116325]: 2026-01-29 16:56:24.433510479 +0000 UTC m=+4.481794599 container remove 71fbb02648f4a95bee743a71efa6966f8f5782ef3d62ff49d3d61d8089473746 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_borg, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 29 11:56:24 np0005601226 systemd[1]: libpod-conmon-71fbb02648f4a95bee743a71efa6966f8f5782ef3d62ff49d3d61d8089473746.scope: Deactivated successfully.
Jan 29 11:56:24 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v324: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:56:24 np0005601226 python3.9[117292]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:56:24 np0005601226 python3.9[117420]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 11:56:24 np0005601226 podman[117434]: 2026-01-29 16:56:24.781079118 +0000 UTC m=+0.019797283 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:56:25 np0005601226 podman[117434]: 2026-01-29 16:56:25.193654846 +0000 UTC m=+0.432373001 container create 2d011ec81048c0429f3f799fed86912d98d907974bc3a3df31ede717dd6ca06c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_elbakyan, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 11:56:25 np0005601226 systemd[1]: Started libpod-conmon-2d011ec81048c0429f3f799fed86912d98d907974bc3a3df31ede717dd6ca06c.scope.
Jan 29 11:56:25 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:56:25 np0005601226 python3.9[117599]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:56:25 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Jan 29 11:56:25 np0005601226 podman[117434]: 2026-01-29 16:56:25.941172106 +0000 UTC m=+1.179890291 container init 2d011ec81048c0429f3f799fed86912d98d907974bc3a3df31ede717dd6ca06c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_elbakyan, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 11:56:25 np0005601226 podman[117434]: 2026-01-29 16:56:25.947276359 +0000 UTC m=+1.185994514 container start 2d011ec81048c0429f3f799fed86912d98d907974bc3a3df31ede717dd6ca06c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_elbakyan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 29 11:56:25 np0005601226 infallible_elbakyan[117602]: 167 167
Jan 29 11:56:25 np0005601226 systemd[1]: libpod-2d011ec81048c0429f3f799fed86912d98d907974bc3a3df31ede717dd6ca06c.scope: Deactivated successfully.
Jan 29 11:56:25 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Jan 29 11:56:26 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:56:26 np0005601226 python3.9[117769]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:56:26 np0005601226 podman[117434]: 2026-01-29 16:56:26.235833164 +0000 UTC m=+1.474551369 container attach 2d011ec81048c0429f3f799fed86912d98d907974bc3a3df31ede717dd6ca06c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_elbakyan, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 29 11:56:26 np0005601226 podman[117434]: 2026-01-29 16:56:26.236934384 +0000 UTC m=+1.475652539 container died 2d011ec81048c0429f3f799fed86912d98d907974bc3a3df31ede717dd6ca06c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_elbakyan, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 11:56:26 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v325: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:56:26 np0005601226 python3.9[117847]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:56:26 np0005601226 systemd[1]: var-lib-containers-storage-overlay-fc132c8ac9a9243e6820ba367cbbad4b5fd94b491796f3423e6e206ec6aa1a8a-merged.mount: Deactivated successfully.
Jan 29 11:56:26 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Jan 29 11:56:26 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Jan 29 11:56:27 np0005601226 python3.9[118000]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:56:27 np0005601226 python3.9[118078]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:56:28 np0005601226 podman[117434]: 2026-01-29 16:56:28.139709 +0000 UTC m=+3.378427165 container remove 2d011ec81048c0429f3f799fed86912d98d907974bc3a3df31ede717dd6ca06c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_elbakyan, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 11:56:28 np0005601226 systemd[1]: libpod-conmon-2d011ec81048c0429f3f799fed86912d98d907974bc3a3df31ede717dd6ca06c.scope: Deactivated successfully.
Jan 29 11:56:28 np0005601226 ceph-mon[75233]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Jan 29 11:56:28 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-16:56:28.249314) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 29 11:56:28 np0005601226 ceph-mon[75233]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Jan 29 11:56:28 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769705788249403, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7357, "num_deletes": 252, "total_data_size": 10728568, "memory_usage": 11007552, "flush_reason": "Manual Compaction"}
Jan 29 11:56:28 np0005601226 ceph-mon[75233]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Jan 29 11:56:28 np0005601226 podman[118238]: 2026-01-29 16:56:28.236878191 +0000 UTC m=+0.020392618 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:56:28 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v326: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:56:28 np0005601226 python3.9[118233]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 11:56:28 np0005601226 systemd[1]: Reloading.
Jan 29 11:56:28 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 11:56:28 np0005601226 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 11:56:29 np0005601226 podman[118238]: 2026-01-29 16:56:29.174939051 +0000 UTC m=+0.958453498 container create 712d136f971fa811016c822b7b009b313a6799d94c7b897d46f44dc8da18d985 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_shockley, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 11:56:29 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769705789299308, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 8482045, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 153, "largest_seqno": 7507, "table_properties": {"data_size": 8453801, "index_size": 18785, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8389, "raw_key_size": 77871, "raw_average_key_size": 23, "raw_value_size": 8388588, "raw_average_value_size": 2510, "num_data_blocks": 823, "num_entries": 3342, "num_filter_entries": 3342, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769705354, "oldest_key_time": 1769705354, "file_creation_time": 1769705788, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "affa2982-d59d-4189-b5dd-817a80fada55", "db_session_id": "3LVBT2JQJ5HZ0LRVKGW6", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Jan 29 11:56:29 np0005601226 ceph-mon[75233]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 1050046 microseconds, and 13857 cpu microseconds.
Jan 29 11:56:29 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-16:56:29.299367) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 8482045 bytes OK
Jan 29 11:56:29 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-16:56:29.299388) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Jan 29 11:56:29 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-16:56:29.645370) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Jan 29 11:56:29 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-16:56:29.645411) EVENT_LOG_v1 {"time_micros": 1769705789645404, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Jan 29 11:56:29 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-16:56:29.645444) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Jan 29 11:56:29 np0005601226 ceph-mon[75233]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 10696269, prev total WAL file size 10698727, number of live WAL files 2.
Jan 29 11:56:29 np0005601226 ceph-mon[75233]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 11:56:29 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-16:56:29.646974) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Jan 29 11:56:29 np0005601226 ceph-mon[75233]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Jan 29 11:56:29 np0005601226 ceph-mon[75233]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(8283KB) 13(59KB) 8(1944B)]
Jan 29 11:56:29 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769705789647069, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 8545243, "oldest_snapshot_seqno": -1}
Jan 29 11:56:29 np0005601226 systemd[1]: Started libpod-conmon-712d136f971fa811016c822b7b009b313a6799d94c7b897d46f44dc8da18d985.scope.
Jan 29 11:56:29 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:56:29 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f123af068badb2eae6e25abb35817e328d73e6c519f2e7ed31ab8462d53f7d2c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 11:56:29 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f123af068badb2eae6e25abb35817e328d73e6c519f2e7ed31ab8462d53f7d2c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:56:29 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f123af068badb2eae6e25abb35817e328d73e6c519f2e7ed31ab8462d53f7d2c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:56:29 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f123af068badb2eae6e25abb35817e328d73e6c519f2e7ed31ab8462d53f7d2c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 11:56:29 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Jan 29 11:56:29 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Jan 29 11:56:29 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Jan 29 11:56:29 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Jan 29 11:56:29 np0005601226 python3.9[118443]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:56:30 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Jan 29 11:56:30 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Jan 29 11:56:30 np0005601226 podman[118238]: 2026-01-29 16:56:30.140733737 +0000 UTC m=+1.924248204 container init 712d136f971fa811016c822b7b009b313a6799d94c7b897d46f44dc8da18d985 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 11:56:30 np0005601226 podman[118238]: 2026-01-29 16:56:30.14866007 +0000 UTC m=+1.932174477 container start 712d136f971fa811016c822b7b009b313a6799d94c7b897d46f44dc8da18d985 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_shockley, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]: {
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:    "0": [
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:        {
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:            "devices": [
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:                "/dev/loop3"
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:            ],
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:            "lv_name": "ceph_lv0",
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:            "lv_size": "21470642176",
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:            "lv_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:            "name": "ceph_lv0",
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:            "tags": {
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:                "ceph.block_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:                "ceph.cephx_lockbox_secret": "",
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:                "ceph.cluster_name": "ceph",
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:                "ceph.crush_device_class": "",
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:                "ceph.encrypted": "0",
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:                "ceph.objectstore": "bluestore",
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:                "ceph.osd_fsid": "0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1",
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:                "ceph.osd_id": "0",
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:                "ceph.type": "block",
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:                "ceph.vdo": "0",
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:                "ceph.with_tpm": "0"
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:            },
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:            "type": "block",
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:            "vg_name": "ceph_vg0"
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:        }
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:    ],
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:    "1": [
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:        {
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:            "devices": [
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:                "/dev/loop4"
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:            ],
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:            "lv_name": "ceph_lv1",
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:            "lv_size": "21470642176",
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b59b9ee3-7bef-4274-a8bf-0f9cce011ae7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:            "lv_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:            "name": "ceph_lv1",
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:            "tags": {
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:                "ceph.block_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:                "ceph.cephx_lockbox_secret": "",
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:                "ceph.cluster_name": "ceph",
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:                "ceph.crush_device_class": "",
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:                "ceph.encrypted": "0",
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:                "ceph.objectstore": "bluestore",
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:                "ceph.osd_fsid": "b59b9ee3-7bef-4274-a8bf-0f9cce011ae7",
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:                "ceph.osd_id": "1",
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:                "ceph.type": "block",
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:                "ceph.vdo": "0",
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:                "ceph.with_tpm": "0"
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:            },
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:            "type": "block",
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:            "vg_name": "ceph_vg1"
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:        }
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:    ],
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:    "2": [
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:        {
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:            "devices": [
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:                "/dev/loop5"
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:            ],
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:            "lv_name": "ceph_lv2",
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:            "lv_size": "21470642176",
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=791d3808-828d-4f85-a3de-28df49f6a6ef,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:            "lv_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:            "name": "ceph_lv2",
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:            "tags": {
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:                "ceph.block_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:                "ceph.cephx_lockbox_secret": "",
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:                "ceph.cluster_name": "ceph",
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:                "ceph.crush_device_class": "",
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:                "ceph.encrypted": "0",
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:                "ceph.objectstore": "bluestore",
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:                "ceph.osd_fsid": "791d3808-828d-4f85-a3de-28df49f6a6ef",
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:                "ceph.osd_id": "2",
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:                "ceph.type": "block",
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:                "ceph.vdo": "0",
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:                "ceph.with_tpm": "0"
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:            },
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:            "type": "block",
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:            "vg_name": "ceph_vg2"
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:        }
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]:    ]
Jan 29 11:56:30 np0005601226 gifted_shockley[118444]: }
Jan 29 11:56:30 np0005601226 systemd[1]: libpod-712d136f971fa811016c822b7b009b313a6799d94c7b897d46f44dc8da18d985.scope: Deactivated successfully.
Jan 29 11:56:30 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v327: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:56:30 np0005601226 ceph-mon[75233]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3169 keys, 8497356 bytes, temperature: kUnknown
Jan 29 11:56:30 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769705790583670, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 8497356, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8469510, "index_size": 18840, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7941, "raw_key_size": 76315, "raw_average_key_size": 24, "raw_value_size": 8405637, "raw_average_value_size": 2652, "num_data_blocks": 827, "num_entries": 3169, "num_filter_entries": 3169, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769705351, "oldest_key_time": 0, "file_creation_time": 1769705789, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "affa2982-d59d-4189-b5dd-817a80fada55", "db_session_id": "3LVBT2JQJ5HZ0LRVKGW6", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Jan 29 11:56:30 np0005601226 ceph-mon[75233]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 29 11:56:30 np0005601226 python3.9[118526]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:56:30 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-16:56:30.584008) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 8497356 bytes
Jan 29 11:56:30 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-16:56:30.846941) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 9.1 rd, 9.1 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(8.1, 0.0 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3460, records dropped: 291 output_compression: NoCompression
Jan 29 11:56:30 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-16:56:30.847008) EVENT_LOG_v1 {"time_micros": 1769705790846979, "job": 4, "event": "compaction_finished", "compaction_time_micros": 936789, "compaction_time_cpu_micros": 15230, "output_level": 6, "num_output_files": 1, "total_output_size": 8497356, "num_input_records": 3460, "num_output_records": 3169, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 29 11:56:30 np0005601226 ceph-mon[75233]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 11:56:30 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769705790848912, "job": 4, "event": "table_file_deletion", "file_number": 19}
Jan 29 11:56:30 np0005601226 ceph-mon[75233]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 11:56:30 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769705790849015, "job": 4, "event": "table_file_deletion", "file_number": 13}
Jan 29 11:56:30 np0005601226 ceph-mon[75233]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 11:56:30 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769705790849158, "job": 4, "event": "table_file_deletion", "file_number": 8}
Jan 29 11:56:30 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-16:56:29.646862) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 11:56:30 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Jan 29 11:56:30 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Jan 29 11:56:30 np0005601226 podman[118238]: 2026-01-29 16:56:30.998405047 +0000 UTC m=+2.781919504 container attach 712d136f971fa811016c822b7b009b313a6799d94c7b897d46f44dc8da18d985 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_shockley, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 11:56:30 np0005601226 podman[118238]: 2026-01-29 16:56:30.999348861 +0000 UTC m=+2.782863288 container died 712d136f971fa811016c822b7b009b313a6799d94c7b897d46f44dc8da18d985 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_shockley, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True)
Jan 29 11:56:31 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:56:31 np0005601226 python3.9[118694]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:56:31 np0005601226 systemd[1]: var-lib-containers-storage-overlay-f123af068badb2eae6e25abb35817e328d73e6c519f2e7ed31ab8462d53f7d2c-merged.mount: Deactivated successfully.
Jan 29 11:56:31 np0005601226 python3.9[118775]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:56:31 np0005601226 podman[118238]: 2026-01-29 16:56:31.894666153 +0000 UTC m=+3.678180560 container remove 712d136f971fa811016c822b7b009b313a6799d94c7b897d46f44dc8da18d985 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:56:31 np0005601226 systemd[1]: libpod-conmon-712d136f971fa811016c822b7b009b313a6799d94c7b897d46f44dc8da18d985.scope: Deactivated successfully.
Jan 29 11:56:32 np0005601226 podman[118990]: 2026-01-29 16:56:32.29681734 +0000 UTC m=+0.024356885 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:56:32 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v328: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:56:32 np0005601226 python3.9[118977]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 11:56:32 np0005601226 systemd[1]: Reloading.
Jan 29 11:56:32 np0005601226 podman[118990]: 2026-01-29 16:56:32.52079379 +0000 UTC m=+0.248333305 container create a0a00aed5838b8e25d5660cfeb599e3dcecd07e03292d102550a840c5884b0bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_pascal, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 11:56:32 np0005601226 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 11:56:32 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 11:56:32 np0005601226 systemd[1]: Starting Create netns directory...
Jan 29 11:56:32 np0005601226 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 29 11:56:32 np0005601226 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 29 11:56:32 np0005601226 systemd[1]: Finished Create netns directory.
Jan 29 11:56:32 np0005601226 systemd[1]: Started libpod-conmon-a0a00aed5838b8e25d5660cfeb599e3dcecd07e03292d102550a840c5884b0bc.scope.
Jan 29 11:56:32 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:56:32 np0005601226 podman[118990]: 2026-01-29 16:56:32.998126638 +0000 UTC m=+0.725666173 container init a0a00aed5838b8e25d5660cfeb599e3dcecd07e03292d102550a840c5884b0bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_pascal, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:56:33 np0005601226 podman[118990]: 2026-01-29 16:56:33.00304405 +0000 UTC m=+0.730583555 container start a0a00aed5838b8e25d5660cfeb599e3dcecd07e03292d102550a840c5884b0bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 29 11:56:33 np0005601226 crazy_pascal[119056]: 167 167
Jan 29 11:56:33 np0005601226 systemd[1]: libpod-a0a00aed5838b8e25d5660cfeb599e3dcecd07e03292d102550a840c5884b0bc.scope: Deactivated successfully.
Jan 29 11:56:33 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Jan 29 11:56:33 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Jan 29 11:56:33 np0005601226 podman[118990]: 2026-01-29 16:56:33.036303634 +0000 UTC m=+0.763843179 container attach a0a00aed5838b8e25d5660cfeb599e3dcecd07e03292d102550a840c5884b0bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_pascal, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 11:56:33 np0005601226 podman[118990]: 2026-01-29 16:56:33.037339032 +0000 UTC m=+0.764878547 container died a0a00aed5838b8e25d5660cfeb599e3dcecd07e03292d102550a840c5884b0bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_pascal, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 29 11:56:33 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Jan 29 11:56:33 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Jan 29 11:56:34 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v329: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:56:35 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Jan 29 11:56:35 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Jan 29 11:56:35 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Jan 29 11:56:36 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Jan 29 11:56:36 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Jan 29 11:56:36 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v330: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:56:37 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Jan 29 11:56:38 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Jan 29 11:56:38 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v331: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:56:39 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 6.0 scrub starts
Jan 29 11:56:39 np0005601226 python3.9[119214]: ansible-ansible.builtin.service_facts Invoked
Jan 29 11:56:39 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Jan 29 11:56:39 np0005601226 network[119232]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 29 11:56:39 np0005601226 network[119233]: 'network-scripts' will be removed from distribution in near future.
Jan 29 11:56:39 np0005601226 network[119235]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 29 11:56:39 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:56:39 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 9.1d scrub starts
Jan 29 11:56:40 np0005601226 ceph-osd[85858]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency slow operation observed for kv_commit, latency = 5.156776428s
Jan 29 11:56:40 np0005601226 ceph-osd[85858]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency slow operation observed for kv_sync, latency = 5.156776428s
Jan 29 11:56:40 np0005601226 ceph-osd[85858]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.157019138s, txc = 0x55d1d5faf200, txc bytes = 495, txc ios = 1, txc cost = 670495, txc onodes = 2, DB updates = 2, DB bytes = 175, cost max = 95489052 on 2026-01-29T16:50:29.495071+0000, txc max = 104 on 2026-01-29T16:52:04.377811+0000
Jan 29 11:56:40 np0005601226 systemd[1]: var-lib-containers-storage-overlay-e39f54ba25b0260f7b190d5b13f0c20b3d3a1429cc5d906cd38cb94e6adf9b40-merged.mount: Deactivated successfully.
Jan 29 11:56:40 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Jan 29 11:56:40 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Jan 29 11:56:40 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 6.0 scrub ok
Jan 29 11:56:40 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Jan 29 11:56:40 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 9.1d scrub ok
Jan 29 11:56:40 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Jan 29 11:56:40 np0005601226 ceph-osd[85858]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.372992039s, txc = 0x55d1d604bb00, txc bytes = 1340, txc ios = 1, txc cost = 671340, txc onodes = 0, DB updates = 3, DB bytes = 1088, cost max = 95489052 on 2026-01-29T16:50:29.495071+0000, txc max = 104 on 2026-01-29T16:52:04.377811+0000
Jan 29 11:56:40 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v332: 305 pgs: 2 active+clean+scrubbing, 303 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:56:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2026-01-29_16:56:40
Jan 29 11:56:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 11:56:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] do_upmap
Jan 29 11:56:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] pools ['.rgw.root', '.mgr', 'backups', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.log', 'default.rgw.control', 'vms', 'cephfs.cephfs.data', 'volumes', 'images']
Jan 29 11:56:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 11:56:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 11:56:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 11:56:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 11:56:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 11:56:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 11:56:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 11:56:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 11:56:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 11:56:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 11:56:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 11:56:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 11:56:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 11:56:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 11:56:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 11:56:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 11:56:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 11:56:41 np0005601226 podman[118990]: 2026-01-29 16:56:41.306428698 +0000 UTC m=+9.033968243 container remove a0a00aed5838b8e25d5660cfeb599e3dcecd07e03292d102550a840c5884b0bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_pascal, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 11:56:41 np0005601226 systemd[1]: libpod-conmon-a0a00aed5838b8e25d5660cfeb599e3dcecd07e03292d102550a840c5884b0bc.scope: Deactivated successfully.
Jan 29 11:56:41 np0005601226 podman[119334]: 2026-01-29 16:56:41.471257117 +0000 UTC m=+0.034724203 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:56:41 np0005601226 podman[119334]: 2026-01-29 16:56:41.878381449 +0000 UTC m=+0.441848455 container create eb14eb0b2ab30679108d1aa33107ecf7a8194a017e622619d297a516a2439686 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 29 11:56:42 np0005601226 systemd[1]: Started libpod-conmon-eb14eb0b2ab30679108d1aa33107ecf7a8194a017e622619d297a516a2439686.scope.
Jan 29 11:56:42 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:56:42 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcaa52ae62ca5c0f114475bdf3f4bddb8c3d2c3b280f9cfa427a28ef832dd21e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 11:56:42 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcaa52ae62ca5c0f114475bdf3f4bddb8c3d2c3b280f9cfa427a28ef832dd21e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:56:42 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcaa52ae62ca5c0f114475bdf3f4bddb8c3d2c3b280f9cfa427a28ef832dd21e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:56:42 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcaa52ae62ca5c0f114475bdf3f4bddb8c3d2c3b280f9cfa427a28ef832dd21e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 11:56:42 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v333: 305 pgs: 2 active+clean+scrubbing, 303 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:56:42 np0005601226 podman[119334]: 2026-01-29 16:56:42.512121371 +0000 UTC m=+1.075588417 container init eb14eb0b2ab30679108d1aa33107ecf7a8194a017e622619d297a516a2439686 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_shirley, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 11:56:42 np0005601226 podman[119334]: 2026-01-29 16:56:42.518989745 +0000 UTC m=+1.082456751 container start eb14eb0b2ab30679108d1aa33107ecf7a8194a017e622619d297a516a2439686 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 29 11:56:42 np0005601226 podman[119334]: 2026-01-29 16:56:42.747818964 +0000 UTC m=+1.311286010 container attach eb14eb0b2ab30679108d1aa33107ecf7a8194a017e622619d297a516a2439686 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_shirley, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:56:43 np0005601226 lvm[119521]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 11:56:43 np0005601226 lvm[119521]: VG ceph_vg0 finished
Jan 29 11:56:43 np0005601226 lvm[119528]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 11:56:43 np0005601226 lvm[119528]: VG ceph_vg1 finished
Jan 29 11:56:43 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Jan 29 11:56:43 np0005601226 lvm[119549]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 11:56:43 np0005601226 lvm[119549]: VG ceph_vg2 finished
Jan 29 11:56:43 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Jan 29 11:56:43 np0005601226 upbeat_shirley[119357]: {}
Jan 29 11:56:43 np0005601226 systemd[1]: libpod-eb14eb0b2ab30679108d1aa33107ecf7a8194a017e622619d297a516a2439686.scope: Deactivated successfully.
Jan 29 11:56:43 np0005601226 systemd[1]: libpod-eb14eb0b2ab30679108d1aa33107ecf7a8194a017e622619d297a516a2439686.scope: Consumed 1.046s CPU time.
Jan 29 11:56:43 np0005601226 podman[119334]: 2026-01-29 16:56:43.356738449 +0000 UTC m=+1.920205445 container died eb14eb0b2ab30679108d1aa33107ecf7a8194a017e622619d297a516a2439686 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_shirley, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 11:56:43 np0005601226 python3.9[119604]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:56:43 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Jan 29 11:56:43 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Jan 29 11:56:44 np0005601226 python3.9[119694]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:56:44 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 9.5 scrub starts
Jan 29 11:56:44 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 9.5 scrub ok
Jan 29 11:56:44 np0005601226 systemd[1]: var-lib-containers-storage-overlay-fcaa52ae62ca5c0f114475bdf3f4bddb8c3d2c3b280f9cfa427a28ef832dd21e-merged.mount: Deactivated successfully.
Jan 29 11:56:44 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v334: 305 pgs: 2 active+clean+scrubbing, 303 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:56:44 np0005601226 podman[119334]: 2026-01-29 16:56:44.773160575 +0000 UTC m=+3.336627571 container remove eb14eb0b2ab30679108d1aa33107ecf7a8194a017e622619d297a516a2439686 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_shirley, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 11:56:44 np0005601226 python3.9[119846]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:56:44 np0005601226 systemd[1]: libpod-conmon-eb14eb0b2ab30679108d1aa33107ecf7a8194a017e622619d297a516a2439686.scope: Deactivated successfully.
Jan 29 11:56:44 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 11:56:45 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:56:45 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:56:45 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 11:56:45 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 9.b scrub starts
Jan 29 11:56:45 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 9.b scrub ok
Jan 29 11:56:45 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:56:45 np0005601226 python3.9[119998]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:56:46 np0005601226 python3.9[120101]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:56:46 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:56:46 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:56:46 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v335: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:56:47 np0005601226 python3.9[120253]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 29 11:56:47 np0005601226 systemd[1]: Starting Time & Date Service...
Jan 29 11:56:47 np0005601226 systemd[1]: Started Time & Date Service.
Jan 29 11:56:47 np0005601226 ceph-mon[75233]: log_channel(cluster) log [WRN] : Health check failed: 1 OSD(s) experiencing slow operations in BlueStore (BLUESTORE_SLOW_OP_ALERT)
Jan 29 11:56:47 np0005601226 python3.9[120409]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:56:47 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 9.13 scrub starts
Jan 29 11:56:47 np0005601226 ceph-osd[87958]: log_channel(cluster) log [DBG] : 9.13 scrub ok
Jan 29 11:56:48 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Jan 29 11:56:48 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Jan 29 11:56:48 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v336: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:56:48 np0005601226 python3.9[120561]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:56:48 np0005601226 ceph-mon[75233]: Health check failed: 1 OSD(s) experiencing slow operations in BlueStore (BLUESTORE_SLOW_OP_ALERT)
Jan 29 11:56:48 np0005601226 python3.9[120639]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:56:49 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 6.d scrub starts
Jan 29 11:56:49 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Jan 29 11:56:49 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 6.d scrub ok
Jan 29 11:56:49 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Jan 29 11:56:49 np0005601226 python3.9[120791]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:56:49 np0005601226 python3.9[120869]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.cez0adfa recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:56:50 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:56:50 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 6.e scrub starts
Jan 29 11:56:50 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 6.e scrub ok
Jan 29 11:56:50 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v337: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:56:50 np0005601226 python3.9[121021]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:56:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 11:56:51 np0005601226 python3.9[121099]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:56:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:56:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 11:56:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:56:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 11:56:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:56:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 11:56:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:56:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 11:56:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:56:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 11:56:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:56:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.8121643970586627e-06 of space, bias 4.0, pg target 0.002174597276470395 quantized to 16 (current 16)
Jan 29 11:56:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:56:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 11:56:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:56:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 29 11:56:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:56:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 29 11:56:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:56:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 11:56:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:56:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 29 11:56:51 np0005601226 python3.9[121251]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:56:52 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Jan 29 11:56:52 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Jan 29 11:56:52 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v338: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:56:52 np0005601226 python3[121404]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 29 11:56:53 np0005601226 python3.9[121556]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:56:53 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 6.c scrub starts
Jan 29 11:56:53 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 6.c scrub ok
Jan 29 11:56:53 np0005601226 python3.9[121634]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:56:54 np0005601226 python3.9[121786]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:56:54 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v339: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:56:55 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:56:55 np0005601226 python3.9[121911]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769705813.6569283-308-72455939213585/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:56:55 np0005601226 python3.9[122063]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:56:56 np0005601226 python3.9[122141]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:56:56 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v340: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:56:57 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 6.b scrub starts
Jan 29 11:56:57 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 6.b scrub ok
Jan 29 11:56:57 np0005601226 python3.9[122293]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:56:57 np0005601226 python3.9[122371]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:56:58 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Jan 29 11:56:58 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Jan 29 11:56:58 np0005601226 python3.9[122523]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:56:58 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v341: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:56:58 np0005601226 python3.9[122601]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:56:59 np0005601226 python3.9[122753]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:57:00 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:57:00 np0005601226 python3.9[122908]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:57:00 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v342: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:57:01 np0005601226 python3.9[123060]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:57:01 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Jan 29 11:57:01 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Jan 29 11:57:01 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Jan 29 11:57:01 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Jan 29 11:57:01 np0005601226 python3.9[123212]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:57:02 np0005601226 python3.9[123364]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 29 11:57:02 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v343: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:57:02 np0005601226 python3.9[123516]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 29 11:57:03 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Jan 29 11:57:03 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Jan 29 11:57:03 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 9.d scrub starts
Jan 29 11:57:03 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 9.d scrub ok
Jan 29 11:57:03 np0005601226 systemd[1]: session-40.scope: Deactivated successfully.
Jan 29 11:57:03 np0005601226 systemd[1]: session-40.scope: Consumed 24.794s CPU time.
Jan 29 11:57:03 np0005601226 systemd-logind[823]: Session 40 logged out. Waiting for processes to exit.
Jan 29 11:57:03 np0005601226 systemd-logind[823]: Removed session 40.
Jan 29 11:57:04 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Jan 29 11:57:04 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Jan 29 11:57:04 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v344: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:57:05 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Jan 29 11:57:05 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Jan 29 11:57:05 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:57:06 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 9.3 scrub starts
Jan 29 11:57:06 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 9.3 scrub ok
Jan 29 11:57:06 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v345: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:57:07 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Jan 29 11:57:07 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Jan 29 11:57:08 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v346: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:57:08 np0005601226 systemd-logind[823]: New session 41 of user zuul.
Jan 29 11:57:09 np0005601226 systemd[1]: Started Session 41 of User zuul.
Jan 29 11:57:09 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Jan 29 11:57:09 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Jan 29 11:57:10 np0005601226 python3.9[123696]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 29 11:57:10 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Jan 29 11:57:10 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v347: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:57:10 np0005601226 ceph-osd[85858]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Jan 29 11:57:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 11:57:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 11:57:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 11:57:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 11:57:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 11:57:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 11:57:10 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:57:11 np0005601226 python3.9[123848]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 11:57:11 np0005601226 python3.9[124002]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Jan 29 11:57:12 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 9.0 scrub starts
Jan 29 11:57:12 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 9.0 scrub ok
Jan 29 11:57:12 np0005601226 python3.9[124154]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.wwdwo88_ follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:57:12 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v348: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:57:13 np0005601226 python3.9[124279]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.wwdwo88_ mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769705832.0044045-44-140786397531906/.source.wwdwo88_ _original_basename=.5lqwlkti follow=False checksum=d10fe1246d3b0c4cf121a2aacbcf6675c8f764fa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:57:14 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 9.a scrub starts
Jan 29 11:57:14 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 9.a scrub ok
Jan 29 11:57:14 np0005601226 python3.9[124431]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 11:57:14 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v349: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:57:15 np0005601226 python3.9[124583]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCd5zNwTk49BkWGJNPDDV/sc8hC/1zDCe6Dm5iJkZiaTTx9YpkhKdCOUrRj90bot3wB6xIO/H2DSoKOkeo0As62fzH0xHF53uU6JNXvb6euOPWbHiiMCNCjWX81oYAcHSE7UJNEQ8Di2mIFdZ+lWYVfbouhGZTWyrOaad7D3ObU5w0nYF3Svd9NoM+yhNM4TjxbbH653CR5t/oLqngocrbaNwcIsYjSEpqRSHKsB/r7XElll0nOrcsJ+7ZpBcNsu8N3YnkrqBCwWiEJE0cPWTbnwdP3Wy/VTksjGbm2TK6WnQTlO4S36fL5UpagzyDSbcmKBR//t5LKlm+WfzAo6YaZvVpXPjdNnv7I6TMmtAK2Kn3hLtVI01JGwvN4H+Wd1NI9eDwujizBCnN/52nuEaGmPxFCXZeuvWEwweoQrRDzowSQmS4sPw2vTsgxQjeVHBvbqfgOYyHyoImdEsi0xSRY+hKri8iN+bsUbpSpN5Dks+Uuf35l1VvxjLuEdIIBKQ0=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIcaeXdS3luBZy5m5YYRna/udoQoiERyfOY7P4nannEI#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFRUXqzSTh9ejcnCJvsqBSbF8l/qFP5rg9YVnq3dh578B8Ap3mLftPcCgZC4ZF9/O1SPID31RHc0Pa6BgTTSBl0=#012 create=True mode=0644 path=/tmp/ansible.wwdwo88_ state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:57:15 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:57:16 np0005601226 python3.9[124735]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.wwdwo88_' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:57:16 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v350: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:57:17 np0005601226 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 29 11:57:17 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Jan 29 11:57:17 np0005601226 python3.9[124889]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.wwdwo88_ state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:57:17 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Jan 29 11:57:17 np0005601226 systemd[1]: session-41.scope: Deactivated successfully.
Jan 29 11:57:17 np0005601226 systemd[1]: session-41.scope: Consumed 4.586s CPU time.
Jan 29 11:57:17 np0005601226 systemd-logind[823]: Session 41 logged out. Waiting for processes to exit.
Jan 29 11:57:17 np0005601226 systemd-logind[823]: Removed session 41.
Jan 29 11:57:18 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v351: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:57:19 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Jan 29 11:57:19 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Jan 29 11:57:20 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Jan 29 11:57:20 np0005601226 ceph-osd[86917]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Jan 29 11:57:20 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v352: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:57:20 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:57:22 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v353: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:57:23 np0005601226 systemd-logind[823]: New session 42 of user zuul.
Jan 29 11:57:23 np0005601226 systemd[1]: Started Session 42 of User zuul.
Jan 29 11:57:24 np0005601226 python3.9[125069]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 11:57:24 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v354: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:57:25 np0005601226 python3.9[125225]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 29 11:57:26 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:57:26 np0005601226 python3.9[125379]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 29 11:57:26 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v355: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:57:27 np0005601226 python3.9[125532]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:57:27 np0005601226 python3.9[125685]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 11:57:28 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v356: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:57:28 np0005601226 python3.9[125837]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:57:29 np0005601226 systemd-logind[823]: Session 42 logged out. Waiting for processes to exit.
Jan 29 11:57:29 np0005601226 systemd[1]: session-42.scope: Deactivated successfully.
Jan 29 11:57:29 np0005601226 systemd[1]: session-42.scope: Consumed 3.352s CPU time.
Jan 29 11:57:29 np0005601226 systemd-logind[823]: Removed session 42.
Jan 29 11:57:30 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v357: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:57:31 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:57:32 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v358: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:57:34 np0005601226 systemd-logind[823]: New session 43 of user zuul.
Jan 29 11:57:34 np0005601226 systemd[1]: Started Session 43 of User zuul.
Jan 29 11:57:34 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v359: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:57:35 np0005601226 python3.9[126015]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 11:57:35 np0005601226 systemd[1]: session-18.scope: Deactivated successfully.
Jan 29 11:57:35 np0005601226 systemd[1]: session-18.scope: Consumed 1min 32.125s CPU time.
Jan 29 11:57:35 np0005601226 systemd-logind[823]: Session 18 logged out. Waiting for processes to exit.
Jan 29 11:57:35 np0005601226 systemd-logind[823]: Removed session 18.
Jan 29 11:57:36 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:57:36 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v360: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:57:36 np0005601226 python3.9[126171]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 29 11:57:37 np0005601226 python3.9[126255]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 29 11:57:38 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v361: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:57:40 np0005601226 python3.9[126406]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:57:40 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v362: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:57:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2026-01-29_16:57:40
Jan 29 11:57:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 11:57:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] do_upmap
Jan 29 11:57:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] pools ['default.rgw.log', 'volumes', 'default.rgw.control', 'images', 'default.rgw.meta', 'vms', 'backups', 'cephfs.cephfs.meta', '.mgr', '.rgw.root', 'cephfs.cephfs.data']
Jan 29 11:57:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 11:57:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 11:57:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 11:57:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 11:57:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 11:57:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 11:57:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 11:57:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 11:57:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 11:57:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 11:57:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 11:57:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 11:57:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 11:57:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 11:57:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 11:57:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 11:57:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 11:57:41 np0005601226 python3.9[126557]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 29 11:57:41 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:57:41 np0005601226 python3.9[126707]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 11:57:42 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v363: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:57:42 np0005601226 python3.9[126857]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 11:57:43 np0005601226 systemd[1]: session-43.scope: Deactivated successfully.
Jan 29 11:57:43 np0005601226 systemd[1]: session-43.scope: Consumed 5.362s CPU time.
Jan 29 11:57:43 np0005601226 systemd-logind[823]: Session 43 logged out. Waiting for processes to exit.
Jan 29 11:57:43 np0005601226 systemd-logind[823]: Removed session 43.
Jan 29 11:57:44 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v364: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:57:46 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 11:57:46 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 11:57:46 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 11:57:46 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 11:57:46 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 11:57:46 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:57:46 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:57:46 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 11:57:46 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 11:57:46 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 11:57:46 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 11:57:46 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 11:57:46 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 11:57:46 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v365: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:57:46 np0005601226 podman[127027]: 2026-01-29 16:57:46.668613361 +0000 UTC m=+0.020015060 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:57:47 np0005601226 podman[127027]: 2026-01-29 16:57:47.180129712 +0000 UTC m=+0.531531401 container create b586ff45a8bf1270540280ce52ae10f466e6192958c88d18260547909e105c9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_bhabha, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 29 11:57:47 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 11:57:47 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:57:47 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 11:57:47 np0005601226 systemd[1]: Started libpod-conmon-b586ff45a8bf1270540280ce52ae10f466e6192958c88d18260547909e105c9d.scope.
Jan 29 11:57:47 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:57:47 np0005601226 podman[127027]: 2026-01-29 16:57:47.46127795 +0000 UTC m=+0.812679619 container init b586ff45a8bf1270540280ce52ae10f466e6192958c88d18260547909e105c9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:57:47 np0005601226 podman[127027]: 2026-01-29 16:57:47.470809231 +0000 UTC m=+0.822210920 container start b586ff45a8bf1270540280ce52ae10f466e6192958c88d18260547909e105c9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_bhabha, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 11:57:47 np0005601226 inspiring_bhabha[127044]: 167 167
Jan 29 11:57:47 np0005601226 systemd[1]: libpod-b586ff45a8bf1270540280ce52ae10f466e6192958c88d18260547909e105c9d.scope: Deactivated successfully.
Jan 29 11:57:47 np0005601226 podman[127027]: 2026-01-29 16:57:47.657842892 +0000 UTC m=+1.009244541 container attach b586ff45a8bf1270540280ce52ae10f466e6192958c88d18260547909e105c9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 29 11:57:47 np0005601226 podman[127027]: 2026-01-29 16:57:47.659145449 +0000 UTC m=+1.010547108 container died b586ff45a8bf1270540280ce52ae10f466e6192958c88d18260547909e105c9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_bhabha, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 11:57:47 np0005601226 systemd[1]: var-lib-containers-storage-overlay-d8fc49b593ba77867860ec7c07ea4e7d231dafc08d72d32eaeec1688778c272e-merged.mount: Deactivated successfully.
Jan 29 11:57:47 np0005601226 podman[127027]: 2026-01-29 16:57:47.883839599 +0000 UTC m=+1.235241258 container remove b586ff45a8bf1270540280ce52ae10f466e6192958c88d18260547909e105c9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 29 11:57:47 np0005601226 systemd[1]: libpod-conmon-b586ff45a8bf1270540280ce52ae10f466e6192958c88d18260547909e105c9d.scope: Deactivated successfully.
Jan 29 11:57:48 np0005601226 podman[127068]: 2026-01-29 16:57:48.065441666 +0000 UTC m=+0.102052254 container create 9be2cc26b2467307bf8d2c95526686f13f0dbd5abcaf960fc8629cbfeb9f7e48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 11:57:48 np0005601226 podman[127068]: 2026-01-29 16:57:47.982821816 +0000 UTC m=+0.019432434 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:57:48 np0005601226 systemd[1]: Started libpod-conmon-9be2cc26b2467307bf8d2c95526686f13f0dbd5abcaf960fc8629cbfeb9f7e48.scope.
Jan 29 11:57:48 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:57:48 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d12c146651d1278674eaacc74beec29970e5c8afe5f113967d3d180c9510d8e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 11:57:48 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d12c146651d1278674eaacc74beec29970e5c8afe5f113967d3d180c9510d8e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:57:48 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d12c146651d1278674eaacc74beec29970e5c8afe5f113967d3d180c9510d8e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:57:48 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d12c146651d1278674eaacc74beec29970e5c8afe5f113967d3d180c9510d8e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 11:57:48 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d12c146651d1278674eaacc74beec29970e5c8afe5f113967d3d180c9510d8e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 11:57:48 np0005601226 podman[127068]: 2026-01-29 16:57:48.167703315 +0000 UTC m=+0.204313933 container init 9be2cc26b2467307bf8d2c95526686f13f0dbd5abcaf960fc8629cbfeb9f7e48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_hofstadter, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 11:57:48 np0005601226 podman[127068]: 2026-01-29 16:57:48.172553892 +0000 UTC m=+0.209164480 container start 9be2cc26b2467307bf8d2c95526686f13f0dbd5abcaf960fc8629cbfeb9f7e48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_hofstadter, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Jan 29 11:57:48 np0005601226 podman[127068]: 2026-01-29 16:57:48.228763121 +0000 UTC m=+0.265373719 container attach 9be2cc26b2467307bf8d2c95526686f13f0dbd5abcaf960fc8629cbfeb9f7e48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 29 11:57:48 np0005601226 systemd-logind[823]: New session 44 of user zuul.
Jan 29 11:57:48 np0005601226 systemd[1]: Started Session 44 of User zuul.
Jan 29 11:57:48 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v366: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:57:48 np0005601226 blissful_hofstadter[127085]: --> passed data devices: 0 physical, 3 LVM
Jan 29 11:57:48 np0005601226 blissful_hofstadter[127085]: --> All data devices are unavailable
Jan 29 11:57:48 np0005601226 systemd[1]: libpod-9be2cc26b2467307bf8d2c95526686f13f0dbd5abcaf960fc8629cbfeb9f7e48.scope: Deactivated successfully.
Jan 29 11:57:48 np0005601226 podman[127068]: 2026-01-29 16:57:48.611066416 +0000 UTC m=+0.647677004 container died 9be2cc26b2467307bf8d2c95526686f13f0dbd5abcaf960fc8629cbfeb9f7e48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_hofstadter, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 11:57:48 np0005601226 systemd[1]: var-lib-containers-storage-overlay-0d12c146651d1278674eaacc74beec29970e5c8afe5f113967d3d180c9510d8e-merged.mount: Deactivated successfully.
Jan 29 11:57:48 np0005601226 podman[127068]: 2026-01-29 16:57:48.748858645 +0000 UTC m=+0.785469233 container remove 9be2cc26b2467307bf8d2c95526686f13f0dbd5abcaf960fc8629cbfeb9f7e48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 29 11:57:48 np0005601226 systemd[1]: libpod-conmon-9be2cc26b2467307bf8d2c95526686f13f0dbd5abcaf960fc8629cbfeb9f7e48.scope: Deactivated successfully.
Jan 29 11:57:49 np0005601226 podman[127332]: 2026-01-29 16:57:49.12448663 +0000 UTC m=+0.032252669 container create 9224775b7f1d0c43fe6f7155332e2f54390e910b9b409af830de745a8406413c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_knuth, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 29 11:57:49 np0005601226 systemd[1]: Started libpod-conmon-9224775b7f1d0c43fe6f7155332e2f54390e910b9b409af830de745a8406413c.scope.
Jan 29 11:57:49 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:57:49 np0005601226 podman[127332]: 2026-01-29 16:57:49.20146943 +0000 UTC m=+0.109235489 container init 9224775b7f1d0c43fe6f7155332e2f54390e910b9b409af830de745a8406413c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_knuth, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 29 11:57:49 np0005601226 podman[127332]: 2026-01-29 16:57:49.108493465 +0000 UTC m=+0.016259534 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:57:49 np0005601226 podman[127332]: 2026-01-29 16:57:49.206953426 +0000 UTC m=+0.114719465 container start 9224775b7f1d0c43fe6f7155332e2f54390e910b9b409af830de745a8406413c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 29 11:57:49 np0005601226 beautiful_knuth[127348]: 167 167
Jan 29 11:57:49 np0005601226 systemd[1]: libpod-9224775b7f1d0c43fe6f7155332e2f54390e910b9b409af830de745a8406413c.scope: Deactivated successfully.
Jan 29 11:57:49 np0005601226 podman[127332]: 2026-01-29 16:57:49.214102029 +0000 UTC m=+0.121868088 container attach 9224775b7f1d0c43fe6f7155332e2f54390e910b9b409af830de745a8406413c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 29 11:57:49 np0005601226 podman[127332]: 2026-01-29 16:57:49.215637333 +0000 UTC m=+0.123403372 container died 9224775b7f1d0c43fe6f7155332e2f54390e910b9b409af830de745a8406413c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_knuth, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 29 11:57:49 np0005601226 python3.9[127319]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 11:57:49 np0005601226 systemd[1]: var-lib-containers-storage-overlay-4d5de2cdd26bf64da655d0de68c719f56221aa97a803c4ba6148bbb17f6dff07-merged.mount: Deactivated successfully.
Jan 29 11:57:49 np0005601226 podman[127332]: 2026-01-29 16:57:49.274835106 +0000 UTC m=+0.182601155 container remove 9224775b7f1d0c43fe6f7155332e2f54390e910b9b409af830de745a8406413c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_knuth, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 11:57:49 np0005601226 systemd[1]: libpod-conmon-9224775b7f1d0c43fe6f7155332e2f54390e910b9b409af830de745a8406413c.scope: Deactivated successfully.
Jan 29 11:57:49 np0005601226 podman[127378]: 2026-01-29 16:57:49.398871695 +0000 UTC m=+0.041306626 container create 22055ed19eaf9b563e1857a2ec451f1177b9c90de50a3a1fd692e2edc34c971c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_moser, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 11:57:49 np0005601226 systemd[1]: Started libpod-conmon-22055ed19eaf9b563e1857a2ec451f1177b9c90de50a3a1fd692e2edc34c971c.scope.
Jan 29 11:57:49 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:57:49 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b0be4615e30128a2e9954d2255d15e85a56cb4d8f5ac233adaf8c747f66a8ec/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 11:57:49 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b0be4615e30128a2e9954d2255d15e85a56cb4d8f5ac233adaf8c747f66a8ec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:57:49 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b0be4615e30128a2e9954d2255d15e85a56cb4d8f5ac233adaf8c747f66a8ec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:57:49 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b0be4615e30128a2e9954d2255d15e85a56cb4d8f5ac233adaf8c747f66a8ec/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 11:57:49 np0005601226 podman[127378]: 2026-01-29 16:57:49.378787654 +0000 UTC m=+0.021222605 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:57:49 np0005601226 podman[127378]: 2026-01-29 16:57:49.571897068 +0000 UTC m=+0.214331999 container init 22055ed19eaf9b563e1857a2ec451f1177b9c90de50a3a1fd692e2edc34c971c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_moser, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 29 11:57:49 np0005601226 podman[127378]: 2026-01-29 16:57:49.579922605 +0000 UTC m=+0.222357536 container start 22055ed19eaf9b563e1857a2ec451f1177b9c90de50a3a1fd692e2edc34c971c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_moser, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 11:57:49 np0005601226 podman[127378]: 2026-01-29 16:57:49.59166651 +0000 UTC m=+0.234101431 container attach 22055ed19eaf9b563e1857a2ec451f1177b9c90de50a3a1fd692e2edc34c971c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_moser, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 11:57:49 np0005601226 jovial_moser[127394]: {
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:    "0": [
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:        {
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:            "devices": [
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:                "/dev/loop3"
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:            ],
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:            "lv_name": "ceph_lv0",
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:            "lv_size": "21470642176",
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:            "lv_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:            "name": "ceph_lv0",
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:            "tags": {
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:                "ceph.block_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:                "ceph.cephx_lockbox_secret": "",
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:                "ceph.cluster_name": "ceph",
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:                "ceph.crush_device_class": "",
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:                "ceph.encrypted": "0",
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:                "ceph.objectstore": "bluestore",
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:                "ceph.osd_fsid": "0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1",
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:                "ceph.osd_id": "0",
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:                "ceph.type": "block",
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:                "ceph.vdo": "0",
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:                "ceph.with_tpm": "0"
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:            },
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:            "type": "block",
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:            "vg_name": "ceph_vg0"
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:        }
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:    ],
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:    "1": [
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:        {
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:            "devices": [
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:                "/dev/loop4"
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:            ],
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:            "lv_name": "ceph_lv1",
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:            "lv_size": "21470642176",
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b59b9ee3-7bef-4274-a8bf-0f9cce011ae7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:            "lv_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:            "name": "ceph_lv1",
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:            "tags": {
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:                "ceph.block_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:                "ceph.cephx_lockbox_secret": "",
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:                "ceph.cluster_name": "ceph",
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:                "ceph.crush_device_class": "",
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:                "ceph.encrypted": "0",
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:                "ceph.objectstore": "bluestore",
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:                "ceph.osd_fsid": "b59b9ee3-7bef-4274-a8bf-0f9cce011ae7",
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:                "ceph.osd_id": "1",
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:                "ceph.type": "block",
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:                "ceph.vdo": "0",
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:                "ceph.with_tpm": "0"
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:            },
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:            "type": "block",
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:            "vg_name": "ceph_vg1"
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:        }
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:    ],
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:    "2": [
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:        {
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:            "devices": [
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:                "/dev/loop5"
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:            ],
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:            "lv_name": "ceph_lv2",
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:            "lv_size": "21470642176",
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=791d3808-828d-4f85-a3de-28df49f6a6ef,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:            "lv_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:            "name": "ceph_lv2",
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:            "tags": {
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:                "ceph.block_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:                "ceph.cephx_lockbox_secret": "",
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:                "ceph.cluster_name": "ceph",
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:                "ceph.crush_device_class": "",
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:                "ceph.encrypted": "0",
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:                "ceph.objectstore": "bluestore",
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:                "ceph.osd_fsid": "791d3808-828d-4f85-a3de-28df49f6a6ef",
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:                "ceph.osd_id": "2",
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:                "ceph.type": "block",
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:                "ceph.vdo": "0",
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:                "ceph.with_tpm": "0"
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:            },
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:            "type": "block",
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:            "vg_name": "ceph_vg2"
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:        }
Jan 29 11:57:49 np0005601226 jovial_moser[127394]:    ]
Jan 29 11:57:49 np0005601226 jovial_moser[127394]: }
Jan 29 11:57:49 np0005601226 systemd[1]: libpod-22055ed19eaf9b563e1857a2ec451f1177b9c90de50a3a1fd692e2edc34c971c.scope: Deactivated successfully.
Jan 29 11:57:49 np0005601226 podman[127378]: 2026-01-29 16:57:49.847835447 +0000 UTC m=+0.490270368 container died 22055ed19eaf9b563e1857a2ec451f1177b9c90de50a3a1fd692e2edc34c971c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 11:57:50 np0005601226 systemd[1]: var-lib-containers-storage-overlay-1b0be4615e30128a2e9954d2255d15e85a56cb4d8f5ac233adaf8c747f66a8ec-merged.mount: Deactivated successfully.
Jan 29 11:57:50 np0005601226 podman[127378]: 2026-01-29 16:57:50.348293562 +0000 UTC m=+0.990728503 container remove 22055ed19eaf9b563e1857a2ec451f1177b9c90de50a3a1fd692e2edc34c971c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 29 11:57:50 np0005601226 systemd[1]: libpod-conmon-22055ed19eaf9b563e1857a2ec451f1177b9c90de50a3a1fd692e2edc34c971c.scope: Deactivated successfully.
Jan 29 11:57:50 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v367: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:57:50 np0005601226 podman[127584]: 2026-01-29 16:57:50.688467938 +0000 UTC m=+0.020935876 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:57:50 np0005601226 podman[127584]: 2026-01-29 16:57:50.815191574 +0000 UTC m=+0.147659502 container create 488d32fbe0964314e33ffa99f80df851442e4044ba5f96bff17a415ce66e0a84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0)
Jan 29 11:57:50 np0005601226 python3.9[127642]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 11:57:51 np0005601226 systemd[1]: Started libpod-conmon-488d32fbe0964314e33ffa99f80df851442e4044ba5f96bff17a415ce66e0a84.scope.
Jan 29 11:57:51 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:57:51 np0005601226 podman[127584]: 2026-01-29 16:57:51.077632929 +0000 UTC m=+0.410100877 container init 488d32fbe0964314e33ffa99f80df851442e4044ba5f96bff17a415ce66e0a84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_franklin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 11:57:51 np0005601226 podman[127584]: 2026-01-29 16:57:51.082707933 +0000 UTC m=+0.415175821 container start 488d32fbe0964314e33ffa99f80df851442e4044ba5f96bff17a415ce66e0a84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_franklin, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 11:57:51 np0005601226 strange_franklin[127668]: 167 167
Jan 29 11:57:51 np0005601226 systemd[1]: libpod-488d32fbe0964314e33ffa99f80df851442e4044ba5f96bff17a415ce66e0a84.scope: Deactivated successfully.
Jan 29 11:57:51 np0005601226 podman[127584]: 2026-01-29 16:57:51.099497331 +0000 UTC m=+0.431965239 container attach 488d32fbe0964314e33ffa99f80df851442e4044ba5f96bff17a415ce66e0a84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 11:57:51 np0005601226 podman[127584]: 2026-01-29 16:57:51.100218821 +0000 UTC m=+0.432686729 container died 488d32fbe0964314e33ffa99f80df851442e4044ba5f96bff17a415ce66e0a84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_franklin, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 11:57:51 np0005601226 systemd[1]: var-lib-containers-storage-overlay-a29d208687f569ab85b341a7de385432447914fdfe4bd8ad43ba74715ab25374-merged.mount: Deactivated successfully.
Jan 29 11:57:51 np0005601226 podman[127584]: 2026-01-29 16:57:51.178486277 +0000 UTC m=+0.510954175 container remove 488d32fbe0964314e33ffa99f80df851442e4044ba5f96bff17a415ce66e0a84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_franklin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 11:57:51 np0005601226 systemd[1]: libpod-conmon-488d32fbe0964314e33ffa99f80df851442e4044ba5f96bff17a415ce66e0a84.scope: Deactivated successfully.
Jan 29 11:57:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 11:57:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:57:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 11:57:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:57:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 11:57:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:57:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 11:57:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:57:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 11:57:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:57:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 11:57:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:57:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.8121643970586627e-06 of space, bias 4.0, pg target 0.002174597276470395 quantized to 16 (current 16)
Jan 29 11:57:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:57:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 11:57:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:57:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 29 11:57:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:57:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 29 11:57:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:57:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 11:57:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:57:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 29 11:57:51 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:57:51 np0005601226 podman[127818]: 2026-01-29 16:57:51.332716395 +0000 UTC m=+0.052734601 container create 1b9926c0eb4ec483299c64b4b081f05f4a72fdb0d0dae2f3ab9d639eec07b508 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 29 11:57:51 np0005601226 systemd[1]: Started libpod-conmon-1b9926c0eb4ec483299c64b4b081f05f4a72fdb0d0dae2f3ab9d639eec07b508.scope.
Jan 29 11:57:51 np0005601226 podman[127818]: 2026-01-29 16:57:51.304189473 +0000 UTC m=+0.024207729 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:57:51 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:57:51 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36a572987a17ed78219ebe68ebb38802c42127f0cd00d373aaa34ccb18aa05aa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 11:57:51 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36a572987a17ed78219ebe68ebb38802c42127f0cd00d373aaa34ccb18aa05aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:57:51 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36a572987a17ed78219ebe68ebb38802c42127f0cd00d373aaa34ccb18aa05aa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:57:51 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36a572987a17ed78219ebe68ebb38802c42127f0cd00d373aaa34ccb18aa05aa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 11:57:51 np0005601226 podman[127818]: 2026-01-29 16:57:51.451615197 +0000 UTC m=+0.171633403 container init 1b9926c0eb4ec483299c64b4b081f05f4a72fdb0d0dae2f3ab9d639eec07b508 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_swartz, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 29 11:57:51 np0005601226 podman[127818]: 2026-01-29 16:57:51.456880447 +0000 UTC m=+0.176898653 container start 1b9926c0eb4ec483299c64b4b081f05f4a72fdb0d0dae2f3ab9d639eec07b508 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 29 11:57:51 np0005601226 podman[127818]: 2026-01-29 16:57:51.483633458 +0000 UTC m=+0.203651694 container attach 1b9926c0eb4ec483299c64b4b081f05f4a72fdb0d0dae2f3ab9d639eec07b508 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030)
Jan 29 11:57:51 np0005601226 python3.9[127833]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 11:57:52 np0005601226 lvm[128046]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 11:57:52 np0005601226 lvm[128046]: VG ceph_vg0 finished
Jan 29 11:57:52 np0005601226 lvm[128064]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 11:57:52 np0005601226 lvm[128064]: VG ceph_vg1 finished
Jan 29 11:57:52 np0005601226 lvm[128071]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 11:57:52 np0005601226 lvm[128071]: VG ceph_vg2 finished
Jan 29 11:57:52 np0005601226 epic_swartz[127838]: {}
Jan 29 11:57:52 np0005601226 systemd[1]: libpod-1b9926c0eb4ec483299c64b4b081f05f4a72fdb0d0dae2f3ab9d639eec07b508.scope: Deactivated successfully.
Jan 29 11:57:52 np0005601226 podman[128074]: 2026-01-29 16:57:52.23602493 +0000 UTC m=+0.025961459 container died 1b9926c0eb4ec483299c64b4b081f05f4a72fdb0d0dae2f3ab9d639eec07b508 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_swartz, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 29 11:57:52 np0005601226 python3.9[128069]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:57:52 np0005601226 systemd[1]: var-lib-containers-storage-overlay-36a572987a17ed78219ebe68ebb38802c42127f0cd00d373aaa34ccb18aa05aa-merged.mount: Deactivated successfully.
Jan 29 11:57:52 np0005601226 podman[128074]: 2026-01-29 16:57:52.274801663 +0000 UTC m=+0.064738182 container remove 1b9926c0eb4ec483299c64b4b081f05f4a72fdb0d0dae2f3ab9d639eec07b508 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_swartz, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 29 11:57:52 np0005601226 systemd[1]: libpod-conmon-1b9926c0eb4ec483299c64b4b081f05f4a72fdb0d0dae2f3ab9d639eec07b508.scope: Deactivated successfully.
Jan 29 11:57:52 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 11:57:52 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:57:52 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 11:57:52 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:57:52 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v368: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:57:52 np0005601226 python3.9[128237]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769705871.6935687-60-80657801372881/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=d4a1b85252f07fad9de73c4218ff3eabbaf852e0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:57:53 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:57:53 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:57:53 np0005601226 python3.9[128389]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:57:53 np0005601226 python3.9[128512]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769705873.0582435-60-22432985641953/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=84ae32ce076c1cfe7acb013df6a24b80036b26fd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:57:54 np0005601226 python3.9[128664]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:57:54 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v369: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:57:54 np0005601226 python3.9[128787]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769705874.0276637-60-103916241559119/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=74f0b7ecc6e485945eb1c7c7e8c77d170fad494b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:57:56 np0005601226 python3.9[128939]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 11:57:56 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:57:56 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v370: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:57:56 np0005601226 python3.9[129091]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 11:57:57 np0005601226 python3.9[129243]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:57:58 np0005601226 python3.9[129366]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769705877.0467112-119-269291917974047/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=f5b290d3e37d50d6e009107d46db9043685228dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:57:58 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v371: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:57:58 np0005601226 python3.9[129518]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:57:59 np0005601226 python3.9[129641]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769705878.1410453-119-17698315938880/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=919165b571ee8b5575b393ac525dafb6cd394639 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:57:59 np0005601226 python3.9[129793]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:58:00 np0005601226 python3.9[129916]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769705879.1509705-119-103623033001583/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=ea402033a1787aa655bda2682eec5868fe94e99b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:58:00 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v372: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:58:00 np0005601226 python3.9[130068]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 11:58:01 np0005601226 python3.9[130220]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 11:58:01 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:58:01 np0005601226 python3.9[130372]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:58:02 np0005601226 python3.9[130495]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769705881.4301991-178-231686252665898/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=eb9eaca494a72aee60de01ddab73811bf580e73c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:58:02 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v373: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:58:02 np0005601226 python3.9[130647]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:58:03 np0005601226 python3.9[130770]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769705882.4483724-178-162333139579667/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=919165b571ee8b5575b393ac525dafb6cd394639 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:58:03 np0005601226 python3.9[130922]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:58:04 np0005601226 python3.9[131045]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769705883.4502814-178-73738027782887/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=136d76ea15cce9e950749e35df723942c22fe2bd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:58:04 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v374: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:58:05 np0005601226 python3.9[131197]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 11:58:06 np0005601226 python3.9[131349]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:58:06 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:58:06 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v375: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:58:06 np0005601226 python3.9[131472]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769705885.584962-246-78005851740323/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=f87409c7a2bcf84eee086b0818eff77723c67465 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:58:07 np0005601226 python3.9[131624]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 11:58:07 np0005601226 python3.9[131776]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:58:08 np0005601226 python3.9[131899]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769705887.3395703-270-105342290743659/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=f87409c7a2bcf84eee086b0818eff77723c67465 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:58:08 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v376: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:58:08 np0005601226 python3.9[132051]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 11:58:09 np0005601226 python3.9[132203]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:58:09 np0005601226 python3.9[132326]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769705889.0857468-294-88535361916557/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=f87409c7a2bcf84eee086b0818eff77723c67465 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:58:10 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v377: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:58:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 11:58:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 11:58:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 11:58:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 11:58:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 11:58:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 11:58:10 np0005601226 python3.9[132478]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 11:58:11 np0005601226 python3.9[132630]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:58:11 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:58:11 np0005601226 python3.9[132753]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769705890.7379217-318-149050004866651/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=f87409c7a2bcf84eee086b0818eff77723c67465 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:58:12 np0005601226 python3.9[132905]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 11:58:12 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v378: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:58:12 np0005601226 python3.9[133057]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:58:13 np0005601226 python3.9[133180]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769705892.321119-342-94816889407316/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=f87409c7a2bcf84eee086b0818eff77723c67465 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:58:13 np0005601226 python3.9[133332]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 11:58:14 np0005601226 python3.9[133484]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:58:14 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v379: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:58:14 np0005601226 python3.9[133607]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769705893.968738-366-106200850634880/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=f87409c7a2bcf84eee086b0818eff77723c67465 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:58:15 np0005601226 systemd-logind[823]: Session 44 logged out. Waiting for processes to exit.
Jan 29 11:58:15 np0005601226 systemd[1]: session-44.scope: Deactivated successfully.
Jan 29 11:58:15 np0005601226 systemd[1]: session-44.scope: Consumed 18.508s CPU time.
Jan 29 11:58:15 np0005601226 systemd-logind[823]: Removed session 44.
Jan 29 11:58:16 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:58:16 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v380: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:58:18 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v381: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:58:20 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v382: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:58:21 np0005601226 systemd-logind[823]: New session 45 of user zuul.
Jan 29 11:58:21 np0005601226 systemd[1]: Started Session 45 of User zuul.
Jan 29 11:58:21 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:58:21 np0005601226 python3.9[133787]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:58:22 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v383: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:58:22 np0005601226 python3.9[133939]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:58:23 np0005601226 python3.9[134062]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769705902.1004174-29-185958163565315/.source.conf _original_basename=ceph.conf follow=False checksum=e5302a1399e9ee67bc71b43982983a02b46e7ac5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:58:23 np0005601226 python3.9[134214]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:58:24 np0005601226 python3.9[134337]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769705903.5323784-29-156380332733393/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=06b885591518abc5ff796737c70f725941229789 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:58:24 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v384: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:58:24 np0005601226 systemd[1]: session-45.scope: Deactivated successfully.
Jan 29 11:58:24 np0005601226 systemd[1]: session-45.scope: Consumed 2.252s CPU time.
Jan 29 11:58:24 np0005601226 systemd-logind[823]: Session 45 logged out. Waiting for processes to exit.
Jan 29 11:58:24 np0005601226 systemd-logind[823]: Removed session 45.
Jan 29 11:58:26 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:58:26 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v385: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:58:28 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v386: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:58:30 np0005601226 systemd-logind[823]: New session 46 of user zuul.
Jan 29 11:58:30 np0005601226 systemd[1]: Started Session 46 of User zuul.
Jan 29 11:58:30 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v387: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:58:31 np0005601226 python3.9[134515]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 11:58:31 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:58:32 np0005601226 python3.9[134671]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 11:58:32 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v388: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:58:32 np0005601226 python3.9[134823]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 29 11:58:33 np0005601226 python3.9[134973]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 11:58:34 np0005601226 python3.9[135125]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 29 11:58:34 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v389: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:58:35 np0005601226 dbus-broker-launch[814]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Jan 29 11:58:36 np0005601226 python3.9[135281]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 29 11:58:36 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:58:36 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v390: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:58:36 np0005601226 python3.9[135365]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 29 11:58:38 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v391: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:58:39 np0005601226 python3.9[135518]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 29 11:58:39 np0005601226 python3[135673]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Jan 29 11:58:40 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v392: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:58:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2026-01-29_16:58:40
Jan 29 11:58:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 11:58:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] do_upmap
Jan 29 11:58:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.data', 'images', 'vms', 'default.rgw.log', 'default.rgw.control', '.mgr', '.rgw.root', 'cephfs.cephfs.meta', 'volumes', 'backups']
Jan 29 11:58:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 11:58:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 11:58:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 11:58:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 11:58:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 11:58:40 np0005601226 python3.9[135825]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:58:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 11:58:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 11:58:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 11:58:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 11:58:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 11:58:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 11:58:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 11:58:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 11:58:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 11:58:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 11:58:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 11:58:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 11:58:41 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:58:41 np0005601226 ceph-mon[75233]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Jan 29 11:58:41 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-16:58:41.354742) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 29 11:58:41 np0005601226 ceph-mon[75233]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Jan 29 11:58:41 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769705921354782, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1343, "num_deletes": 251, "total_data_size": 2009891, "memory_usage": 2045752, "flush_reason": "Manual Compaction"}
Jan 29 11:58:41 np0005601226 ceph-mon[75233]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Jan 29 11:58:41 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769705921365330, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 1191009, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7508, "largest_seqno": 8850, "table_properties": {"data_size": 1186227, "index_size": 2050, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13594, "raw_average_key_size": 20, "raw_value_size": 1175217, "raw_average_value_size": 1794, "num_data_blocks": 96, "num_entries": 655, "num_filter_entries": 655, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769705789, "oldest_key_time": 1769705789, "file_creation_time": 1769705921, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "affa2982-d59d-4189-b5dd-817a80fada55", "db_session_id": "3LVBT2JQJ5HZ0LRVKGW6", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Jan 29 11:58:41 np0005601226 ceph-mon[75233]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 10650 microseconds, and 3069 cpu microseconds.
Jan 29 11:58:41 np0005601226 ceph-mon[75233]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 29 11:58:41 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-16:58:41.365391) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 1191009 bytes OK
Jan 29 11:58:41 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-16:58:41.365408) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Jan 29 11:58:41 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-16:58:41.367101) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Jan 29 11:58:41 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-16:58:41.367117) EVENT_LOG_v1 {"time_micros": 1769705921367112, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 29 11:58:41 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-16:58:41.367136) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 29 11:58:41 np0005601226 ceph-mon[75233]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 2003671, prev total WAL file size 2003671, number of live WAL files 2.
Jan 29 11:58:41 np0005601226 ceph-mon[75233]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 11:58:41 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-16:58:41.367627) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323532' seq:0, type:0; will stop at (end)
Jan 29 11:58:41 np0005601226 ceph-mon[75233]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 29 11:58:41 np0005601226 ceph-mon[75233]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(1163KB)], [20(8298KB)]
Jan 29 11:58:41 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769705921367678, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 9688365, "oldest_snapshot_seqno": -1}
Jan 29 11:58:41 np0005601226 python3.9[135977]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:58:41 np0005601226 ceph-mon[75233]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3368 keys, 7482709 bytes, temperature: kUnknown
Jan 29 11:58:41 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769705921428609, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 7482709, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7456181, "index_size": 17010, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8453, "raw_key_size": 81296, "raw_average_key_size": 24, "raw_value_size": 7391308, "raw_average_value_size": 2194, "num_data_blocks": 750, "num_entries": 3368, "num_filter_entries": 3368, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769705351, "oldest_key_time": 0, "file_creation_time": 1769705921, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "affa2982-d59d-4189-b5dd-817a80fada55", "db_session_id": "3LVBT2JQJ5HZ0LRVKGW6", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Jan 29 11:58:41 np0005601226 ceph-mon[75233]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 29 11:58:41 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-16:58:41.428812) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 7482709 bytes
Jan 29 11:58:41 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-16:58:41.465882) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 158.8 rd, 122.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 8.1 +0.0 blob) out(7.1 +0.0 blob), read-write-amplify(14.4) write-amplify(6.3) OK, records in: 3824, records dropped: 456 output_compression: NoCompression
Jan 29 11:58:41 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-16:58:41.465933) EVENT_LOG_v1 {"time_micros": 1769705921465912, "job": 6, "event": "compaction_finished", "compaction_time_micros": 60995, "compaction_time_cpu_micros": 21709, "output_level": 6, "num_output_files": 1, "total_output_size": 7482709, "num_input_records": 3824, "num_output_records": 3368, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 29 11:58:41 np0005601226 ceph-mon[75233]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 11:58:41 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769705921466335, "job": 6, "event": "table_file_deletion", "file_number": 22}
Jan 29 11:58:41 np0005601226 ceph-mon[75233]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 11:58:41 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769705921467957, "job": 6, "event": "table_file_deletion", "file_number": 20}
Jan 29 11:58:41 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-16:58:41.367537) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 11:58:41 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-16:58:41.467999) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 11:58:41 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-16:58:41.468005) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 11:58:41 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-16:58:41.468008) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 11:58:41 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-16:58:41.468011) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 11:58:41 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-16:58:41.468014) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 11:58:41 np0005601226 python3.9[136055]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:58:42 np0005601226 python3.9[136207]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:58:42 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v393: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:58:42 np0005601226 python3.9[136285]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.3c8_cu5s recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:58:43 np0005601226 python3.9[136437]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:58:43 np0005601226 python3.9[136515]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:58:44 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v394: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:58:44 np0005601226 python3.9[136667]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:58:45 np0005601226 python3[136820]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 29 11:58:46 np0005601226 python3.9[136972]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:58:46 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:58:46 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v395: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:58:46 np0005601226 python3.9[137097]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769705925.7306926-152-75027634106316/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:58:47 np0005601226 python3.9[137249]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:58:48 np0005601226 python3.9[137374]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769705927.0408607-167-196806524453527/.source.nft follow=False _original_basename=jump-chain.j2 checksum=ac8dea350c18f51f54d48dacc09613cda4c5540c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:58:48 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v396: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:58:48 np0005601226 python3.9[137526]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:58:49 np0005601226 python3.9[137651]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769705928.2264109-182-250288729372914/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:58:49 np0005601226 python3.9[137803]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:58:50 np0005601226 python3.9[137928]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769705929.561708-197-193952188583142/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:58:50 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v397: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:58:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 11:58:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:58:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 11:58:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:58:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 11:58:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:58:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 11:58:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:58:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 11:58:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:58:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 11:58:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:58:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.8121643970586627e-06 of space, bias 4.0, pg target 0.002174597276470395 quantized to 16 (current 16)
Jan 29 11:58:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:58:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 11:58:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:58:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 29 11:58:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:58:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 29 11:58:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:58:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 11:58:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:58:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 29 11:58:51 np0005601226 python3.9[138080]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:58:51 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:58:51 np0005601226 python3.9[138205]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769705930.6417675-212-169427390878556/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:58:52 np0005601226 python3.9[138357]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:58:52 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v398: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:58:52 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 11:58:52 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 11:58:52 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 11:58:52 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 11:58:52 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 11:58:52 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:58:52 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 11:58:52 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 11:58:52 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 11:58:52 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 11:58:52 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 11:58:52 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 11:58:53 np0005601226 python3.9[138601]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:58:53 np0005601226 podman[138671]: 2026-01-29 16:58:53.290336305 +0000 UTC m=+0.044921687 container create 185de38eb33580f864604e5ed22c136ae6e37391c981de02ce36677552c80071 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_murdock, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:58:53 np0005601226 systemd[1]: Started libpod-conmon-185de38eb33580f864604e5ed22c136ae6e37391c981de02ce36677552c80071.scope.
Jan 29 11:58:53 np0005601226 podman[138671]: 2026-01-29 16:58:53.270176361 +0000 UTC m=+0.024761773 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:58:53 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:58:53 np0005601226 podman[138671]: 2026-01-29 16:58:53.44418438 +0000 UTC m=+0.198769842 container init 185de38eb33580f864604e5ed22c136ae6e37391c981de02ce36677552c80071 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_murdock, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 29 11:58:53 np0005601226 podman[138671]: 2026-01-29 16:58:53.451123021 +0000 UTC m=+0.205708403 container start 185de38eb33580f864604e5ed22c136ae6e37391c981de02ce36677552c80071 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_murdock, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 29 11:58:53 np0005601226 funny_murdock[138720]: 167 167
Jan 29 11:58:53 np0005601226 systemd[1]: libpod-185de38eb33580f864604e5ed22c136ae6e37391c981de02ce36677552c80071.scope: Deactivated successfully.
Jan 29 11:58:53 np0005601226 podman[138671]: 2026-01-29 16:58:53.457091305 +0000 UTC m=+0.211676727 container attach 185de38eb33580f864604e5ed22c136ae6e37391c981de02ce36677552c80071 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_murdock, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 11:58:53 np0005601226 podman[138671]: 2026-01-29 16:58:53.457513166 +0000 UTC m=+0.212098588 container died 185de38eb33580f864604e5ed22c136ae6e37391c981de02ce36677552c80071 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_murdock, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 29 11:58:53 np0005601226 systemd[1]: var-lib-containers-storage-overlay-70e3d488da45f50f290747f5066984764c1f95a0e666691a1ef47519522af953-merged.mount: Deactivated successfully.
Jan 29 11:58:53 np0005601226 podman[138671]: 2026-01-29 16:58:53.532396597 +0000 UTC m=+0.286981979 container remove 185de38eb33580f864604e5ed22c136ae6e37391c981de02ce36677552c80071 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_murdock, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 29 11:58:53 np0005601226 systemd[1]: libpod-conmon-185de38eb33580f864604e5ed22c136ae6e37391c981de02ce36677552c80071.scope: Deactivated successfully.
Jan 29 11:58:53 np0005601226 podman[138776]: 2026-01-29 16:58:53.640955735 +0000 UTC m=+0.035191600 container create 611424fc4533ecc80fd6c739a18eac71995281dd8f6dd9f541925f2a0cbe08e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 29 11:58:53 np0005601226 systemd[1]: Started libpod-conmon-611424fc4533ecc80fd6c739a18eac71995281dd8f6dd9f541925f2a0cbe08e3.scope.
Jan 29 11:58:53 np0005601226 systemd[1]: Started libcrun container.
Jan 29 11:58:53 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8ed17ce0b23f64abfcf426cae7c360bbaea2d73f827f9402764a9d8eeb99eab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 11:58:53 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8ed17ce0b23f64abfcf426cae7c360bbaea2d73f827f9402764a9d8eeb99eab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 11:58:53 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8ed17ce0b23f64abfcf426cae7c360bbaea2d73f827f9402764a9d8eeb99eab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 11:58:53 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8ed17ce0b23f64abfcf426cae7c360bbaea2d73f827f9402764a9d8eeb99eab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 11:58:53 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8ed17ce0b23f64abfcf426cae7c360bbaea2d73f827f9402764a9d8eeb99eab/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 11:58:53 np0005601226 podman[138776]: 2026-01-29 16:58:53.626959079 +0000 UTC m=+0.021194954 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 11:58:53 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 11:58:53 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:58:53 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 11:58:53 np0005601226 podman[138776]: 2026-01-29 16:58:53.838443221 +0000 UTC m=+0.232679116 container init 611424fc4533ecc80fd6c739a18eac71995281dd8f6dd9f541925f2a0cbe08e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_moore, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 29 11:58:53 np0005601226 podman[138776]: 2026-01-29 16:58:53.846028689 +0000 UTC m=+0.240264554 container start 611424fc4533ecc80fd6c739a18eac71995281dd8f6dd9f541925f2a0cbe08e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_moore, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 11:58:53 np0005601226 python3.9[138870]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:58:53 np0005601226 podman[138776]: 2026-01-29 16:58:53.971381149 +0000 UTC m=+0.365617074 container attach 611424fc4533ecc80fd6c739a18eac71995281dd8f6dd9f541925f2a0cbe08e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_moore, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 11:58:54 np0005601226 amazing_moore[138833]: --> passed data devices: 0 physical, 3 LVM
Jan 29 11:58:54 np0005601226 amazing_moore[138833]: --> All data devices are unavailable
Jan 29 11:58:54 np0005601226 systemd[1]: libpod-611424fc4533ecc80fd6c739a18eac71995281dd8f6dd9f541925f2a0cbe08e3.scope: Deactivated successfully.
Jan 29 11:58:54 np0005601226 podman[138776]: 2026-01-29 16:58:54.28243089 +0000 UTC m=+0.676666755 container died 611424fc4533ecc80fd6c739a18eac71995281dd8f6dd9f541925f2a0cbe08e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_moore, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 11:58:54 np0005601226 systemd[1]: var-lib-containers-storage-overlay-f8ed17ce0b23f64abfcf426cae7c360bbaea2d73f827f9402764a9d8eeb99eab-merged.mount: Deactivated successfully.
Jan 29 11:58:54 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v399: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:58:54 np0005601226 podman[138776]: 2026-01-29 16:58:54.688911177 +0000 UTC m=+1.083147042 container remove 611424fc4533ecc80fd6c739a18eac71995281dd8f6dd9f541925f2a0cbe08e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_moore, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 29 11:58:54 np0005601226 python3.9[139052]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:58:54 np0005601226 systemd[1]: libpod-conmon-611424fc4533ecc80fd6c739a18eac71995281dd8f6dd9f541925f2a0cbe08e3.scope: Deactivated successfully.
Jan 29 11:59:31 np0005601226 kernel: genev_sys_6081: entered promiscuous mode
Jan 29 11:59:31 np0005601226 ovn_controller[145556]: 2026-01-29T16:59:31Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 29 11:59:31 np0005601226 NetworkManager[49020]: <info>  [1769705971.9710] device (genev_sys_6081): carrier: link connected
Jan 29 11:59:31 np0005601226 NetworkManager[49020]: <info>  [1769705971.9716] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Jan 29 11:59:31 np0005601226 ovn_controller[145556]: 2026-01-29T16:59:31Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 29 11:59:32 np0005601226 rsyslogd[1007]: imjournal: 503 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Jan 29 11:59:32 np0005601226 python3.9[145819]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 29 11:59:32 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v418: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:59:33 np0005601226 python3.9[145971]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:59:34 np0005601226 python3.9[146097]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769705972.8247414-619-120640323724345/.source.yaml _original_basename=.k001orq8 follow=False checksum=c95aa4b7f9906aff1141eda7ee0609507f5258b5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 11:59:34 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v419: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:59:34 np0005601226 python3.9[146249]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:59:34 np0005601226 ovs-vsctl[146254]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Jan 29 11:59:35 np0005601226 python3.9[146406]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:59:35 np0005601226 ovs-vsctl[146408]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Jan 29 11:59:35 np0005601226 python3.9[146561]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 11:59:35 np0005601226 ovs-vsctl[146562]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Jan 29 11:59:36 np0005601226 systemd-logind[823]: Session 46 logged out. Waiting for processes to exit.
Jan 29 11:59:36 np0005601226 systemd[1]: session-46.scope: Deactivated successfully.
Jan 29 11:59:36 np0005601226 systemd[1]: session-46.scope: Consumed 48.815s CPU time.
Jan 29 11:59:36 np0005601226 systemd-logind[823]: Removed session 46.
Jan 29 11:59:36 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:59:36 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v420: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:59:38 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v421: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:59:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2026-01-29_16:59:40
Jan 29 11:59:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 11:59:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] do_upmap
Jan 29 11:59:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] pools ['.rgw.root', 'volumes', 'images', 'default.rgw.log', 'backups', 'default.rgw.meta', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.control', 'vms', 'cephfs.cephfs.data']
Jan 29 11:59:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 11:59:40 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v422: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:59:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 11:59:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 11:59:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 11:59:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 11:59:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 11:59:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 11:59:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 11:59:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 11:59:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 11:59:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 11:59:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 11:59:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 11:59:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 11:59:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 11:59:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 11:59:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 11:59:41 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:59:41 np0005601226 systemd[1]: Stopping User Manager for UID 0...
Jan 29 11:59:41 np0005601226 systemd[145595]: Activating special unit Exit the Session...
Jan 29 11:59:41 np0005601226 systemd[145595]: Stopped target Main User Target.
Jan 29 11:59:41 np0005601226 systemd[145595]: Stopped target Basic System.
Jan 29 11:59:41 np0005601226 systemd[145595]: Stopped target Paths.
Jan 29 11:59:41 np0005601226 systemd[145595]: Stopped target Sockets.
Jan 29 11:59:41 np0005601226 systemd[145595]: Stopped target Timers.
Jan 29 11:59:41 np0005601226 systemd[145595]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 29 11:59:41 np0005601226 systemd[145595]: Closed D-Bus User Message Bus Socket.
Jan 29 11:59:41 np0005601226 systemd[145595]: Stopped Create User's Volatile Files and Directories.
Jan 29 11:59:41 np0005601226 systemd[145595]: Removed slice User Application Slice.
Jan 29 11:59:41 np0005601226 systemd[145595]: Reached target Shutdown.
Jan 29 11:59:41 np0005601226 systemd[145595]: Finished Exit the Session.
Jan 29 11:59:41 np0005601226 systemd[145595]: Reached target Exit the Session.
Jan 29 11:59:41 np0005601226 systemd[1]: user@0.service: Deactivated successfully.
Jan 29 11:59:41 np0005601226 systemd[1]: Stopped User Manager for UID 0.
Jan 29 11:59:41 np0005601226 systemd[1]: Stopping User Runtime Directory /run/user/0...
Jan 29 11:59:41 np0005601226 systemd[1]: run-user-0.mount: Deactivated successfully.
Jan 29 11:59:41 np0005601226 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Jan 29 11:59:41 np0005601226 systemd[1]: Stopped User Runtime Directory /run/user/0.
Jan 29 11:59:41 np0005601226 systemd[1]: Removed slice User Slice of UID 0.
Jan 29 11:59:42 np0005601226 systemd-logind[823]: New session 48 of user zuul.
Jan 29 11:59:42 np0005601226 systemd[1]: Started Session 48 of User zuul.
Jan 29 11:59:42 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v423: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:59:42 np0005601226 python3.9[146742]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 11:59:44 np0005601226 python3.9[146898]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/openstack/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 29 11:59:44 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v424: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:59:44 np0005601226 python3.9[147050]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 11:59:45 np0005601226 python3.9[147202]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 11:59:45 np0005601226 python3.9[147354]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 11:59:46 np0005601226 python3.9[147506]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 11:59:46 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:59:46 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v425: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:59:46 np0005601226 python3.9[147656]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 11:59:47 np0005601226 python3.9[147808]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 29 11:59:48 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v426: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:59:48 np0005601226 python3.9[147958]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:59:49 np0005601226 python3.9[148079]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769705988.3134897-81-100029257352618/.source follow=False _original_basename=haproxy.j2 checksum=a5072e7b19ca96a1f495d94f97f31903737cfd27 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 29 11:59:50 np0005601226 python3.9[148230]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:59:50 np0005601226 python3.9[148351]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769705989.615168-96-121524796181328/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 29 11:59:50 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v427: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:59:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 11:59:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:59:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 11:59:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:59:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 11:59:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:59:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 11:59:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:59:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 11:59:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:59:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 11:59:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:59:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.8121643970586627e-06 of space, bias 4.0, pg target 0.002174597276470395 quantized to 16 (current 16)
Jan 29 11:59:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:59:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 11:59:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:59:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 29 11:59:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:59:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 29 11:59:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:59:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 11:59:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 11:59:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 29 11:59:51 np0005601226 python3.9[148503]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 29 11:59:51 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:59:52 np0005601226 python3.9[148587]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 29 11:59:52 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v428: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:59:54 np0005601226 python3.9[148740]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 29 11:59:54 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v429: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:59:54 np0005601226 python3.9[148893]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:59:55 np0005601226 python3.9[149014]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769705994.562547-133-251845564260357/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 29 11:59:56 np0005601226 python3.9[149164]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:59:56 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 11:59:56 np0005601226 python3.9[149285]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769705995.7053354-133-42999454994641/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 29 11:59:56 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v430: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:59:57 np0005601226 python3.9[149435]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:59:58 np0005601226 python3.9[149556]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769705997.2127676-177-186239462220730/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 29 11:59:58 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v431: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 11:59:58 np0005601226 python3.9[149706]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 11:59:59 np0005601226 python3.9[149827]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769705998.2390683-177-47434470674342/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 29 11:59:59 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 11:59:59 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:59:59 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 11:59:59 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 11:59:59 np0005601226 python3.9[150029]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 12:00:00 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 12:00:00 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 12:00:00 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 12:00:00 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 12:00:00 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 12:00:00 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:00:00 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 12:00:00 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 12:00:00 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 12:00:00 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 12:00:00 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 12:00:00 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 12:00:00 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:00:00 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:00:00 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 12:00:00 np0005601226 python3.9[150282]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 29 12:00:00 np0005601226 podman[150410]: 2026-01-29 17:00:00.477547847 +0000 UTC m=+0.038472596 container create ace470fd975b66fcf868b41228d2a80a0dac7302d3802707581a6628ca97c234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_keldysh, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:00:00 np0005601226 systemd[1]: Started libpod-conmon-ace470fd975b66fcf868b41228d2a80a0dac7302d3802707581a6628ca97c234.scope.
Jan 29 12:00:00 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:00:00 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v432: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:00:00 np0005601226 podman[150410]: 2026-01-29 17:00:00.458684673 +0000 UTC m=+0.019609462 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:00:00 np0005601226 podman[150410]: 2026-01-29 17:00:00.61364118 +0000 UTC m=+0.174565939 container init ace470fd975b66fcf868b41228d2a80a0dac7302d3802707581a6628ca97c234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_keldysh, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 29 12:00:00 np0005601226 podman[150410]: 2026-01-29 17:00:00.619095083 +0000 UTC m=+0.180019822 container start ace470fd975b66fcf868b41228d2a80a0dac7302d3802707581a6628ca97c234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_keldysh, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 12:00:00 np0005601226 beautiful_keldysh[150461]: 167 167
Jan 29 12:00:00 np0005601226 systemd[1]: libpod-ace470fd975b66fcf868b41228d2a80a0dac7302d3802707581a6628ca97c234.scope: Deactivated successfully.
Jan 29 12:00:00 np0005601226 podman[150410]: 2026-01-29 17:00:00.668462614 +0000 UTC m=+0.229387393 container attach ace470fd975b66fcf868b41228d2a80a0dac7302d3802707581a6628ca97c234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_keldysh, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 29 12:00:00 np0005601226 podman[150410]: 2026-01-29 17:00:00.669498557 +0000 UTC m=+0.230423296 container died ace470fd975b66fcf868b41228d2a80a0dac7302d3802707581a6628ca97c234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:00:00 np0005601226 systemd[1]: var-lib-containers-storage-overlay-ab0e8ac06e433ef6f73201f162aeacadd279a6ed61e5a72c3610f904276047ba-merged.mount: Deactivated successfully.
Jan 29 12:00:00 np0005601226 podman[150410]: 2026-01-29 17:00:00.720612678 +0000 UTC m=+0.281537407 container remove ace470fd975b66fcf868b41228d2a80a0dac7302d3802707581a6628ca97c234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_keldysh, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 29 12:00:00 np0005601226 systemd[1]: libpod-conmon-ace470fd975b66fcf868b41228d2a80a0dac7302d3802707581a6628ca97c234.scope: Deactivated successfully.
Jan 29 12:00:00 np0005601226 python3.9[150516]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 12:00:00 np0005601226 podman[150540]: 2026-01-29 17:00:00.866896219 +0000 UTC m=+0.050220591 container create fcec56ebf52d58908bfb2969d31c86d87a3b027379fa63062f31c32f0def583c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_gauss, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 29 12:00:00 np0005601226 systemd[1]: Started libpod-conmon-fcec56ebf52d58908bfb2969d31c86d87a3b027379fa63062f31c32f0def583c.scope.
Jan 29 12:00:00 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:00:00 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ad46c56bd093dfad071ed4ff28af787aecb0492718a5ff6d47b7c29cb0b8478/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:00:00 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ad46c56bd093dfad071ed4ff28af787aecb0492718a5ff6d47b7c29cb0b8478/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:00:00 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ad46c56bd093dfad071ed4ff28af787aecb0492718a5ff6d47b7c29cb0b8478/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:00:00 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ad46c56bd093dfad071ed4ff28af787aecb0492718a5ff6d47b7c29cb0b8478/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:00:00 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ad46c56bd093dfad071ed4ff28af787aecb0492718a5ff6d47b7c29cb0b8478/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 12:00:00 np0005601226 podman[150540]: 2026-01-29 17:00:00.840438263 +0000 UTC m=+0.023762665 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:00:00 np0005601226 podman[150540]: 2026-01-29 17:00:00.947655717 +0000 UTC m=+0.130980109 container init fcec56ebf52d58908bfb2969d31c86d87a3b027379fa63062f31c32f0def583c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_gauss, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 12:00:00 np0005601226 podman[150540]: 2026-01-29 17:00:00.952872284 +0000 UTC m=+0.136196656 container start fcec56ebf52d58908bfb2969d31c86d87a3b027379fa63062f31c32f0def583c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_gauss, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 29 12:00:00 np0005601226 podman[150540]: 2026-01-29 17:00:00.95713726 +0000 UTC m=+0.140461662 container attach fcec56ebf52d58908bfb2969d31c86d87a3b027379fa63062f31c32f0def583c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_gauss, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 29 12:00:01 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:00:01 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 12:00:01 np0005601226 python3.9[150637]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 12:00:01 np0005601226 vigorous_gauss[150580]: --> passed data devices: 0 physical, 3 LVM
Jan 29 12:00:01 np0005601226 vigorous_gauss[150580]: --> All data devices are unavailable
Jan 29 12:00:01 np0005601226 systemd[1]: libpod-fcec56ebf52d58908bfb2969d31c86d87a3b027379fa63062f31c32f0def583c.scope: Deactivated successfully.
Jan 29 12:00:01 np0005601226 podman[150540]: 2026-01-29 17:00:01.34508498 +0000 UTC m=+0.528409382 container died fcec56ebf52d58908bfb2969d31c86d87a3b027379fa63062f31c32f0def583c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_gauss, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 29 12:00:01 np0005601226 systemd[1]: var-lib-containers-storage-overlay-8ad46c56bd093dfad071ed4ff28af787aecb0492718a5ff6d47b7c29cb0b8478-merged.mount: Deactivated successfully.
Jan 29 12:00:01 np0005601226 podman[150540]: 2026-01-29 17:00:01.421561161 +0000 UTC m=+0.604885533 container remove fcec56ebf52d58908bfb2969d31c86d87a3b027379fa63062f31c32f0def583c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_gauss, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 29 12:00:01 np0005601226 systemd[1]: libpod-conmon-fcec56ebf52d58908bfb2969d31c86d87a3b027379fa63062f31c32f0def583c.scope: Deactivated successfully.
Jan 29 12:00:01 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:00:01 np0005601226 python3.9[150868]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 12:00:01 np0005601226 podman[150882]: 2026-01-29 17:00:01.795933486 +0000 UTC m=+0.063333166 container create 649f50c3b429b96f00ab1f6e7a61ac9a230ab93170d787276d3fae86a05f0a6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_ellis, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 29 12:00:01 np0005601226 systemd[1]: Started libpod-conmon-649f50c3b429b96f00ab1f6e7a61ac9a230ab93170d787276d3fae86a05f0a6e.scope.
Jan 29 12:00:01 np0005601226 podman[150882]: 2026-01-29 17:00:01.751922645 +0000 UTC m=+0.019322315 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:00:01 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:00:01 np0005601226 podman[150882]: 2026-01-29 17:00:01.871851844 +0000 UTC m=+0.139251504 container init 649f50c3b429b96f00ab1f6e7a61ac9a230ab93170d787276d3fae86a05f0a6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:00:01 np0005601226 ovn_controller[145556]: 2026-01-29T17:00:01Z|00025|memory|INFO|16000 kB peak resident set size after 30.1 seconds
Jan 29 12:00:01 np0005601226 ovn_controller[145556]: 2026-01-29T17:00:01Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:528 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Jan 29 12:00:01 np0005601226 podman[150882]: 2026-01-29 17:00:01.878039293 +0000 UTC m=+0.145438963 container start 649f50c3b429b96f00ab1f6e7a61ac9a230ab93170d787276d3fae86a05f0a6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_ellis, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 29 12:00:01 np0005601226 charming_ellis[150917]: 167 167
Jan 29 12:00:01 np0005601226 systemd[1]: libpod-649f50c3b429b96f00ab1f6e7a61ac9a230ab93170d787276d3fae86a05f0a6e.scope: Deactivated successfully.
Jan 29 12:00:01 np0005601226 podman[150882]: 2026-01-29 17:00:01.888503019 +0000 UTC m=+0.155902679 container attach 649f50c3b429b96f00ab1f6e7a61ac9a230ab93170d787276d3fae86a05f0a6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_ellis, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Jan 29 12:00:01 np0005601226 podman[150882]: 2026-01-29 17:00:01.890079364 +0000 UTC m=+0.157479034 container died 649f50c3b429b96f00ab1f6e7a61ac9a230ab93170d787276d3fae86a05f0a6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_ellis, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:00:02 np0005601226 python3.9[151014]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 12:00:02 np0005601226 systemd[1]: var-lib-containers-storage-overlay-da01bc4c6fcac17cb289b58eaac5cac01d0abdc3a0d6ba35c3c137dea00af0a5-merged.mount: Deactivated successfully.
Jan 29 12:00:02 np0005601226 podman[150882]: 2026-01-29 17:00:02.340240555 +0000 UTC m=+0.607640205 container remove 649f50c3b429b96f00ab1f6e7a61ac9a230ab93170d787276d3fae86a05f0a6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 29 12:00:02 np0005601226 podman[150897]: 2026-01-29 17:00:02.346873723 +0000 UTC m=+0.512538444 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 29 12:00:02 np0005601226 systemd[1]: libpod-conmon-649f50c3b429b96f00ab1f6e7a61ac9a230ab93170d787276d3fae86a05f0a6e.scope: Deactivated successfully.
Jan 29 12:00:02 np0005601226 podman[151099]: 2026-01-29 17:00:02.468318277 +0000 UTC m=+0.042984499 container create ca56badc83442968d8a578c43ee99b14f158bdcfa80e9404fc26f87413c01bbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:00:02 np0005601226 systemd[1]: Started libpod-conmon-ca56badc83442968d8a578c43ee99b14f158bdcfa80e9404fc26f87413c01bbf.scope.
Jan 29 12:00:02 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:00:02 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/306fb6a083428fe3eceb0f0a4fbc2aafa8df75e14373ad234db73dc7990b1ea1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:00:02 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/306fb6a083428fe3eceb0f0a4fbc2aafa8df75e14373ad234db73dc7990b1ea1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:00:02 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/306fb6a083428fe3eceb0f0a4fbc2aafa8df75e14373ad234db73dc7990b1ea1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:00:02 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/306fb6a083428fe3eceb0f0a4fbc2aafa8df75e14373ad234db73dc7990b1ea1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:00:02 np0005601226 podman[151099]: 2026-01-29 17:00:02.536466951 +0000 UTC m=+0.111133183 container init ca56badc83442968d8a578c43ee99b14f158bdcfa80e9404fc26f87413c01bbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_lewin, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:00:02 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v433: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:00:02 np0005601226 podman[151099]: 2026-01-29 17:00:02.542164759 +0000 UTC m=+0.116830991 container start ca56badc83442968d8a578c43ee99b14f158bdcfa80e9404fc26f87413c01bbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_lewin, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 29 12:00:02 np0005601226 podman[151099]: 2026-01-29 17:00:02.547375255 +0000 UTC m=+0.122041517 container attach ca56badc83442968d8a578c43ee99b14f158bdcfa80e9404fc26f87413c01bbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_lewin, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:00:02 np0005601226 podman[151099]: 2026-01-29 17:00:02.453028283 +0000 UTC m=+0.027694535 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:00:02 np0005601226 python3.9[151195]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]: {
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:    "0": [
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:        {
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:            "devices": [
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:                "/dev/loop3"
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:            ],
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:            "lv_name": "ceph_lv0",
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:            "lv_size": "21470642176",
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:            "lv_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:            "name": "ceph_lv0",
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:            "tags": {
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:                "ceph.block_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:                "ceph.cluster_name": "ceph",
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:                "ceph.crush_device_class": "",
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:                "ceph.encrypted": "0",
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:                "ceph.objectstore": "bluestore",
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:                "ceph.osd_fsid": "0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1",
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:                "ceph.osd_id": "0",
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:                "ceph.type": "block",
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:                "ceph.vdo": "0",
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:                "ceph.with_tpm": "0"
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:            },
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:            "type": "block",
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:            "vg_name": "ceph_vg0"
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:        }
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:    ],
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:    "1": [
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:        {
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:            "devices": [
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:                "/dev/loop4"
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:            ],
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:            "lv_name": "ceph_lv1",
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:            "lv_size": "21470642176",
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b59b9ee3-7bef-4274-a8bf-0f9cce011ae7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:            "lv_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:            "name": "ceph_lv1",
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:            "tags": {
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:                "ceph.block_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:                "ceph.cluster_name": "ceph",
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:                "ceph.crush_device_class": "",
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:                "ceph.encrypted": "0",
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:                "ceph.objectstore": "bluestore",
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:                "ceph.osd_fsid": "b59b9ee3-7bef-4274-a8bf-0f9cce011ae7",
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:                "ceph.osd_id": "1",
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:                "ceph.type": "block",
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:                "ceph.vdo": "0",
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:                "ceph.with_tpm": "0"
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:            },
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:            "type": "block",
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:            "vg_name": "ceph_vg1"
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:        }
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:    ],
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:    "2": [
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:        {
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:            "devices": [
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:                "/dev/loop5"
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:            ],
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:            "lv_name": "ceph_lv2",
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:            "lv_size": "21470642176",
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=791d3808-828d-4f85-a3de-28df49f6a6ef,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:            "lv_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:            "name": "ceph_lv2",
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:            "tags": {
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:                "ceph.block_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:                "ceph.cluster_name": "ceph",
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:                "ceph.crush_device_class": "",
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:                "ceph.encrypted": "0",
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:                "ceph.objectstore": "bluestore",
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:                "ceph.osd_fsid": "791d3808-828d-4f85-a3de-28df49f6a6ef",
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:                "ceph.osd_id": "2",
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:                "ceph.type": "block",
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:                "ceph.vdo": "0",
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:                "ceph.with_tpm": "0"
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:            },
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:            "type": "block",
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:            "vg_name": "ceph_vg2"
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:        }
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]:    ]
Jan 29 12:00:02 np0005601226 wizardly_lewin[151139]: }
Jan 29 12:00:02 np0005601226 systemd[1]: libpod-ca56badc83442968d8a578c43ee99b14f158bdcfa80e9404fc26f87413c01bbf.scope: Deactivated successfully.
Jan 29 12:00:02 np0005601226 podman[151099]: 2026-01-29 17:00:02.814150999 +0000 UTC m=+0.388817231 container died ca56badc83442968d8a578c43ee99b14f158bdcfa80e9404fc26f87413c01bbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_lewin, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 29 12:00:02 np0005601226 systemd[1]: var-lib-containers-storage-overlay-306fb6a083428fe3eceb0f0a4fbc2aafa8df75e14373ad234db73dc7990b1ea1-merged.mount: Deactivated successfully.
Jan 29 12:00:02 np0005601226 podman[151099]: 2026-01-29 17:00:02.862174269 +0000 UTC m=+0.436840501 container remove ca56badc83442968d8a578c43ee99b14f158bdcfa80e9404fc26f87413c01bbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 29 12:00:02 np0005601226 systemd[1]: libpod-conmon-ca56badc83442968d8a578c43ee99b14f158bdcfa80e9404fc26f87413c01bbf.scope: Deactivated successfully.
Jan 29 12:00:03 np0005601226 podman[151429]: 2026-01-29 17:00:03.288645807 +0000 UTC m=+0.040214156 container create 70ff0f6c93bf3d8a577663f7593c755c7b28b203c1cc83ae808498a4d667ea48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 29 12:00:03 np0005601226 systemd[1]: Started libpod-conmon-70ff0f6c93bf3d8a577663f7593c755c7b28b203c1cc83ae808498a4d667ea48.scope.
Jan 29 12:00:03 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:00:03 np0005601226 podman[151429]: 2026-01-29 17:00:03.270975759 +0000 UTC m=+0.022544138 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:00:03 np0005601226 podman[151429]: 2026-01-29 17:00:03.368966814 +0000 UTC m=+0.120535163 container init 70ff0f6c93bf3d8a577663f7593c755c7b28b203c1cc83ae808498a4d667ea48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_shockley, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 29 12:00:03 np0005601226 podman[151429]: 2026-01-29 17:00:03.374778575 +0000 UTC m=+0.126346924 container start 70ff0f6c93bf3d8a577663f7593c755c7b28b203c1cc83ae808498a4d667ea48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_shockley, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:00:03 np0005601226 cranky_shockley[151446]: 167 167
Jan 29 12:00:03 np0005601226 systemd[1]: libpod-70ff0f6c93bf3d8a577663f7593c755c7b28b203c1cc83ae808498a4d667ea48.scope: Deactivated successfully.
Jan 29 12:00:03 np0005601226 python3.9[151417]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 12:00:03 np0005601226 podman[151429]: 2026-01-29 17:00:03.392018463 +0000 UTC m=+0.143586812 container attach 70ff0f6c93bf3d8a577663f7593c755c7b28b203c1cc83ae808498a4d667ea48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:00:03 np0005601226 podman[151429]: 2026-01-29 17:00:03.392474703 +0000 UTC m=+0.144043072 container died 70ff0f6c93bf3d8a577663f7593c755c7b28b203c1cc83ae808498a4d667ea48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_shockley, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 29 12:00:03 np0005601226 systemd[1]: var-lib-containers-storage-overlay-2ad40afd87ddb465c35a016e9e9f73ec5f0e240bb75d8721590f6abaf3c5a57b-merged.mount: Deactivated successfully.
Jan 29 12:00:03 np0005601226 podman[151429]: 2026-01-29 17:00:03.464695319 +0000 UTC m=+0.216263668 container remove 70ff0f6c93bf3d8a577663f7593c755c7b28b203c1cc83ae808498a4d667ea48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_shockley, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:00:03 np0005601226 systemd[1]: libpod-conmon-70ff0f6c93bf3d8a577663f7593c755c7b28b203c1cc83ae808498a4d667ea48.scope: Deactivated successfully.
Jan 29 12:00:03 np0005601226 podman[151518]: 2026-01-29 17:00:03.586685604 +0000 UTC m=+0.041194049 container create 7b3d3a0bba0fe42f4a577e317c8abff06142d7847b851e4f77c745ff8d255bcc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_clarke, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle)
Jan 29 12:00:03 np0005601226 systemd[1]: Started libpod-conmon-7b3d3a0bba0fe42f4a577e317c8abff06142d7847b851e4f77c745ff8d255bcc.scope.
Jan 29 12:00:03 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:00:03 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d7f1f4ef0c2b77a158f13a418ddc2b79cee8c229ac1a6801a911ef5d19c5be1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:00:03 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d7f1f4ef0c2b77a158f13a418ddc2b79cee8c229ac1a6801a911ef5d19c5be1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:00:03 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d7f1f4ef0c2b77a158f13a418ddc2b79cee8c229ac1a6801a911ef5d19c5be1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:00:03 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d7f1f4ef0c2b77a158f13a418ddc2b79cee8c229ac1a6801a911ef5d19c5be1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:00:03 np0005601226 podman[151518]: 2026-01-29 17:00:03.568508055 +0000 UTC m=+0.023016520 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:00:03 np0005601226 podman[151518]: 2026-01-29 17:00:03.670761666 +0000 UTC m=+0.125270131 container init 7b3d3a0bba0fe42f4a577e317c8abff06142d7847b851e4f77c745ff8d255bcc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:00:03 np0005601226 podman[151518]: 2026-01-29 17:00:03.678161732 +0000 UTC m=+0.132670177 container start 7b3d3a0bba0fe42f4a577e317c8abff06142d7847b851e4f77c745ff8d255bcc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:00:03 np0005601226 podman[151518]: 2026-01-29 17:00:03.690404638 +0000 UTC m=+0.144913113 container attach 7b3d3a0bba0fe42f4a577e317c8abff06142d7847b851e4f77c745ff8d255bcc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 29 12:00:03 np0005601226 python3.9[151561]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:00:04 np0005601226 lvm[151795]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 12:00:04 np0005601226 lvm[151792]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 12:00:04 np0005601226 lvm[151792]: VG ceph_vg0 finished
Jan 29 12:00:04 np0005601226 lvm[151795]: VG ceph_vg1 finished
Jan 29 12:00:04 np0005601226 lvm[151797]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 12:00:04 np0005601226 lvm[151797]: VG ceph_vg2 finished
Jan 29 12:00:04 np0005601226 python3.9[151786]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 12:00:04 np0005601226 gifted_clarke[151564]: {}
Jan 29 12:00:04 np0005601226 systemd[1]: libpod-7b3d3a0bba0fe42f4a577e317c8abff06142d7847b851e4f77c745ff8d255bcc.scope: Deactivated successfully.
Jan 29 12:00:04 np0005601226 systemd[1]: libpod-7b3d3a0bba0fe42f4a577e317c8abff06142d7847b851e4f77c745ff8d255bcc.scope: Consumed 1.084s CPU time.
Jan 29 12:00:04 np0005601226 podman[151518]: 2026-01-29 17:00:04.438464981 +0000 UTC m=+0.892973446 container died 7b3d3a0bba0fe42f4a577e317c8abff06142d7847b851e4f77c745ff8d255bcc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_clarke, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:00:04 np0005601226 systemd[1]: var-lib-containers-storage-overlay-5d7f1f4ef0c2b77a158f13a418ddc2b79cee8c229ac1a6801a911ef5d19c5be1-merged.mount: Deactivated successfully.
Jan 29 12:00:04 np0005601226 podman[151518]: 2026-01-29 17:00:04.496668692 +0000 UTC m=+0.951177157 container remove 7b3d3a0bba0fe42f4a577e317c8abff06142d7847b851e4f77c745ff8d255bcc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_clarke, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 29 12:00:04 np0005601226 systemd[1]: libpod-conmon-7b3d3a0bba0fe42f4a577e317c8abff06142d7847b851e4f77c745ff8d255bcc.scope: Deactivated successfully.
Jan 29 12:00:04 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 12:00:04 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:00:04 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 12:00:04 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v434: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:00:04 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:00:04 np0005601226 python3.9[151916]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:00:05 np0005601226 python3.9[152068]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 12:00:05 np0005601226 systemd[1]: Reloading.
Jan 29 12:00:05 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 12:00:05 np0005601226 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 12:00:06 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:00:06 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:00:06 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:00:06 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v435: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:00:06 np0005601226 python3.9[152259]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 12:00:07 np0005601226 python3.9[152337]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:00:07 np0005601226 python3.9[152489]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 12:00:08 np0005601226 python3.9[152567]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:00:08 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v436: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:00:09 np0005601226 python3.9[152719]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 12:00:09 np0005601226 systemd[1]: Reloading.
Jan 29 12:00:09 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 12:00:09 np0005601226 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 12:00:09 np0005601226 systemd[1]: Starting Create netns directory...
Jan 29 12:00:09 np0005601226 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 29 12:00:09 np0005601226 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 29 12:00:09 np0005601226 systemd[1]: Finished Create netns directory.
Jan 29 12:00:09 np0005601226 python3.9[152913]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 12:00:10 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v437: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:00:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:00:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:00:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:00:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:00:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:00:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:00:10 np0005601226 python3.9[153065]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 12:00:11 np0005601226 python3.9[153188]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769706010.1309545-328-3191557380466/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 29 12:00:11 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:00:11 np0005601226 python3.9[153340]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:00:12 np0005601226 python3.9[153492]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 29 12:00:12 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v438: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:00:12 np0005601226 python3.9[153644]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 12:00:13 np0005601226 python3.9[153767]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769706012.5975268-361-124989697303940/.source.json _original_basename=.u6ujatnj follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:00:14 np0005601226 python3.9[153917]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:00:14 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v439: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:00:16 np0005601226 python3.9[154340]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Jan 29 12:00:16 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:00:16 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v440: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:00:17 np0005601226 python3.9[154492]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 29 12:00:18 np0005601226 python3[154644]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json containers=['ovn_metadata_agent'] log_base_path=/var/log/containers/stdouts debug=False
Jan 29 12:00:18 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v441: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:00:20 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v442: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:00:21 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:00:22 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v443: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:00:23 np0005601226 ceph-mon[75233]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Jan 29 12:00:23 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:00:23.033756) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 29 12:00:23 np0005601226 ceph-mon[75233]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Jan 29 12:00:23 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769706023033792, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 1026, "num_deletes": 251, "total_data_size": 1575020, "memory_usage": 1600896, "flush_reason": "Manual Compaction"}
Jan 29 12:00:23 np0005601226 ceph-mon[75233]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Jan 29 12:00:23 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769706023044558, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 1531773, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8851, "largest_seqno": 9876, "table_properties": {"data_size": 1526792, "index_size": 2505, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 10328, "raw_average_key_size": 18, "raw_value_size": 1516823, "raw_average_value_size": 2767, "num_data_blocks": 117, "num_entries": 548, "num_filter_entries": 548, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769705921, "oldest_key_time": 1769705921, "file_creation_time": 1769706023, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "affa2982-d59d-4189-b5dd-817a80fada55", "db_session_id": "3LVBT2JQJ5HZ0LRVKGW6", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Jan 29 12:00:23 np0005601226 ceph-mon[75233]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 10842 microseconds, and 2339 cpu microseconds.
Jan 29 12:00:23 np0005601226 ceph-mon[75233]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 29 12:00:23 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:00:23.044598) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 1531773 bytes OK
Jan 29 12:00:23 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:00:23.044612) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Jan 29 12:00:23 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:00:23.046120) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Jan 29 12:00:23 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:00:23.046131) EVENT_LOG_v1 {"time_micros": 1769706023046128, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 29 12:00:23 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:00:23.046144) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 29 12:00:23 np0005601226 ceph-mon[75233]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 1570179, prev total WAL file size 1570179, number of live WAL files 2.
Jan 29 12:00:23 np0005601226 ceph-mon[75233]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 12:00:23 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:00:23.046442) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Jan 29 12:00:23 np0005601226 ceph-mon[75233]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 29 12:00:23 np0005601226 ceph-mon[75233]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(1495KB)], [23(7307KB)]
Jan 29 12:00:23 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769706023046470, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 9014482, "oldest_snapshot_seqno": -1}
Jan 29 12:00:23 np0005601226 ceph-mon[75233]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3402 keys, 7084828 bytes, temperature: kUnknown
Jan 29 12:00:23 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769706023100634, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 7084828, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7059213, "index_size": 16034, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8517, "raw_key_size": 82699, "raw_average_key_size": 24, "raw_value_size": 6994804, "raw_average_value_size": 2056, "num_data_blocks": 697, "num_entries": 3402, "num_filter_entries": 3402, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769705351, "oldest_key_time": 0, "file_creation_time": 1769706023, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "affa2982-d59d-4189-b5dd-817a80fada55", "db_session_id": "3LVBT2JQJ5HZ0LRVKGW6", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Jan 29 12:00:23 np0005601226 ceph-mon[75233]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 29 12:00:23 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:00:23.100820) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 7084828 bytes
Jan 29 12:00:23 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:00:23.103588) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 166.2 rd, 130.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 7.1 +0.0 blob) out(6.8 +0.0 blob), read-write-amplify(10.5) write-amplify(4.6) OK, records in: 3916, records dropped: 514 output_compression: NoCompression
Jan 29 12:00:23 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:00:23.103604) EVENT_LOG_v1 {"time_micros": 1769706023103596, "job": 8, "event": "compaction_finished", "compaction_time_micros": 54224, "compaction_time_cpu_micros": 10712, "output_level": 6, "num_output_files": 1, "total_output_size": 7084828, "num_input_records": 3916, "num_output_records": 3402, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 29 12:00:23 np0005601226 ceph-mon[75233]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 12:00:23 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769706023103765, "job": 8, "event": "table_file_deletion", "file_number": 25}
Jan 29 12:00:23 np0005601226 ceph-mon[75233]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 12:00:23 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769706023104319, "job": 8, "event": "table_file_deletion", "file_number": 23}
Jan 29 12:00:23 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:00:23.046406) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:00:23 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:00:23.104341) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:00:23 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:00:23.104345) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:00:23 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:00:23.104347) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:00:23 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:00:23.104349) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:00:23 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:00:23.104351) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:00:24 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v444: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:00:26 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v445: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:00:26 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:00:27 np0005601226 ceph-osd[85858]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 29 12:00:27 np0005601226 ceph-osd[85858]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 5600 writes, 24K keys, 5600 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 5600 writes, 815 syncs, 6.87 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5600 writes, 24K keys, 5600 commit groups, 1.0 writes per commit group, ingest: 18.92 MB, 0.03 MB/s#012Interval WAL: 5600 writes, 815 syncs, 6.87 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55d1d22818d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55d1d22818d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 2 last_copies: 8 last_secs: 2.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 
seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Jan 29 12:00:28 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v446: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:00:30 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v447: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:00:31 np0005601226 ceph-osd[86917]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 29 12:00:31 np0005601226 ceph-osd[86917]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.2 total, 600.0 interval#012Cumulative writes: 6793 writes, 29K keys, 6793 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 6793 writes, 1173 syncs, 5.79 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 6793 writes, 29K keys, 6793 commit groups, 1.0 writes per commit group, ingest: 20.02 MB, 0.03 MB/s#012Interval WAL: 6793 writes, 1173 syncs, 5.79 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f5108558d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f5108558d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 2 last_copies: 8 last_secs: 2.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 
seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Jan 29 12:00:32 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:00:32 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v448: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:00:32 np0005601226 podman[154656]: 2026-01-29 17:00:32.864422184 +0000 UTC m=+14.624844990 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 29 12:00:33 np0005601226 podman[154796]: 2026-01-29 17:00:32.950161676 +0000 UTC m=+0.020504276 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 29 12:00:33 np0005601226 podman[154755]: 2026-01-29 17:00:33.265073667 +0000 UTC m=+0.426251169 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:00:33 np0005601226 podman[154796]: 2026-01-29 17:00:33.953411182 +0000 UTC m=+1.023753792 container create 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 29 12:00:33 np0005601226 python3[154644]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 29 12:00:34 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v449: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:00:34 np0005601226 python3.9[154996]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 12:00:35 np0005601226 python3.9[155150]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:00:35 np0005601226 python3.9[155226]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 12:00:36 np0005601226 ceph-osd[87958]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 29 12:00:36 np0005601226 ceph-osd[87958]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.3 total, 600.0 interval#012Cumulative writes: 5497 writes, 24K keys, 5497 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 5497 writes, 738 syncs, 7.45 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5497 writes, 24K keys, 5497 commit groups, 1.0 writes per commit group, ingest: 18.72 MB, 0.03 MB/s#012Interval WAL: 5497 writes, 738 syncs, 7.45 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.028       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.028       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.028       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.3 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55a688dad8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.3 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55a688dad8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.3 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 
seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Jan 29 12:00:36 np0005601226 python3.9[155377]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769706035.8596578-439-247564616806170/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:00:36 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v450: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:00:36 np0005601226 python3.9[155453]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 29 12:00:36 np0005601226 systemd[1]: Reloading.
Jan 29 12:00:37 np0005601226 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 12:00:37 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 12:00:37 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:00:37 np0005601226 python3.9[155564]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 12:00:37 np0005601226 systemd[1]: Reloading.
Jan 29 12:00:37 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 12:00:37 np0005601226 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 12:00:38 np0005601226 systemd[1]: Starting ovn_metadata_agent container...
Jan 29 12:00:38 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:00:38 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d71f3da77f337fbf72f3f875e75ed38e48d99c03e4500b6a21ca35111dc60aa/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Jan 29 12:00:38 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d71f3da77f337fbf72f3f875e75ed38e48d99c03e4500b6a21ca35111dc60aa/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 29 12:00:38 np0005601226 systemd[1]: Started /usr/bin/podman healthcheck run 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205.
Jan 29 12:00:38 np0005601226 podman[155604]: 2026-01-29 17:00:38.184999418 +0000 UTC m=+0.122333389 container init 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 29 12:00:38 np0005601226 ovn_metadata_agent[155620]: + sudo -E kolla_set_configs
Jan 29 12:00:38 np0005601226 podman[155604]: 2026-01-29 17:00:38.212538897 +0000 UTC m=+0.149872888 container start 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 29 12:00:38 np0005601226 edpm-start-podman-container[155604]: ovn_metadata_agent
Jan 29 12:00:38 np0005601226 edpm-start-podman-container[155603]: Creating additional drop-in dependency for "ovn_metadata_agent" (54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205)
Jan 29 12:00:38 np0005601226 systemd[1]: Reloading.
Jan 29 12:00:38 np0005601226 podman[155627]: 2026-01-29 17:00:38.293569969 +0000 UTC m=+0.074296397 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 29 12:00:38 np0005601226 ovn_metadata_agent[155620]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 29 12:00:38 np0005601226 ovn_metadata_agent[155620]: INFO:__main__:Validating config file
Jan 29 12:00:38 np0005601226 ovn_metadata_agent[155620]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 29 12:00:38 np0005601226 ovn_metadata_agent[155620]: INFO:__main__:Copying service configuration files
Jan 29 12:00:38 np0005601226 ovn_metadata_agent[155620]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Jan 29 12:00:38 np0005601226 ovn_metadata_agent[155620]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Jan 29 12:00:38 np0005601226 ovn_metadata_agent[155620]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Jan 29 12:00:38 np0005601226 ovn_metadata_agent[155620]: INFO:__main__:Writing out command to execute
Jan 29 12:00:38 np0005601226 ovn_metadata_agent[155620]: INFO:__main__:Setting permission for /var/lib/neutron
Jan 29 12:00:38 np0005601226 ovn_metadata_agent[155620]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Jan 29 12:00:38 np0005601226 ovn_metadata_agent[155620]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Jan 29 12:00:38 np0005601226 ovn_metadata_agent[155620]: INFO:__main__:Setting permission for /var/lib/neutron/external
Jan 29 12:00:38 np0005601226 ovn_metadata_agent[155620]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Jan 29 12:00:38 np0005601226 ovn_metadata_agent[155620]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Jan 29 12:00:38 np0005601226 ovn_metadata_agent[155620]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Jan 29 12:00:38 np0005601226 ovn_metadata_agent[155620]: ++ cat /run_command
Jan 29 12:00:38 np0005601226 ovn_metadata_agent[155620]: + CMD=neutron-ovn-metadata-agent
Jan 29 12:00:38 np0005601226 ovn_metadata_agent[155620]: + ARGS=
Jan 29 12:00:38 np0005601226 ovn_metadata_agent[155620]: + sudo kolla_copy_cacerts
Jan 29 12:00:38 np0005601226 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 12:00:38 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 12:00:38 np0005601226 ovn_metadata_agent[155620]: + [[ ! -n '' ]]
Jan 29 12:00:38 np0005601226 ovn_metadata_agent[155620]: + . kolla_extend_start
Jan 29 12:00:38 np0005601226 ovn_metadata_agent[155620]: Running command: 'neutron-ovn-metadata-agent'
Jan 29 12:00:38 np0005601226 ovn_metadata_agent[155620]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Jan 29 12:00:38 np0005601226 ovn_metadata_agent[155620]: + umask 0022
Jan 29 12:00:38 np0005601226 ovn_metadata_agent[155620]: + exec neutron-ovn-metadata-agent
Jan 29 12:00:38 np0005601226 systemd[1]: Started ovn_metadata_agent container.
Jan 29 12:00:38 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v451: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:00:39 np0005601226 python3.9[155857]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 29 12:00:39 np0005601226 ceph-mgr[75527]: [devicehealth INFO root] Check health
Jan 29 12:00:40 np0005601226 python3.9[156009]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.230 155625 INFO neutron.common.config [-] Logging enabled!#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.230 155625 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.230 155625 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.230 155625 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.230 155625 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.231 155625 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.231 155625 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.231 155625 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.231 155625 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.231 155625 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.231 155625 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.231 155625 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.231 155625 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.232 155625 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.232 155625 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.232 155625 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.232 155625 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.232 155625 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.232 155625 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.232 155625 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.232 155625 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.232 155625 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.233 155625 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.233 155625 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.233 155625 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.233 155625 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.233 155625 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.233 155625 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.233 155625 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.234 155625 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.234 155625 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.234 155625 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.234 155625 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.234 155625 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.234 155625 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.234 155625 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.234 155625 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.234 155625 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.235 155625 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.235 155625 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.235 155625 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.235 155625 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.235 155625 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.235 155625 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.235 155625 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.235 155625 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.235 155625 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.235 155625 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.236 155625 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.236 155625 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.236 155625 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.236 155625 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.236 155625 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.236 155625 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.236 155625 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.236 155625 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.236 155625 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.236 155625 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.237 155625 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.237 155625 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.237 155625 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.237 155625 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.237 155625 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.237 155625 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.237 155625 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.237 155625 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.237 155625 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.238 155625 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.238 155625 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.238 155625 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.238 155625 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.238 155625 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.238 155625 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.238 155625 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.238 155625 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.238 155625 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.239 155625 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.239 155625 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.239 155625 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.239 155625 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.239 155625 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.239 155625 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.239 155625 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.239 155625 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.239 155625 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.239 155625 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.240 155625 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.240 155625 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.240 155625 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.240 155625 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.240 155625 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.240 155625 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.240 155625 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.240 155625 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.240 155625 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.240 155625 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.240 155625 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.241 155625 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.241 155625 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.241 155625 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.241 155625 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.241 155625 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.241 155625 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.241 155625 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.241 155625 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.241 155625 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.241 155625 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.242 155625 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.242 155625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.242 155625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.242 155625 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.242 155625 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.242 155625 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.242 155625 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.242 155625 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.242 155625 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.242 155625 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.243 155625 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.243 155625 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.243 155625 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.243 155625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.243 155625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.243 155625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.243 155625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.243 155625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.243 155625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.244 155625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.244 155625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.244 155625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.244 155625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.244 155625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.244 155625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.244 155625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.244 155625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.244 155625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.244 155625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.245 155625 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.245 155625 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.245 155625 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.245 155625 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.245 155625 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.245 155625 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.245 155625 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.245 155625 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.245 155625 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.246 155625 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.246 155625 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.246 155625 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.246 155625 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.246 155625 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.246 155625 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.246 155625 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.246 155625 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.246 155625 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.247 155625 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.247 155625 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.247 155625 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.247 155625 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.247 155625 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.247 155625 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.247 155625 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.247 155625 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.247 155625 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.247 155625 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.248 155625 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.248 155625 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.248 155625 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.248 155625 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.248 155625 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.248 155625 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.248 155625 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.248 155625 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.248 155625 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.248 155625 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.249 155625 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.249 155625 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.249 155625 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.249 155625 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.249 155625 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.249 155625 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.249 155625 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.249 155625 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.249 155625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.249 155625 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.250 155625 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.250 155625 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.250 155625 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.250 155625 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.250 155625 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.250 155625 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.250 155625 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.250 155625 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.250 155625 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.251 155625 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.251 155625 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.251 155625 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.251 155625 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.251 155625 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.251 155625 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.251 155625 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.251 155625 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.251 155625 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.251 155625 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.252 155625 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.252 155625 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.252 155625 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.252 155625 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.252 155625 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.252 155625 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.252 155625 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.252 155625 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.252 155625 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.252 155625 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.253 155625 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.253 155625 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.253 155625 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.253 155625 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.253 155625 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.253 155625 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.253 155625 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.253 155625 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.253 155625 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.254 155625 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.254 155625 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.254 155625 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.254 155625 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.254 155625 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.254 155625 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.254 155625 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.254 155625 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.254 155625 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.254 155625 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.255 155625 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.255 155625 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.255 155625 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.255 155625 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.255 155625 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.255 155625 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.255 155625 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.255 155625 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.255 155625 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.255 155625 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.256 155625 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.256 155625 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.256 155625 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.256 155625 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.256 155625 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.256 155625 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.256 155625 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.256 155625 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.256 155625 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.257 155625 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.257 155625 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.257 155625 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.257 155625 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.257 155625 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.257 155625 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.257 155625 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.257 155625 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.257 155625 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.257 155625 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.258 155625 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.258 155625 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.258 155625 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.258 155625 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.258 155625 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.258 155625 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.258 155625 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.258 155625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.258 155625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.259 155625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.259 155625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.259 155625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.259 155625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.259 155625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.259 155625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.259 155625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.259 155625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.259 155625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.259 155625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.260 155625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.260 155625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.260 155625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.260 155625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.260 155625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.260 155625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.260 155625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.260 155625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.260 155625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.260 155625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.261 155625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.261 155625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.261 155625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.261 155625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.261 155625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.261 155625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.261 155625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.261 155625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.261 155625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.262 155625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.262 155625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.262 155625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.262 155625 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.262 155625 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.270 155625 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.270 155625 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.270 155625 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.271 155625 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.271 155625 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.282 155625 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name ea6bcc65-2563-4fe6-9039-bca7261f4cf7 (UUID: ea6bcc65-2563-4fe6-9039-bca7261f4cf7) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.307 155625 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.307 155625 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.307 155625 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.307 155625 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.310 155625 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.316 155625 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.322 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'ea6bcc65-2563-4fe6-9039-bca7261f4cf7'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], external_ids={}, name=ea6bcc65-2563-4fe6-9039-bca7261f4cf7, nb_cfg_timestamp=1769705979820, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.322 155625 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f395517cc10>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.323 155625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.323 155625 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.323 155625 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.324 155625 INFO oslo_service.service [-] Starting 1 workers#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.328 155625 DEBUG oslo_service.service [-] Started child 156058 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.331 155625 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmp80itjftn/privsep.sock']#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.331 156058 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-167913'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.353 156058 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.354 156058 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.354 156058 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.357 156058 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.363 156058 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.369 156058 INFO eventlet.wsgi.server [-] (156058) wsgi starting up on http:/var/lib/neutron/metadata_proxy#033[00m
Jan 29 12:00:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2026-01-29_17:00:40
Jan 29 12:00:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 12:00:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] do_upmap
Jan 29 12:00:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] pools ['vms', 'images', 'default.rgw.log', 'backups', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.data', 'volumes', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.meta']
Jan 29 12:00:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 12:00:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:00:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:00:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:00:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:00:40 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v452: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:00:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:00:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:00:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 12:00:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:00:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 12:00:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:00:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:00:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:00:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:00:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:00:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:00:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:00:40 np0005601226 python3.9[156138]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769706039.79171-484-159449958817928/.source.yaml _original_basename=.z4m_5ui7 follow=False checksum=44ff46e3c7968c4d3fb12a9e2a89f37ba1a86d5e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:00:40 np0005601226 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.921 155625 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.921 155625 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp80itjftn/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.817 156164 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.825 156164 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.827 156164 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.827 156164 INFO oslo.privsep.daemon [-] privsep daemon running as pid 156164#033[00m
Jan 29 12:00:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:40.923 156164 DEBUG oslo.privsep.daemon [-] privsep: reply[205b7fae-f2a3-495e-800e-a1d56022b207]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:00:41 np0005601226 systemd-logind[823]: Session 48 logged out. Waiting for processes to exit.
Jan 29 12:00:41 np0005601226 systemd[1]: session-48.scope: Deactivated successfully.
Jan 29 12:00:41 np0005601226 systemd[1]: session-48.scope: Consumed 45.960s CPU time.
Jan 29 12:00:41 np0005601226 systemd-logind[823]: Removed session 48.
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.369 156164 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.369 156164 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.369 156164 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.864 156164 DEBUG oslo.privsep.daemon [-] privsep: reply[a95da45d-c269-4217-8855-70f9d099e7e7]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.866 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=ea6bcc65-2563-4fe6-9039-bca7261f4cf7, column=external_ids, values=({'neutron:ovn-metadata-id': '7d5642db-55b9-57cd-9d43-a5dcd73448ce'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.875 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ea6bcc65-2563-4fe6-9039-bca7261f4cf7, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.881 155625 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.881 155625 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.881 155625 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.881 155625 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.881 155625 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.882 155625 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.882 155625 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.882 155625 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.882 155625 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.882 155625 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.882 155625 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.882 155625 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.883 155625 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.883 155625 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.883 155625 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.883 155625 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.883 155625 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.883 155625 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.883 155625 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.883 155625 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.884 155625 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.884 155625 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.884 155625 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.884 155625 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.884 155625 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.884 155625 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.884 155625 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.885 155625 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.885 155625 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.885 155625 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.885 155625 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.885 155625 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.885 155625 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.885 155625 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.885 155625 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.886 155625 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.886 155625 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.886 155625 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.886 155625 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.886 155625 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.886 155625 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.886 155625 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.887 155625 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.887 155625 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.887 155625 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.887 155625 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.887 155625 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.887 155625 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.887 155625 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.887 155625 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.887 155625 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.888 155625 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.888 155625 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.888 155625 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.888 155625 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.888 155625 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.888 155625 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.888 155625 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.888 155625 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.889 155625 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.889 155625 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.889 155625 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.889 155625 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.889 155625 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.889 155625 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.889 155625 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.889 155625 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.890 155625 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.890 155625 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.890 155625 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.890 155625 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.890 155625 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.890 155625 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.890 155625 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.890 155625 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.891 155625 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.891 155625 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.891 155625 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.891 155625 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.891 155625 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.891 155625 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.891 155625 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.891 155625 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.892 155625 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.892 155625 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.892 155625 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.892 155625 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.892 155625 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.892 155625 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.892 155625 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.892 155625 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.892 155625 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.893 155625 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.893 155625 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.893 155625 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.893 155625 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.893 155625 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.893 155625 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.893 155625 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.893 155625 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.894 155625 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.894 155625 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.894 155625 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.894 155625 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.894 155625 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.894 155625 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.894 155625 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.894 155625 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.895 155625 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.895 155625 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.895 155625 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.895 155625 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.895 155625 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.895 155625 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.895 155625 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.895 155625 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.896 155625 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.896 155625 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.896 155625 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.896 155625 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.896 155625 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.896 155625 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.896 155625 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.897 155625 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.897 155625 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.897 155625 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.897 155625 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.897 155625 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.897 155625 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.897 155625 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.897 155625 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.897 155625 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.898 155625 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.898 155625 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.898 155625 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.898 155625 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.898 155625 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.898 155625 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.898 155625 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.898 155625 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.899 155625 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.899 155625 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.899 155625 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.899 155625 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.899 155625 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.899 155625 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.899 155625 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.899 155625 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.899 155625 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.899 155625 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.900 155625 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.900 155625 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.900 155625 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.900 155625 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.900 155625 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.900 155625 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.900 155625 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.900 155625 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.900 155625 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.900 155625 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.900 155625 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.901 155625 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.901 155625 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.901 155625 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.901 155625 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.901 155625 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.901 155625 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.901 155625 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.901 155625 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.901 155625 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.901 155625 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.902 155625 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.902 155625 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.902 155625 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.902 155625 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.902 155625 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.902 155625 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.902 155625 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.902 155625 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.903 155625 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.903 155625 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.903 155625 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.903 155625 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.903 155625 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.903 155625 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.903 155625 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.903 155625 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.904 155625 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.904 155625 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.904 155625 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.904 155625 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.904 155625 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.904 155625 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.904 155625 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.904 155625 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.905 155625 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.905 155625 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.905 155625 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.905 155625 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.905 155625 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.905 155625 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.905 155625 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.905 155625 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.905 155625 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.906 155625 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.906 155625 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.906 155625 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.906 155625 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.906 155625 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.906 155625 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.906 155625 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.906 155625 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.907 155625 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.907 155625 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.907 155625 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.907 155625 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.907 155625 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.907 155625 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.907 155625 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.907 155625 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.907 155625 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.908 155625 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.908 155625 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.908 155625 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.908 155625 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.908 155625 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.908 155625 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.908 155625 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.908 155625 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.908 155625 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.909 155625 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.909 155625 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.909 155625 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.909 155625 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.909 155625 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.909 155625 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.909 155625 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.909 155625 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.910 155625 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.910 155625 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.910 155625 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.910 155625 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.910 155625 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.910 155625 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.910 155625 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.910 155625 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.910 155625 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.910 155625 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.911 155625 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.911 155625 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.911 155625 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.911 155625 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.911 155625 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.911 155625 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.911 155625 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.911 155625 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.911 155625 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.911 155625 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.912 155625 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.912 155625 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.912 155625 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.912 155625 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.912 155625 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.912 155625 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.912 155625 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.912 155625 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.912 155625 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.912 155625 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.913 155625 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.913 155625 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.913 155625 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.913 155625 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.913 155625 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.913 155625 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.913 155625 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.913 155625 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.913 155625 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.913 155625 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.914 155625 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.914 155625 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.914 155625 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.914 155625 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.914 155625 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.914 155625 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.914 155625 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.914 155625 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.914 155625 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.915 155625 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.915 155625 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.915 155625 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.915 155625 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.915 155625 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.915 155625 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.915 155625 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.915 155625 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.916 155625 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.916 155625 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.916 155625 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.916 155625 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.916 155625 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.916 155625 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.916 155625 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.916 155625 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:00:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:00:41.917 155625 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Jan 29 12:00:42 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:00:42 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v453: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:00:44 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v454: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:00:46 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v455: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:00:46 np0005601226 systemd-logind[823]: New session 49 of user zuul.
Jan 29 12:00:46 np0005601226 systemd[1]: Started Session 49 of User zuul.
Jan 29 12:00:47 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:00:47 np0005601226 python3.9[156322]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 12:00:48 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v456: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:00:48 np0005601226 python3.9[156478]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 12:00:49 np0005601226 python3.9[156640]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 29 12:00:49 np0005601226 systemd[1]: Reloading.
Jan 29 12:00:49 np0005601226 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 12:00:49 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 12:00:50 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v457: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:00:50 np0005601226 python3.9[156824]: ansible-ansible.builtin.service_facts Invoked
Jan 29 12:00:50 np0005601226 network[156841]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 29 12:00:50 np0005601226 network[156842]: 'network-scripts' will be removed from distribution in near future.
Jan 29 12:00:50 np0005601226 network[156843]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 29 12:00:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 12:00:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:00:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 12:00:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:00:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:00:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:00:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:00:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:00:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:00:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:00:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:00:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:00:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.8121643970586627e-06 of space, bias 4.0, pg target 0.002174597276470395 quantized to 16 (current 16)
Jan 29 12:00:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:00:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:00:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:00:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 29 12:00:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:00:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 29 12:00:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:00:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:00:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:00:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 29 12:00:52 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:00:52 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v458: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:00:53 np0005601226 python3.9[157105]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 12:00:54 np0005601226 python3.9[157258]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 12:00:54 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v459: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:00:54 np0005601226 python3.9[157411]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 12:00:56 np0005601226 python3.9[157565]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 12:00:56 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v460: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:00:56 np0005601226 python3.9[157718]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 12:00:57 np0005601226 python3.9[157871]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 12:00:57 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:00:58 np0005601226 python3.9[158024]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 12:00:58 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v461: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:00:59 np0005601226 python3.9[158177]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:01:00 np0005601226 python3.9[158329]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:01:00 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v462: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:01:00 np0005601226 python3.9[158481]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:01:01 np0005601226 python3.9[158633]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:01:02 np0005601226 python3.9[158800]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:01:02 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v463: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:01:02 np0005601226 python3.9[158952]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:01:02 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:01:03 np0005601226 python3.9[159104]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:01:03 np0005601226 podman[159228]: 2026-01-29 17:01:03.636823314 +0000 UTC m=+0.099189903 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 29 12:01:03 np0005601226 python3.9[159269]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:01:04 np0005601226 python3.9[159434]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:01:04 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v464: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:01:04 np0005601226 python3.9[159634]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:01:05 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 29 12:01:05 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 29 12:01:05 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 12:01:05 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 12:01:05 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 12:01:05 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 12:01:05 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 12:01:05 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:01:05 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 12:01:05 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 12:01:05 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 12:01:05 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 12:01:05 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 12:01:05 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 12:01:05 np0005601226 python3.9[159850]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:01:05 np0005601226 podman[159894]: 2026-01-29 17:01:05.451437524 +0000 UTC m=+0.040306081 container create ff9912dec9f2fdb4ae2cf9a67fc7b4ba6255ccb00c01fd2a3a9f77e36c9d1766 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:01:05 np0005601226 systemd[1]: Started libpod-conmon-ff9912dec9f2fdb4ae2cf9a67fc7b4ba6255ccb00c01fd2a3a9f77e36c9d1766.scope.
Jan 29 12:01:05 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:01:05 np0005601226 podman[159894]: 2026-01-29 17:01:05.515846937 +0000 UTC m=+0.104715514 container init ff9912dec9f2fdb4ae2cf9a67fc7b4ba6255ccb00c01fd2a3a9f77e36c9d1766 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_nash, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 29 12:01:05 np0005601226 podman[159894]: 2026-01-29 17:01:05.521921195 +0000 UTC m=+0.110789742 container start ff9912dec9f2fdb4ae2cf9a67fc7b4ba6255ccb00c01fd2a3a9f77e36c9d1766 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_nash, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 29 12:01:05 np0005601226 podman[159894]: 2026-01-29 17:01:05.525486273 +0000 UTC m=+0.114354820 container attach ff9912dec9f2fdb4ae2cf9a67fc7b4ba6255ccb00c01fd2a3a9f77e36c9d1766 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_nash, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 29 12:01:05 np0005601226 quizzical_nash[159953]: 167 167
Jan 29 12:01:05 np0005601226 systemd[1]: libpod-ff9912dec9f2fdb4ae2cf9a67fc7b4ba6255ccb00c01fd2a3a9f77e36c9d1766.scope: Deactivated successfully.
Jan 29 12:01:05 np0005601226 conmon[159953]: conmon ff9912dec9f2fdb4ae2c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ff9912dec9f2fdb4ae2cf9a67fc7b4ba6255ccb00c01fd2a3a9f77e36c9d1766.scope/container/memory.events
Jan 29 12:01:05 np0005601226 podman[159894]: 2026-01-29 17:01:05.526954043 +0000 UTC m=+0.115822610 container died ff9912dec9f2fdb4ae2cf9a67fc7b4ba6255ccb00c01fd2a3a9f77e36c9d1766 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True)
Jan 29 12:01:05 np0005601226 podman[159894]: 2026-01-29 17:01:05.432131053 +0000 UTC m=+0.020999640 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:01:05 np0005601226 systemd[1]: var-lib-containers-storage-overlay-9820c808bda037143599cf867656fcee864d53809f8bc4c92e8ed33dedf85c7e-merged.mount: Deactivated successfully.
Jan 29 12:01:05 np0005601226 podman[159894]: 2026-01-29 17:01:05.571429658 +0000 UTC m=+0.160298215 container remove ff9912dec9f2fdb4ae2cf9a67fc7b4ba6255ccb00c01fd2a3a9f77e36c9d1766 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_nash, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 29 12:01:05 np0005601226 systemd[1]: libpod-conmon-ff9912dec9f2fdb4ae2cf9a67fc7b4ba6255ccb00c01fd2a3a9f77e36c9d1766.scope: Deactivated successfully.
Jan 29 12:01:05 np0005601226 podman[160055]: 2026-01-29 17:01:05.67971004 +0000 UTC m=+0.035906879 container create a9f432ec72b34b749ed65390be2a5c0016151f59a54c0532148c2877c3677350 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_sinoussi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 29 12:01:05 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 29 12:01:05 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 12:01:05 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:01:05 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 12:01:05 np0005601226 systemd[1]: Started libpod-conmon-a9f432ec72b34b749ed65390be2a5c0016151f59a54c0532148c2877c3677350.scope.
Jan 29 12:01:05 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:01:05 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0e7f9e6255560b63b8f53c6a15bc7a28d2c84aa5b60af145e42182534a607f8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:01:05 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0e7f9e6255560b63b8f53c6a15bc7a28d2c84aa5b60af145e42182534a607f8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:01:05 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0e7f9e6255560b63b8f53c6a15bc7a28d2c84aa5b60af145e42182534a607f8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:01:05 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0e7f9e6255560b63b8f53c6a15bc7a28d2c84aa5b60af145e42182534a607f8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:01:05 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0e7f9e6255560b63b8f53c6a15bc7a28d2c84aa5b60af145e42182534a607f8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 12:01:05 np0005601226 podman[160055]: 2026-01-29 17:01:05.663266008 +0000 UTC m=+0.019462857 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:01:05 np0005601226 podman[160055]: 2026-01-29 17:01:05.760482994 +0000 UTC m=+0.116679853 container init a9f432ec72b34b749ed65390be2a5c0016151f59a54c0532148c2877c3677350 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 29 12:01:05 np0005601226 podman[160055]: 2026-01-29 17:01:05.765336747 +0000 UTC m=+0.121533586 container start a9f432ec72b34b749ed65390be2a5c0016151f59a54c0532148c2877c3677350 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_sinoussi, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 29 12:01:05 np0005601226 podman[160055]: 2026-01-29 17:01:05.768004421 +0000 UTC m=+0.124201260 container attach a9f432ec72b34b749ed65390be2a5c0016151f59a54c0532148c2877c3677350 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_sinoussi, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 29 12:01:05 np0005601226 python3.9[160086]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:01:06 np0005601226 happy_sinoussi[160089]: --> passed data devices: 0 physical, 3 LVM
Jan 29 12:01:06 np0005601226 happy_sinoussi[160089]: --> All data devices are unavailable
Jan 29 12:01:06 np0005601226 systemd[1]: libpod-a9f432ec72b34b749ed65390be2a5c0016151f59a54c0532148c2877c3677350.scope: Deactivated successfully.
Jan 29 12:01:06 np0005601226 podman[160055]: 2026-01-29 17:01:06.169379384 +0000 UTC m=+0.525576233 container died a9f432ec72b34b749ed65390be2a5c0016151f59a54c0532148c2877c3677350 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_sinoussi, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 29 12:01:06 np0005601226 systemd[1]: var-lib-containers-storage-overlay-b0e7f9e6255560b63b8f53c6a15bc7a28d2c84aa5b60af145e42182534a607f8-merged.mount: Deactivated successfully.
Jan 29 12:01:06 np0005601226 podman[160055]: 2026-01-29 17:01:06.20555027 +0000 UTC m=+0.561747109 container remove a9f432ec72b34b749ed65390be2a5c0016151f59a54c0532148c2877c3677350 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_sinoussi, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 29 12:01:06 np0005601226 systemd[1]: libpod-conmon-a9f432ec72b34b749ed65390be2a5c0016151f59a54c0532148c2877c3677350.scope: Deactivated successfully.
Jan 29 12:01:06 np0005601226 python3.9[160274]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:01:06 np0005601226 podman[160383]: 2026-01-29 17:01:06.539967549 +0000 UTC m=+0.032887017 container create 0c33698d2ccdc2781ae91389543c134b9319b1f3c2daa91dc5a893845a3625c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_hopper, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:01:06 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v465: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:01:06 np0005601226 systemd[1]: Started libpod-conmon-0c33698d2ccdc2781ae91389543c134b9319b1f3c2daa91dc5a893845a3625c0.scope.
Jan 29 12:01:06 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:01:06 np0005601226 podman[160383]: 2026-01-29 17:01:06.526158809 +0000 UTC m=+0.019078277 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:01:06 np0005601226 podman[160383]: 2026-01-29 17:01:06.67400146 +0000 UTC m=+0.166920928 container init 0c33698d2ccdc2781ae91389543c134b9319b1f3c2daa91dc5a893845a3625c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_hopper, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:01:06 np0005601226 podman[160383]: 2026-01-29 17:01:06.682604927 +0000 UTC m=+0.175524395 container start 0c33698d2ccdc2781ae91389543c134b9319b1f3c2daa91dc5a893845a3625c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_hopper, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 29 12:01:06 np0005601226 exciting_hopper[160448]: 167 167
Jan 29 12:01:06 np0005601226 systemd[1]: libpod-0c33698d2ccdc2781ae91389543c134b9319b1f3c2daa91dc5a893845a3625c0.scope: Deactivated successfully.
Jan 29 12:01:06 np0005601226 podman[160383]: 2026-01-29 17:01:06.759832764 +0000 UTC m=+0.252752282 container attach 0c33698d2ccdc2781ae91389543c134b9319b1f3c2daa91dc5a893845a3625c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:01:06 np0005601226 podman[160383]: 2026-01-29 17:01:06.762231809 +0000 UTC m=+0.255151307 container died 0c33698d2ccdc2781ae91389543c134b9319b1f3c2daa91dc5a893845a3625c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 29 12:01:06 np0005601226 python3.9[160509]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:01:06 np0005601226 systemd[1]: var-lib-containers-storage-overlay-c91207fb80a6518b237c469cb1235a8ca16786f395859462992b2ee107456e21-merged.mount: Deactivated successfully.
Jan 29 12:01:07 np0005601226 podman[160383]: 2026-01-29 17:01:07.004764569 +0000 UTC m=+0.497684037 container remove 0c33698d2ccdc2781ae91389543c134b9319b1f3c2daa91dc5a893845a3625c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_hopper, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 29 12:01:07 np0005601226 systemd[1]: libpod-conmon-0c33698d2ccdc2781ae91389543c134b9319b1f3c2daa91dc5a893845a3625c0.scope: Deactivated successfully.
Jan 29 12:01:07 np0005601226 podman[160552]: 2026-01-29 17:01:07.100252828 +0000 UTC m=+0.019793826 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:01:07 np0005601226 podman[160552]: 2026-01-29 17:01:07.230569597 +0000 UTC m=+0.150110565 container create f2b69a783d2cde726c9ea0edef558219a8cdcca17c369ad38e3588b6d4eb4eb6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_roentgen, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 29 12:01:07 np0005601226 systemd[1]: Started libpod-conmon-f2b69a783d2cde726c9ea0edef558219a8cdcca17c369ad38e3588b6d4eb4eb6.scope.
Jan 29 12:01:07 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:01:07 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b382094f8093dcec26e03a1c7007bbf7521abf93726ca9144e75e58c2e5fd2d2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:01:07 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b382094f8093dcec26e03a1c7007bbf7521abf93726ca9144e75e58c2e5fd2d2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:01:07 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b382094f8093dcec26e03a1c7007bbf7521abf93726ca9144e75e58c2e5fd2d2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:01:07 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b382094f8093dcec26e03a1c7007bbf7521abf93726ca9144e75e58c2e5fd2d2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:01:07 np0005601226 podman[160552]: 2026-01-29 17:01:07.549464068 +0000 UTC m=+0.469005136 container init f2b69a783d2cde726c9ea0edef558219a8cdcca17c369ad38e3588b6d4eb4eb6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_roentgen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:01:07 np0005601226 python3.9[160699]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 12:01:07 np0005601226 podman[160552]: 2026-01-29 17:01:07.558177068 +0000 UTC m=+0.477718036 container start f2b69a783d2cde726c9ea0edef558219a8cdcca17c369ad38e3588b6d4eb4eb6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_roentgen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 12:01:07 np0005601226 podman[160552]: 2026-01-29 17:01:07.601138941 +0000 UTC m=+0.520679939 container attach f2b69a783d2cde726c9ea0edef558219a8cdcca17c369ad38e3588b6d4eb4eb6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_roentgen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]: {
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:    "0": [
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:        {
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:            "devices": [
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:                "/dev/loop3"
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:            ],
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:            "lv_name": "ceph_lv0",
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:            "lv_size": "21470642176",
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:            "lv_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:            "name": "ceph_lv0",
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:            "tags": {
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:                "ceph.block_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:                "ceph.cluster_name": "ceph",
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:                "ceph.crush_device_class": "",
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:                "ceph.encrypted": "0",
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:                "ceph.objectstore": "bluestore",
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:                "ceph.osd_fsid": "0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1",
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:                "ceph.osd_id": "0",
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:                "ceph.type": "block",
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:                "ceph.vdo": "0",
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:                "ceph.with_tpm": "0"
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:            },
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:            "type": "block",
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:            "vg_name": "ceph_vg0"
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:        }
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:    ],
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:    "1": [
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:        {
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:            "devices": [
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:                "/dev/loop4"
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:            ],
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:            "lv_name": "ceph_lv1",
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:            "lv_size": "21470642176",
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b59b9ee3-7bef-4274-a8bf-0f9cce011ae7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:            "lv_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:            "name": "ceph_lv1",
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:            "tags": {
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:                "ceph.block_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:                "ceph.cluster_name": "ceph",
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:                "ceph.crush_device_class": "",
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:                "ceph.encrypted": "0",
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:                "ceph.objectstore": "bluestore",
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:                "ceph.osd_fsid": "b59b9ee3-7bef-4274-a8bf-0f9cce011ae7",
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:                "ceph.osd_id": "1",
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:                "ceph.type": "block",
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:                "ceph.vdo": "0",
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:                "ceph.with_tpm": "0"
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:            },
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:            "type": "block",
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:            "vg_name": "ceph_vg1"
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:        }
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:    ],
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:    "2": [
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:        {
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:            "devices": [
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:                "/dev/loop5"
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:            ],
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:            "lv_name": "ceph_lv2",
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:            "lv_size": "21470642176",
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=791d3808-828d-4f85-a3de-28df49f6a6ef,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:            "lv_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:            "name": "ceph_lv2",
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:            "tags": {
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:                "ceph.block_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:                "ceph.cluster_name": "ceph",
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:                "ceph.crush_device_class": "",
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:                "ceph.encrypted": "0",
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:                "ceph.objectstore": "bluestore",
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:                "ceph.osd_fsid": "791d3808-828d-4f85-a3de-28df49f6a6ef",
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:                "ceph.osd_id": "2",
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:                "ceph.type": "block",
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:                "ceph.vdo": "0",
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:                "ceph.with_tpm": "0"
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:            },
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:            "type": "block",
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:            "vg_name": "ceph_vg2"
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:        }
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]:    ]
Jan 29 12:01:07 np0005601226 zen_roentgen[160694]: }
Jan 29 12:01:07 np0005601226 systemd[1]: libpod-f2b69a783d2cde726c9ea0edef558219a8cdcca17c369ad38e3588b6d4eb4eb6.scope: Deactivated successfully.
Jan 29 12:01:07 np0005601226 podman[160552]: 2026-01-29 17:01:07.863821404 +0000 UTC m=+0.783362392 container died f2b69a783d2cde726c9ea0edef558219a8cdcca17c369ad38e3588b6d4eb4eb6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_roentgen, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 29 12:01:07 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:01:07 np0005601226 systemd[1]: var-lib-containers-storage-overlay-b382094f8093dcec26e03a1c7007bbf7521abf93726ca9144e75e58c2e5fd2d2-merged.mount: Deactivated successfully.
Jan 29 12:01:07 np0005601226 podman[160552]: 2026-01-29 17:01:07.955175671 +0000 UTC m=+0.874716639 container remove f2b69a783d2cde726c9ea0edef558219a8cdcca17c369ad38e3588b6d4eb4eb6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_roentgen, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 29 12:01:07 np0005601226 systemd[1]: libpod-conmon-f2b69a783d2cde726c9ea0edef558219a8cdcca17c369ad38e3588b6d4eb4eb6.scope: Deactivated successfully.
Jan 29 12:01:08 np0005601226 python3.9[160921]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 29 12:01:08 np0005601226 podman[160934]: 2026-01-29 17:01:08.301869137 +0000 UTC m=+0.019016284 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:01:08 np0005601226 podman[160934]: 2026-01-29 17:01:08.471111798 +0000 UTC m=+0.188258935 container create ac36bbbba00b4f2dec6d4d325a9f320ffe22782d89a56f9d1660e3b67a9c6c85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_goodall, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 29 12:01:08 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v466: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:01:08 np0005601226 systemd[1]: Started libpod-conmon-ac36bbbba00b4f2dec6d4d325a9f320ffe22782d89a56f9d1660e3b67a9c6c85.scope.
Jan 29 12:01:08 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:01:08 np0005601226 podman[160934]: 2026-01-29 17:01:08.7006799 +0000 UTC m=+0.417827047 container init ac36bbbba00b4f2dec6d4d325a9f320ffe22782d89a56f9d1660e3b67a9c6c85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_goodall, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:01:08 np0005601226 podman[160934]: 2026-01-29 17:01:08.705854202 +0000 UTC m=+0.423001339 container start ac36bbbba00b4f2dec6d4d325a9f320ffe22782d89a56f9d1660e3b67a9c6c85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_goodall, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3)
Jan 29 12:01:08 np0005601226 modest_goodall[161049]: 167 167
Jan 29 12:01:08 np0005601226 systemd[1]: libpod-ac36bbbba00b4f2dec6d4d325a9f320ffe22782d89a56f9d1660e3b67a9c6c85.scope: Deactivated successfully.
Jan 29 12:01:08 np0005601226 podman[160934]: 2026-01-29 17:01:08.795049538 +0000 UTC m=+0.512196675 container attach ac36bbbba00b4f2dec6d4d325a9f320ffe22782d89a56f9d1660e3b67a9c6c85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_goodall, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:01:08 np0005601226 podman[160934]: 2026-01-29 17:01:08.795441939 +0000 UTC m=+0.512589076 container died ac36bbbba00b4f2dec6d4d325a9f320ffe22782d89a56f9d1660e3b67a9c6c85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:01:08 np0005601226 systemd[1]: var-lib-containers-storage-overlay-b73a8fc5152e9832c9924c8251cdfea4dc5ac9832781ba5f8b1d1f4d188f6ee1-merged.mount: Deactivated successfully.
Jan 29 12:01:08 np0005601226 podman[160934]: 2026-01-29 17:01:08.935925348 +0000 UTC m=+0.653072485 container remove ac36bbbba00b4f2dec6d4d325a9f320ffe22782d89a56f9d1660e3b67a9c6c85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_goodall, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:01:08 np0005601226 systemd[1]: libpod-conmon-ac36bbbba00b4f2dec6d4d325a9f320ffe22782d89a56f9d1660e3b67a9c6c85.scope: Deactivated successfully.
Jan 29 12:01:09 np0005601226 podman[161051]: 2026-01-29 17:01:09.028861136 +0000 UTC m=+0.386310509 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 29 12:01:09 np0005601226 podman[161138]: 2026-01-29 17:01:09.067028848 +0000 UTC m=+0.049938446 container create 35e13ccf0b9f20ec9c8a7ae19606358079fb164c454e01185fee4e6e262dcb2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_lovelace, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 29 12:01:09 np0005601226 python3.9[161127]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 29 12:01:09 np0005601226 systemd[1]: Started libpod-conmon-35e13ccf0b9f20ec9c8a7ae19606358079fb164c454e01185fee4e6e262dcb2e.scope.
Jan 29 12:01:09 np0005601226 systemd[1]: Reloading.
Jan 29 12:01:09 np0005601226 podman[161138]: 2026-01-29 17:01:09.036930169 +0000 UTC m=+0.019839787 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:01:09 np0005601226 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 12:01:09 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 12:01:09 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:01:09 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f30c7b2cb7bc23a394619cc226e3b205163f94f548e0f64150a45cef221ecd2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:01:09 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f30c7b2cb7bc23a394619cc226e3b205163f94f548e0f64150a45cef221ecd2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:01:09 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f30c7b2cb7bc23a394619cc226e3b205163f94f548e0f64150a45cef221ecd2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:01:09 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f30c7b2cb7bc23a394619cc226e3b205163f94f548e0f64150a45cef221ecd2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:01:09 np0005601226 podman[161138]: 2026-01-29 17:01:09.412263375 +0000 UTC m=+0.395173003 container init 35e13ccf0b9f20ec9c8a7ae19606358079fb164c454e01185fee4e6e262dcb2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:01:09 np0005601226 podman[161138]: 2026-01-29 17:01:09.418799225 +0000 UTC m=+0.401708823 container start 35e13ccf0b9f20ec9c8a7ae19606358079fb164c454e01185fee4e6e262dcb2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 29 12:01:09 np0005601226 podman[161138]: 2026-01-29 17:01:09.4218925 +0000 UTC m=+0.404802098 container attach 35e13ccf0b9f20ec9c8a7ae19606358079fb164c454e01185fee4e6e262dcb2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_lovelace, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 29 12:01:09 np0005601226 python3.9[161374]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 12:01:09 np0005601226 lvm[161431]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 12:01:09 np0005601226 lvm[161431]: VG ceph_vg0 finished
Jan 29 12:01:09 np0005601226 lvm[161442]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 12:01:09 np0005601226 lvm[161442]: VG ceph_vg1 finished
Jan 29 12:01:10 np0005601226 lvm[161455]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 12:01:10 np0005601226 lvm[161455]: VG ceph_vg2 finished
Jan 29 12:01:10 np0005601226 lucid_lovelace[161163]: {}
Jan 29 12:01:10 np0005601226 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 29 12:01:10 np0005601226 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 29 12:01:10 np0005601226 systemd[1]: libpod-35e13ccf0b9f20ec9c8a7ae19606358079fb164c454e01185fee4e6e262dcb2e.scope: Deactivated successfully.
Jan 29 12:01:10 np0005601226 podman[161138]: 2026-01-29 17:01:10.170258008 +0000 UTC m=+1.153167606 container died 35e13ccf0b9f20ec9c8a7ae19606358079fb164c454e01185fee4e6e262dcb2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 29 12:01:10 np0005601226 systemd[1]: var-lib-containers-storage-overlay-3f30c7b2cb7bc23a394619cc226e3b205163f94f548e0f64150a45cef221ecd2-merged.mount: Deactivated successfully.
Jan 29 12:01:10 np0005601226 podman[161138]: 2026-01-29 17:01:10.213596781 +0000 UTC m=+1.196506379 container remove 35e13ccf0b9f20ec9c8a7ae19606358079fb164c454e01185fee4e6e262dcb2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_lovelace, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 29 12:01:10 np0005601226 systemd[1]: libpod-conmon-35e13ccf0b9f20ec9c8a7ae19606358079fb164c454e01185fee4e6e262dcb2e.scope: Deactivated successfully.
Jan 29 12:01:10 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 12:01:10 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:01:10 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 12:01:10 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:01:10 np0005601226 python3.9[161620]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 12:01:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:01:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:01:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:01:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:01:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:01:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:01:10 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v467: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:01:11 np0005601226 python3.9[161775]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 12:01:11 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:01:11 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:01:11 np0005601226 python3.9[161928]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 12:01:12 np0005601226 python3.9[162081]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 12:01:12 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v468: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:01:12 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:01:13 np0005601226 python3.9[162234]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 12:01:13 np0005601226 python3.9[162387]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 12:01:14 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v469: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:01:14 np0005601226 python3.9[162540]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Jan 29 12:01:15 np0005601226 python3.9[162693]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 29 12:01:16 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v470: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:01:16 np0005601226 python3.9[162851]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 29 12:01:17 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:01:18 np0005601226 python3.9[163011]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 29 12:01:18 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v471: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:01:18 np0005601226 python3.9[163095]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 29 12:01:20 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v472: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:01:22 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v473: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:01:22 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:01:24 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v474: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:01:26 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v475: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:01:27 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:01:28 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v476: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 13 op/s
Jan 29 12:01:30 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v477: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 0 B/s wr, 46 op/s
Jan 29 12:01:32 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v478: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 0 B/s wr, 46 op/s
Jan 29 12:01:32 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:01:33 np0005601226 podman[163108]: 2026-01-29 17:01:33.920987071 +0000 UTC m=+0.082828667 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 29 12:01:34 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v479: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 29 12:01:36 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v480: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 29 12:01:37 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:01:38 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v481: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 29 12:01:39 np0005601226 podman[163287]: 2026-01-29 17:01:39.878348821 +0000 UTC m=+0.054354544 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 29 12:01:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:01:40.263 155625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:01:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:01:40.264 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:01:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:01:40.264 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:01:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2026-01-29_17:01:40
Jan 29 12:01:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 12:01:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] do_upmap
Jan 29 12:01:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.meta', '.rgw.root', 'default.rgw.control', 'vms', 'cephfs.cephfs.meta', 'backups', 'volumes', 'cephfs.cephfs.data', 'images', '.mgr']
Jan 29 12:01:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 12:01:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:01:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:01:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:01:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:01:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:01:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:01:40 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v482: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 0 B/s wr, 46 op/s
Jan 29 12:01:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 12:01:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 12:01:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:01:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:01:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:01:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:01:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:01:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:01:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:01:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:01:42 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v483: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 13 op/s
Jan 29 12:01:42 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:01:44 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v484: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 13 op/s
Jan 29 12:01:46 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v485: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:01:47 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:01:48 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v486: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:01:50 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v487: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:01:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 12:01:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:01:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 12:01:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:01:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:01:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:01:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:01:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:01:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:01:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:01:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:01:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:01:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.8121643970586627e-06 of space, bias 4.0, pg target 0.002174597276470395 quantized to 16 (current 16)
Jan 29 12:01:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:01:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:01:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:01:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 29 12:01:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:01:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 29 12:01:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:01:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:01:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:01:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 29 12:01:52 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v488: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:01:52 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:01:54 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v489: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:01:56 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v490: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:01:57 np0005601226 kernel: SELinux:  Converting 2777 SID table entries...
Jan 29 12:01:57 np0005601226 kernel: SELinux:  policy capability network_peer_controls=1
Jan 29 12:01:57 np0005601226 kernel: SELinux:  policy capability open_perms=1
Jan 29 12:01:57 np0005601226 kernel: SELinux:  policy capability extended_socket_class=1
Jan 29 12:01:57 np0005601226 kernel: SELinux:  policy capability always_check_network=0
Jan 29 12:01:57 np0005601226 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 29 12:01:57 np0005601226 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 29 12:01:57 np0005601226 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 29 12:01:57 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:01:58 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v491: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:02:00 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v492: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:02:02 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v493: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:02:02 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:02:04 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v494: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:02:04 np0005601226 dbus-broker-launch[814]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Jan 29 12:02:04 np0005601226 podman[163342]: 2026-01-29 17:02:04.915806438 +0000 UTC m=+0.077890301 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 29 12:02:06 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v495: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:02:06 np0005601226 kernel: SELinux:  Converting 2777 SID table entries...
Jan 29 12:02:06 np0005601226 kernel: SELinux:  policy capability network_peer_controls=1
Jan 29 12:02:06 np0005601226 kernel: SELinux:  policy capability open_perms=1
Jan 29 12:02:06 np0005601226 kernel: SELinux:  policy capability extended_socket_class=1
Jan 29 12:02:06 np0005601226 kernel: SELinux:  policy capability always_check_network=0
Jan 29 12:02:06 np0005601226 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 29 12:02:06 np0005601226 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 29 12:02:06 np0005601226 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 29 12:02:07 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:02:08 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v496: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:02:10 np0005601226 dbus-broker-launch[814]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Jan 29 12:02:10 np0005601226 podman[163399]: 2026-01-29 17:02:10.465053207 +0000 UTC m=+0.055493985 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 29 12:02:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:02:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:02:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:02:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:02:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:02:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:02:10 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v497: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:02:10 np0005601226 podman[163490]: 2026-01-29 17:02:10.804978712 +0000 UTC m=+0.050086009 container exec 79fb58d438a044b744a5a22031968fb38c458a504a88d1d9ce361ec7f4a99a0b (image=quay.io/ceph/ceph:v20, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 29 12:02:10 np0005601226 podman[163490]: 2026-01-29 17:02:10.916507684 +0000 UTC m=+0.161614961 container exec_died 79fb58d438a044b744a5a22031968fb38c458a504a88d1d9ce361ec7f4a99a0b (image=quay.io/ceph/ceph:v20, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 29 12:02:11 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 12:02:11 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:02:11 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 12:02:11 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:02:11 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:02:11 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:02:11 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 12:02:11 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 12:02:11 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 12:02:11 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 12:02:11 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 12:02:12 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:02:12 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 12:02:12 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 12:02:12 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 12:02:12 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 12:02:12 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 12:02:12 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 12:02:12 np0005601226 podman[163820]: 2026-01-29 17:02:12.33374214 +0000 UTC m=+0.048835494 container create 2c1682281047caee6e17c3f7c5471a537a023779833ddbfd2f99e116e00d717a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:02:12 np0005601226 systemd[1]: Started libpod-conmon-2c1682281047caee6e17c3f7c5471a537a023779833ddbfd2f99e116e00d717a.scope.
Jan 29 12:02:12 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:02:12 np0005601226 podman[163820]: 2026-01-29 17:02:12.306521912 +0000 UTC m=+0.021615316 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:02:12 np0005601226 podman[163820]: 2026-01-29 17:02:12.403159392 +0000 UTC m=+0.118252836 container init 2c1682281047caee6e17c3f7c5471a537a023779833ddbfd2f99e116e00d717a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_jennings, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 29 12:02:12 np0005601226 podman[163820]: 2026-01-29 17:02:12.411859447 +0000 UTC m=+0.126952841 container start 2c1682281047caee6e17c3f7c5471a537a023779833ddbfd2f99e116e00d717a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:02:12 np0005601226 podman[163820]: 2026-01-29 17:02:12.416632957 +0000 UTC m=+0.131726391 container attach 2c1682281047caee6e17c3f7c5471a537a023779833ddbfd2f99e116e00d717a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_jennings, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:02:12 np0005601226 nice_jennings[163837]: 167 167
Jan 29 12:02:12 np0005601226 systemd[1]: libpod-2c1682281047caee6e17c3f7c5471a537a023779833ddbfd2f99e116e00d717a.scope: Deactivated successfully.
Jan 29 12:02:12 np0005601226 conmon[163837]: conmon 2c1682281047caee6e17 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2c1682281047caee6e17c3f7c5471a537a023779833ddbfd2f99e116e00d717a.scope/container/memory.events
Jan 29 12:02:12 np0005601226 podman[163820]: 2026-01-29 17:02:12.419012211 +0000 UTC m=+0.134105625 container died 2c1682281047caee6e17c3f7c5471a537a023779833ddbfd2f99e116e00d717a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_jennings, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:02:12 np0005601226 systemd[1]: var-lib-containers-storage-overlay-b5479339715d19f0c1e0b9de47fc6646231fa54fcd94962891324c45dd6a14be-merged.mount: Deactivated successfully.
Jan 29 12:02:12 np0005601226 podman[163820]: 2026-01-29 17:02:12.466704984 +0000 UTC m=+0.181798348 container remove 2c1682281047caee6e17c3f7c5471a537a023779833ddbfd2f99e116e00d717a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:02:12 np0005601226 systemd[1]: libpod-conmon-2c1682281047caee6e17c3f7c5471a537a023779833ddbfd2f99e116e00d717a.scope: Deactivated successfully.
Jan 29 12:02:12 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v498: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:02:12 np0005601226 podman[163861]: 2026-01-29 17:02:12.613666517 +0000 UTC m=+0.052082682 container create 9aa1064d4b7cee46a2950de7a4b580a4027f65b39f6405b0e03b27d3116947df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_lovelace, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:02:12 np0005601226 systemd[1]: Started libpod-conmon-9aa1064d4b7cee46a2950de7a4b580a4027f65b39f6405b0e03b27d3116947df.scope.
Jan 29 12:02:12 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:02:12 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8c803b355e2d4e69399d1d3bcd0f2612a9a9b32d8dd279aa743f87f0a1d359e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:02:12 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8c803b355e2d4e69399d1d3bcd0f2612a9a9b32d8dd279aa743f87f0a1d359e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:02:12 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8c803b355e2d4e69399d1d3bcd0f2612a9a9b32d8dd279aa743f87f0a1d359e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:02:12 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8c803b355e2d4e69399d1d3bcd0f2612a9a9b32d8dd279aa743f87f0a1d359e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:02:12 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8c803b355e2d4e69399d1d3bcd0f2612a9a9b32d8dd279aa743f87f0a1d359e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 12:02:12 np0005601226 podman[163861]: 2026-01-29 17:02:12.584794585 +0000 UTC m=+0.023210800 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:02:12 np0005601226 podman[163861]: 2026-01-29 17:02:12.68645178 +0000 UTC m=+0.124867915 container init 9aa1064d4b7cee46a2950de7a4b580a4027f65b39f6405b0e03b27d3116947df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:02:12 np0005601226 podman[163861]: 2026-01-29 17:02:12.695051043 +0000 UTC m=+0.133467178 container start 9aa1064d4b7cee46a2950de7a4b580a4027f65b39f6405b0e03b27d3116947df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_lovelace, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 29 12:02:12 np0005601226 podman[163861]: 2026-01-29 17:02:12.698780225 +0000 UTC m=+0.137196380 container attach 9aa1064d4b7cee46a2950de7a4b580a4027f65b39f6405b0e03b27d3116947df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_lovelace, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 29 12:02:12 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 12:02:12 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:02:12 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 12:02:12 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:02:13 np0005601226 inspiring_lovelace[163878]: --> passed data devices: 0 physical, 3 LVM
Jan 29 12:02:13 np0005601226 inspiring_lovelace[163878]: --> All data devices are unavailable
Jan 29 12:02:13 np0005601226 systemd[1]: libpod-9aa1064d4b7cee46a2950de7a4b580a4027f65b39f6405b0e03b27d3116947df.scope: Deactivated successfully.
Jan 29 12:02:13 np0005601226 podman[163861]: 2026-01-29 17:02:13.147457667 +0000 UTC m=+0.585873812 container died 9aa1064d4b7cee46a2950de7a4b580a4027f65b39f6405b0e03b27d3116947df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_lovelace, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:02:13 np0005601226 systemd[1]: var-lib-containers-storage-overlay-a8c803b355e2d4e69399d1d3bcd0f2612a9a9b32d8dd279aa743f87f0a1d359e-merged.mount: Deactivated successfully.
Jan 29 12:02:13 np0005601226 podman[163861]: 2026-01-29 17:02:13.255910496 +0000 UTC m=+0.694326671 container remove 9aa1064d4b7cee46a2950de7a4b580a4027f65b39f6405b0e03b27d3116947df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 29 12:02:13 np0005601226 systemd[1]: libpod-conmon-9aa1064d4b7cee46a2950de7a4b580a4027f65b39f6405b0e03b27d3116947df.scope: Deactivated successfully.
Jan 29 12:02:13 np0005601226 podman[163973]: 2026-01-29 17:02:13.732968108 +0000 UTC m=+0.049193775 container create 46a55a1c0b407c92993ef32f60971bf71f7cb64fbbddd73e01dd6c09cc40ac84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:02:13 np0005601226 systemd[1]: Started libpod-conmon-46a55a1c0b407c92993ef32f60971bf71f7cb64fbbddd73e01dd6c09cc40ac84.scope.
Jan 29 12:02:13 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:02:13 np0005601226 podman[163973]: 2026-01-29 17:02:13.707008984 +0000 UTC m=+0.023234651 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:02:13 np0005601226 podman[163973]: 2026-01-29 17:02:13.809705268 +0000 UTC m=+0.125930955 container init 46a55a1c0b407c92993ef32f60971bf71f7cb64fbbddd73e01dd6c09cc40ac84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_cray, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:02:13 np0005601226 podman[163973]: 2026-01-29 17:02:13.81679396 +0000 UTC m=+0.133019627 container start 46a55a1c0b407c92993ef32f60971bf71f7cb64fbbddd73e01dd6c09cc40ac84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 29 12:02:13 np0005601226 podman[163973]: 2026-01-29 17:02:13.821624031 +0000 UTC m=+0.137849718 container attach 46a55a1c0b407c92993ef32f60971bf71f7cb64fbbddd73e01dd6c09cc40ac84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_cray, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 29 12:02:13 np0005601226 priceless_cray[163989]: 167 167
Jan 29 12:02:13 np0005601226 systemd[1]: libpod-46a55a1c0b407c92993ef32f60971bf71f7cb64fbbddd73e01dd6c09cc40ac84.scope: Deactivated successfully.
Jan 29 12:02:13 np0005601226 podman[163973]: 2026-01-29 17:02:13.824636382 +0000 UTC m=+0.140862089 container died 46a55a1c0b407c92993ef32f60971bf71f7cb64fbbddd73e01dd6c09cc40ac84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_cray, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:02:13 np0005601226 systemd[1]: var-lib-containers-storage-overlay-78204a5b838d0cfa9426eb7d0dfcbfea553ebfc6a926614aadfe7962b4ed6004-merged.mount: Deactivated successfully.
Jan 29 12:02:13 np0005601226 podman[163973]: 2026-01-29 17:02:13.882622114 +0000 UTC m=+0.198847791 container remove 46a55a1c0b407c92993ef32f60971bf71f7cb64fbbddd73e01dd6c09cc40ac84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 29 12:02:13 np0005601226 systemd[1]: libpod-conmon-46a55a1c0b407c92993ef32f60971bf71f7cb64fbbddd73e01dd6c09cc40ac84.scope: Deactivated successfully.
Jan 29 12:02:14 np0005601226 podman[164013]: 2026-01-29 17:02:14.052292163 +0000 UTC m=+0.048909176 container create c5e230590aa6b7fdd7a941370c76fb66d69032667373631e84db954bd0409ca1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_liskov, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:02:14 np0005601226 systemd[1]: Started libpod-conmon-c5e230590aa6b7fdd7a941370c76fb66d69032667373631e84db954bd0409ca1.scope.
Jan 29 12:02:14 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:02:14 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e95fc39c8b4730d0515badc88b9ba8293c9aa9f4ce777c28fd3ede2382ac423/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:02:14 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e95fc39c8b4730d0515badc88b9ba8293c9aa9f4ce777c28fd3ede2382ac423/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:02:14 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e95fc39c8b4730d0515badc88b9ba8293c9aa9f4ce777c28fd3ede2382ac423/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:02:14 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e95fc39c8b4730d0515badc88b9ba8293c9aa9f4ce777c28fd3ede2382ac423/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:02:14 np0005601226 podman[164013]: 2026-01-29 17:02:14.033456692 +0000 UTC m=+0.030073735 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:02:14 np0005601226 podman[164013]: 2026-01-29 17:02:14.155394247 +0000 UTC m=+0.152011330 container init c5e230590aa6b7fdd7a941370c76fb66d69032667373631e84db954bd0409ca1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:02:14 np0005601226 podman[164013]: 2026-01-29 17:02:14.160863526 +0000 UTC m=+0.157480579 container start c5e230590aa6b7fdd7a941370c76fb66d69032667373631e84db954bd0409ca1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_liskov, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:02:14 np0005601226 podman[164013]: 2026-01-29 17:02:14.179541953 +0000 UTC m=+0.176159086 container attach c5e230590aa6b7fdd7a941370c76fb66d69032667373631e84db954bd0409ca1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_liskov, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 29 12:02:14 np0005601226 sad_liskov[164029]: {
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:    "0": [
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:        {
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:            "devices": [
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:                "/dev/loop3"
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:            ],
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:            "lv_name": "ceph_lv0",
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:            "lv_size": "21470642176",
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:            "lv_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:            "name": "ceph_lv0",
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:            "tags": {
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:                "ceph.block_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:                "ceph.cluster_name": "ceph",
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:                "ceph.crush_device_class": "",
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:                "ceph.encrypted": "0",
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:                "ceph.objectstore": "bluestore",
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:                "ceph.osd_fsid": "0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1",
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:                "ceph.osd_id": "0",
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:                "ceph.type": "block",
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:                "ceph.vdo": "0",
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:                "ceph.with_tpm": "0"
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:            },
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:            "type": "block",
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:            "vg_name": "ceph_vg0"
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:        }
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:    ],
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:    "1": [
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:        {
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:            "devices": [
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:                "/dev/loop4"
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:            ],
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:            "lv_name": "ceph_lv1",
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:            "lv_size": "21470642176",
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b59b9ee3-7bef-4274-a8bf-0f9cce011ae7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:            "lv_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:            "name": "ceph_lv1",
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:            "tags": {
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:                "ceph.block_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:                "ceph.cluster_name": "ceph",
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:                "ceph.crush_device_class": "",
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:                "ceph.encrypted": "0",
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:                "ceph.objectstore": "bluestore",
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:                "ceph.osd_fsid": "b59b9ee3-7bef-4274-a8bf-0f9cce011ae7",
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:                "ceph.osd_id": "1",
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:                "ceph.type": "block",
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:                "ceph.vdo": "0",
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:                "ceph.with_tpm": "0"
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:            },
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:            "type": "block",
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:            "vg_name": "ceph_vg1"
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:        }
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:    ],
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:    "2": [
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:        {
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:            "devices": [
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:                "/dev/loop5"
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:            ],
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:            "lv_name": "ceph_lv2",
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:            "lv_size": "21470642176",
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=791d3808-828d-4f85-a3de-28df49f6a6ef,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:            "lv_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:            "name": "ceph_lv2",
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:            "tags": {
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:                "ceph.block_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:                "ceph.cluster_name": "ceph",
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:                "ceph.crush_device_class": "",
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:                "ceph.encrypted": "0",
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:                "ceph.objectstore": "bluestore",
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:                "ceph.osd_fsid": "791d3808-828d-4f85-a3de-28df49f6a6ef",
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:                "ceph.osd_id": "2",
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:                "ceph.type": "block",
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:                "ceph.vdo": "0",
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:                "ceph.with_tpm": "0"
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:            },
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:            "type": "block",
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:            "vg_name": "ceph_vg2"
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:        }
Jan 29 12:02:14 np0005601226 sad_liskov[164029]:    ]
Jan 29 12:02:14 np0005601226 sad_liskov[164029]: }
Jan 29 12:02:14 np0005601226 systemd[1]: libpod-c5e230590aa6b7fdd7a941370c76fb66d69032667373631e84db954bd0409ca1.scope: Deactivated successfully.
Jan 29 12:02:14 np0005601226 podman[164013]: 2026-01-29 17:02:14.421431539 +0000 UTC m=+0.418048562 container died c5e230590aa6b7fdd7a941370c76fb66d69032667373631e84db954bd0409ca1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:02:14 np0005601226 systemd[1]: var-lib-containers-storage-overlay-0e95fc39c8b4730d0515badc88b9ba8293c9aa9f4ce777c28fd3ede2382ac423-merged.mount: Deactivated successfully.
Jan 29 12:02:14 np0005601226 podman[164013]: 2026-01-29 17:02:14.464572088 +0000 UTC m=+0.461189111 container remove c5e230590aa6b7fdd7a941370c76fb66d69032667373631e84db954bd0409ca1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_liskov, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 29 12:02:14 np0005601226 systemd[1]: libpod-conmon-c5e230590aa6b7fdd7a941370c76fb66d69032667373631e84db954bd0409ca1.scope: Deactivated successfully.
Jan 29 12:02:14 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v499: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:02:14 np0005601226 podman[164110]: 2026-01-29 17:02:14.901103151 +0000 UTC m=+0.072873426 container create 112fd83b49dd120aea747c167b63c7745deade5a2bdb76c6a8ae52f9a3c43ebd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:02:14 np0005601226 podman[164110]: 2026-01-29 17:02:14.847777086 +0000 UTC m=+0.019547411 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:02:14 np0005601226 systemd[1]: Started libpod-conmon-112fd83b49dd120aea747c167b63c7745deade5a2bdb76c6a8ae52f9a3c43ebd.scope.
Jan 29 12:02:15 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:02:15 np0005601226 podman[164110]: 2026-01-29 17:02:15.024623269 +0000 UTC m=+0.196393544 container init 112fd83b49dd120aea747c167b63c7745deade5a2bdb76c6a8ae52f9a3c43ebd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_wiles, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:02:15 np0005601226 podman[164110]: 2026-01-29 17:02:15.032634136 +0000 UTC m=+0.204404391 container start 112fd83b49dd120aea747c167b63c7745deade5a2bdb76c6a8ae52f9a3c43ebd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_wiles, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:02:15 np0005601226 admiring_wiles[164127]: 167 167
Jan 29 12:02:15 np0005601226 systemd[1]: libpod-112fd83b49dd120aea747c167b63c7745deade5a2bdb76c6a8ae52f9a3c43ebd.scope: Deactivated successfully.
Jan 29 12:02:15 np0005601226 conmon[164127]: conmon 112fd83b49dd120aea74 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-112fd83b49dd120aea747c167b63c7745deade5a2bdb76c6a8ae52f9a3c43ebd.scope/container/memory.events
Jan 29 12:02:15 np0005601226 podman[164110]: 2026-01-29 17:02:15.047911891 +0000 UTC m=+0.219682176 container attach 112fd83b49dd120aea747c167b63c7745deade5a2bdb76c6a8ae52f9a3c43ebd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_wiles, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:02:15 np0005601226 podman[164110]: 2026-01-29 17:02:15.048424074 +0000 UTC m=+0.220194329 container died 112fd83b49dd120aea747c167b63c7745deade5a2bdb76c6a8ae52f9a3c43ebd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_wiles, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:02:15 np0005601226 systemd[1]: var-lib-containers-storage-overlay-3a99edbaaa3c692cbce5f525afbf0c78bc361f8827578f5efffd715ac54c0836-merged.mount: Deactivated successfully.
Jan 29 12:02:15 np0005601226 podman[164110]: 2026-01-29 17:02:15.15191811 +0000 UTC m=+0.323688365 container remove 112fd83b49dd120aea747c167b63c7745deade5a2bdb76c6a8ae52f9a3c43ebd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_wiles, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:02:15 np0005601226 systemd[1]: libpod-conmon-112fd83b49dd120aea747c167b63c7745deade5a2bdb76c6a8ae52f9a3c43ebd.scope: Deactivated successfully.
Jan 29 12:02:15 np0005601226 podman[164153]: 2026-01-29 17:02:15.278137531 +0000 UTC m=+0.045854223 container create 5126cca4ef160265f7be9c33cf9ef9fd518c62ef036c5bf6dda3a2bdfbbcbf6d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 29 12:02:15 np0005601226 systemd[1]: Started libpod-conmon-5126cca4ef160265f7be9c33cf9ef9fd518c62ef036c5bf6dda3a2bdfbbcbf6d.scope.
Jan 29 12:02:15 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:02:15 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea068df531b30a85b6d4ec1c70d9073487f21e9b41464b86031fd74cecbbe4c3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:02:15 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea068df531b30a85b6d4ec1c70d9073487f21e9b41464b86031fd74cecbbe4c3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:02:15 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea068df531b30a85b6d4ec1c70d9073487f21e9b41464b86031fd74cecbbe4c3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:02:15 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea068df531b30a85b6d4ec1c70d9073487f21e9b41464b86031fd74cecbbe4c3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:02:15 np0005601226 podman[164153]: 2026-01-29 17:02:15.25336318 +0000 UTC m=+0.021079872 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:02:15 np0005601226 podman[164153]: 2026-01-29 17:02:15.480095136 +0000 UTC m=+0.247811858 container init 5126cca4ef160265f7be9c33cf9ef9fd518c62ef036c5bf6dda3a2bdfbbcbf6d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_bhabha, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:02:15 np0005601226 podman[164153]: 2026-01-29 17:02:15.485838501 +0000 UTC m=+0.253555173 container start 5126cca4ef160265f7be9c33cf9ef9fd518c62ef036c5bf6dda3a2bdfbbcbf6d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_bhabha, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:02:15 np0005601226 podman[164153]: 2026-01-29 17:02:15.529423923 +0000 UTC m=+0.297140615 container attach 5126cca4ef160265f7be9c33cf9ef9fd518c62ef036c5bf6dda3a2bdfbbcbf6d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_bhabha, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:02:16 np0005601226 lvm[164246]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 12:02:16 np0005601226 lvm[164246]: VG ceph_vg0 finished
Jan 29 12:02:16 np0005601226 lvm[164249]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 12:02:16 np0005601226 lvm[164249]: VG ceph_vg1 finished
Jan 29 12:02:16 np0005601226 lvm[164251]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 12:02:16 np0005601226 lvm[164251]: VG ceph_vg2 finished
Jan 29 12:02:16 np0005601226 wizardly_bhabha[164170]: {}
Jan 29 12:02:16 np0005601226 systemd[1]: libpod-5126cca4ef160265f7be9c33cf9ef9fd518c62ef036c5bf6dda3a2bdfbbcbf6d.scope: Deactivated successfully.
Jan 29 12:02:16 np0005601226 podman[164153]: 2026-01-29 17:02:16.245013929 +0000 UTC m=+1.012730591 container died 5126cca4ef160265f7be9c33cf9ef9fd518c62ef036c5bf6dda3a2bdfbbcbf6d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 29 12:02:16 np0005601226 systemd[1]: var-lib-containers-storage-overlay-ea068df531b30a85b6d4ec1c70d9073487f21e9b41464b86031fd74cecbbe4c3-merged.mount: Deactivated successfully.
Jan 29 12:02:16 np0005601226 podman[164153]: 2026-01-29 17:02:16.296970538 +0000 UTC m=+1.064687210 container remove 5126cca4ef160265f7be9c33cf9ef9fd518c62ef036c5bf6dda3a2bdfbbcbf6d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:02:16 np0005601226 systemd[1]: libpod-conmon-5126cca4ef160265f7be9c33cf9ef9fd518c62ef036c5bf6dda3a2bdfbbcbf6d.scope: Deactivated successfully.
Jan 29 12:02:16 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 12:02:16 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:02:16 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 12:02:16 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:02:16 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v500: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:02:17 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:02:18 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:02:18 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:02:18 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v501: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:02:20 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v502: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:02:22 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v503: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:02:22 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:02:24 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v504: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:02:26 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v505: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:02:27 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:02:28 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v506: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:02:30 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v507: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:02:32 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v508: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:02:32 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:02:34 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v509: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:02:35 np0005601226 podman[181015]: 2026-01-29 17:02:35.9087707 +0000 UTC m=+0.073947237 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 29 12:02:36 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v510: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:02:37 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:02:38 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v511: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:02:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:02:40.264 155625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 12:02:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:02:40.265 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 12:02:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:02:40.265 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 12:02:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2026-01-29_17:02:40
Jan 29 12:02:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 12:02:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] do_upmap
Jan 29 12:02:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] pools ['default.rgw.log', 'backups', 'cephfs.cephfs.meta', 'vms', 'images', 'volumes', '.mgr', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root']
Jan 29 12:02:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 12:02:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:02:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:02:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:02:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:02:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:02:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:02:40 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v512: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:02:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 12:02:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:02:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 12:02:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:02:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:02:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:02:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:02:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:02:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:02:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:02:40 np0005601226 podman[181191]: 2026-01-29 17:02:40.897074712 +0000 UTC m=+0.072244750 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 29 12:02:42 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v513: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:02:43 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:02:44 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v514: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:02:46 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v515: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:02:48 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:02:48 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v516: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:02:49 np0005601226 kernel: SELinux:  Converting 2778 SID table entries...
Jan 29 12:02:49 np0005601226 kernel: SELinux:  policy capability network_peer_controls=1
Jan 29 12:02:49 np0005601226 kernel: SELinux:  policy capability open_perms=1
Jan 29 12:02:49 np0005601226 kernel: SELinux:  policy capability extended_socket_class=1
Jan 29 12:02:49 np0005601226 kernel: SELinux:  policy capability always_check_network=0
Jan 29 12:02:49 np0005601226 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 29 12:02:49 np0005601226 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 29 12:02:49 np0005601226 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 29 12:02:50 np0005601226 dbus-broker-launch[813]: Noticed file-system modification, trigger reload.
Jan 29 12:02:50 np0005601226 dbus-broker-launch[814]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Jan 29 12:02:50 np0005601226 dbus-broker-launch[813]: Noticed file-system modification, trigger reload.
Jan 29 12:02:50 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v517: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:02:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 12:02:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:02:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 12:02:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:02:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:02:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:02:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:02:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:02:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:02:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:02:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:02:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:02:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.8121643970586627e-06 of space, bias 4.0, pg target 0.002174597276470395 quantized to 16 (current 16)
Jan 29 12:02:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:02:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:02:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:02:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 29 12:02:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:02:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 29 12:02:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:02:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:02:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:02:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 29 12:02:52 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v518: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:02:53 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:02:54 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v519: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:02:56 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v520: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:02:58 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:02:58 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v521: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:03:00 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v522: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:03:01 np0005601226 systemd[1]: Stopping OpenSSH server daemon...
Jan 29 12:03:01 np0005601226 systemd[1]: sshd.service: Deactivated successfully.
Jan 29 12:03:01 np0005601226 systemd[1]: Stopped OpenSSH server daemon.
Jan 29 12:03:01 np0005601226 systemd[1]: sshd.service: Consumed 2.470s CPU time, read 564.0K from disk, written 40.0K to disk.
Jan 29 12:03:01 np0005601226 systemd[1]: Stopped target sshd-keygen.target.
Jan 29 12:03:01 np0005601226 systemd[1]: Stopping sshd-keygen.target...
Jan 29 12:03:01 np0005601226 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 29 12:03:01 np0005601226 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 29 12:03:01 np0005601226 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 29 12:03:01 np0005601226 systemd[1]: Reached target sshd-keygen.target.
Jan 29 12:03:01 np0005601226 systemd[1]: Starting OpenSSH server daemon...
Jan 29 12:03:01 np0005601226 systemd[1]: Started OpenSSH server daemon.
Jan 29 12:03:02 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v523: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:03:02 np0005601226 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 29 12:03:02 np0005601226 systemd[1]: Starting man-db-cache-update.service...
Jan 29 12:03:02 np0005601226 systemd[1]: Reloading.
Jan 29 12:03:02 np0005601226 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 12:03:02 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 12:03:03 np0005601226 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 29 12:03:03 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:03:04 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v524: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:03:06 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v525: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:03:06 np0005601226 podman[188949]: 2026-01-29 17:03:06.927023286 +0000 UTC m=+0.093635206 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 29 12:03:07 np0005601226 python3.9[189760]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 29 12:03:07 np0005601226 systemd[1]: Reloading.
Jan 29 12:03:07 np0005601226 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 12:03:07 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 12:03:08 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:03:08 np0005601226 python3.9[190966]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 29 12:03:08 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v526: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:03:08 np0005601226 systemd[1]: Reloading.
Jan 29 12:03:08 np0005601226 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 12:03:08 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 12:03:09 np0005601226 python3.9[191157]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 29 12:03:09 np0005601226 systemd[1]: Reloading.
Jan 29 12:03:10 np0005601226 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 12:03:10 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 12:03:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:03:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:03:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:03:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:03:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:03:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:03:10 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v527: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:03:11 np0005601226 podman[191313]: 2026-01-29 17:03:11.104955698 +0000 UTC m=+0.112169685 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 29 12:03:11 np0005601226 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 29 12:03:11 np0005601226 systemd[1]: Finished man-db-cache-update.service.
Jan 29 12:03:11 np0005601226 systemd[1]: man-db-cache-update.service: Consumed 6.930s CPU time.
Jan 29 12:03:11 np0005601226 systemd[1]: run-r5430faa549d542a5a17c4f5f2102c3bf.service: Deactivated successfully.
Jan 29 12:03:11 np0005601226 python3.9[191483]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 29 12:03:11 np0005601226 systemd[1]: Reloading.
Jan 29 12:03:11 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 12:03:11 np0005601226 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 12:03:12 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v528: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:03:12 np0005601226 python3.9[191674]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 29 12:03:12 np0005601226 systemd[1]: Reloading.
Jan 29 12:03:12 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 12:03:12 np0005601226 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 12:03:13 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:03:13 np0005601226 python3.9[191864]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 29 12:03:13 np0005601226 systemd[1]: Reloading.
Jan 29 12:03:13 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 12:03:13 np0005601226 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 12:03:14 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v529: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:03:14 np0005601226 python3.9[192054]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 29 12:03:14 np0005601226 systemd[1]: Reloading.
Jan 29 12:03:14 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 12:03:14 np0005601226 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 12:03:15 np0005601226 python3.9[192244]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 29 12:03:16 np0005601226 python3.9[192399]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 29 12:03:16 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v530: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:03:16 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 12:03:16 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 12:03:16 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 12:03:16 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 12:03:16 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 12:03:16 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:03:16 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 12:03:16 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 12:03:16 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 12:03:16 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 12:03:16 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 12:03:16 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 12:03:17 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 12:03:17 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:03:17 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 12:03:17 np0005601226 systemd[1]: Reloading.
Jan 29 12:03:17 np0005601226 podman[192546]: 2026-01-29 17:03:17.288217954 +0000 UTC m=+0.017228893 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:03:17 np0005601226 podman[192546]: 2026-01-29 17:03:17.411355838 +0000 UTC m=+0.140366767 container create 1fc2e00648d0decc65bd003c7381f39be6f9fad4883b0bd16e68589660c19637 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_banach, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 29 12:03:17 np0005601226 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 12:03:17 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 12:03:17 np0005601226 systemd[1]: Started libpod-conmon-1fc2e00648d0decc65bd003c7381f39be6f9fad4883b0bd16e68589660c19637.scope.
Jan 29 12:03:17 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:03:17 np0005601226 podman[192546]: 2026-01-29 17:03:17.780278944 +0000 UTC m=+0.509289883 container init 1fc2e00648d0decc65bd003c7381f39be6f9fad4883b0bd16e68589660c19637 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_banach, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 29 12:03:17 np0005601226 podman[192546]: 2026-01-29 17:03:17.788799897 +0000 UTC m=+0.517810816 container start 1fc2e00648d0decc65bd003c7381f39be6f9fad4883b0bd16e68589660c19637 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_banach, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:03:17 np0005601226 hardcore_banach[192599]: 167 167
Jan 29 12:03:17 np0005601226 systemd[1]: libpod-1fc2e00648d0decc65bd003c7381f39be6f9fad4883b0bd16e68589660c19637.scope: Deactivated successfully.
Jan 29 12:03:17 np0005601226 podman[192546]: 2026-01-29 17:03:17.794097582 +0000 UTC m=+0.523108551 container attach 1fc2e00648d0decc65bd003c7381f39be6f9fad4883b0bd16e68589660c19637 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_banach, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 29 12:03:17 np0005601226 podman[192546]: 2026-01-29 17:03:17.794917215 +0000 UTC m=+0.523928174 container died 1fc2e00648d0decc65bd003c7381f39be6f9fad4883b0bd16e68589660c19637 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_banach, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 29 12:03:17 np0005601226 systemd[1]: var-lib-containers-storage-overlay-6912758e834e7bd930d7bbc9795a3e97d0e85f3ee391480e55bc6f59fc209f6c-merged.mount: Deactivated successfully.
Jan 29 12:03:17 np0005601226 podman[192546]: 2026-01-29 17:03:17.849567312 +0000 UTC m=+0.578578241 container remove 1fc2e00648d0decc65bd003c7381f39be6f9fad4883b0bd16e68589660c19637 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3)
Jan 29 12:03:17 np0005601226 systemd[1]: libpod-conmon-1fc2e00648d0decc65bd003c7381f39be6f9fad4883b0bd16e68589660c19637.scope: Deactivated successfully.
Jan 29 12:03:17 np0005601226 podman[192690]: 2026-01-29 17:03:17.979130261 +0000 UTC m=+0.036794949 container create 6273861c462ec2a687007e10f2d267eea21cb475426d5174af2d60c605063367 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_faraday, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 29 12:03:18 np0005601226 systemd[1]: Started libpod-conmon-6273861c462ec2a687007e10f2d267eea21cb475426d5174af2d60c605063367.scope.
Jan 29 12:03:18 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:03:18 np0005601226 podman[192690]: 2026-01-29 17:03:17.961724775 +0000 UTC m=+0.019389493 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:03:18 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d24f785b1a4c98306ea25dc16029c59fcb09abf3f50b84788d64d846a4fa3eb2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:03:18 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d24f785b1a4c98306ea25dc16029c59fcb09abf3f50b84788d64d846a4fa3eb2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:03:18 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d24f785b1a4c98306ea25dc16029c59fcb09abf3f50b84788d64d846a4fa3eb2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:03:18 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d24f785b1a4c98306ea25dc16029c59fcb09abf3f50b84788d64d846a4fa3eb2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:03:18 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d24f785b1a4c98306ea25dc16029c59fcb09abf3f50b84788d64d846a4fa3eb2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 12:03:18 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:03:18 np0005601226 podman[192690]: 2026-01-29 17:03:18.090808141 +0000 UTC m=+0.148472839 container init 6273861c462ec2a687007e10f2d267eea21cb475426d5174af2d60c605063367 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_faraday, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:03:18 np0005601226 podman[192690]: 2026-01-29 17:03:18.098205623 +0000 UTC m=+0.155870301 container start 6273861c462ec2a687007e10f2d267eea21cb475426d5174af2d60c605063367 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_faraday, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:03:18 np0005601226 podman[192690]: 2026-01-29 17:03:18.102028718 +0000 UTC m=+0.159693406 container attach 6273861c462ec2a687007e10f2d267eea21cb475426d5174af2d60c605063367 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_faraday, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 29 12:03:18 np0005601226 python3.9[192795]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 29 12:03:18 np0005601226 romantic_faraday[192738]: --> passed data devices: 0 physical, 3 LVM
Jan 29 12:03:18 np0005601226 romantic_faraday[192738]: --> All data devices are unavailable
Jan 29 12:03:18 np0005601226 systemd[1]: libpod-6273861c462ec2a687007e10f2d267eea21cb475426d5174af2d60c605063367.scope: Deactivated successfully.
Jan 29 12:03:18 np0005601226 podman[192690]: 2026-01-29 17:03:18.51279742 +0000 UTC m=+0.570462098 container died 6273861c462ec2a687007e10f2d267eea21cb475426d5174af2d60c605063367 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_faraday, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:03:18 np0005601226 systemd[1]: Reloading.
Jan 29 12:03:18 np0005601226 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 12:03:18 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 12:03:18 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v531: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:03:18 np0005601226 systemd[1]: var-lib-containers-storage-overlay-d24f785b1a4c98306ea25dc16029c59fcb09abf3f50b84788d64d846a4fa3eb2-merged.mount: Deactivated successfully.
Jan 29 12:03:18 np0005601226 systemd[1]: Listening on libvirt proxy daemon socket.
Jan 29 12:03:18 np0005601226 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Jan 29 12:03:18 np0005601226 podman[192690]: 2026-01-29 17:03:18.788173004 +0000 UTC m=+0.845837672 container remove 6273861c462ec2a687007e10f2d267eea21cb475426d5174af2d60c605063367 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 29 12:03:18 np0005601226 systemd[1]: libpod-conmon-6273861c462ec2a687007e10f2d267eea21cb475426d5174af2d60c605063367.scope: Deactivated successfully.
Jan 29 12:03:19 np0005601226 podman[193050]: 2026-01-29 17:03:19.181231652 +0000 UTC m=+0.033058027 container create 9ac476ed9d0b0c1c2c8a037c351852174f13b8adf308f03afffc05e6f7722107 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_cray, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:03:19 np0005601226 systemd[1]: Started libpod-conmon-9ac476ed9d0b0c1c2c8a037c351852174f13b8adf308f03afffc05e6f7722107.scope.
Jan 29 12:03:19 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:03:19 np0005601226 podman[193050]: 2026-01-29 17:03:19.259253489 +0000 UTC m=+0.111079944 container init 9ac476ed9d0b0c1c2c8a037c351852174f13b8adf308f03afffc05e6f7722107 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:03:19 np0005601226 podman[193050]: 2026-01-29 17:03:19.167581168 +0000 UTC m=+0.019407563 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:03:19 np0005601226 podman[193050]: 2026-01-29 17:03:19.268142433 +0000 UTC m=+0.119968808 container start 9ac476ed9d0b0c1c2c8a037c351852174f13b8adf308f03afffc05e6f7722107 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_cray, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 29 12:03:19 np0005601226 podman[193050]: 2026-01-29 17:03:19.271417343 +0000 UTC m=+0.123243748 container attach 9ac476ed9d0b0c1c2c8a037c351852174f13b8adf308f03afffc05e6f7722107 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 29 12:03:19 np0005601226 great_cray[193094]: 167 167
Jan 29 12:03:19 np0005601226 systemd[1]: libpod-9ac476ed9d0b0c1c2c8a037c351852174f13b8adf308f03afffc05e6f7722107.scope: Deactivated successfully.
Jan 29 12:03:19 np0005601226 podman[193050]: 2026-01-29 17:03:19.273475279 +0000 UTC m=+0.125301654 container died 9ac476ed9d0b0c1c2c8a037c351852174f13b8adf308f03afffc05e6f7722107 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_cray, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 29 12:03:19 np0005601226 systemd[1]: var-lib-containers-storage-overlay-228befbd6275dc20e58bec3e70d2dd404d2671b1cbc1000318d54f505c63f525-merged.mount: Deactivated successfully.
Jan 29 12:03:19 np0005601226 podman[193050]: 2026-01-29 17:03:19.307592144 +0000 UTC m=+0.159418519 container remove 9ac476ed9d0b0c1c2c8a037c351852174f13b8adf308f03afffc05e6f7722107 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_cray, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:03:19 np0005601226 systemd[1]: libpod-conmon-9ac476ed9d0b0c1c2c8a037c351852174f13b8adf308f03afffc05e6f7722107.scope: Deactivated successfully.
Jan 29 12:03:19 np0005601226 podman[193119]: 2026-01-29 17:03:19.430812729 +0000 UTC m=+0.039585985 container create e673514c3ac2b7bcb60a7241e5be70e90fe302892cfe7bfd2d8a7c9ba8ff0f14 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_gates, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 29 12:03:19 np0005601226 systemd[1]: Started libpod-conmon-e673514c3ac2b7bcb60a7241e5be70e90fe302892cfe7bfd2d8a7c9ba8ff0f14.scope.
Jan 29 12:03:19 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:03:19 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e4cc38f5d0f316bc0921e6d4a15df51172e91c950a04522cb03d31cc588506c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:03:19 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e4cc38f5d0f316bc0921e6d4a15df51172e91c950a04522cb03d31cc588506c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:03:19 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e4cc38f5d0f316bc0921e6d4a15df51172e91c950a04522cb03d31cc588506c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:03:19 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e4cc38f5d0f316bc0921e6d4a15df51172e91c950a04522cb03d31cc588506c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:03:19 np0005601226 python3.9[193095]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 29 12:03:19 np0005601226 podman[193119]: 2026-01-29 17:03:19.414768189 +0000 UTC m=+0.023541485 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:03:19 np0005601226 podman[193119]: 2026-01-29 17:03:19.519727455 +0000 UTC m=+0.128500741 container init e673514c3ac2b7bcb60a7241e5be70e90fe302892cfe7bfd2d8a7c9ba8ff0f14 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_gates, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:03:19 np0005601226 podman[193119]: 2026-01-29 17:03:19.524979519 +0000 UTC m=+0.133752785 container start e673514c3ac2b7bcb60a7241e5be70e90fe302892cfe7bfd2d8a7c9ba8ff0f14 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 29 12:03:19 np0005601226 podman[193119]: 2026-01-29 17:03:19.52828176 +0000 UTC m=+0.137055026 container attach e673514c3ac2b7bcb60a7241e5be70e90fe302892cfe7bfd2d8a7c9ba8ff0f14 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_gates, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:03:19 np0005601226 funny_gates[193136]: {
Jan 29 12:03:19 np0005601226 funny_gates[193136]:    "0": [
Jan 29 12:03:19 np0005601226 funny_gates[193136]:        {
Jan 29 12:03:19 np0005601226 funny_gates[193136]:            "devices": [
Jan 29 12:03:19 np0005601226 funny_gates[193136]:                "/dev/loop3"
Jan 29 12:03:19 np0005601226 funny_gates[193136]:            ],
Jan 29 12:03:19 np0005601226 funny_gates[193136]:            "lv_name": "ceph_lv0",
Jan 29 12:03:19 np0005601226 funny_gates[193136]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:03:19 np0005601226 funny_gates[193136]:            "lv_size": "21470642176",
Jan 29 12:03:19 np0005601226 funny_gates[193136]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:03:19 np0005601226 funny_gates[193136]:            "lv_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 12:03:19 np0005601226 funny_gates[193136]:            "name": "ceph_lv0",
Jan 29 12:03:19 np0005601226 funny_gates[193136]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:03:19 np0005601226 funny_gates[193136]:            "tags": {
Jan 29 12:03:19 np0005601226 funny_gates[193136]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:03:19 np0005601226 funny_gates[193136]:                "ceph.block_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 12:03:19 np0005601226 funny_gates[193136]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:03:19 np0005601226 funny_gates[193136]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:03:19 np0005601226 funny_gates[193136]:                "ceph.cluster_name": "ceph",
Jan 29 12:03:19 np0005601226 funny_gates[193136]:                "ceph.crush_device_class": "",
Jan 29 12:03:19 np0005601226 funny_gates[193136]:                "ceph.encrypted": "0",
Jan 29 12:03:19 np0005601226 funny_gates[193136]:                "ceph.objectstore": "bluestore",
Jan 29 12:03:19 np0005601226 funny_gates[193136]:                "ceph.osd_fsid": "0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1",
Jan 29 12:03:19 np0005601226 funny_gates[193136]:                "ceph.osd_id": "0",
Jan 29 12:03:19 np0005601226 funny_gates[193136]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:03:19 np0005601226 funny_gates[193136]:                "ceph.type": "block",
Jan 29 12:03:19 np0005601226 funny_gates[193136]:                "ceph.vdo": "0",
Jan 29 12:03:19 np0005601226 funny_gates[193136]:                "ceph.with_tpm": "0"
Jan 29 12:03:19 np0005601226 funny_gates[193136]:            },
Jan 29 12:03:19 np0005601226 funny_gates[193136]:            "type": "block",
Jan 29 12:03:19 np0005601226 funny_gates[193136]:            "vg_name": "ceph_vg0"
Jan 29 12:03:19 np0005601226 funny_gates[193136]:        }
Jan 29 12:03:19 np0005601226 funny_gates[193136]:    ],
Jan 29 12:03:19 np0005601226 funny_gates[193136]:    "1": [
Jan 29 12:03:19 np0005601226 funny_gates[193136]:        {
Jan 29 12:03:19 np0005601226 funny_gates[193136]:            "devices": [
Jan 29 12:03:19 np0005601226 funny_gates[193136]:                "/dev/loop4"
Jan 29 12:03:19 np0005601226 funny_gates[193136]:            ],
Jan 29 12:03:19 np0005601226 funny_gates[193136]:            "lv_name": "ceph_lv1",
Jan 29 12:03:19 np0005601226 funny_gates[193136]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:03:19 np0005601226 funny_gates[193136]:            "lv_size": "21470642176",
Jan 29 12:03:19 np0005601226 funny_gates[193136]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b59b9ee3-7bef-4274-a8bf-0f9cce011ae7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:03:19 np0005601226 funny_gates[193136]:            "lv_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 12:03:19 np0005601226 funny_gates[193136]:            "name": "ceph_lv1",
Jan 29 12:03:19 np0005601226 funny_gates[193136]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:03:19 np0005601226 funny_gates[193136]:            "tags": {
Jan 29 12:03:19 np0005601226 funny_gates[193136]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:03:19 np0005601226 funny_gates[193136]:                "ceph.block_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 12:03:19 np0005601226 funny_gates[193136]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:03:19 np0005601226 funny_gates[193136]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:03:19 np0005601226 funny_gates[193136]:                "ceph.cluster_name": "ceph",
Jan 29 12:03:19 np0005601226 funny_gates[193136]:                "ceph.crush_device_class": "",
Jan 29 12:03:19 np0005601226 funny_gates[193136]:                "ceph.encrypted": "0",
Jan 29 12:03:19 np0005601226 funny_gates[193136]:                "ceph.objectstore": "bluestore",
Jan 29 12:03:19 np0005601226 funny_gates[193136]:                "ceph.osd_fsid": "b59b9ee3-7bef-4274-a8bf-0f9cce011ae7",
Jan 29 12:03:19 np0005601226 funny_gates[193136]:                "ceph.osd_id": "1",
Jan 29 12:03:19 np0005601226 funny_gates[193136]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:03:19 np0005601226 funny_gates[193136]:                "ceph.type": "block",
Jan 29 12:03:19 np0005601226 funny_gates[193136]:                "ceph.vdo": "0",
Jan 29 12:03:19 np0005601226 funny_gates[193136]:                "ceph.with_tpm": "0"
Jan 29 12:03:19 np0005601226 funny_gates[193136]:            },
Jan 29 12:03:19 np0005601226 funny_gates[193136]:            "type": "block",
Jan 29 12:03:19 np0005601226 funny_gates[193136]:            "vg_name": "ceph_vg1"
Jan 29 12:03:19 np0005601226 funny_gates[193136]:        }
Jan 29 12:03:19 np0005601226 funny_gates[193136]:    ],
Jan 29 12:03:19 np0005601226 funny_gates[193136]:    "2": [
Jan 29 12:03:19 np0005601226 funny_gates[193136]:        {
Jan 29 12:03:19 np0005601226 funny_gates[193136]:            "devices": [
Jan 29 12:03:19 np0005601226 funny_gates[193136]:                "/dev/loop5"
Jan 29 12:03:19 np0005601226 funny_gates[193136]:            ],
Jan 29 12:03:19 np0005601226 funny_gates[193136]:            "lv_name": "ceph_lv2",
Jan 29 12:03:19 np0005601226 funny_gates[193136]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:03:19 np0005601226 funny_gates[193136]:            "lv_size": "21470642176",
Jan 29 12:03:19 np0005601226 funny_gates[193136]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=791d3808-828d-4f85-a3de-28df49f6a6ef,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:03:19 np0005601226 funny_gates[193136]:            "lv_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 12:03:19 np0005601226 funny_gates[193136]:            "name": "ceph_lv2",
Jan 29 12:03:19 np0005601226 funny_gates[193136]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:03:19 np0005601226 funny_gates[193136]:            "tags": {
Jan 29 12:03:19 np0005601226 funny_gates[193136]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:03:19 np0005601226 funny_gates[193136]:                "ceph.block_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 12:03:19 np0005601226 funny_gates[193136]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:03:19 np0005601226 funny_gates[193136]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:03:19 np0005601226 funny_gates[193136]:                "ceph.cluster_name": "ceph",
Jan 29 12:03:19 np0005601226 funny_gates[193136]:                "ceph.crush_device_class": "",
Jan 29 12:03:19 np0005601226 funny_gates[193136]:                "ceph.encrypted": "0",
Jan 29 12:03:19 np0005601226 funny_gates[193136]:                "ceph.objectstore": "bluestore",
Jan 29 12:03:19 np0005601226 funny_gates[193136]:                "ceph.osd_fsid": "791d3808-828d-4f85-a3de-28df49f6a6ef",
Jan 29 12:03:19 np0005601226 funny_gates[193136]:                "ceph.osd_id": "2",
Jan 29 12:03:19 np0005601226 funny_gates[193136]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:03:19 np0005601226 funny_gates[193136]:                "ceph.type": "block",
Jan 29 12:03:19 np0005601226 funny_gates[193136]:                "ceph.vdo": "0",
Jan 29 12:03:19 np0005601226 funny_gates[193136]:                "ceph.with_tpm": "0"
Jan 29 12:03:19 np0005601226 funny_gates[193136]:            },
Jan 29 12:03:19 np0005601226 funny_gates[193136]:            "type": "block",
Jan 29 12:03:19 np0005601226 funny_gates[193136]:            "vg_name": "ceph_vg2"
Jan 29 12:03:19 np0005601226 funny_gates[193136]:        }
Jan 29 12:03:19 np0005601226 funny_gates[193136]:    ]
Jan 29 12:03:19 np0005601226 funny_gates[193136]: }
Jan 29 12:03:19 np0005601226 systemd[1]: libpod-e673514c3ac2b7bcb60a7241e5be70e90fe302892cfe7bfd2d8a7c9ba8ff0f14.scope: Deactivated successfully.
Jan 29 12:03:19 np0005601226 podman[193119]: 2026-01-29 17:03:19.846065755 +0000 UTC m=+0.454839021 container died e673514c3ac2b7bcb60a7241e5be70e90fe302892cfe7bfd2d8a7c9ba8ff0f14 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_gates, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:03:19 np0005601226 systemd[1]: var-lib-containers-storage-overlay-5e4cc38f5d0f316bc0921e6d4a15df51172e91c950a04522cb03d31cc588506c-merged.mount: Deactivated successfully.
Jan 29 12:03:19 np0005601226 podman[193119]: 2026-01-29 17:03:19.880827507 +0000 UTC m=+0.489600783 container remove e673514c3ac2b7bcb60a7241e5be70e90fe302892cfe7bfd2d8a7c9ba8ff0f14 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_gates, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 29 12:03:19 np0005601226 systemd[1]: libpod-conmon-e673514c3ac2b7bcb60a7241e5be70e90fe302892cfe7bfd2d8a7c9ba8ff0f14.scope: Deactivated successfully.
Jan 29 12:03:20 np0005601226 podman[193373]: 2026-01-29 17:03:20.230837935 +0000 UTC m=+0.034570297 container create 83f0ab3f815cee93f06d4e1fde2795dd640eeabbff7ad9263d1812c8e3742d0d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_knuth, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:03:20 np0005601226 systemd[1]: Started libpod-conmon-83f0ab3f815cee93f06d4e1fde2795dd640eeabbff7ad9263d1812c8e3742d0d.scope.
Jan 29 12:03:20 np0005601226 python3.9[193335]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 29 12:03:20 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:03:20 np0005601226 podman[193373]: 2026-01-29 17:03:20.294633933 +0000 UTC m=+0.098366315 container init 83f0ab3f815cee93f06d4e1fde2795dd640eeabbff7ad9263d1812c8e3742d0d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_knuth, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:03:20 np0005601226 podman[193373]: 2026-01-29 17:03:20.304431242 +0000 UTC m=+0.108163614 container start 83f0ab3f815cee93f06d4e1fde2795dd640eeabbff7ad9263d1812c8e3742d0d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_knuth, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:03:20 np0005601226 systemd[1]: libpod-83f0ab3f815cee93f06d4e1fde2795dd640eeabbff7ad9263d1812c8e3742d0d.scope: Deactivated successfully.
Jan 29 12:03:20 np0005601226 distracted_knuth[193389]: 167 167
Jan 29 12:03:20 np0005601226 podman[193373]: 2026-01-29 17:03:20.308298668 +0000 UTC m=+0.112031060 container attach 83f0ab3f815cee93f06d4e1fde2795dd640eeabbff7ad9263d1812c8e3742d0d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_knuth, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:03:20 np0005601226 podman[193373]: 2026-01-29 17:03:20.308828722 +0000 UTC m=+0.112561094 container died 83f0ab3f815cee93f06d4e1fde2795dd640eeabbff7ad9263d1812c8e3742d0d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_knuth, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 29 12:03:20 np0005601226 podman[193373]: 2026-01-29 17:03:20.21422994 +0000 UTC m=+0.017962332 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:03:20 np0005601226 systemd[1]: var-lib-containers-storage-overlay-0df9a068f778a96d504a13d72848cc29565e237fc4eab9e92789b0f083a770e1-merged.mount: Deactivated successfully.
Jan 29 12:03:20 np0005601226 podman[193373]: 2026-01-29 17:03:20.371641543 +0000 UTC m=+0.175373915 container remove 83f0ab3f815cee93f06d4e1fde2795dd640eeabbff7ad9263d1812c8e3742d0d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_knuth, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 29 12:03:20 np0005601226 systemd[1]: libpod-conmon-83f0ab3f815cee93f06d4e1fde2795dd640eeabbff7ad9263d1812c8e3742d0d.scope: Deactivated successfully.
Jan 29 12:03:20 np0005601226 podman[193465]: 2026-01-29 17:03:20.489866661 +0000 UTC m=+0.030509676 container create fcd8ba885f1404cdd2e049cc070fd8f36ca4bd0f856731770b212ccedac45cbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:03:20 np0005601226 systemd[1]: Started libpod-conmon-fcd8ba885f1404cdd2e049cc070fd8f36ca4bd0f856731770b212ccedac45cbc.scope.
Jan 29 12:03:20 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:03:20 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1aeb8b9519b0d49b428bec313b872918c2e3d313ee9c11c3fc92a4cfbfdc048/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:03:20 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1aeb8b9519b0d49b428bec313b872918c2e3d313ee9c11c3fc92a4cfbfdc048/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:03:20 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1aeb8b9519b0d49b428bec313b872918c2e3d313ee9c11c3fc92a4cfbfdc048/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:03:20 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1aeb8b9519b0d49b428bec313b872918c2e3d313ee9c11c3fc92a4cfbfdc048/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:03:20 np0005601226 podman[193465]: 2026-01-29 17:03:20.55623498 +0000 UTC m=+0.096878015 container init fcd8ba885f1404cdd2e049cc070fd8f36ca4bd0f856731770b212ccedac45cbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_nightingale, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 29 12:03:20 np0005601226 podman[193465]: 2026-01-29 17:03:20.561272907 +0000 UTC m=+0.101915922 container start fcd8ba885f1404cdd2e049cc070fd8f36ca4bd0f856731770b212ccedac45cbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 29 12:03:20 np0005601226 podman[193465]: 2026-01-29 17:03:20.564349252 +0000 UTC m=+0.104992527 container attach fcd8ba885f1404cdd2e049cc070fd8f36ca4bd0f856731770b212ccedac45cbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_nightingale, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:03:20 np0005601226 podman[193465]: 2026-01-29 17:03:20.476154996 +0000 UTC m=+0.016798051 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:03:20 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v532: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:03:20 np0005601226 python3.9[193590]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 29 12:03:21 np0005601226 lvm[193689]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 12:03:21 np0005601226 lvm[193689]: VG ceph_vg0 finished
Jan 29 12:03:21 np0005601226 lvm[193692]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 12:03:21 np0005601226 lvm[193692]: VG ceph_vg1 finished
Jan 29 12:03:21 np0005601226 lvm[193698]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 12:03:21 np0005601226 lvm[193698]: VG ceph_vg2 finished
Jan 29 12:03:21 np0005601226 upbeat_nightingale[193511]: {}
Jan 29 12:03:21 np0005601226 systemd[1]: libpod-fcd8ba885f1404cdd2e049cc070fd8f36ca4bd0f856731770b212ccedac45cbc.scope: Deactivated successfully.
Jan 29 12:03:21 np0005601226 podman[193465]: 2026-01-29 17:03:21.267559736 +0000 UTC m=+0.808202751 container died fcd8ba885f1404cdd2e049cc070fd8f36ca4bd0f856731770b212ccedac45cbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Jan 29 12:03:21 np0005601226 systemd[1]: var-lib-containers-storage-overlay-f1aeb8b9519b0d49b428bec313b872918c2e3d313ee9c11c3fc92a4cfbfdc048-merged.mount: Deactivated successfully.
Jan 29 12:03:21 np0005601226 podman[193465]: 2026-01-29 17:03:21.305911436 +0000 UTC m=+0.846554451 container remove fcd8ba885f1404cdd2e049cc070fd8f36ca4bd0f856731770b212ccedac45cbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_nightingale, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:03:21 np0005601226 systemd[1]: libpod-conmon-fcd8ba885f1404cdd2e049cc070fd8f36ca4bd0f856731770b212ccedac45cbc.scope: Deactivated successfully.
Jan 29 12:03:21 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 12:03:21 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:03:21 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 12:03:21 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:03:21 np0005601226 python3.9[193842]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 29 12:03:22 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:03:22 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:03:22 np0005601226 python3.9[194015]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 29 12:03:22 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v533: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:03:23 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:03:23 np0005601226 python3.9[194170]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 29 12:03:23 np0005601226 python3.9[194325]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 29 12:03:24 np0005601226 python3.9[194480]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 29 12:03:24 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v534: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:03:25 np0005601226 python3.9[194635]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 29 12:03:25 np0005601226 python3.9[194790]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 29 12:03:26 np0005601226 python3.9[194945]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 29 12:03:26 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v535: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:03:27 np0005601226 python3.9[195100]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 29 12:03:27 np0005601226 python3.9[195255]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 29 12:03:28 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:03:28 np0005601226 python3.9[195410]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 29 12:03:28 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v536: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:03:29 np0005601226 python3.9[195565]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 29 12:03:29 np0005601226 python3.9[195717]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 29 12:03:30 np0005601226 python3.9[195869]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 12:03:30 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v537: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:03:30 np0005601226 python3.9[196021]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 12:03:31 np0005601226 python3.9[196173]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 12:03:31 np0005601226 python3.9[196325]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 29 12:03:32 np0005601226 python3.9[196475]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 12:03:32 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v538: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:03:33 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:03:33 np0005601226 python3.9[196627]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 12:03:33 np0005601226 python3.9[196752]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769706212.5603845-557-94493216075422/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:03:34 np0005601226 python3.9[196904]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 12:03:34 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v539: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:03:34 np0005601226 python3.9[197029]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769706213.850123-557-114953468155288/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:03:35 np0005601226 python3.9[197181]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 12:03:35 np0005601226 python3.9[197306]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769706214.9833038-557-246763027832825/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:03:36 np0005601226 python3.9[197458]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 12:03:36 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v540: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:03:36 np0005601226 python3.9[197583]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769706215.9978912-557-222321055217010/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:03:37 np0005601226 podman[197707]: 2026-01-29 17:03:37.271999263 +0000 UTC m=+0.064560936 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 29 12:03:37 np0005601226 python3.9[197751]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 12:03:37 np0005601226 python3.9[197885]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769706217.0041418-557-20064636829580/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:03:38 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:03:38 np0005601226 python3.9[198037]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 12:03:38 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v541: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:03:38 np0005601226 python3.9[198162]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769706218.002181-557-13816924015878/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:03:39 np0005601226 python3.9[198314]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 12:03:39 np0005601226 python3.9[198437]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769706219.002151-557-124634684911229/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:03:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:03:40.266 155625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:03:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:03:40.266 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:03:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:03:40.266 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:03:40 np0005601226 python3.9[198589]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 12:03:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2026-01-29_17:03:40
Jan 29 12:03:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 12:03:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] do_upmap
Jan 29 12:03:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] pools ['backups', 'default.rgw.meta', 'cephfs.cephfs.data', 'volumes', 'default.rgw.control', 'vms', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.log', 'images', '.mgr']
Jan 29 12:03:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 12:03:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:03:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:03:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:03:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:03:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:03:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:03:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 12:03:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 12:03:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:03:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:03:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:03:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:03:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:03:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:03:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:03:40 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v542: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:03:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:03:41 np0005601226 python3.9[198714]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769706220.0418117-557-278122234813805/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:03:41 np0005601226 podman[198838]: 2026-01-29 17:03:41.433373619 +0000 UTC m=+0.080949243 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 29 12:03:41 np0005601226 python3.9[198883]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Jan 29 12:03:42 np0005601226 python3.9[199038]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:03:42 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v543: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:03:42 np0005601226 python3.9[199190]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:03:43 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:03:43 np0005601226 python3.9[199342]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:03:43 np0005601226 python3.9[199494]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:03:44 np0005601226 python3.9[199646]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:03:44 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v544: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:03:44 np0005601226 python3.9[199798]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:03:45 np0005601226 python3.9[199950]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:03:45 np0005601226 python3.9[200102]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:03:46 np0005601226 python3.9[200254]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:03:46 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v545: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:03:46 np0005601226 python3.9[200406]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:03:47 np0005601226 python3.9[200558]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:03:47 np0005601226 python3.9[200710]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:03:48 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:03:48 np0005601226 python3.9[200862]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:03:48 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v546: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:03:48 np0005601226 python3.9[201014]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:03:49 np0005601226 python3.9[201166]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 12:03:49 np0005601226 python3.9[201289]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769706229.1322565-778-159160284516957/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:03:50 np0005601226 python3.9[201441]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 12:03:50 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v547: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:03:50 np0005601226 python3.9[201564]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769706230.0954034-778-137181724656631/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:03:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 12:03:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:03:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 12:03:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:03:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:03:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:03:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:03:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:03:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:03:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:03:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:03:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:03:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.8121643970586627e-06 of space, bias 4.0, pg target 0.002174597276470395 quantized to 16 (current 16)
Jan 29 12:03:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:03:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:03:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:03:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 29 12:03:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:03:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 29 12:03:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:03:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:03:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:03:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 29 12:03:51 np0005601226 python3.9[201716]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 12:03:51 np0005601226 python3.9[201839]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769706231.0588398-778-85189630269912/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:03:52 np0005601226 python3.9[201991]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 12:03:52 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v548: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:03:52 np0005601226 python3.9[202114]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769706232.054407-778-142983639732967/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:03:53 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:03:53 np0005601226 python3.9[202266]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 12:03:53 np0005601226 python3.9[202389]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769706233.0047617-778-51743865736231/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:03:54 np0005601226 python3.9[202541]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 12:03:54 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v549: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:03:55 np0005601226 python3.9[202664]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769706233.9456358-778-45925095534254/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:03:55 np0005601226 python3.9[202816]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 12:03:56 np0005601226 python3.9[202939]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769706235.3397892-778-25000622600441/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:03:56 np0005601226 python3.9[203091]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 12:03:56 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v550: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:03:57 np0005601226 python3.9[203214]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769706236.260817-778-36545102958308/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:03:57 np0005601226 python3.9[203366]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 12:03:58 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:03:58 np0005601226 python3.9[203489]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769706237.2357237-778-228065838164125/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:03:58 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v551: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:03:58 np0005601226 python3.9[203641]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 12:03:59 np0005601226 python3.9[203764]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769706238.221126-778-258606692022426/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:03:59 np0005601226 python3.9[203916]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 12:04:00 np0005601226 python3.9[204039]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769706239.2790508-778-13234397336560/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:04:00 np0005601226 python3.9[204191]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 12:04:00 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v552: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:04:01 np0005601226 python3.9[204314]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769706240.2115772-778-215881356525628/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:04:01 np0005601226 python3.9[204466]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 12:04:01 np0005601226 python3.9[204589]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769706241.1560724-778-274684660083907/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:04:02 np0005601226 python3.9[204741]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 12:04:02 np0005601226 auditd[703]: Audit daemon rotating log files
Jan 29 12:04:02 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v553: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:04:02 np0005601226 python3.9[204864]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769706242.0804398-778-101981049267693/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:04:03 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:04:03 np0005601226 python3.9[205014]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 12:04:04 np0005601226 python3.9[205169]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Jan 29 12:04:04 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v554: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:04:06 np0005601226 ceph-osd[85858]: bluestore.MempoolThread fragmentation_score=0.000139 took=0.000029s
Jan 29 12:04:06 np0005601226 ceph-osd[87958]: bluestore.MempoolThread fragmentation_score=0.000142 took=0.000034s
Jan 29 12:04:06 np0005601226 ceph-osd[86917]: bluestore.MempoolThread fragmentation_score=0.000120 took=0.000032s
Jan 29 12:04:06 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v555: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:04:07 np0005601226 dbus-broker-launch[814]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Jan 29 12:04:07 np0005601226 podman[205298]: 2026-01-29 17:04:07.665776838 +0000 UTC m=+0.069628185 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 29 12:04:07 np0005601226 python3.9[205343]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:04:08 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:04:08 np0005601226 python3.9[205504]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:04:08 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v556: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:04:08 np0005601226 python3.9[205656]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:04:09 np0005601226 python3.9[205808]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:04:09 np0005601226 python3.9[205960]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:04:10 np0005601226 python3.9[206112]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:04:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:04:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:04:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:04:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:04:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:04:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:04:10 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v557: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:04:10 np0005601226 python3.9[206264]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:04:11 np0005601226 python3.9[206416]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:04:11 np0005601226 podman[206540]: 2026-01-29 17:04:11.656819695 +0000 UTC m=+0.038898093 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent)
Jan 29 12:04:11 np0005601226 python3.9[206587]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:04:12 np0005601226 python3.9[206739]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:04:12 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v558: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:04:13 np0005601226 python3.9[206891]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 29 12:04:13 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:04:13 np0005601226 systemd[1]: Reloading.
Jan 29 12:04:13 np0005601226 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 12:04:13 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 12:04:13 np0005601226 systemd[1]: Starting libvirt logging daemon socket...
Jan 29 12:04:13 np0005601226 systemd[1]: Listening on libvirt logging daemon socket.
Jan 29 12:04:13 np0005601226 systemd[1]: Starting libvirt logging daemon admin socket...
Jan 29 12:04:13 np0005601226 systemd[1]: Listening on libvirt logging daemon admin socket.
Jan 29 12:04:13 np0005601226 systemd[1]: Starting libvirt logging daemon...
Jan 29 12:04:13 np0005601226 systemd[1]: Started libvirt logging daemon.
Jan 29 12:04:14 np0005601226 python3.9[207085]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 29 12:04:14 np0005601226 systemd[1]: Reloading.
Jan 29 12:04:14 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 12:04:14 np0005601226 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 12:04:14 np0005601226 systemd[1]: Starting libvirt nodedev daemon socket...
Jan 29 12:04:14 np0005601226 systemd[1]: Listening on libvirt nodedev daemon socket.
Jan 29 12:04:14 np0005601226 systemd[1]: Starting libvirt nodedev daemon admin socket...
Jan 29 12:04:14 np0005601226 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Jan 29 12:04:14 np0005601226 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Jan 29 12:04:14 np0005601226 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Jan 29 12:04:14 np0005601226 systemd[1]: Starting libvirt nodedev daemon...
Jan 29 12:04:14 np0005601226 systemd[1]: Started libvirt nodedev daemon.
Jan 29 12:04:14 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v559: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:04:15 np0005601226 python3.9[207301]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 29 12:04:15 np0005601226 systemd[1]: Reloading.
Jan 29 12:04:15 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 12:04:15 np0005601226 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 12:04:15 np0005601226 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Jan 29 12:04:15 np0005601226 systemd[1]: Starting libvirt proxy daemon admin socket...
Jan 29 12:04:15 np0005601226 systemd[1]: Starting libvirt proxy daemon read-only socket...
Jan 29 12:04:15 np0005601226 systemd[1]: Listening on libvirt proxy daemon admin socket.
Jan 29 12:04:15 np0005601226 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Jan 29 12:04:15 np0005601226 systemd[1]: Starting libvirt proxy daemon...
Jan 29 12:04:15 np0005601226 systemd[1]: Started libvirt proxy daemon.
Jan 29 12:04:15 np0005601226 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Jan 29 12:04:15 np0005601226 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Jan 29 12:04:15 np0005601226 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Jan 29 12:04:16 np0005601226 python3.9[207512]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 29 12:04:16 np0005601226 systemd[1]: Reloading.
Jan 29 12:04:16 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 12:04:16 np0005601226 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 12:04:16 np0005601226 systemd[1]: Listening on libvirt locking daemon socket.
Jan 29 12:04:16 np0005601226 systemd[1]: Starting libvirt QEMU daemon socket...
Jan 29 12:04:16 np0005601226 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 29 12:04:16 np0005601226 systemd[1]: Starting Virtual Machine and Container Registration Service...
Jan 29 12:04:16 np0005601226 systemd[1]: Listening on libvirt QEMU daemon socket.
Jan 29 12:04:16 np0005601226 systemd[1]: Starting libvirt QEMU daemon admin socket...
Jan 29 12:04:16 np0005601226 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Jan 29 12:04:16 np0005601226 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Jan 29 12:04:16 np0005601226 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Jan 29 12:04:16 np0005601226 systemd[1]: Started Virtual Machine and Container Registration Service.
Jan 29 12:04:16 np0005601226 systemd[1]: Starting libvirt QEMU daemon...
Jan 29 12:04:16 np0005601226 systemd[1]: Started libvirt QEMU daemon.
Jan 29 12:04:16 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v560: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:04:16 np0005601226 setroubleshoot[207337]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 66128271-704d-4132-bc46-2357c118f042
Jan 29 12:04:16 np0005601226 setroubleshoot[207337]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Jan 29 12:04:16 np0005601226 setroubleshoot[207337]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 66128271-704d-4132-bc46-2357c118f042
Jan 29 12:04:16 np0005601226 setroubleshoot[207337]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Jan 29 12:04:17 np0005601226 python3.9[207736]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 29 12:04:17 np0005601226 systemd[1]: Reloading.
Jan 29 12:04:17 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 12:04:17 np0005601226 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 12:04:17 np0005601226 systemd[1]: Starting libvirt secret daemon socket...
Jan 29 12:04:17 np0005601226 systemd[1]: Listening on libvirt secret daemon socket.
Jan 29 12:04:17 np0005601226 systemd[1]: Starting libvirt secret daemon admin socket...
Jan 29 12:04:17 np0005601226 systemd[1]: Starting libvirt secret daemon read-only socket...
Jan 29 12:04:17 np0005601226 systemd[1]: Listening on libvirt secret daemon read-only socket.
Jan 29 12:04:17 np0005601226 systemd[1]: Listening on libvirt secret daemon admin socket.
Jan 29 12:04:17 np0005601226 systemd[1]: Starting libvirt secret daemon...
Jan 29 12:04:17 np0005601226 systemd[1]: Started libvirt secret daemon.
Jan 29 12:04:18 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:04:18 np0005601226 python3.9[207949]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:04:18 np0005601226 python3.9[208101]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 29 12:04:18 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v561: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:04:19 np0005601226 python3.9[208253]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 12:04:19 np0005601226 python3.9[208407]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 29 12:04:20 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v562: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:04:20 np0005601226 python3.9[208557]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 12:04:21 np0005601226 python3.9[208678]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769706260.2843637-1136-243221726547071/.source.xml follow=False _original_basename=secret.xml.j2 checksum=b9f29682cd191d70437e7cbafeebc16acb2b7fb2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:04:21 np0005601226 ceph-mon[75233]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Jan 29 12:04:21 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:04:21.297273) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 29 12:04:21 np0005601226 ceph-mon[75233]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Jan 29 12:04:21 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769706261297308, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2037, "num_deletes": 251, "total_data_size": 3613870, "memory_usage": 3673000, "flush_reason": "Manual Compaction"}
Jan 29 12:04:21 np0005601226 ceph-mon[75233]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Jan 29 12:04:21 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769706261346324, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 3529825, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9877, "largest_seqno": 11913, "table_properties": {"data_size": 3520534, "index_size": 5913, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 17751, "raw_average_key_size": 19, "raw_value_size": 3502185, "raw_average_value_size": 3835, "num_data_blocks": 267, "num_entries": 913, "num_filter_entries": 913, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769706024, "oldest_key_time": 1769706024, "file_creation_time": 1769706261, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "affa2982-d59d-4189-b5dd-817a80fada55", "db_session_id": "3LVBT2JQJ5HZ0LRVKGW6", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Jan 29 12:04:21 np0005601226 ceph-mon[75233]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 49136 microseconds, and 4562 cpu microseconds.
Jan 29 12:04:21 np0005601226 ceph-mon[75233]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 29 12:04:21 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:04:21.346401) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 3529825 bytes OK
Jan 29 12:04:21 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:04:21.346425) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Jan 29 12:04:21 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:04:21.369949) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Jan 29 12:04:21 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:04:21.369990) EVENT_LOG_v1 {"time_micros": 1769706261369982, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 29 12:04:21 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:04:21.370010) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 29 12:04:21 np0005601226 ceph-mon[75233]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3605384, prev total WAL file size 3605384, number of live WAL files 2.
Jan 29 12:04:21 np0005601226 ceph-mon[75233]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 12:04:21 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:04:21.370660) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Jan 29 12:04:21 np0005601226 ceph-mon[75233]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 29 12:04:21 np0005601226 ceph-mon[75233]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(3447KB)], [26(6918KB)]
Jan 29 12:04:21 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769706261370710, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 10614653, "oldest_snapshot_seqno": -1}
Jan 29 12:04:21 np0005601226 ceph-mon[75233]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 3801 keys, 8840115 bytes, temperature: kUnknown
Jan 29 12:04:21 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769706261588194, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 8840115, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8810286, "index_size": 19223, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9541, "raw_key_size": 91525, "raw_average_key_size": 24, "raw_value_size": 8737289, "raw_average_value_size": 2298, "num_data_blocks": 829, "num_entries": 3801, "num_filter_entries": 3801, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769705351, "oldest_key_time": 0, "file_creation_time": 1769706261, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "affa2982-d59d-4189-b5dd-817a80fada55", "db_session_id": "3LVBT2JQJ5HZ0LRVKGW6", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Jan 29 12:04:21 np0005601226 ceph-mon[75233]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 29 12:04:21 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:04:21.588446) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 8840115 bytes
Jan 29 12:04:21 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:04:21.596997) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 48.8 rd, 40.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.4, 6.8 +0.0 blob) out(8.4 +0.0 blob), read-write-amplify(5.5) write-amplify(2.5) OK, records in: 4315, records dropped: 514 output_compression: NoCompression
Jan 29 12:04:21 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:04:21.597021) EVENT_LOG_v1 {"time_micros": 1769706261597009, "job": 10, "event": "compaction_finished", "compaction_time_micros": 217573, "compaction_time_cpu_micros": 14525, "output_level": 6, "num_output_files": 1, "total_output_size": 8840115, "num_input_records": 4315, "num_output_records": 3801, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 29 12:04:21 np0005601226 ceph-mon[75233]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 12:04:21 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769706261597379, "job": 10, "event": "table_file_deletion", "file_number": 28}
Jan 29 12:04:21 np0005601226 ceph-mon[75233]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 12:04:21 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769706261597812, "job": 10, "event": "table_file_deletion", "file_number": 26}
Jan 29 12:04:21 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:04:21.370617) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:04:21 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:04:21.597880) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:04:21 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:04:21.597886) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:04:21 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:04:21.597889) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:04:21 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:04:21.597891) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:04:21 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:04:21.597893) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:04:21 np0005601226 python3.9[208880]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine cc5c72e3-31e0-58b9-8731-456117d38f4a#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 12:04:21 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 12:04:21 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 12:04:21 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 12:04:21 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 12:04:21 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 12:04:21 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:04:21 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 12:04:21 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 12:04:21 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 12:04:21 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 12:04:21 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 12:04:21 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 12:04:22 np0005601226 podman[209118]: 2026-01-29 17:04:22.290934207 +0000 UTC m=+0.018385524 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:04:22 np0005601226 python3.9[209146]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:04:22 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 12:04:22 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:04:22 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 12:04:22 np0005601226 podman[209118]: 2026-01-29 17:04:22.589746946 +0000 UTC m=+0.317198233 container create 44ab058b8e83a47d2207519d7e4e721e079736067202c17fa37124cbae9846ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_moore, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:04:22 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v563: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:04:22 np0005601226 systemd[1]: Started libpod-conmon-44ab058b8e83a47d2207519d7e4e721e079736067202c17fa37124cbae9846ee.scope.
Jan 29 12:04:22 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:04:22 np0005601226 podman[209118]: 2026-01-29 17:04:22.731406028 +0000 UTC m=+0.458857345 container init 44ab058b8e83a47d2207519d7e4e721e079736067202c17fa37124cbae9846ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_moore, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 29 12:04:22 np0005601226 podman[209118]: 2026-01-29 17:04:22.741041541 +0000 UTC m=+0.468492818 container start 44ab058b8e83a47d2207519d7e4e721e079736067202c17fa37124cbae9846ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_moore, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 29 12:04:22 np0005601226 keen_moore[209199]: 167 167
Jan 29 12:04:22 np0005601226 systemd[1]: libpod-44ab058b8e83a47d2207519d7e4e721e079736067202c17fa37124cbae9846ee.scope: Deactivated successfully.
Jan 29 12:04:22 np0005601226 podman[209118]: 2026-01-29 17:04:22.753147403 +0000 UTC m=+0.480598700 container attach 44ab058b8e83a47d2207519d7e4e721e079736067202c17fa37124cbae9846ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_moore, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 29 12:04:22 np0005601226 podman[209118]: 2026-01-29 17:04:22.753530813 +0000 UTC m=+0.480982110 container died 44ab058b8e83a47d2207519d7e4e721e079736067202c17fa37124cbae9846ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_moore, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 29 12:04:22 np0005601226 systemd[1]: var-lib-containers-storage-overlay-913c8d10a4853639a10e649b79dfb645f9e7200b5bc651d7d7dba19c51a28ea3-merged.mount: Deactivated successfully.
Jan 29 12:04:23 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:04:23 np0005601226 podman[209118]: 2026-01-29 17:04:23.201592911 +0000 UTC m=+0.929044198 container remove 44ab058b8e83a47d2207519d7e4e721e079736067202c17fa37124cbae9846ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_moore, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 29 12:04:23 np0005601226 systemd[1]: libpod-conmon-44ab058b8e83a47d2207519d7e4e721e079736067202c17fa37124cbae9846ee.scope: Deactivated successfully.
Jan 29 12:04:23 np0005601226 podman[209352]: 2026-01-29 17:04:23.312615225 +0000 UTC m=+0.023384310 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:04:23 np0005601226 podman[209352]: 2026-01-29 17:04:23.695541613 +0000 UTC m=+0.406310678 container create 8ff07c5a394c12db68404d823a272b272e9bb8175b843925c0413cef73dfb940 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_mirzakhani, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 29 12:04:24 np0005601226 systemd[1]: Started libpod-conmon-8ff07c5a394c12db68404d823a272b272e9bb8175b843925c0413cef73dfb940.scope.
Jan 29 12:04:24 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:04:24 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ef102619111e96a7762f5fbdb86165dc0160cf44644506b0943c6949aea6387/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:04:24 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ef102619111e96a7762f5fbdb86165dc0160cf44644506b0943c6949aea6387/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:04:24 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ef102619111e96a7762f5fbdb86165dc0160cf44644506b0943c6949aea6387/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:04:24 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ef102619111e96a7762f5fbdb86165dc0160cf44644506b0943c6949aea6387/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:04:24 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ef102619111e96a7762f5fbdb86165dc0160cf44644506b0943c6949aea6387/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 12:04:24 np0005601226 podman[209352]: 2026-01-29 17:04:24.198654786 +0000 UTC m=+0.909423881 container init 8ff07c5a394c12db68404d823a272b272e9bb8175b843925c0413cef73dfb940 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_mirzakhani, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 29 12:04:24 np0005601226 podman[209352]: 2026-01-29 17:04:24.205265347 +0000 UTC m=+0.916034422 container start 8ff07c5a394c12db68404d823a272b272e9bb8175b843925c0413cef73dfb940 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_mirzakhani, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 29 12:04:24 np0005601226 podman[209352]: 2026-01-29 17:04:24.381081023 +0000 UTC m=+1.091850088 container attach 8ff07c5a394c12db68404d823a272b272e9bb8175b843925c0413cef73dfb940 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_mirzakhani, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:04:24 np0005601226 nifty_mirzakhani[209502]: --> passed data devices: 0 physical, 3 LVM
Jan 29 12:04:24 np0005601226 nifty_mirzakhani[209502]: --> All data devices are unavailable
Jan 29 12:04:24 np0005601226 python3.9[209665]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:04:24 np0005601226 systemd[1]: libpod-8ff07c5a394c12db68404d823a272b272e9bb8175b843925c0413cef73dfb940.scope: Deactivated successfully.
Jan 29 12:04:24 np0005601226 conmon[209502]: conmon 8ff07c5a394c12db6840 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8ff07c5a394c12db68404d823a272b272e9bb8175b843925c0413cef73dfb940.scope/container/memory.events
Jan 29 12:04:24 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v564: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:04:24 np0005601226 podman[209674]: 2026-01-29 17:04:24.67983414 +0000 UTC m=+0.025928590 container died 8ff07c5a394c12db68404d823a272b272e9bb8175b843925c0413cef73dfb940 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 29 12:04:25 np0005601226 systemd[1]: var-lib-containers-storage-overlay-0ef102619111e96a7762f5fbdb86165dc0160cf44644506b0943c6949aea6387-merged.mount: Deactivated successfully.
Jan 29 12:04:25 np0005601226 python3.9[209837]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 12:04:25 np0005601226 podman[209674]: 2026-01-29 17:04:25.391328609 +0000 UTC m=+0.737423059 container remove 8ff07c5a394c12db68404d823a272b272e9bb8175b843925c0413cef73dfb940 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_mirzakhani, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 29 12:04:25 np0005601226 systemd[1]: libpod-conmon-8ff07c5a394c12db68404d823a272b272e9bb8175b843925c0413cef73dfb940.scope: Deactivated successfully.
Jan 29 12:04:25 np0005601226 python3.9[210011]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769706264.831064-1191-81562906344475/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:04:25 np0005601226 podman[210026]: 2026-01-29 17:04:25.75629665 +0000 UTC m=+0.026388377 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:04:25 np0005601226 podman[210026]: 2026-01-29 17:04:25.91557639 +0000 UTC m=+0.185668097 container create 7c0be608f413bba0fddc5f7378e454bcff202592ef0594de977d93e345da0c42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 29 12:04:26 np0005601226 systemd[1]: Started libpod-conmon-7c0be608f413bba0fddc5f7378e454bcff202592ef0594de977d93e345da0c42.scope.
Jan 29 12:04:26 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:04:26 np0005601226 podman[210026]: 2026-01-29 17:04:26.260606395 +0000 UTC m=+0.530698122 container init 7c0be608f413bba0fddc5f7378e454bcff202592ef0594de977d93e345da0c42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_lovelace, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:04:26 np0005601226 podman[210026]: 2026-01-29 17:04:26.26880683 +0000 UTC m=+0.538898527 container start 7c0be608f413bba0fddc5f7378e454bcff202592ef0594de977d93e345da0c42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_lovelace, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:04:26 np0005601226 eager_lovelace[210158]: 167 167
Jan 29 12:04:26 np0005601226 systemd[1]: libpod-7c0be608f413bba0fddc5f7378e454bcff202592ef0594de977d93e345da0c42.scope: Deactivated successfully.
Jan 29 12:04:26 np0005601226 python3.9[210196]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:04:26 np0005601226 podman[210026]: 2026-01-29 17:04:26.443256857 +0000 UTC m=+0.713348564 container attach 7c0be608f413bba0fddc5f7378e454bcff202592ef0594de977d93e345da0c42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_lovelace, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:04:26 np0005601226 podman[210026]: 2026-01-29 17:04:26.443555705 +0000 UTC m=+0.713647412 container died 7c0be608f413bba0fddc5f7378e454bcff202592ef0594de977d93e345da0c42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3)
Jan 29 12:04:26 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v565: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:04:26 np0005601226 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Jan 29 12:04:27 np0005601226 systemd[1]: setroubleshootd.service: Deactivated successfully.
Jan 29 12:04:27 np0005601226 systemd[1]: var-lib-containers-storage-overlay-39fa0c3d4e9f31f0118fda38612a76abf09ee9b7874537300d0d1d6604c03d87-merged.mount: Deactivated successfully.
Jan 29 12:04:27 np0005601226 python3.9[210362]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 12:04:27 np0005601226 podman[210026]: 2026-01-29 17:04:27.333637098 +0000 UTC m=+1.603728795 container remove 7c0be608f413bba0fddc5f7378e454bcff202592ef0594de977d93e345da0c42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_lovelace, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 29 12:04:27 np0005601226 systemd[1]: libpod-conmon-7c0be608f413bba0fddc5f7378e454bcff202592ef0594de977d93e345da0c42.scope: Deactivated successfully.
Jan 29 12:04:27 np0005601226 podman[210440]: 2026-01-29 17:04:27.428135841 +0000 UTC m=+0.019194089 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:04:27 np0005601226 podman[210440]: 2026-01-29 17:04:27.582510865 +0000 UTC m=+0.173569103 container create 152f360a0d968b6473ca24cd7dd2799ac77e69fdf45fded735825bf4eaa5d846 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 29 12:04:27 np0005601226 python3.9[210459]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:04:27 np0005601226 systemd[1]: Started libpod-conmon-152f360a0d968b6473ca24cd7dd2799ac77e69fdf45fded735825bf4eaa5d846.scope.
Jan 29 12:04:27 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:04:27 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb8ebe8487f07872ae06a4deac11b4d806ab32067b13d16cfa992b3d8969eee0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:04:27 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb8ebe8487f07872ae06a4deac11b4d806ab32067b13d16cfa992b3d8969eee0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:04:27 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb8ebe8487f07872ae06a4deac11b4d806ab32067b13d16cfa992b3d8969eee0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:04:27 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb8ebe8487f07872ae06a4deac11b4d806ab32067b13d16cfa992b3d8969eee0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:04:27 np0005601226 podman[210440]: 2026-01-29 17:04:27.857664265 +0000 UTC m=+0.448722513 container init 152f360a0d968b6473ca24cd7dd2799ac77e69fdf45fded735825bf4eaa5d846 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_joliot, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:04:27 np0005601226 podman[210440]: 2026-01-29 17:04:27.863070144 +0000 UTC m=+0.454128372 container start 152f360a0d968b6473ca24cd7dd2799ac77e69fdf45fded735825bf4eaa5d846 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_joliot, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:04:27 np0005601226 podman[210440]: 2026-01-29 17:04:27.934175033 +0000 UTC m=+0.525233281 container attach 152f360a0d968b6473ca24cd7dd2799ac77e69fdf45fded735825bf4eaa5d846 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]: {
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:    "0": [
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:        {
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:            "devices": [
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:                "/dev/loop3"
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:            ],
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:            "lv_name": "ceph_lv0",
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:            "lv_size": "21470642176",
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:            "lv_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:            "name": "ceph_lv0",
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:            "tags": {
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:                "ceph.block_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:                "ceph.cluster_name": "ceph",
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:                "ceph.crush_device_class": "",
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:                "ceph.encrypted": "0",
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:                "ceph.objectstore": "bluestore",
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:                "ceph.osd_fsid": "0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1",
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:                "ceph.osd_id": "0",
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:                "ceph.type": "block",
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:                "ceph.vdo": "0",
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:                "ceph.with_tpm": "0"
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:            },
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:            "type": "block",
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:            "vg_name": "ceph_vg0"
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:        }
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:    ],
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:    "1": [
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:        {
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:            "devices": [
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:                "/dev/loop4"
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:            ],
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:            "lv_name": "ceph_lv1",
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:            "lv_size": "21470642176",
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b59b9ee3-7bef-4274-a8bf-0f9cce011ae7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:            "lv_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:            "name": "ceph_lv1",
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:            "tags": {
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:                "ceph.block_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:                "ceph.cluster_name": "ceph",
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:                "ceph.crush_device_class": "",
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:                "ceph.encrypted": "0",
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:                "ceph.objectstore": "bluestore",
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:                "ceph.osd_fsid": "b59b9ee3-7bef-4274-a8bf-0f9cce011ae7",
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:                "ceph.osd_id": "1",
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:                "ceph.type": "block",
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:                "ceph.vdo": "0",
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:                "ceph.with_tpm": "0"
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:            },
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:            "type": "block",
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:            "vg_name": "ceph_vg1"
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:        }
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:    ],
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:    "2": [
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:        {
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:            "devices": [
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:                "/dev/loop5"
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:            ],
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:            "lv_name": "ceph_lv2",
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:            "lv_size": "21470642176",
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=791d3808-828d-4f85-a3de-28df49f6a6ef,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:            "lv_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:            "name": "ceph_lv2",
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:            "tags": {
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:                "ceph.block_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:                "ceph.cluster_name": "ceph",
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:                "ceph.crush_device_class": "",
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:                "ceph.encrypted": "0",
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:                "ceph.objectstore": "bluestore",
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:                "ceph.osd_fsid": "791d3808-828d-4f85-a3de-28df49f6a6ef",
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:                "ceph.osd_id": "2",
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:                "ceph.type": "block",
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:                "ceph.vdo": "0",
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:                "ceph.with_tpm": "0"
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:            },
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:            "type": "block",
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:            "vg_name": "ceph_vg2"
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:        }
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]:    ]
Jan 29 12:04:28 np0005601226 agitated_joliot[210487]: }
Jan 29 12:04:28 np0005601226 systemd[1]: libpod-152f360a0d968b6473ca24cd7dd2799ac77e69fdf45fded735825bf4eaa5d846.scope: Deactivated successfully.
Jan 29 12:04:28 np0005601226 conmon[210487]: conmon 152f360a0d968b6473ca <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-152f360a0d968b6473ca24cd7dd2799ac77e69fdf45fded735825bf4eaa5d846.scope/container/memory.events
Jan 29 12:04:28 np0005601226 podman[210440]: 2026-01-29 17:04:28.142313678 +0000 UTC m=+0.733371896 container died 152f360a0d968b6473ca24cd7dd2799ac77e69fdf45fded735825bf4eaa5d846 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 29 12:04:28 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:04:28 np0005601226 python3.9[210620]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 12:04:28 np0005601226 systemd[1]: var-lib-containers-storage-overlay-cb8ebe8487f07872ae06a4deac11b4d806ab32067b13d16cfa992b3d8969eee0-merged.mount: Deactivated successfully.
Jan 29 12:04:28 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v566: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:04:28 np0005601226 python3.9[210712]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.n_e5ed8t recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:04:28 np0005601226 podman[210440]: 2026-01-29 17:04:28.945430304 +0000 UTC m=+1.536488522 container remove 152f360a0d968b6473ca24cd7dd2799ac77e69fdf45fded735825bf4eaa5d846 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 29 12:04:29 np0005601226 systemd[1]: libpod-conmon-152f360a0d968b6473ca24cd7dd2799ac77e69fdf45fded735825bf4eaa5d846.scope: Deactivated successfully.
Jan 29 12:04:29 np0005601226 python3.9[210915]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 12:04:29 np0005601226 podman[210928]: 2026-01-29 17:04:29.275747594 +0000 UTC m=+0.016748582 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:04:29 np0005601226 podman[210928]: 2026-01-29 17:04:29.41531466 +0000 UTC m=+0.156315618 container create 0743425485dc854dce5558eb39437a8939ca03616bdb089720986a7e3e384df0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 29 12:04:29 np0005601226 python3.9[211019]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:04:29 np0005601226 systemd[1]: Started libpod-conmon-0743425485dc854dce5558eb39437a8939ca03616bdb089720986a7e3e384df0.scope.
Jan 29 12:04:29 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:04:29 np0005601226 podman[210928]: 2026-01-29 17:04:29.882015517 +0000 UTC m=+0.623016495 container init 0743425485dc854dce5558eb39437a8939ca03616bdb089720986a7e3e384df0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_curie, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:04:29 np0005601226 podman[210928]: 2026-01-29 17:04:29.888734743 +0000 UTC m=+0.629735721 container start 0743425485dc854dce5558eb39437a8939ca03616bdb089720986a7e3e384df0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True)
Jan 29 12:04:29 np0005601226 hardcore_curie[211046]: 167 167
Jan 29 12:04:29 np0005601226 systemd[1]: libpod-0743425485dc854dce5558eb39437a8939ca03616bdb089720986a7e3e384df0.scope: Deactivated successfully.
Jan 29 12:04:30 np0005601226 podman[210928]: 2026-01-29 17:04:30.088853906 +0000 UTC m=+0.829854884 container attach 0743425485dc854dce5558eb39437a8939ca03616bdb089720986a7e3e384df0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_curie, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:04:30 np0005601226 podman[210928]: 2026-01-29 17:04:30.089787581 +0000 UTC m=+0.830788539 container died 0743425485dc854dce5558eb39437a8939ca03616bdb089720986a7e3e384df0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_curie, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 29 12:04:30 np0005601226 python3.9[211189]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 12:04:30 np0005601226 systemd[1]: var-lib-containers-storage-overlay-5e5f97d57f380322ebc67c048fac8f8928a6eb4af40a2166d6f693281aaaa0f7-merged.mount: Deactivated successfully.
Jan 29 12:04:30 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v567: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:04:30 np0005601226 podman[210928]: 2026-01-29 17:04:30.975772051 +0000 UTC m=+1.716773009 container remove 0743425485dc854dce5558eb39437a8939ca03616bdb089720986a7e3e384df0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 29 12:04:30 np0005601226 python3[211343]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 29 12:04:31 np0005601226 systemd[1]: libpod-conmon-0743425485dc854dce5558eb39437a8939ca03616bdb089720986a7e3e384df0.scope: Deactivated successfully.
Jan 29 12:04:31 np0005601226 podman[211364]: 2026-01-29 17:04:31.148411078 +0000 UTC m=+0.093164518 container create 0009669a2a8b1265eda7063a5ac03259d115ec3adb8e2c3af4205fcce3cee811 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:04:31 np0005601226 podman[211364]: 2026-01-29 17:04:31.073813952 +0000 UTC m=+0.018567422 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:04:31 np0005601226 systemd[1]: Started libpod-conmon-0009669a2a8b1265eda7063a5ac03259d115ec3adb8e2c3af4205fcce3cee811.scope.
Jan 29 12:04:31 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:04:31 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c04d9bb77a31998af51561595359d3b7741c08e2a4af5a197cab220057c04e71/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:04:31 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c04d9bb77a31998af51561595359d3b7741c08e2a4af5a197cab220057c04e71/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:04:31 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c04d9bb77a31998af51561595359d3b7741c08e2a4af5a197cab220057c04e71/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:04:31 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c04d9bb77a31998af51561595359d3b7741c08e2a4af5a197cab220057c04e71/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:04:31 np0005601226 podman[211364]: 2026-01-29 17:04:31.483542771 +0000 UTC m=+0.428296231 container init 0009669a2a8b1265eda7063a5ac03259d115ec3adb8e2c3af4205fcce3cee811 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_einstein, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:04:31 np0005601226 podman[211364]: 2026-01-29 17:04:31.489436093 +0000 UTC m=+0.434189533 container start 0009669a2a8b1265eda7063a5ac03259d115ec3adb8e2c3af4205fcce3cee811 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:04:31 np0005601226 podman[211364]: 2026-01-29 17:04:31.504681823 +0000 UTC m=+0.449435263 container attach 0009669a2a8b1265eda7063a5ac03259d115ec3adb8e2c3af4205fcce3cee811 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_einstein, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 12:04:31 np0005601226 python3.9[211521]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 12:04:32 np0005601226 lvm[211676]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 12:04:32 np0005601226 lvm[211675]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 12:04:32 np0005601226 lvm[211675]: VG ceph_vg0 finished
Jan 29 12:04:32 np0005601226 lvm[211676]: VG ceph_vg1 finished
Jan 29 12:04:32 np0005601226 lvm[211678]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 12:04:32 np0005601226 lvm[211678]: VG ceph_vg2 finished
Jan 29 12:04:32 np0005601226 python3.9[211661]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:04:32 np0005601226 objective_einstein[211490]: {}
Jan 29 12:04:32 np0005601226 systemd[1]: libpod-0009669a2a8b1265eda7063a5ac03259d115ec3adb8e2c3af4205fcce3cee811.scope: Deactivated successfully.
Jan 29 12:04:32 np0005601226 systemd[1]: libpod-0009669a2a8b1265eda7063a5ac03259d115ec3adb8e2c3af4205fcce3cee811.scope: Consumed 1.025s CPU time.
Jan 29 12:04:32 np0005601226 podman[211364]: 2026-01-29 17:04:32.227343463 +0000 UTC m=+1.172096903 container died 0009669a2a8b1265eda7063a5ac03259d115ec3adb8e2c3af4205fcce3cee811 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_einstein, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True)
Jan 29 12:04:32 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v568: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:04:32 np0005601226 systemd[1]: var-lib-containers-storage-overlay-c04d9bb77a31998af51561595359d3b7741c08e2a4af5a197cab220057c04e71-merged.mount: Deactivated successfully.
Jan 29 12:04:32 np0005601226 python3.9[211845]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 12:04:32 np0005601226 podman[211364]: 2026-01-29 17:04:32.943451743 +0000 UTC m=+1.888205183 container remove 0009669a2a8b1265eda7063a5ac03259d115ec3adb8e2c3af4205fcce3cee811 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 29 12:04:32 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 12:04:32 np0005601226 systemd[1]: libpod-conmon-0009669a2a8b1265eda7063a5ac03259d115ec3adb8e2c3af4205fcce3cee811.scope: Deactivated successfully.
Jan 29 12:04:33 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:04:33 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 12:04:33 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:04:33 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:04:33 np0005601226 python3.9[211971]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769706272.289991-1280-196922133835449/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:04:33 np0005601226 python3.9[212148]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 12:04:34 np0005601226 python3.9[212226]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:04:34 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:04:34 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:04:34 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v569: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:04:34 np0005601226 python3.9[212378]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 12:04:35 np0005601226 python3.9[212456]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:04:35 np0005601226 python3.9[212608]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 12:04:36 np0005601226 python3.9[212733]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769706275.3003385-1319-9460212716326/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:04:36 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v570: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:04:36 np0005601226 python3.9[212887]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:04:37 np0005601226 python3.9[213039]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 12:04:37 np0005601226 podman[213043]: 2026-01-29 17:04:37.900586975 +0000 UTC m=+0.075020658 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0)
Jan 29 12:04:38 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:04:38 np0005601226 python3.9[213221]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:04:38 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v571: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:04:39 np0005601226 python3.9[213373]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 12:04:39 np0005601226 python3.9[213526]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 12:04:40 np0005601226 python3.9[213680]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 12:04:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:04:40.266 155625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:04:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:04:40.266 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:04:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:04:40.266 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:04:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2026-01-29_17:04:40
Jan 29 12:04:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 12:04:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] do_upmap
Jan 29 12:04:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] pools ['default.rgw.log', '.rgw.root', 'vms', '.mgr', 'default.rgw.meta', 'backups', 'volumes', 'cephfs.cephfs.data', 'default.rgw.control', 'images', 'cephfs.cephfs.meta']
Jan 29 12:04:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 12:04:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:04:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:04:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:04:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:04:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:04:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:04:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 12:04:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 12:04:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:04:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:04:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:04:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:04:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:04:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:04:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:04:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:04:40 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v572: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:04:41 np0005601226 python3.9[213835]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:04:41 np0005601226 python3.9[213987]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 12:04:41 np0005601226 podman[214035]: 2026-01-29 17:04:41.893867792 +0000 UTC m=+0.069934787 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:04:42 np0005601226 python3.9[214129]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769706281.2819529-1391-69053388312392/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:04:42 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v573: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:04:42 np0005601226 python3.9[214281]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 12:04:43 np0005601226 python3.9[214404]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769706282.291406-1406-241856892922957/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:04:43 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:04:43 np0005601226 python3.9[214556]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 12:04:44 np0005601226 python3.9[214679]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769706283.3435566-1421-202620458258230/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:04:44 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v574: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:04:44 np0005601226 python3.9[214831]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 12:04:44 np0005601226 systemd[1]: Reloading.
Jan 29 12:04:45 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 12:04:45 np0005601226 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 12:04:45 np0005601226 systemd[1]: Reached target edpm_libvirt.target.
Jan 29 12:04:45 np0005601226 python3.9[215022]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 29 12:04:45 np0005601226 systemd[1]: Reloading.
Jan 29 12:04:46 np0005601226 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 12:04:46 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 12:04:46 np0005601226 systemd[1]: Reloading.
Jan 29 12:04:46 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 12:04:46 np0005601226 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 12:04:46 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v575: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:04:47 np0005601226 systemd[1]: session-49.scope: Deactivated successfully.
Jan 29 12:04:47 np0005601226 systemd[1]: session-49.scope: Consumed 2min 46.070s CPU time.
Jan 29 12:04:47 np0005601226 systemd-logind[823]: Session 49 logged out. Waiting for processes to exit.
Jan 29 12:04:47 np0005601226 systemd-logind[823]: Removed session 49.
Jan 29 12:04:48 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:04:48 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v576: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:04:50 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v577: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:04:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 12:04:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:04:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 12:04:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:04:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:04:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:04:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:04:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:04:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:04:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:04:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:04:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:04:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.8121643970586627e-06 of space, bias 4.0, pg target 0.002174597276470395 quantized to 16 (current 16)
Jan 29 12:04:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:04:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:04:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:04:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 29 12:04:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:04:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 29 12:04:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:04:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:04:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:04:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 29 12:04:51 np0005601226 systemd-logind[823]: New session 50 of user zuul.
Jan 29 12:04:51 np0005601226 systemd[1]: Started Session 50 of User zuul.
Jan 29 12:04:52 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v578: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:04:52 np0005601226 python3.9[215271]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 12:04:53 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:04:53 np0005601226 python3.9[215425]: ansible-ansible.builtin.service_facts Invoked
Jan 29 12:04:53 np0005601226 network[215442]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 29 12:04:53 np0005601226 network[215443]: 'network-scripts' will be removed from distribution in near future.
Jan 29 12:04:53 np0005601226 network[215444]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 29 12:04:55 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v579: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:04:56 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v580: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:04:57 np0005601226 python3.9[215716]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 29 12:04:58 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:04:58 np0005601226 python3.9[215800]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 29 12:04:58 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v581: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:05:00 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v582: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:05:02 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v583: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:05:03 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:05:04 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v584: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:05:04 np0005601226 python3.9[215953]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 12:05:05 np0005601226 python3.9[216105]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 12:05:06 np0005601226 python3.9[216258]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 12:05:06 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v585: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:05:06 np0005601226 python3.9[216410]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 12:05:07 np0005601226 python3.9[216563]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 12:05:08 np0005601226 python3.9[216686]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769706307.1184955-90-159103585384706/.source.iscsi _original_basename=.g7ut6cgz follow=False checksum=91728e7b2c7590b75c9c15866cb5e36654e00c1f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:05:08 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:05:08 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v586: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:05:08 np0005601226 podman[216810]: 2026-01-29 17:05:08.68691735 +0000 UTC m=+0.079034189 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 29 12:05:08 np0005601226 python3.9[216853]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:05:09 np0005601226 python3.9[217014]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:05:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:05:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:05:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:05:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:05:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:05:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:05:10 np0005601226 python3.9[217166]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 12:05:10 np0005601226 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Jan 29 12:05:10 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v587: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:05:11 np0005601226 python3.9[217322]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 12:05:11 np0005601226 systemd[1]: Reloading.
Jan 29 12:05:11 np0005601226 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 12:05:11 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 12:05:11 np0005601226 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 29 12:05:11 np0005601226 systemd[1]: Starting Open-iSCSI...
Jan 29 12:05:11 np0005601226 kernel: Loading iSCSI transport class v2.0-870.
Jan 29 12:05:11 np0005601226 systemd[1]: Started Open-iSCSI.
Jan 29 12:05:11 np0005601226 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Jan 29 12:05:11 np0005601226 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Jan 29 12:05:12 np0005601226 podman[217494]: 2026-01-29 17:05:12.263730953 +0000 UTC m=+0.040136897 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:05:12 np0005601226 python3.9[217531]: ansible-ansible.builtin.service_facts Invoked
Jan 29 12:05:12 np0005601226 network[217556]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 29 12:05:12 np0005601226 network[217557]: 'network-scripts' will be removed from distribution in near future.
Jan 29 12:05:12 np0005601226 network[217558]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 29 12:05:12 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v588: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:05:13 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:05:14 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v589: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:05:15 np0005601226 python3.9[217830]: ansible-ansible.legacy.dnf Invoked with name=['device-mapper-multipath'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 29 12:05:16 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v590: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:05:17 np0005601226 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 29 12:05:17 np0005601226 systemd[1]: Starting man-db-cache-update.service...
Jan 29 12:05:17 np0005601226 systemd[1]: Reloading.
Jan 29 12:05:17 np0005601226 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 12:05:17 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 12:05:17 np0005601226 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 29 12:05:18 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:05:18 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v591: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:05:18 np0005601226 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 29 12:05:18 np0005601226 systemd[1]: Finished man-db-cache-update.service.
Jan 29 12:05:18 np0005601226 systemd[1]: run-r740096233bb8469a9da69c4840374967.service: Deactivated successfully.
Jan 29 12:05:19 np0005601226 python3.9[218146]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 29 12:05:19 np0005601226 python3.9[218298]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Jan 29 12:05:20 np0005601226 python3.9[218454]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 12:05:20 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v592: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:05:21 np0005601226 python3.9[218577]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769706320.1348407-178-121146769172716/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:05:21 np0005601226 python3.9[218729]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:05:22 np0005601226 python3.9[218881]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 29 12:05:22 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v593: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:05:22 np0005601226 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 29 12:05:22 np0005601226 systemd[1]: Stopped Load Kernel Modules.
Jan 29 12:05:22 np0005601226 systemd[1]: Stopping Load Kernel Modules...
Jan 29 12:05:22 np0005601226 systemd[1]: Starting Load Kernel Modules...
Jan 29 12:05:22 np0005601226 systemd[1]: Finished Load Kernel Modules.
Jan 29 12:05:23 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:05:23 np0005601226 python3.9[219037]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/multipath _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 12:05:23 np0005601226 python3.9[219190]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 12:05:24 np0005601226 python3.9[219342]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 12:05:24 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v594: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:05:24 np0005601226 python3.9[219465]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769706324.0982256-229-141942560617995/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:05:25 np0005601226 python3.9[219617]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 12:05:25 np0005601226 python3.9[219770]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:05:26 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v595: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:05:26 np0005601226 python3.9[219922]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:05:27 np0005601226 python3.9[220074]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:05:27 np0005601226 python3.9[220226]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:05:28 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:05:28 np0005601226 python3.9[220378]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:05:28 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v596: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:05:28 np0005601226 python3.9[220530]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:05:29 np0005601226 python3.9[220682]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:05:29 np0005601226 python3.9[220834]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 12:05:30 np0005601226 python3.9[220988]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/true _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 12:05:30 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v597: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:05:31 np0005601226 python3.9[221141]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 12:05:31 np0005601226 systemd[1]: Listening on multipathd control socket.
Jan 29 12:05:31 np0005601226 python3.9[221297]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 12:05:32 np0005601226 systemd[1]: Starting Wait for udev To Complete Device Initialization...
Jan 29 12:05:32 np0005601226 udevadm[221302]: systemd-udev-settle.service is deprecated. Please fix multipathd.service not to pull it in.
Jan 29 12:05:32 np0005601226 systemd[1]: Finished Wait for udev To Complete Device Initialization.
Jan 29 12:05:32 np0005601226 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 29 12:05:32 np0005601226 multipathd[221305]: --------start up--------
Jan 29 12:05:32 np0005601226 multipathd[221305]: read /etc/multipath.conf
Jan 29 12:05:32 np0005601226 multipathd[221305]: path checkers start up
Jan 29 12:05:32 np0005601226 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 29 12:05:32 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v598: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:05:32 np0005601226 python3.9[221464]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 29 12:05:33 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:05:33 np0005601226 python3.9[221666]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Jan 29 12:05:33 np0005601226 kernel: Key type psk registered
Jan 29 12:05:33 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 12:05:33 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 12:05:33 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 12:05:33 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 12:05:33 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 12:05:33 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:05:33 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 12:05:33 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 12:05:33 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 12:05:33 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 12:05:33 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 12:05:33 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 12:05:34 np0005601226 podman[221900]: 2026-01-29 17:05:33.964534822 +0000 UTC m=+0.016649766 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:05:34 np0005601226 podman[221900]: 2026-01-29 17:05:34.149636476 +0000 UTC m=+0.201751390 container create 785a23136bbf8e7872f1a22ce6f4e653657303800d28b4fba76f465d52d89f09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_elbakyan, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:05:34 np0005601226 python3.9[221931]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 12:05:34 np0005601226 systemd[1]: Started libpod-conmon-785a23136bbf8e7872f1a22ce6f4e653657303800d28b4fba76f465d52d89f09.scope.
Jan 29 12:05:34 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:05:34 np0005601226 podman[221900]: 2026-01-29 17:05:34.310564807 +0000 UTC m=+0.362679751 container init 785a23136bbf8e7872f1a22ce6f4e653657303800d28b4fba76f465d52d89f09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:05:34 np0005601226 podman[221900]: 2026-01-29 17:05:34.316185424 +0000 UTC m=+0.368300338 container start 785a23136bbf8e7872f1a22ce6f4e653657303800d28b4fba76f465d52d89f09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:05:34 np0005601226 kind_elbakyan[221962]: 167 167
Jan 29 12:05:34 np0005601226 systemd[1]: libpod-785a23136bbf8e7872f1a22ce6f4e653657303800d28b4fba76f465d52d89f09.scope: Deactivated successfully.
Jan 29 12:05:34 np0005601226 podman[221900]: 2026-01-29 17:05:34.321192283 +0000 UTC m=+0.373307227 container attach 785a23136bbf8e7872f1a22ce6f4e653657303800d28b4fba76f465d52d89f09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_elbakyan, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:05:34 np0005601226 podman[221900]: 2026-01-29 17:05:34.322333935 +0000 UTC m=+0.374448849 container died 785a23136bbf8e7872f1a22ce6f4e653657303800d28b4fba76f465d52d89f09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 29 12:05:34 np0005601226 systemd[1]: var-lib-containers-storage-overlay-90a5e8e6eaa03f331124c99f3765fbeb5a92e86ec0db35f569f348c8a40c136e-merged.mount: Deactivated successfully.
Jan 29 12:05:34 np0005601226 podman[221900]: 2026-01-29 17:05:34.37594082 +0000 UTC m=+0.428055734 container remove 785a23136bbf8e7872f1a22ce6f4e653657303800d28b4fba76f465d52d89f09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 29 12:05:34 np0005601226 systemd[1]: libpod-conmon-785a23136bbf8e7872f1a22ce6f4e653657303800d28b4fba76f465d52d89f09.scope: Deactivated successfully.
Jan 29 12:05:34 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 12:05:34 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:05:34 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 12:05:34 np0005601226 podman[222064]: 2026-01-29 17:05:34.486027633 +0000 UTC m=+0.034550336 container create 714c2b87256e2ca16cf2a648aab47e24d22f6ca7dfa680395e0058bf86561dd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:05:34 np0005601226 systemd[1]: Started libpod-conmon-714c2b87256e2ca16cf2a648aab47e24d22f6ca7dfa680395e0058bf86561dd2.scope.
Jan 29 12:05:34 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:05:34 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6baaeb1e9d6ad67d1bca8f7c89a0bf42218cae37345b600a4624c9fcd83b4f8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:05:34 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6baaeb1e9d6ad67d1bca8f7c89a0bf42218cae37345b600a4624c9fcd83b4f8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:05:34 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6baaeb1e9d6ad67d1bca8f7c89a0bf42218cae37345b600a4624c9fcd83b4f8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:05:34 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6baaeb1e9d6ad67d1bca8f7c89a0bf42218cae37345b600a4624c9fcd83b4f8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:05:34 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6baaeb1e9d6ad67d1bca8f7c89a0bf42218cae37345b600a4624c9fcd83b4f8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 12:05:34 np0005601226 podman[222064]: 2026-01-29 17:05:34.471330972 +0000 UTC m=+0.019853675 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:05:34 np0005601226 podman[222064]: 2026-01-29 17:05:34.576158967 +0000 UTC m=+0.124681700 container init 714c2b87256e2ca16cf2a648aab47e24d22f6ca7dfa680395e0058bf86561dd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_kepler, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:05:34 np0005601226 podman[222064]: 2026-01-29 17:05:34.581786904 +0000 UTC m=+0.130309607 container start 714c2b87256e2ca16cf2a648aab47e24d22f6ca7dfa680395e0058bf86561dd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_kepler, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 29 12:05:34 np0005601226 podman[222064]: 2026-01-29 17:05:34.585818157 +0000 UTC m=+0.134340860 container attach 714c2b87256e2ca16cf2a648aab47e24d22f6ca7dfa680395e0058bf86561dd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_kepler, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:05:34 np0005601226 python3.9[222098]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769706333.7535655-359-100103542118643/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:05:34 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v599: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:05:34 np0005601226 dazzling_kepler[222101]: --> passed data devices: 0 physical, 3 LVM
Jan 29 12:05:34 np0005601226 dazzling_kepler[222101]: --> All data devices are unavailable
Jan 29 12:05:35 np0005601226 systemd[1]: libpod-714c2b87256e2ca16cf2a648aab47e24d22f6ca7dfa680395e0058bf86561dd2.scope: Deactivated successfully.
Jan 29 12:05:35 np0005601226 podman[222064]: 2026-01-29 17:05:35.01096317 +0000 UTC m=+0.559485873 container died 714c2b87256e2ca16cf2a648aab47e24d22f6ca7dfa680395e0058bf86561dd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_kepler, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:05:35 np0005601226 systemd[1]: var-lib-containers-storage-overlay-b6baaeb1e9d6ad67d1bca8f7c89a0bf42218cae37345b600a4624c9fcd83b4f8-merged.mount: Deactivated successfully.
Jan 29 12:05:35 np0005601226 podman[222064]: 2026-01-29 17:05:35.054565797 +0000 UTC m=+0.603088500 container remove 714c2b87256e2ca16cf2a648aab47e24d22f6ca7dfa680395e0058bf86561dd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_kepler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 29 12:05:35 np0005601226 systemd[1]: libpod-conmon-714c2b87256e2ca16cf2a648aab47e24d22f6ca7dfa680395e0058bf86561dd2.scope: Deactivated successfully.
Jan 29 12:05:35 np0005601226 python3.9[222284]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:05:35 np0005601226 podman[222371]: 2026-01-29 17:05:35.415163879 +0000 UTC m=+0.036792309 container create 5d0dc63575d172c5826de4fc07324a9609088b6d8168884e20d4c531a2490788 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_davinci, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 29 12:05:35 np0005601226 systemd[1]: Started libpod-conmon-5d0dc63575d172c5826de4fc07324a9609088b6d8168884e20d4c531a2490788.scope.
Jan 29 12:05:35 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:05:35 np0005601226 podman[222371]: 2026-01-29 17:05:35.400302364 +0000 UTC m=+0.021930824 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:05:35 np0005601226 podman[222371]: 2026-01-29 17:05:35.500005986 +0000 UTC m=+0.121634436 container init 5d0dc63575d172c5826de4fc07324a9609088b6d8168884e20d4c531a2490788 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_davinci, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 29 12:05:35 np0005601226 podman[222371]: 2026-01-29 17:05:35.504840351 +0000 UTC m=+0.126468781 container start 5d0dc63575d172c5826de4fc07324a9609088b6d8168884e20d4c531a2490788 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:05:35 np0005601226 magical_davinci[222410]: 167 167
Jan 29 12:05:35 np0005601226 systemd[1]: libpod-5d0dc63575d172c5826de4fc07324a9609088b6d8168884e20d4c531a2490788.scope: Deactivated successfully.
Jan 29 12:05:35 np0005601226 podman[222371]: 2026-01-29 17:05:35.509050488 +0000 UTC m=+0.130678938 container attach 5d0dc63575d172c5826de4fc07324a9609088b6d8168884e20d4c531a2490788 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:05:35 np0005601226 podman[222371]: 2026-01-29 17:05:35.509876381 +0000 UTC m=+0.131504811 container died 5d0dc63575d172c5826de4fc07324a9609088b6d8168884e20d4c531a2490788 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:05:35 np0005601226 systemd[1]: var-lib-containers-storage-overlay-f0988dd9c9edb35121c866e843e34d41588edc7218fd90b890d725dbf103285a-merged.mount: Deactivated successfully.
Jan 29 12:05:35 np0005601226 podman[222371]: 2026-01-29 17:05:35.540044013 +0000 UTC m=+0.161672443 container remove 5d0dc63575d172c5826de4fc07324a9609088b6d8168884e20d4c531a2490788 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_davinci, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 29 12:05:35 np0005601226 systemd[1]: libpod-conmon-5d0dc63575d172c5826de4fc07324a9609088b6d8168884e20d4c531a2490788.scope: Deactivated successfully.
Jan 29 12:05:35 np0005601226 podman[222490]: 2026-01-29 17:05:35.645640239 +0000 UTC m=+0.031019506 container create fbf09c13a15e71817ceb39de05045378b2710c22e929395a4d7abbd7a71b954d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_ardinghelli, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 29 12:05:35 np0005601226 systemd[1]: Started libpod-conmon-fbf09c13a15e71817ceb39de05045378b2710c22e929395a4d7abbd7a71b954d.scope.
Jan 29 12:05:35 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:05:35 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6997e6defc1b28a85eeadebaa95f7c8ab0cc54db3123809a03a32ac4a069ab22/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:05:35 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6997e6defc1b28a85eeadebaa95f7c8ab0cc54db3123809a03a32ac4a069ab22/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:05:35 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6997e6defc1b28a85eeadebaa95f7c8ab0cc54db3123809a03a32ac4a069ab22/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:05:35 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6997e6defc1b28a85eeadebaa95f7c8ab0cc54db3123809a03a32ac4a069ab22/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:05:35 np0005601226 podman[222490]: 2026-01-29 17:05:35.706837867 +0000 UTC m=+0.092217144 container init fbf09c13a15e71817ceb39de05045378b2710c22e929395a4d7abbd7a71b954d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_ardinghelli, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:05:35 np0005601226 podman[222490]: 2026-01-29 17:05:35.714386307 +0000 UTC m=+0.099765614 container start fbf09c13a15e71817ceb39de05045378b2710c22e929395a4d7abbd7a71b954d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_ardinghelli, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 12:05:35 np0005601226 podman[222490]: 2026-01-29 17:05:35.719973773 +0000 UTC m=+0.105353050 container attach fbf09c13a15e71817ceb39de05045378b2710c22e929395a4d7abbd7a71b954d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 29 12:05:35 np0005601226 podman[222490]: 2026-01-29 17:05:35.632088641 +0000 UTC m=+0.017467918 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:05:35 np0005601226 python3.9[222560]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]: {
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:    "0": [
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:        {
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:            "devices": [
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:                "/dev/loop3"
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:            ],
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:            "lv_name": "ceph_lv0",
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:            "lv_size": "21470642176",
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:            "lv_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:            "name": "ceph_lv0",
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:            "tags": {
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:                "ceph.block_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:                "ceph.cluster_name": "ceph",
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:                "ceph.crush_device_class": "",
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:                "ceph.encrypted": "0",
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:                "ceph.objectstore": "bluestore",
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:                "ceph.osd_fsid": "0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1",
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:                "ceph.osd_id": "0",
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:                "ceph.type": "block",
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:                "ceph.vdo": "0",
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:                "ceph.with_tpm": "0"
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:            },
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:            "type": "block",
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:            "vg_name": "ceph_vg0"
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:        }
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:    ],
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:    "1": [
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:        {
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:            "devices": [
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:                "/dev/loop4"
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:            ],
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:            "lv_name": "ceph_lv1",
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:            "lv_size": "21470642176",
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b59b9ee3-7bef-4274-a8bf-0f9cce011ae7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:            "lv_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:            "name": "ceph_lv1",
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:            "tags": {
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:                "ceph.block_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:                "ceph.cluster_name": "ceph",
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:                "ceph.crush_device_class": "",
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:                "ceph.encrypted": "0",
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:                "ceph.objectstore": "bluestore",
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:                "ceph.osd_fsid": "b59b9ee3-7bef-4274-a8bf-0f9cce011ae7",
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:                "ceph.osd_id": "1",
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:                "ceph.type": "block",
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:                "ceph.vdo": "0",
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:                "ceph.with_tpm": "0"
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:            },
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:            "type": "block",
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:            "vg_name": "ceph_vg1"
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:        }
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:    ],
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:    "2": [
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:        {
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:            "devices": [
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:                "/dev/loop5"
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:            ],
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:            "lv_name": "ceph_lv2",
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:            "lv_size": "21470642176",
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=791d3808-828d-4f85-a3de-28df49f6a6ef,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:            "lv_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:            "name": "ceph_lv2",
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:            "tags": {
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:                "ceph.block_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:                "ceph.cluster_name": "ceph",
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:                "ceph.crush_device_class": "",
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:                "ceph.encrypted": "0",
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:                "ceph.objectstore": "bluestore",
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:                "ceph.osd_fsid": "791d3808-828d-4f85-a3de-28df49f6a6ef",
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:                "ceph.osd_id": "2",
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:                "ceph.type": "block",
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:                "ceph.vdo": "0",
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:                "ceph.with_tpm": "0"
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:            },
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:            "type": "block",
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:            "vg_name": "ceph_vg2"
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:        }
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]:    ]
Jan 29 12:05:35 np0005601226 dazzling_ardinghelli[222556]: }
Jan 29 12:05:35 np0005601226 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 29 12:05:35 np0005601226 systemd[1]: Stopped Load Kernel Modules.
Jan 29 12:05:35 np0005601226 systemd[1]: Stopping Load Kernel Modules...
Jan 29 12:05:35 np0005601226 systemd[1]: Starting Load Kernel Modules...
Jan 29 12:05:35 np0005601226 systemd[1]: libpod-fbf09c13a15e71817ceb39de05045378b2710c22e929395a4d7abbd7a71b954d.scope: Deactivated successfully.
Jan 29 12:05:35 np0005601226 podman[222490]: 2026-01-29 17:05:35.991173231 +0000 UTC m=+0.376552568 container died fbf09c13a15e71817ceb39de05045378b2710c22e929395a4d7abbd7a71b954d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_ardinghelli, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 29 12:05:35 np0005601226 systemd[1]: Finished Load Kernel Modules.
Jan 29 12:05:36 np0005601226 systemd[1]: var-lib-containers-storage-overlay-6997e6defc1b28a85eeadebaa95f7c8ab0cc54db3123809a03a32ac4a069ab22-merged.mount: Deactivated successfully.
Jan 29 12:05:36 np0005601226 podman[222490]: 2026-01-29 17:05:36.465711312 +0000 UTC m=+0.851090579 container remove fbf09c13a15e71817ceb39de05045378b2710c22e929395a4d7abbd7a71b954d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 29 12:05:36 np0005601226 systemd[1]: libpod-conmon-fbf09c13a15e71817ceb39de05045378b2710c22e929395a4d7abbd7a71b954d.scope: Deactivated successfully.
Jan 29 12:05:36 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v600: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:05:36 np0005601226 python3.9[222759]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 29 12:05:36 np0005601226 podman[222799]: 2026-01-29 17:05:36.881024291 +0000 UTC m=+0.033930138 container create 8c5696140d5c4a191fc343687d26ec25f0522e1d6262156fc43b1e09ac9b6b31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_hellman, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 29 12:05:36 np0005601226 systemd[1]: Started libpod-conmon-8c5696140d5c4a191fc343687d26ec25f0522e1d6262156fc43b1e09ac9b6b31.scope.
Jan 29 12:05:36 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:05:36 np0005601226 podman[222799]: 2026-01-29 17:05:36.947468055 +0000 UTC m=+0.100373982 container init 8c5696140d5c4a191fc343687d26ec25f0522e1d6262156fc43b1e09ac9b6b31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:05:36 np0005601226 podman[222799]: 2026-01-29 17:05:36.954767158 +0000 UTC m=+0.107673005 container start 8c5696140d5c4a191fc343687d26ec25f0522e1d6262156fc43b1e09ac9b6b31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_hellman, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 29 12:05:36 np0005601226 podman[222799]: 2026-01-29 17:05:36.958468712 +0000 UTC m=+0.111374589 container attach 8c5696140d5c4a191fc343687d26ec25f0522e1d6262156fc43b1e09ac9b6b31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_hellman, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:05:36 np0005601226 busy_hellman[222815]: 167 167
Jan 29 12:05:36 np0005601226 systemd[1]: libpod-8c5696140d5c4a191fc343687d26ec25f0522e1d6262156fc43b1e09ac9b6b31.scope: Deactivated successfully.
Jan 29 12:05:36 np0005601226 podman[222799]: 2026-01-29 17:05:36.959752648 +0000 UTC m=+0.112658495 container died 8c5696140d5c4a191fc343687d26ec25f0522e1d6262156fc43b1e09ac9b6b31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_hellman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 12:05:36 np0005601226 podman[222799]: 2026-01-29 17:05:36.863550793 +0000 UTC m=+0.016456670 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:05:36 np0005601226 systemd[1]: var-lib-containers-storage-overlay-7cd4fd3608286d60c5b3dd5be4f99a4a3be5f902ba82eafd692ea23bf59a840b-merged.mount: Deactivated successfully.
Jan 29 12:05:36 np0005601226 podman[222799]: 2026-01-29 17:05:36.990211357 +0000 UTC m=+0.143117204 container remove 8c5696140d5c4a191fc343687d26ec25f0522e1d6262156fc43b1e09ac9b6b31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_hellman, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True)
Jan 29 12:05:36 np0005601226 systemd[1]: libpod-conmon-8c5696140d5c4a191fc343687d26ec25f0522e1d6262156fc43b1e09ac9b6b31.scope: Deactivated successfully.
Jan 29 12:05:37 np0005601226 podman[222840]: 2026-01-29 17:05:37.105914056 +0000 UTC m=+0.042686712 container create cd26c9021be3bf9979d16eb542085c8e009831d4fbd347ef1291e09a479d4c91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_tesla, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 29 12:05:37 np0005601226 systemd[1]: Started libpod-conmon-cd26c9021be3bf9979d16eb542085c8e009831d4fbd347ef1291e09a479d4c91.scope.
Jan 29 12:05:37 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:05:37 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1eddc1b7160997edbffd66c3ae9e787034611718c926babd3df7ed654898efc6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:05:37 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1eddc1b7160997edbffd66c3ae9e787034611718c926babd3df7ed654898efc6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:05:37 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1eddc1b7160997edbffd66c3ae9e787034611718c926babd3df7ed654898efc6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:05:37 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1eddc1b7160997edbffd66c3ae9e787034611718c926babd3df7ed654898efc6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:05:37 np0005601226 podman[222840]: 2026-01-29 17:05:37.16522729 +0000 UTC m=+0.101999726 container init cd26c9021be3bf9979d16eb542085c8e009831d4fbd347ef1291e09a479d4c91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 29 12:05:37 np0005601226 podman[222840]: 2026-01-29 17:05:37.171415914 +0000 UTC m=+0.108188320 container start cd26c9021be3bf9979d16eb542085c8e009831d4fbd347ef1291e09a479d4c91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:05:37 np0005601226 podman[222840]: 2026-01-29 17:05:37.174128239 +0000 UTC m=+0.110900645 container attach cd26c9021be3bf9979d16eb542085c8e009831d4fbd347ef1291e09a479d4c91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:05:37 np0005601226 podman[222840]: 2026-01-29 17:05:37.08562609 +0000 UTC m=+0.022398526 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:05:37 np0005601226 lvm[222935]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 12:05:37 np0005601226 lvm[222936]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 12:05:37 np0005601226 lvm[222936]: VG ceph_vg1 finished
Jan 29 12:05:37 np0005601226 lvm[222935]: VG ceph_vg0 finished
Jan 29 12:05:37 np0005601226 lvm[222938]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 12:05:37 np0005601226 lvm[222938]: VG ceph_vg2 finished
Jan 29 12:05:37 np0005601226 amazing_tesla[222857]: {}
Jan 29 12:05:37 np0005601226 systemd[1]: libpod-cd26c9021be3bf9979d16eb542085c8e009831d4fbd347ef1291e09a479d4c91.scope: Deactivated successfully.
Jan 29 12:05:37 np0005601226 podman[222840]: 2026-01-29 17:05:37.834790884 +0000 UTC m=+0.771563300 container died cd26c9021be3bf9979d16eb542085c8e009831d4fbd347ef1291e09a479d4c91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_tesla, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:05:37 np0005601226 systemd[1]: var-lib-containers-storage-overlay-1eddc1b7160997edbffd66c3ae9e787034611718c926babd3df7ed654898efc6-merged.mount: Deactivated successfully.
Jan 29 12:05:37 np0005601226 podman[222840]: 2026-01-29 17:05:37.949414492 +0000 UTC m=+0.886186918 container remove cd26c9021be3bf9979d16eb542085c8e009831d4fbd347ef1291e09a479d4c91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_tesla, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 29 12:05:37 np0005601226 systemd[1]: libpod-conmon-cd26c9021be3bf9979d16eb542085c8e009831d4fbd347ef1291e09a479d4c91.scope: Deactivated successfully.
Jan 29 12:05:37 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 12:05:38 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:05:38 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 12:05:38 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:05:38 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:05:38 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v601: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:05:38 np0005601226 systemd[1]: Reloading.
Jan 29 12:05:38 np0005601226 podman[222983]: 2026-01-29 17:05:38.91627892 +0000 UTC m=+0.082525733 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 29 12:05:38 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 12:05:38 np0005601226 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 12:05:39 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:05:39 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:05:39 np0005601226 systemd[1]: Reloading.
Jan 29 12:05:39 np0005601226 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 12:05:39 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 12:05:39 np0005601226 systemd-logind[823]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 29 12:05:39 np0005601226 systemd-logind[823]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 29 12:05:39 np0005601226 lvm[223121]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 12:05:39 np0005601226 lvm[223118]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 12:05:39 np0005601226 lvm[223121]: VG ceph_vg0 finished
Jan 29 12:05:39 np0005601226 lvm[223118]: VG ceph_vg2 finished
Jan 29 12:05:39 np0005601226 lvm[223120]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 12:05:39 np0005601226 lvm[223120]: VG ceph_vg1 finished
Jan 29 12:05:39 np0005601226 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 29 12:05:39 np0005601226 systemd[1]: Starting man-db-cache-update.service...
Jan 29 12:05:39 np0005601226 systemd[1]: Reloading.
Jan 29 12:05:39 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 12:05:39 np0005601226 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 12:05:39 np0005601226 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 29 12:05:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:05:40.267 155625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:05:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:05:40.268 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:05:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:05:40.268 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:05:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2026-01-29_17:05:40
Jan 29 12:05:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 12:05:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] do_upmap
Jan 29 12:05:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] pools ['images', '.mgr', 'default.rgw.log', 'cephfs.cephfs.data', 'vms', 'default.rgw.control', 'default.rgw.meta', 'backups', 'volumes', 'cephfs.cephfs.meta', '.rgw.root']
Jan 29 12:05:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 12:05:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:05:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:05:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:05:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:05:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:05:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:05:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 12:05:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:05:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 12:05:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:05:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:05:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:05:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:05:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:05:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:05:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:05:40 np0005601226 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 29 12:05:40 np0005601226 systemd[1]: Finished man-db-cache-update.service.
Jan 29 12:05:40 np0005601226 systemd[1]: man-db-cache-update.service: Consumed 1.076s CPU time.
Jan 29 12:05:40 np0005601226 systemd[1]: run-r5531b8b20ce746eab64bb4917d0db192.service: Deactivated successfully.
Jan 29 12:05:40 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v602: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:05:41 np0005601226 python3.9[224477]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 29 12:05:41 np0005601226 systemd[1]: Stopping Open-iSCSI...
Jan 29 12:05:41 np0005601226 iscsid[217362]: iscsid shutting down.
Jan 29 12:05:41 np0005601226 systemd[1]: iscsid.service: Deactivated successfully.
Jan 29 12:05:41 np0005601226 systemd[1]: Stopped Open-iSCSI.
Jan 29 12:05:41 np0005601226 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 29 12:05:41 np0005601226 systemd[1]: Starting Open-iSCSI...
Jan 29 12:05:41 np0005601226 systemd[1]: Started Open-iSCSI.
Jan 29 12:05:41 np0005601226 python3.9[224633]: ansible-ansible.builtin.systemd_service Invoked with name=multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 29 12:05:42 np0005601226 systemd[1]: Stopping Device-Mapper Multipath Device Controller...
Jan 29 12:05:42 np0005601226 multipathd[221305]: exit (signal)
Jan 29 12:05:42 np0005601226 multipathd[221305]: --------shut down-------
Jan 29 12:05:42 np0005601226 systemd[1]: multipathd.service: Deactivated successfully.
Jan 29 12:05:42 np0005601226 systemd[1]: Stopped Device-Mapper Multipath Device Controller.
Jan 29 12:05:42 np0005601226 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 29 12:05:42 np0005601226 multipathd[224639]: --------start up--------
Jan 29 12:05:42 np0005601226 multipathd[224639]: read /etc/multipath.conf
Jan 29 12:05:42 np0005601226 multipathd[224639]: path checkers start up
Jan 29 12:05:42 np0005601226 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 29 12:05:42 np0005601226 podman[224770]: 2026-01-29 17:05:42.546048333 +0000 UTC m=+0.057019022 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:05:42 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v603: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:05:42 np0005601226 python3.9[224807]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 29 12:05:43 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:05:43 np0005601226 python3.9[224969]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:05:44 np0005601226 python3.9[225121]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 29 12:05:44 np0005601226 systemd[1]: Reloading.
Jan 29 12:05:44 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 12:05:44 np0005601226 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 12:05:44 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v604: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:05:45 np0005601226 python3.9[225306]: ansible-ansible.builtin.service_facts Invoked
Jan 29 12:05:45 np0005601226 network[225323]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 29 12:05:45 np0005601226 network[225324]: 'network-scripts' will be removed from distribution in near future.
Jan 29 12:05:45 np0005601226 network[225325]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 29 12:05:46 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v605: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:05:48 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:05:48 np0005601226 python3.9[225598]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 12:05:48 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v606: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:05:48 np0005601226 python3.9[225751]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 12:05:49 np0005601226 python3.9[225904]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 12:05:50 np0005601226 python3.9[226057]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 12:05:50 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v607: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:05:51 np0005601226 python3.9[226210]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 12:05:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 12:05:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:05:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 12:05:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:05:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:05:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:05:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:05:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:05:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:05:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:05:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:05:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:05:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.8121643970586627e-06 of space, bias 4.0, pg target 0.002174597276470395 quantized to 16 (current 16)
Jan 29 12:05:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:05:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:05:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:05:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 29 12:05:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:05:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 29 12:05:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:05:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:05:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:05:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 29 12:05:51 np0005601226 python3.9[226363]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 12:05:52 np0005601226 python3.9[226516]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 12:05:52 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v608: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:05:53 np0005601226 python3.9[226669]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 12:05:53 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:05:53 np0005601226 python3.9[226822]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:05:54 np0005601226 python3.9[226974]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:05:54 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v609: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:05:54 np0005601226 python3.9[227126]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:05:55 np0005601226 python3.9[227278]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:05:55 np0005601226 python3.9[227430]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:05:56 np0005601226 python3.9[227582]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:05:56 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v610: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:05:56 np0005601226 python3.9[227734]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:05:57 np0005601226 python3.9[227886]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:05:57 np0005601226 python3.9[228038]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:05:58 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:05:58 np0005601226 python3.9[228190]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:05:58 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v611: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:05:58 np0005601226 python3.9[228342]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:05:59 np0005601226 python3.9[228494]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:05:59 np0005601226 python3.9[228646]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:06:00 np0005601226 python3.9[228798]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:06:00 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v612: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:06:00 np0005601226 python3.9[228950]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:06:01 np0005601226 python3.9[229102]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:06:02 np0005601226 python3.9[229256]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 12:06:02 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v613: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:06:02 np0005601226 python3.9[229408]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 29 12:06:03 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:06:03 np0005601226 python3.9[229560]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 29 12:06:03 np0005601226 systemd[1]: Reloading.
Jan 29 12:06:03 np0005601226 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 12:06:03 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 12:06:04 np0005601226 python3.9[229746]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 12:06:04 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v614: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:06:05 np0005601226 python3.9[229899]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 12:06:05 np0005601226 python3.9[230052]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 12:06:06 np0005601226 python3.9[230205]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 12:06:06 np0005601226 python3.9[230358]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 12:06:06 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v615: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:06:07 np0005601226 python3.9[230511]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 12:06:07 np0005601226 python3.9[230664]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 12:06:08 np0005601226 python3.9[230817]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 29 12:06:08 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:06:08 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v616: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:06:09 np0005601226 podman[230942]: 2026-01-29 17:06:09.414120765 +0000 UTC m=+0.100857565 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 29 12:06:09 np0005601226 python3.9[230982]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 12:06:10 np0005601226 python3.9[231147]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 12:06:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:06:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:06:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:06:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:06:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:06:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:06:10 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v617: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:06:10 np0005601226 python3.9[231299]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 12:06:11 np0005601226 python3.9[231451]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 12:06:11 np0005601226 python3.9[231603]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 12:06:12 np0005601226 python3.9[231755]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 12:06:12 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v618: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:06:12 np0005601226 podman[231879]: 2026-01-29 17:06:12.860977823 +0000 UTC m=+0.043655538 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 29 12:06:13 np0005601226 python3.9[231926]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 29 12:06:13 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:06:13 np0005601226 python3.9[232078]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 29 12:06:14 np0005601226 python3.9[232230]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 29 12:06:14 np0005601226 systemd[1]: virtnodedevd.service: Deactivated successfully.
Jan 29 12:06:14 np0005601226 python3.9[232383]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 29 12:06:14 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v619: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:06:15 np0005601226 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 29 12:06:16 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v620: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:06:18 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:06:18 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v621: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:06:19 np0005601226 python3.9[232536]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Jan 29 12:06:20 np0005601226 python3.9[232689]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 29 12:06:20 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v622: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:06:21 np0005601226 python3.9[232847]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 29 12:06:22 np0005601226 systemd-logind[823]: New session 51 of user zuul.
Jan 29 12:06:22 np0005601226 systemd[1]: Started Session 51 of User zuul.
Jan 29 12:06:22 np0005601226 systemd[1]: session-51.scope: Deactivated successfully.
Jan 29 12:06:22 np0005601226 systemd-logind[823]: Session 51 logged out. Waiting for processes to exit.
Jan 29 12:06:22 np0005601226 systemd-logind[823]: Removed session 51.
Jan 29 12:06:22 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v623: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:06:23 np0005601226 python3.9[233033]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 12:06:23 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:06:23 np0005601226 python3.9[233154]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769706382.778909-986-272952026923847/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 29 12:06:24 np0005601226 systemd[1]: virtsecretd.service: Deactivated successfully.
Jan 29 12:06:24 np0005601226 systemd[1]: virtqemud.service: Deactivated successfully.
Jan 29 12:06:24 np0005601226 python3.9[233304]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 12:06:24 np0005601226 python3.9[233382]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 29 12:06:24 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v624: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:06:24 np0005601226 python3.9[233532]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 12:06:25 np0005601226 python3.9[233653]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769706384.5180686-986-14589097072670/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 29 12:06:25 np0005601226 python3.9[233803]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 12:06:26 np0005601226 python3.9[233924]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769706385.4581063-986-103896707603428/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 29 12:06:26 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v625: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:06:26 np0005601226 python3.9[234074]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 12:06:27 np0005601226 python3.9[234195]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769706386.4005034-986-65030653378653/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 29 12:06:27 np0005601226 python3.9[234345]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 12:06:28 np0005601226 python3.9[234466]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769706387.376367-986-222702034779320/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 29 12:06:28 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:06:28 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v626: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 0 B/s wr, 4 op/s
Jan 29 12:06:28 np0005601226 python3.9[234618]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:06:29 np0005601226 python3.9[234770]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:06:29 np0005601226 python3.9[234922]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 12:06:30 np0005601226 python3.9[235074]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 12:06:30 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v627: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 7 op/s
Jan 29 12:06:31 np0005601226 python3.9[235197]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1769706390.1961708-1093-74501876154738/.source _original_basename=.zrud09_h follow=False checksum=360a167d8c1dc4edf1f1b87d1d0b359441e87274 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Jan 29 12:06:31 np0005601226 python3.9[235349]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 12:06:32 np0005601226 python3.9[235501]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 12:06:32 np0005601226 python3.9[235622]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769706391.7918942-1119-14496522396643/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=aff5546b44cf4461a7541a94e4cce1332c9b58b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 29 12:06:32 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v628: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 7 op/s
Jan 29 12:06:33 np0005601226 python3.9[235772]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 29 12:06:33 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:06:33 np0005601226 python3.9[235893]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769706392.8511255-1134-129551897469188/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 29 12:06:34 np0005601226 python3.9[236045]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Jan 29 12:06:34 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v629: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 29 12:06:35 np0005601226 python3.9[236197]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 29 12:06:36 np0005601226 python3[236349]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 29 12:06:36 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v630: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 29 12:06:38 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:06:38 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v631: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 29 12:06:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:06:40.268 155625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:06:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:06:40.269 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:06:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:06:40.269 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:06:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2026-01-29_17:06:40
Jan 29 12:06:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 12:06:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] do_upmap
Jan 29 12:06:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] pools ['vms', 'default.rgw.log', 'volumes', 'backups', 'default.rgw.control', 'cephfs.cephfs.data', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta', '.rgw.root', 'images']
Jan 29 12:06:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 12:06:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:06:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:06:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:06:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:06:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:06:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:06:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 12:06:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:06:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 12:06:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:06:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:06:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:06:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:06:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:06:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:06:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:06:40 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v632: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 5.5 KiB/s rd, 0 B/s wr, 10 op/s
Jan 29 12:06:42 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v633: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 7 op/s
Jan 29 12:06:43 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:06:43 np0005601226 podman[236469]: 2026-01-29 17:06:43.284485755 +0000 UTC m=+3.447996572 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 29 12:06:44 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v634: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 7 op/s
Jan 29 12:06:46 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v635: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:06:48 np0005601226 podman[236497]: 2026-01-29 17:06:48.340154384 +0000 UTC m=+5.017784503 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 29 12:06:48 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v636: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:06:48 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:06:48 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 12:06:48 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 12:06:48 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 12:06:48 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 12:06:48 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 12:06:49 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:06:49 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 12:06:49 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 12:06:49 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 12:06:49 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 12:06:49 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 12:06:49 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 12:06:50 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v637: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:06:50 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 12:06:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 12:06:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:06:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 12:06:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:06:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:06:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:06:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:06:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:06:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:06:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:06:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:06:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:06:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.8121643970586627e-06 of space, bias 4.0, pg target 0.002174597276470395 quantized to 16 (current 16)
Jan 29 12:06:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:06:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:06:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:06:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 29 12:06:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:06:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 29 12:06:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:06:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:06:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:06:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 29 12:06:52 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v638: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:06:53 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:06:53 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 12:06:54 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:06:54 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v639: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:06:56 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v640: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:06:57 np0005601226 podman[236625]: 2026-01-29 17:06:57.044064491 +0000 UTC m=+0.236538861 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:06:58 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v641: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:07:00 np0005601226 podman[236625]: 2026-01-29 17:07:00.10459791 +0000 UTC m=+3.297072250 container create 7567bd01b39df8a1588e4d89abe73d0e29fe47760456fc6c3eb4c6e459d19d02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_lalande, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:07:00 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:07:00 np0005601226 podman[236365]: 2026-01-29 17:07:00.118650662 +0000 UTC m=+23.578586419 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 29 12:07:00 np0005601226 systemd[1]: Started libpod-conmon-7567bd01b39df8a1588e4d89abe73d0e29fe47760456fc6c3eb4c6e459d19d02.scope.
Jan 29 12:07:00 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:07:00 np0005601226 podman[236625]: 2026-01-29 17:07:00.22684409 +0000 UTC m=+3.419318450 container init 7567bd01b39df8a1588e4d89abe73d0e29fe47760456fc6c3eb4c6e459d19d02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_lalande, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:07:00 np0005601226 podman[236625]: 2026-01-29 17:07:00.233838565 +0000 UTC m=+3.426312905 container start 7567bd01b39df8a1588e4d89abe73d0e29fe47760456fc6c3eb4c6e459d19d02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:07:00 np0005601226 funny_lalande[236658]: 167 167
Jan 29 12:07:00 np0005601226 systemd[1]: libpod-7567bd01b39df8a1588e4d89abe73d0e29fe47760456fc6c3eb4c6e459d19d02.scope: Deactivated successfully.
Jan 29 12:07:00 np0005601226 podman[236625]: 2026-01-29 17:07:00.242549658 +0000 UTC m=+3.435023998 container attach 7567bd01b39df8a1588e4d89abe73d0e29fe47760456fc6c3eb4c6e459d19d02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:07:00 np0005601226 podman[236625]: 2026-01-29 17:07:00.243403903 +0000 UTC m=+3.435878243 container died 7567bd01b39df8a1588e4d89abe73d0e29fe47760456fc6c3eb4c6e459d19d02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_lalande, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:07:00 np0005601226 systemd[1]: var-lib-containers-storage-overlay-6b67f38fac52dcad42fa92ef9bff64785ad92a600506f80eebb105084f5bd5aa-merged.mount: Deactivated successfully.
Jan 29 12:07:00 np0005601226 podman[236666]: 2026-01-29 17:07:00.275672243 +0000 UTC m=+0.075330933 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 29 12:07:00 np0005601226 podman[236625]: 2026-01-29 17:07:00.398432279 +0000 UTC m=+3.590906619 container remove 7567bd01b39df8a1588e4d89abe73d0e29fe47760456fc6c3eb4c6e459d19d02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_lalande, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 29 12:07:00 np0005601226 systemd[1]: libpod-conmon-7567bd01b39df8a1588e4d89abe73d0e29fe47760456fc6c3eb4c6e459d19d02.scope: Deactivated successfully.
Jan 29 12:07:00 np0005601226 podman[236666]: 2026-01-29 17:07:00.415174196 +0000 UTC m=+0.214832866 container create 7aa9b96aa0fa41c107dfe4d765215fee56a1fe70348b8577f21642528d2ff7f7 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_id=edpm, container_name=nova_compute_init)
Jan 29 12:07:00 np0005601226 python3[236349]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Jan 29 12:07:00 np0005601226 podman[236723]: 2026-01-29 17:07:00.563955927 +0000 UTC m=+0.079449338 container create d5c86f6c8d3131a3dc46d35190373ac4fb2d47d5b4d84dfe68ec6449360e265e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 29 12:07:00 np0005601226 podman[236723]: 2026-01-29 17:07:00.506771881 +0000 UTC m=+0.022265302 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:07:00 np0005601226 systemd[1]: Started libpod-conmon-d5c86f6c8d3131a3dc46d35190373ac4fb2d47d5b4d84dfe68ec6449360e265e.scope.
Jan 29 12:07:00 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:07:00 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ecc6468d86909b31e8515ba6f54518d976427431b943ca52fa47e046281c077/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:07:00 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ecc6468d86909b31e8515ba6f54518d976427431b943ca52fa47e046281c077/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:07:00 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ecc6468d86909b31e8515ba6f54518d976427431b943ca52fa47e046281c077/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:07:00 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ecc6468d86909b31e8515ba6f54518d976427431b943ca52fa47e046281c077/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:07:00 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ecc6468d86909b31e8515ba6f54518d976427431b943ca52fa47e046281c077/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 12:07:00 np0005601226 podman[236723]: 2026-01-29 17:07:00.677990028 +0000 UTC m=+0.193483439 container init d5c86f6c8d3131a3dc46d35190373ac4fb2d47d5b4d84dfe68ec6449360e265e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:07:00 np0005601226 podman[236723]: 2026-01-29 17:07:00.683620296 +0000 UTC m=+0.199113697 container start d5c86f6c8d3131a3dc46d35190373ac4fb2d47d5b4d84dfe68ec6449360e265e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_goldstine, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:07:00 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v642: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:07:00 np0005601226 podman[236723]: 2026-01-29 17:07:00.774356908 +0000 UTC m=+0.289850329 container attach d5c86f6c8d3131a3dc46d35190373ac4fb2d47d5b4d84dfe68ec6449360e265e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_goldstine, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:07:01 np0005601226 python3.9[236903]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 12:07:01 np0005601226 bold_goldstine[236769]: --> passed data devices: 0 physical, 3 LVM
Jan 29 12:07:01 np0005601226 bold_goldstine[236769]: --> All data devices are unavailable
Jan 29 12:07:01 np0005601226 systemd[1]: libpod-d5c86f6c8d3131a3dc46d35190373ac4fb2d47d5b4d84dfe68ec6449360e265e.scope: Deactivated successfully.
Jan 29 12:07:01 np0005601226 conmon[236769]: conmon d5c86f6c8d3131a3dc46 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d5c86f6c8d3131a3dc46d35190373ac4fb2d47d5b4d84dfe68ec6449360e265e.scope/container/memory.events
Jan 29 12:07:01 np0005601226 podman[236919]: 2026-01-29 17:07:01.150003809 +0000 UTC m=+0.019415343 container died d5c86f6c8d3131a3dc46d35190373ac4fb2d47d5b4d84dfe68ec6449360e265e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 29 12:07:01 np0005601226 systemd[1]: var-lib-containers-storage-overlay-2ecc6468d86909b31e8515ba6f54518d976427431b943ca52fa47e046281c077-merged.mount: Deactivated successfully.
Jan 29 12:07:01 np0005601226 podman[236919]: 2026-01-29 17:07:01.448341723 +0000 UTC m=+0.317753247 container remove d5c86f6c8d3131a3dc46d35190373ac4fb2d47d5b4d84dfe68ec6449360e265e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_goldstine, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:07:01 np0005601226 systemd[1]: libpod-conmon-d5c86f6c8d3131a3dc46d35190373ac4fb2d47d5b4d84dfe68ec6449360e265e.scope: Deactivated successfully.
Jan 29 12:07:01 np0005601226 podman[237147]: 2026-01-29 17:07:01.91015556 +0000 UTC m=+0.072066602 container create d5253ac404e7f6df11583cea4cb72aa23839219c545a4bb89cd4b3f994c01bbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 29 12:07:01 np0005601226 podman[237147]: 2026-01-29 17:07:01.855938067 +0000 UTC m=+0.017849089 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:07:01 np0005601226 python3.9[237135]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Jan 29 12:07:01 np0005601226 systemd[1]: Started libpod-conmon-d5253ac404e7f6df11583cea4cb72aa23839219c545a4bb89cd4b3f994c01bbe.scope.
Jan 29 12:07:02 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:07:02 np0005601226 podman[237147]: 2026-01-29 17:07:02.034970902 +0000 UTC m=+0.196881904 container init d5253ac404e7f6df11583cea4cb72aa23839219c545a4bb89cd4b3f994c01bbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_allen, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 12:07:02 np0005601226 podman[237147]: 2026-01-29 17:07:02.043071738 +0000 UTC m=+0.204982740 container start d5253ac404e7f6df11583cea4cb72aa23839219c545a4bb89cd4b3f994c01bbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_allen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Jan 29 12:07:02 np0005601226 interesting_allen[237163]: 167 167
Jan 29 12:07:02 np0005601226 systemd[1]: libpod-d5253ac404e7f6df11583cea4cb72aa23839219c545a4bb89cd4b3f994c01bbe.scope: Deactivated successfully.
Jan 29 12:07:02 np0005601226 podman[237147]: 2026-01-29 17:07:02.057709727 +0000 UTC m=+0.219620779 container attach d5253ac404e7f6df11583cea4cb72aa23839219c545a4bb89cd4b3f994c01bbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_allen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 29 12:07:02 np0005601226 podman[237147]: 2026-01-29 17:07:02.05923868 +0000 UTC m=+0.221149702 container died d5253ac404e7f6df11583cea4cb72aa23839219c545a4bb89cd4b3f994c01bbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_allen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 12:07:02 np0005601226 systemd[1]: var-lib-containers-storage-overlay-652748a6672465c0e75b8039bc8c40e22a3f30579bd8078ca793e7f07860dd4a-merged.mount: Deactivated successfully.
Jan 29 12:07:02 np0005601226 podman[237147]: 2026-01-29 17:07:02.147334018 +0000 UTC m=+0.309245020 container remove d5253ac404e7f6df11583cea4cb72aa23839219c545a4bb89cd4b3f994c01bbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:07:02 np0005601226 systemd[1]: libpod-conmon-d5253ac404e7f6df11583cea4cb72aa23839219c545a4bb89cd4b3f994c01bbe.scope: Deactivated successfully.
Jan 29 12:07:02 np0005601226 podman[237216]: 2026-01-29 17:07:02.257975295 +0000 UTC m=+0.022459927 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:07:02 np0005601226 podman[237216]: 2026-01-29 17:07:02.396988984 +0000 UTC m=+0.161473596 container create 1411a1e55775118db37d8eb692a8f322f0aa5bab2a87678ffea8b10c987dded9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_northcutt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle)
Jan 29 12:07:02 np0005601226 systemd[1]: Started libpod-conmon-1411a1e55775118db37d8eb692a8f322f0aa5bab2a87678ffea8b10c987dded9.scope.
Jan 29 12:07:02 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:07:02 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/545c1c199e83087a7827cbedbed79b734535d05f7fed04f1d9ce4ce8702c2ecd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:07:02 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/545c1c199e83087a7827cbedbed79b734535d05f7fed04f1d9ce4ce8702c2ecd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:07:02 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/545c1c199e83087a7827cbedbed79b734535d05f7fed04f1d9ce4ce8702c2ecd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:07:02 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/545c1c199e83087a7827cbedbed79b734535d05f7fed04f1d9ce4ce8702c2ecd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:07:02 np0005601226 podman[237216]: 2026-01-29 17:07:02.576945236 +0000 UTC m=+0.341429848 container init 1411a1e55775118db37d8eb692a8f322f0aa5bab2a87678ffea8b10c987dded9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 12:07:02 np0005601226 podman[237216]: 2026-01-29 17:07:02.581795211 +0000 UTC m=+0.346279823 container start 1411a1e55775118db37d8eb692a8f322f0aa5bab2a87678ffea8b10c987dded9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_northcutt, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 29 12:07:02 np0005601226 podman[237216]: 2026-01-29 17:07:02.627338231 +0000 UTC m=+0.391822863 container attach 1411a1e55775118db37d8eb692a8f322f0aa5bab2a87678ffea8b10c987dded9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_northcutt, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 29 12:07:02 np0005601226 python3.9[237358]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 29 12:07:02 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v643: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]: {
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:    "0": [
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:        {
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:            "devices": [
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:                "/dev/loop3"
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:            ],
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:            "lv_name": "ceph_lv0",
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:            "lv_size": "21470642176",
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:            "lv_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:            "name": "ceph_lv0",
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:            "tags": {
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:                "ceph.block_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:                "ceph.cluster_name": "ceph",
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:                "ceph.crush_device_class": "",
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:                "ceph.encrypted": "0",
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:                "ceph.objectstore": "bluestore",
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:                "ceph.osd_fsid": "0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1",
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:                "ceph.osd_id": "0",
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:                "ceph.type": "block",
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:                "ceph.vdo": "0",
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:                "ceph.with_tpm": "0"
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:            },
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:            "type": "block",
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:            "vg_name": "ceph_vg0"
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:        }
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:    ],
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:    "1": [
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:        {
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:            "devices": [
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:                "/dev/loop4"
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:            ],
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:            "lv_name": "ceph_lv1",
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:            "lv_size": "21470642176",
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b59b9ee3-7bef-4274-a8bf-0f9cce011ae7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:            "lv_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:            "name": "ceph_lv1",
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:            "tags": {
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:                "ceph.block_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:                "ceph.cluster_name": "ceph",
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:                "ceph.crush_device_class": "",
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:                "ceph.encrypted": "0",
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:                "ceph.objectstore": "bluestore",
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:                "ceph.osd_fsid": "b59b9ee3-7bef-4274-a8bf-0f9cce011ae7",
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:                "ceph.osd_id": "1",
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:                "ceph.type": "block",
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:                "ceph.vdo": "0",
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:                "ceph.with_tpm": "0"
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:            },
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:            "type": "block",
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:            "vg_name": "ceph_vg1"
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:        }
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:    ],
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:    "2": [
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:        {
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:            "devices": [
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:                "/dev/loop5"
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:            ],
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:            "lv_name": "ceph_lv2",
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:            "lv_size": "21470642176",
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=791d3808-828d-4f85-a3de-28df49f6a6ef,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:            "lv_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:            "name": "ceph_lv2",
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:            "tags": {
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:                "ceph.block_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:                "ceph.cluster_name": "ceph",
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:                "ceph.crush_device_class": "",
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:                "ceph.encrypted": "0",
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:                "ceph.objectstore": "bluestore",
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:                "ceph.osd_fsid": "791d3808-828d-4f85-a3de-28df49f6a6ef",
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:                "ceph.osd_id": "2",
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:                "ceph.type": "block",
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:                "ceph.vdo": "0",
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:                "ceph.with_tpm": "0"
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:            },
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:            "type": "block",
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:            "vg_name": "ceph_vg2"
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:        }
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]:    ]
Jan 29 12:07:02 np0005601226 frosty_northcutt[237351]: }
Jan 29 12:07:02 np0005601226 systemd[1]: libpod-1411a1e55775118db37d8eb692a8f322f0aa5bab2a87678ffea8b10c987dded9.scope: Deactivated successfully.
Jan 29 12:07:02 np0005601226 podman[237216]: 2026-01-29 17:07:02.88173303 +0000 UTC m=+0.646217652 container died 1411a1e55775118db37d8eb692a8f322f0aa5bab2a87678ffea8b10c987dded9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_northcutt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 29 12:07:02 np0005601226 systemd[1]: var-lib-containers-storage-overlay-545c1c199e83087a7827cbedbed79b734535d05f7fed04f1d9ce4ce8702c2ecd-merged.mount: Deactivated successfully.
Jan 29 12:07:02 np0005601226 podman[237216]: 2026-01-29 17:07:02.944012268 +0000 UTC m=+0.708496880 container remove 1411a1e55775118db37d8eb692a8f322f0aa5bab2a87678ffea8b10c987dded9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_northcutt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 29 12:07:02 np0005601226 systemd[1]: libpod-conmon-1411a1e55775118db37d8eb692a8f322f0aa5bab2a87678ffea8b10c987dded9.scope: Deactivated successfully.
Jan 29 12:07:03 np0005601226 podman[237590]: 2026-01-29 17:07:03.336277723 +0000 UTC m=+0.052257999 container create 37dd2f22a057646e70501d5a89ebacc33166552bf573ec7d46995f7529d3f431 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_cray, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 12:07:03 np0005601226 systemd[1]: Started libpod-conmon-37dd2f22a057646e70501d5a89ebacc33166552bf573ec7d46995f7529d3f431.scope.
Jan 29 12:07:03 np0005601226 podman[237590]: 2026-01-29 17:07:03.308081167 +0000 UTC m=+0.024061463 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:07:03 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:07:03 np0005601226 podman[237590]: 2026-01-29 17:07:03.435501742 +0000 UTC m=+0.151482018 container init 37dd2f22a057646e70501d5a89ebacc33166552bf573ec7d46995f7529d3f431 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:07:03 np0005601226 podman[237590]: 2026-01-29 17:07:03.439608617 +0000 UTC m=+0.155588893 container start 37dd2f22a057646e70501d5a89ebacc33166552bf573ec7d46995f7529d3f431 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 29 12:07:03 np0005601226 sweet_cray[237607]: 167 167
Jan 29 12:07:03 np0005601226 systemd[1]: libpod-37dd2f22a057646e70501d5a89ebacc33166552bf573ec7d46995f7529d3f431.scope: Deactivated successfully.
Jan 29 12:07:03 np0005601226 podman[237590]: 2026-01-29 17:07:03.461267811 +0000 UTC m=+0.177248297 container attach 37dd2f22a057646e70501d5a89ebacc33166552bf573ec7d46995f7529d3f431 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:07:03 np0005601226 podman[237590]: 2026-01-29 17:07:03.461782695 +0000 UTC m=+0.177762981 container died 37dd2f22a057646e70501d5a89ebacc33166552bf573ec7d46995f7529d3f431 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_cray, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 29 12:07:03 np0005601226 python3[237577]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 29 12:07:03 np0005601226 systemd[1]: var-lib-containers-storage-overlay-932922579c26a71faab51f945affc3df508bd354c4e3ef7e1d750c42db200847-merged.mount: Deactivated successfully.
Jan 29 12:07:03 np0005601226 podman[237590]: 2026-01-29 17:07:03.587703518 +0000 UTC m=+0.303683794 container remove 37dd2f22a057646e70501d5a89ebacc33166552bf573ec7d46995f7529d3f431 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_cray, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:07:03 np0005601226 systemd[1]: libpod-conmon-37dd2f22a057646e70501d5a89ebacc33166552bf573ec7d46995f7529d3f431.scope: Deactivated successfully.
Jan 29 12:07:03 np0005601226 podman[237664]: 2026-01-29 17:07:03.775407056 +0000 UTC m=+0.116226294 container create b1255fa50d6c90ec441961774330d1c6edb6cd100742be1055723eced31f2a78 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, managed_by=edpm_ansible)
Jan 29 12:07:03 np0005601226 podman[237664]: 2026-01-29 17:07:03.680746045 +0000 UTC m=+0.021565313 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 29 12:07:03 np0005601226 python3[237577]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath --volume /etc/multipath.conf:/etc/multipath.conf:ro,Z --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Jan 29 12:07:03 np0005601226 podman[237674]: 2026-01-29 17:07:03.738127836 +0000 UTC m=+0.068986455 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:07:03 np0005601226 podman[237674]: 2026-01-29 17:07:03.798724607 +0000 UTC m=+0.129583206 container create 97e3b6b35dc3518d60d0885a1fda9a18737d3c0626412b9d6edd07c03ff47412 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_mayer, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:07:03 np0005601226 systemd[1]: Started libpod-conmon-97e3b6b35dc3518d60d0885a1fda9a18737d3c0626412b9d6edd07c03ff47412.scope.
Jan 29 12:07:03 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:07:03 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5543014b2db02f3a1fed9e7bfad10faa9bbed1dd58ea2c71ce193c14361c013d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:07:03 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5543014b2db02f3a1fed9e7bfad10faa9bbed1dd58ea2c71ce193c14361c013d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:07:03 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5543014b2db02f3a1fed9e7bfad10faa9bbed1dd58ea2c71ce193c14361c013d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:07:03 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5543014b2db02f3a1fed9e7bfad10faa9bbed1dd58ea2c71ce193c14361c013d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:07:03 np0005601226 podman[237674]: 2026-01-29 17:07:03.919696643 +0000 UTC m=+0.250555242 container init 97e3b6b35dc3518d60d0885a1fda9a18737d3c0626412b9d6edd07c03ff47412 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_mayer, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:07:03 np0005601226 podman[237674]: 2026-01-29 17:07:03.927622984 +0000 UTC m=+0.258481583 container start 97e3b6b35dc3518d60d0885a1fda9a18737d3c0626412b9d6edd07c03ff47412 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_mayer, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030)
Jan 29 12:07:03 np0005601226 podman[237674]: 2026-01-29 17:07:03.946975424 +0000 UTC m=+0.277834043 container attach 97e3b6b35dc3518d60d0885a1fda9a18737d3c0626412b9d6edd07c03ff47412 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:07:04 np0005601226 python3.9[237893]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 12:07:04 np0005601226 lvm[237978]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 12:07:04 np0005601226 lvm[237979]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 12:07:04 np0005601226 lvm[237978]: VG ceph_vg0 finished
Jan 29 12:07:04 np0005601226 lvm[237979]: VG ceph_vg1 finished
Jan 29 12:07:04 np0005601226 lvm[237982]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 12:07:04 np0005601226 lvm[237982]: VG ceph_vg2 finished
Jan 29 12:07:04 np0005601226 suspicious_mayer[237718]: {}
Jan 29 12:07:04 np0005601226 systemd[1]: libpod-97e3b6b35dc3518d60d0885a1fda9a18737d3c0626412b9d6edd07c03ff47412.scope: Deactivated successfully.
Jan 29 12:07:04 np0005601226 systemd[1]: libpod-97e3b6b35dc3518d60d0885a1fda9a18737d3c0626412b9d6edd07c03ff47412.scope: Consumed 1.052s CPU time.
Jan 29 12:07:04 np0005601226 podman[237674]: 2026-01-29 17:07:04.688595598 +0000 UTC m=+1.019454227 container died 97e3b6b35dc3518d60d0885a1fda9a18737d3c0626412b9d6edd07c03ff47412 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_mayer, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Jan 29 12:07:04 np0005601226 systemd[1]: var-lib-containers-storage-overlay-5543014b2db02f3a1fed9e7bfad10faa9bbed1dd58ea2c71ce193c14361c013d-merged.mount: Deactivated successfully.
Jan 29 12:07:04 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v644: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:07:04 np0005601226 podman[237674]: 2026-01-29 17:07:04.737756109 +0000 UTC m=+1.068614708 container remove 97e3b6b35dc3518d60d0885a1fda9a18737d3c0626412b9d6edd07c03ff47412 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_mayer, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:07:04 np0005601226 systemd[1]: libpod-conmon-97e3b6b35dc3518d60d0885a1fda9a18737d3c0626412b9d6edd07c03ff47412.scope: Deactivated successfully.
Jan 29 12:07:04 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 12:07:04 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:07:04 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 12:07:04 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:07:05 np0005601226 python3.9[238147]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:07:05 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:07:05 np0005601226 python3.9[238298]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769706425.1419973-1230-107130034552082/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 29 12:07:05 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:07:05 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:07:06 np0005601226 python3.9[238374]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 29 12:07:06 np0005601226 systemd[1]: Reloading.
Jan 29 12:07:06 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 12:07:06 np0005601226 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 12:07:06 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v645: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:07:06 np0005601226 python3.9[238485]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 29 12:07:06 np0005601226 systemd[1]: Reloading.
Jan 29 12:07:07 np0005601226 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 29 12:07:07 np0005601226 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 29 12:07:07 np0005601226 systemd[1]: Starting nova_compute container...
Jan 29 12:07:07 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:07:07 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5ed11114ffecd48018415de3248faf0d3baf5df291b755905572026a5a968c8/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 29 12:07:07 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5ed11114ffecd48018415de3248faf0d3baf5df291b755905572026a5a968c8/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 29 12:07:07 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5ed11114ffecd48018415de3248faf0d3baf5df291b755905572026a5a968c8/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 29 12:07:07 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5ed11114ffecd48018415de3248faf0d3baf5df291b755905572026a5a968c8/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 29 12:07:07 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5ed11114ffecd48018415de3248faf0d3baf5df291b755905572026a5a968c8/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 29 12:07:07 np0005601226 podman[238525]: 2026-01-29 17:07:07.36186287 +0000 UTC m=+0.151633012 container init b1255fa50d6c90ec441961774330d1c6edb6cd100742be1055723eced31f2a78 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 29 12:07:07 np0005601226 podman[238525]: 2026-01-29 17:07:07.367185128 +0000 UTC m=+0.156955270 container start b1255fa50d6c90ec441961774330d1c6edb6cd100742be1055723eced31f2a78 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 29 12:07:07 np0005601226 nova_compute[238540]: + sudo -E kolla_set_configs
Jan 29 12:07:07 np0005601226 podman[238525]: nova_compute
Jan 29 12:07:07 np0005601226 systemd[1]: Started nova_compute container.
Jan 29 12:07:07 np0005601226 nova_compute[238540]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 29 12:07:07 np0005601226 nova_compute[238540]: INFO:__main__:Validating config file
Jan 29 12:07:07 np0005601226 nova_compute[238540]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 29 12:07:07 np0005601226 nova_compute[238540]: INFO:__main__:Copying service configuration files
Jan 29 12:07:07 np0005601226 nova_compute[238540]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 29 12:07:07 np0005601226 nova_compute[238540]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 29 12:07:07 np0005601226 nova_compute[238540]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 29 12:07:07 np0005601226 nova_compute[238540]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 29 12:07:07 np0005601226 nova_compute[238540]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 29 12:07:07 np0005601226 nova_compute[238540]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 29 12:07:07 np0005601226 nova_compute[238540]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 29 12:07:07 np0005601226 nova_compute[238540]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 29 12:07:07 np0005601226 nova_compute[238540]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 29 12:07:07 np0005601226 nova_compute[238540]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 29 12:07:07 np0005601226 nova_compute[238540]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 29 12:07:07 np0005601226 nova_compute[238540]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 29 12:07:07 np0005601226 nova_compute[238540]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 29 12:07:07 np0005601226 nova_compute[238540]: INFO:__main__:Deleting /etc/ceph
Jan 29 12:07:07 np0005601226 nova_compute[238540]: INFO:__main__:Creating directory /etc/ceph
Jan 29 12:07:07 np0005601226 nova_compute[238540]: INFO:__main__:Setting permission for /etc/ceph
Jan 29 12:07:07 np0005601226 nova_compute[238540]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 29 12:07:07 np0005601226 nova_compute[238540]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 29 12:07:07 np0005601226 nova_compute[238540]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 29 12:07:07 np0005601226 nova_compute[238540]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 29 12:07:07 np0005601226 nova_compute[238540]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 29 12:07:07 np0005601226 nova_compute[238540]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 29 12:07:07 np0005601226 nova_compute[238540]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 29 12:07:07 np0005601226 nova_compute[238540]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 29 12:07:07 np0005601226 nova_compute[238540]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 29 12:07:07 np0005601226 nova_compute[238540]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 29 12:07:07 np0005601226 nova_compute[238540]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 29 12:07:07 np0005601226 nova_compute[238540]: INFO:__main__:Writing out command to execute
Jan 29 12:07:07 np0005601226 nova_compute[238540]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 29 12:07:07 np0005601226 nova_compute[238540]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 29 12:07:07 np0005601226 nova_compute[238540]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 29 12:07:07 np0005601226 nova_compute[238540]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 29 12:07:07 np0005601226 nova_compute[238540]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 29 12:07:07 np0005601226 nova_compute[238540]: ++ cat /run_command
Jan 29 12:07:07 np0005601226 nova_compute[238540]: + CMD=nova-compute
Jan 29 12:07:07 np0005601226 nova_compute[238540]: + ARGS=
Jan 29 12:07:07 np0005601226 nova_compute[238540]: + sudo kolla_copy_cacerts
Jan 29 12:07:07 np0005601226 nova_compute[238540]: + [[ ! -n '' ]]
Jan 29 12:07:07 np0005601226 nova_compute[238540]: + . kolla_extend_start
Jan 29 12:07:07 np0005601226 nova_compute[238540]: + echo 'Running command: '\''nova-compute'\'''
Jan 29 12:07:07 np0005601226 nova_compute[238540]: Running command: 'nova-compute'
Jan 29 12:07:07 np0005601226 nova_compute[238540]: + umask 0022
Jan 29 12:07:07 np0005601226 nova_compute[238540]: + exec nova-compute
Jan 29 12:07:08 np0005601226 python3.9[238701]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 12:07:08 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v646: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:07:08 np0005601226 python3.9[238852]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 12:07:09 np0005601226 python3.9[239002]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 29 12:07:09 np0005601226 nova_compute[238540]: 2026-01-29 17:07:09.523 238544 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 29 12:07:09 np0005601226 nova_compute[238540]: 2026-01-29 17:07:09.524 238544 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 29 12:07:09 np0005601226 nova_compute[238540]: 2026-01-29 17:07:09.524 238544 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 29 12:07:09 np0005601226 nova_compute[238540]: 2026-01-29 17:07:09.524 238544 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Jan 29 12:07:09 np0005601226 nova_compute[238540]: 2026-01-29 17:07:09.707 238544 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:07:09 np0005601226 nova_compute[238540]: 2026-01-29 17:07:09.729 238544 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:07:09 np0005601226 nova_compute[238540]: 2026-01-29 17:07:09.730 238544 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Jan 29 12:07:10 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.337 238544 INFO nova.virt.driver [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Jan 29 12:07:10 np0005601226 python3.9[239158]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 29 12:07:10 np0005601226 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 29 12:07:10 np0005601226 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 29 12:07:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:07:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:07:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:07:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:07:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:07:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.615 238544 INFO nova.compute.provider_config [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.630 238544 DEBUG oslo_concurrency.lockutils [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.630 238544 DEBUG oslo_concurrency.lockutils [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.630 238544 DEBUG oslo_concurrency.lockutils [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.630 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.631 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.631 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.631 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.631 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.631 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.631 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.631 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.632 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.632 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.632 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.632 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.632 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.632 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.632 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.633 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.633 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.633 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.633 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.633 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.633 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.633 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.633 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.634 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.634 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.634 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.634 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.634 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.634 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.635 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.635 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.635 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.635 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.635 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.635 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.635 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.636 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.636 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.636 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.636 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.636 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.636 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.637 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.637 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.637 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.637 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.637 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.637 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.637 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.638 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.638 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.638 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.638 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.638 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.638 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.638 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.639 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.639 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.639 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.639 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.639 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.639 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.639 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.640 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.640 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.640 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.640 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.640 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.640 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.640 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.640 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.641 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.641 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.641 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.641 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.641 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.641 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.641 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.642 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.642 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.642 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.642 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.642 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.642 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.642 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.643 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.643 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.643 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.643 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.643 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.643 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.643 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.644 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.644 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.644 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.644 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.644 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.645 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.645 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.645 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.645 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.645 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.645 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.645 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.646 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.646 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.646 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.646 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.646 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.646 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.646 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.647 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.647 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.647 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.647 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.647 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.647 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.647 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.647 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.648 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.648 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.648 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.648 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.648 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.648 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.648 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.649 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.649 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.649 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.649 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.649 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.649 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.649 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.649 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.650 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.650 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.650 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.650 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.650 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.650 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.650 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.651 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.651 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.651 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.651 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.651 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.651 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.652 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.652 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.652 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.652 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.652 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.653 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.653 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.653 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.653 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.653 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.653 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.653 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.654 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.654 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.654 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.654 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.654 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.654 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.654 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.654 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.655 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.655 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.655 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.655 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.655 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.655 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.655 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.656 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.656 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.656 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.656 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.656 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.656 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.657 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.657 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.657 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.657 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.657 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.657 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.657 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.657 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.658 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.658 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.658 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.658 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.658 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.658 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.658 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.659 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.659 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.659 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.659 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.659 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.659 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.659 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.660 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.660 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.660 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.660 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.660 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.660 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.660 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.661 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.661 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.661 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.661 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.661 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.661 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.661 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.662 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.662 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.662 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.662 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.662 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.662 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.662 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.663 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.663 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.663 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.663 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.663 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.663 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.663 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.664 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.664 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.664 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.664 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.664 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.664 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.664 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.664 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.665 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.665 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.665 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.665 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.665 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.665 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.666 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.666 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.666 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.666 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.666 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.666 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.666 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.667 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.667 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.667 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.667 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.667 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.667 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.667 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.667 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.668 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.668 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.668 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.668 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.668 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.668 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.668 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.669 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.669 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.669 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.669 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.669 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.669 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.670 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.670 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.670 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.670 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.670 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.670 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.670 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.671 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.671 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.671 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.671 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.671 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.671 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.672 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.672 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.672 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.672 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.672 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.672 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.672 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.673 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.673 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.673 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.673 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.673 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.673 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.673 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.674 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.674 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.674 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.674 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.674 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.674 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.674 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.675 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.675 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.675 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.675 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.675 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.675 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.675 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.676 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.676 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.676 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.676 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.676 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.676 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.676 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.676 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.677 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.677 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.677 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.677 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.677 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.677 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.677 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.678 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.678 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.678 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.678 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.678 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.678 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.679 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.679 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.679 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.679 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.679 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.679 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.679 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.680 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.680 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.680 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.680 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.680 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.680 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.681 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.681 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.681 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.681 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.681 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.681 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.682 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.682 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.682 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.682 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.682 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.683 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.683 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.683 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.683 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.683 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.683 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.683 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.684 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.684 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.684 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.684 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.684 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.684 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.685 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.685 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.685 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.685 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.685 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.685 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.686 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.686 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.686 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.686 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.686 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.686 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.687 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.687 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.687 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.687 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.687 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.687 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.688 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.688 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.688 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.688 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.688 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.689 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.689 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.689 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.689 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.689 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.690 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.690 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.690 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.690 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.690 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.690 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.690 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.691 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.691 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.691 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.691 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.691 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.691 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.691 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.692 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.692 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.692 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.692 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.692 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.692 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.692 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.693 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.693 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.693 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.693 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.693 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.693 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.693 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.694 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.694 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.694 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.694 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.694 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.694 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.694 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.695 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.695 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.695 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.695 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.695 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.695 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.695 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.696 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.696 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.696 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.696 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.696 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.696 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.697 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.697 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.697 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.697 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.697 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.697 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.697 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.698 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.698 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.698 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.698 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.698 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.699 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.699 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.699 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.699 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.699 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.699 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.700 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.700 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.700 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.700 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.701 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.701 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.701 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.702 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.702 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.702 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.702 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.702 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.703 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.703 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.703 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.703 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.703 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.703 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.704 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.704 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.704 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.704 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.705 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.705 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.705 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.705 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.705 238544 WARNING oslo_config.cfg [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 29 12:07:10 np0005601226 nova_compute[238540]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 29 12:07:10 np0005601226 nova_compute[238540]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 29 12:07:10 np0005601226 nova_compute[238540]: and ``live_migration_inbound_addr`` respectively.
Jan 29 12:07:10 np0005601226 nova_compute[238540]: ).  Its value may be silently ignored in the future.#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.706 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.706 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.706 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.706 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.706 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.706 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.707 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.707 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.707 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.707 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.707 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.708 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.708 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.708 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.708 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.708 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.708 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.709 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.709 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.rbd_secret_uuid        = cc5c72e3-31e0-58b9-8731-456117d38f4a log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.709 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.709 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.709 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.709 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.710 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.710 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.710 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.710 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.710 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.710 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.711 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.711 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.711 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.711 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.711 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.711 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.712 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.712 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.712 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.712 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.712 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.712 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.713 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.713 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.713 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.713 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.713 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.713 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.714 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.714 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.714 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.714 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.714 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.715 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.715 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.715 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.715 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.715 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.715 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.715 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.716 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.716 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.716 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.716 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.716 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.716 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.717 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.717 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.717 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.717 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.717 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.717 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.718 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.718 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.718 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.719 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.719 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.719 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.720 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.720 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.720 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.720 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.720 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.721 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.721 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.721 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.721 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.721 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.721 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.722 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.722 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.722 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.722 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.722 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.722 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.722 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.723 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.723 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.723 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.723 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.723 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.723 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.723 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.723 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.724 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.724 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.724 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.724 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.724 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.725 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.725 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.725 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.725 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.725 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.726 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.726 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.726 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.726 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.726 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.726 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.726 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.727 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.727 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.727 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.727 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.727 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.727 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.727 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.728 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.728 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.728 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.728 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.728 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.728 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.728 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.729 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.729 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.729 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.729 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.729 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.729 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.730 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.730 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.730 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.730 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.730 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.731 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.731 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.731 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.731 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.731 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.732 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.732 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.732 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.732 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.732 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.732 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.732 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.733 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.733 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.733 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.733 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.733 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.733 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.733 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.734 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.734 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.734 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.734 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.734 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.734 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.734 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.735 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.735 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.735 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.735 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.735 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.735 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.735 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.735 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.736 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.736 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.736 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.736 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.736 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v647: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.736 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.737 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.737 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.737 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.737 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.737 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.737 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.737 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.737 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.738 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.738 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.738 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.738 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.738 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.738 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.739 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.739 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.739 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.739 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.739 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.739 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.739 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.739 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.740 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.740 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.740 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.740 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.740 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.740 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.740 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.740 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.741 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.741 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.741 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.741 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.741 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.741 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.742 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.742 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.742 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.742 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.742 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.743 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.743 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.743 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.743 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.743 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.743 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.743 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.744 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.744 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.744 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.744 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.744 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.744 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.744 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.744 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.745 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.745 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.745 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.745 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.745 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.745 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.745 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.746 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.746 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.746 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.746 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.746 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.747 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.747 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.747 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.747 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.747 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.747 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.747 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.748 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.748 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.748 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.748 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.748 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.749 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.749 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.749 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.749 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.749 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.749 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.750 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.750 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.750 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.750 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.750 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.750 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.750 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.751 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.751 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.751 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.751 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.751 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.751 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.751 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.751 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.752 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.752 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.752 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.752 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.752 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.752 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.752 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.753 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.753 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.753 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.753 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.753 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.753 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.753 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.754 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.754 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.754 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.754 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.754 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.754 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.754 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.755 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.755 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.755 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.755 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.755 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.755 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.755 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.755 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.756 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.756 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.756 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.756 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.756 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.757 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.757 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.757 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.757 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.757 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.758 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.758 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.758 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.758 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.758 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.758 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.758 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.758 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.759 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.759 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.759 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.759 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.759 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.759 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.759 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.760 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.760 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.760 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.760 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.760 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.760 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.760 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.761 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.761 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.761 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.761 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.761 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.761 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.761 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.762 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.762 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.762 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.762 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.762 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.762 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.762 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.762 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.763 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.763 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.763 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.763 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.763 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.763 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.764 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.764 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.764 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.764 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.764 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.765 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.765 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.765 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.765 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.765 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.765 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.766 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.766 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.766 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.766 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.766 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.766 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.767 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.767 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.767 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.767 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.767 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.767 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.767 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.767 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.768 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.768 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.768 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.768 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.768 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.768 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.768 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.769 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.769 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.769 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.769 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.769 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.769 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.770 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.770 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.770 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.770 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.770 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.771 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.771 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.771 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.771 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.771 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.771 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.772 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.772 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.772 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.772 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.772 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.772 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.772 238544 DEBUG oslo_service.service [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.774 238544 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.787 238544 DEBUG nova.virt.libvirt.host [None req-7ef495d6-95e0-42bc-9626-a02a1dff8deb - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.788 238544 DEBUG nova.virt.libvirt.host [None req-7ef495d6-95e0-42bc-9626-a02a1dff8deb - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.788 238544 DEBUG nova.virt.libvirt.host [None req-7ef495d6-95e0-42bc-9626-a02a1dff8deb - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.788 238544 DEBUG nova.virt.libvirt.host [None req-7ef495d6-95e0-42bc-9626-a02a1dff8deb - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Jan 29 12:07:10 np0005601226 systemd[1]: Starting libvirt QEMU daemon...
Jan 29 12:07:10 np0005601226 systemd[1]: Started libvirt QEMU daemon.
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.855 238544 DEBUG nova.virt.libvirt.host [None req-7ef495d6-95e0-42bc-9626-a02a1dff8deb - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f29d619ae20> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.858 238544 DEBUG nova.virt.libvirt.host [None req-7ef495d6-95e0-42bc-9626-a02a1dff8deb - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f29d619ae20> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.859 238544 INFO nova.virt.libvirt.driver [None req-7ef495d6-95e0-42bc-9626-a02a1dff8deb - - - - - -] Connection event '1' reason 'None'#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.873 238544 WARNING nova.virt.libvirt.driver [None req-7ef495d6-95e0-42bc-9626-a02a1dff8deb - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Jan 29 12:07:10 np0005601226 nova_compute[238540]: 2026-01-29 17:07:10.874 238544 DEBUG nova.virt.libvirt.volume.mount [None req-7ef495d6-95e0-42bc-9626-a02a1dff8deb - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Jan 29 12:07:11 np0005601226 python3.9[239375]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 29 12:07:11 np0005601226 systemd[1]: Stopping nova_compute container...
Jan 29 12:07:11 np0005601226 nova_compute[238540]: 2026-01-29 17:07:11.275 238544 DEBUG oslo_concurrency.lockutils [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:07:11 np0005601226 nova_compute[238540]: 2026-01-29 17:07:11.275 238544 DEBUG oslo_concurrency.lockutils [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:07:11 np0005601226 nova_compute[238540]: 2026-01-29 17:07:11.275 238544 DEBUG oslo_concurrency.lockutils [None req-0aab4bd9-9609-494e-9cb3-100633bba7fc - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:07:12 np0005601226 virtqemud[239322]: libvirt version: 11.10.0, package: 2.el9 (builder@centos.org, 2025-12-18-15:09:54, )
Jan 29 12:07:12 np0005601226 virtqemud[239322]: hostname: compute-0
Jan 29 12:07:12 np0005601226 virtqemud[239322]: End of file while reading data: Input/output error
Jan 29 12:07:12 np0005601226 systemd[1]: libpod-b1255fa50d6c90ec441961774330d1c6edb6cd100742be1055723eced31f2a78.scope: Deactivated successfully.
Jan 29 12:07:12 np0005601226 systemd[1]: libpod-b1255fa50d6c90ec441961774330d1c6edb6cd100742be1055723eced31f2a78.scope: Consumed 2.873s CPU time.
Jan 29 12:07:12 np0005601226 podman[239389]: 2026-01-29 17:07:12.023840504 +0000 UTC m=+0.802325309 container died b1255fa50d6c90ec441961774330d1c6edb6cd100742be1055723eced31f2a78 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 29 12:07:12 np0005601226 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b1255fa50d6c90ec441961774330d1c6edb6cd100742be1055723eced31f2a78-userdata-shm.mount: Deactivated successfully.
Jan 29 12:07:12 np0005601226 systemd[1]: var-lib-containers-storage-overlay-b5ed11114ffecd48018415de3248faf0d3baf5df291b755905572026a5a968c8-merged.mount: Deactivated successfully.
Jan 29 12:07:12 np0005601226 podman[239389]: 2026-01-29 17:07:12.09289756 +0000 UTC m=+0.871382355 container cleanup b1255fa50d6c90ec441961774330d1c6edb6cd100742be1055723eced31f2a78 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, config_id=edpm, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.schema-version=1.0)
Jan 29 12:07:12 np0005601226 podman[239389]: nova_compute
Jan 29 12:07:12 np0005601226 podman[239427]: nova_compute
Jan 29 12:07:12 np0005601226 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Jan 29 12:07:12 np0005601226 systemd[1]: Stopped nova_compute container.
Jan 29 12:07:12 np0005601226 systemd[1]: Starting nova_compute container...
Jan 29 12:07:12 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:07:12 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5ed11114ffecd48018415de3248faf0d3baf5df291b755905572026a5a968c8/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 29 12:07:12 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5ed11114ffecd48018415de3248faf0d3baf5df291b755905572026a5a968c8/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 29 12:07:12 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5ed11114ffecd48018415de3248faf0d3baf5df291b755905572026a5a968c8/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 29 12:07:12 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5ed11114ffecd48018415de3248faf0d3baf5df291b755905572026a5a968c8/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 29 12:07:12 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5ed11114ffecd48018415de3248faf0d3baf5df291b755905572026a5a968c8/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 29 12:07:12 np0005601226 podman[239440]: 2026-01-29 17:07:12.41829691 +0000 UTC m=+0.244069761 container init b1255fa50d6c90ec441961774330d1c6edb6cd100742be1055723eced31f2a78 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:07:12 np0005601226 podman[239440]: 2026-01-29 17:07:12.427647431 +0000 UTC m=+0.253420292 container start b1255fa50d6c90ec441961774330d1c6edb6cd100742be1055723eced31f2a78 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 29 12:07:12 np0005601226 podman[239440]: nova_compute
Jan 29 12:07:12 np0005601226 nova_compute[239456]: + sudo -E kolla_set_configs
Jan 29 12:07:12 np0005601226 systemd[1]: Started nova_compute container.
Jan 29 12:07:12 np0005601226 nova_compute[239456]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 29 12:07:12 np0005601226 nova_compute[239456]: INFO:__main__:Validating config file
Jan 29 12:07:12 np0005601226 nova_compute[239456]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 29 12:07:12 np0005601226 nova_compute[239456]: INFO:__main__:Copying service configuration files
Jan 29 12:07:12 np0005601226 nova_compute[239456]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 29 12:07:12 np0005601226 nova_compute[239456]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 29 12:07:12 np0005601226 nova_compute[239456]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 29 12:07:12 np0005601226 nova_compute[239456]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Jan 29 12:07:12 np0005601226 nova_compute[239456]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 29 12:07:12 np0005601226 nova_compute[239456]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 29 12:07:12 np0005601226 nova_compute[239456]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 29 12:07:12 np0005601226 nova_compute[239456]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 29 12:07:12 np0005601226 nova_compute[239456]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 29 12:07:12 np0005601226 nova_compute[239456]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 29 12:07:12 np0005601226 nova_compute[239456]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 29 12:07:12 np0005601226 nova_compute[239456]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 29 12:07:12 np0005601226 nova_compute[239456]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Jan 29 12:07:12 np0005601226 nova_compute[239456]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 29 12:07:12 np0005601226 nova_compute[239456]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 29 12:07:12 np0005601226 nova_compute[239456]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 29 12:07:12 np0005601226 nova_compute[239456]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 29 12:07:12 np0005601226 nova_compute[239456]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 29 12:07:12 np0005601226 nova_compute[239456]: INFO:__main__:Deleting /etc/ceph
Jan 29 12:07:12 np0005601226 nova_compute[239456]: INFO:__main__:Creating directory /etc/ceph
Jan 29 12:07:12 np0005601226 nova_compute[239456]: INFO:__main__:Setting permission for /etc/ceph
Jan 29 12:07:12 np0005601226 nova_compute[239456]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 29 12:07:12 np0005601226 nova_compute[239456]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 29 12:07:12 np0005601226 nova_compute[239456]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 29 12:07:12 np0005601226 nova_compute[239456]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 29 12:07:12 np0005601226 nova_compute[239456]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Jan 29 12:07:12 np0005601226 nova_compute[239456]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 29 12:07:12 np0005601226 nova_compute[239456]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 29 12:07:12 np0005601226 nova_compute[239456]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Jan 29 12:07:12 np0005601226 nova_compute[239456]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 29 12:07:12 np0005601226 nova_compute[239456]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 29 12:07:12 np0005601226 nova_compute[239456]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 29 12:07:12 np0005601226 nova_compute[239456]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 29 12:07:12 np0005601226 nova_compute[239456]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 29 12:07:12 np0005601226 nova_compute[239456]: INFO:__main__:Writing out command to execute
Jan 29 12:07:12 np0005601226 nova_compute[239456]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 29 12:07:12 np0005601226 nova_compute[239456]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 29 12:07:12 np0005601226 nova_compute[239456]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 29 12:07:12 np0005601226 nova_compute[239456]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 29 12:07:12 np0005601226 nova_compute[239456]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 29 12:07:12 np0005601226 nova_compute[239456]: ++ cat /run_command
Jan 29 12:07:12 np0005601226 nova_compute[239456]: + CMD=nova-compute
Jan 29 12:07:12 np0005601226 nova_compute[239456]: + ARGS=
Jan 29 12:07:12 np0005601226 nova_compute[239456]: + sudo kolla_copy_cacerts
Jan 29 12:07:12 np0005601226 nova_compute[239456]: + [[ ! -n '' ]]
Jan 29 12:07:12 np0005601226 nova_compute[239456]: + . kolla_extend_start
Jan 29 12:07:12 np0005601226 nova_compute[239456]: Running command: 'nova-compute'
Jan 29 12:07:12 np0005601226 nova_compute[239456]: + echo 'Running command: '\''nova-compute'\'''
Jan 29 12:07:12 np0005601226 nova_compute[239456]: + umask 0022
Jan 29 12:07:12 np0005601226 nova_compute[239456]: + exec nova-compute
Jan 29 12:07:12 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v648: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:07:13 np0005601226 python3.9[239619]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 29 12:07:13 np0005601226 systemd[1]: Started libpod-conmon-7aa9b96aa0fa41c107dfe4d765215fee56a1fe70348b8577f21642528d2ff7f7.scope.
Jan 29 12:07:13 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:07:13 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d49f1ce6cc7a6c3fceb173d284b2122be9bcda5417caaf3df63bf77903cd1f8e/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Jan 29 12:07:13 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d49f1ce6cc7a6c3fceb173d284b2122be9bcda5417caaf3df63bf77903cd1f8e/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 29 12:07:13 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d49f1ce6cc7a6c3fceb173d284b2122be9bcda5417caaf3df63bf77903cd1f8e/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Jan 29 12:07:13 np0005601226 podman[239644]: 2026-01-29 17:07:13.295485566 +0000 UTC m=+0.129861504 container init 7aa9b96aa0fa41c107dfe4d765215fee56a1fe70348b8577f21642528d2ff7f7 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 29 12:07:13 np0005601226 podman[239644]: 2026-01-29 17:07:13.301065912 +0000 UTC m=+0.135441840 container start 7aa9b96aa0fa41c107dfe4d765215fee56a1fe70348b8577f21642528d2ff7f7 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=edpm, container_name=nova_compute_init, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:07:13 np0005601226 python3.9[239619]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Jan 29 12:07:13 np0005601226 nova_compute_init[239666]: INFO:nova_statedir:Applying nova statedir ownership
Jan 29 12:07:13 np0005601226 nova_compute_init[239666]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Jan 29 12:07:13 np0005601226 nova_compute_init[239666]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Jan 29 12:07:13 np0005601226 nova_compute_init[239666]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Jan 29 12:07:13 np0005601226 nova_compute_init[239666]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Jan 29 12:07:13 np0005601226 nova_compute_init[239666]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Jan 29 12:07:13 np0005601226 nova_compute_init[239666]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Jan 29 12:07:13 np0005601226 nova_compute_init[239666]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Jan 29 12:07:13 np0005601226 nova_compute_init[239666]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Jan 29 12:07:13 np0005601226 nova_compute_init[239666]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Jan 29 12:07:13 np0005601226 nova_compute_init[239666]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Jan 29 12:07:13 np0005601226 nova_compute_init[239666]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Jan 29 12:07:13 np0005601226 nova_compute_init[239666]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Jan 29 12:07:13 np0005601226 nova_compute_init[239666]: INFO:nova_statedir:Nova statedir ownership complete
Jan 29 12:07:13 np0005601226 systemd[1]: libpod-7aa9b96aa0fa41c107dfe4d765215fee56a1fe70348b8577f21642528d2ff7f7.scope: Deactivated successfully.
Jan 29 12:07:13 np0005601226 podman[239667]: 2026-01-29 17:07:13.357371953 +0000 UTC m=+0.021420688 container died 7aa9b96aa0fa41c107dfe4d765215fee56a1fe70348b8577f21642528d2ff7f7 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_id=edpm, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 29 12:07:13 np0005601226 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7aa9b96aa0fa41c107dfe4d765215fee56a1fe70348b8577f21642528d2ff7f7-userdata-shm.mount: Deactivated successfully.
Jan 29 12:07:13 np0005601226 systemd[1]: var-lib-containers-storage-overlay-d49f1ce6cc7a6c3fceb173d284b2122be9bcda5417caaf3df63bf77903cd1f8e-merged.mount: Deactivated successfully.
Jan 29 12:07:13 np0005601226 podman[239677]: 2026-01-29 17:07:13.477609278 +0000 UTC m=+0.123015213 container cleanup 7aa9b96aa0fa41c107dfe4d765215fee56a1fe70348b8577f21642528d2ff7f7 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 29 12:07:13 np0005601226 systemd[1]: libpod-conmon-7aa9b96aa0fa41c107dfe4d765215fee56a1fe70348b8577f21642528d2ff7f7.scope: Deactivated successfully.
Jan 29 12:07:13 np0005601226 systemd[1]: session-50.scope: Deactivated successfully.
Jan 29 12:07:13 np0005601226 systemd[1]: session-50.scope: Consumed 1min 39.922s CPU time.
Jan 29 12:07:13 np0005601226 systemd-logind[823]: Session 50 logged out. Waiting for processes to exit.
Jan 29 12:07:13 np0005601226 systemd-logind[823]: Removed session 50.
Jan 29 12:07:14 np0005601226 nova_compute[239456]: 2026-01-29 17:07:14.416 239460 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 29 12:07:14 np0005601226 nova_compute[239456]: 2026-01-29 17:07:14.418 239460 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 29 12:07:14 np0005601226 nova_compute[239456]: 2026-01-29 17:07:14.418 239460 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 29 12:07:14 np0005601226 nova_compute[239456]: 2026-01-29 17:07:14.418 239460 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Jan 29 12:07:14 np0005601226 nova_compute[239456]: 2026-01-29 17:07:14.586 239460 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:07:14 np0005601226 nova_compute[239456]: 2026-01-29 17:07:14.594 239460 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:07:14 np0005601226 nova_compute[239456]: 2026-01-29 17:07:14.595 239460 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Jan 29 12:07:14 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v649: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.089 239460 INFO nova.virt.driver [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Jan 29 12:07:15 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:07:15 np0005601226 ceph-mon[75233]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Jan 29 12:07:15 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:07:15.125305) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 29 12:07:15 np0005601226 ceph-mon[75233]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Jan 29 12:07:15 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769706435125406, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1550, "num_deletes": 252, "total_data_size": 2633384, "memory_usage": 2684280, "flush_reason": "Manual Compaction"}
Jan 29 12:07:15 np0005601226 ceph-mon[75233]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Jan 29 12:07:15 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769706435132917, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 1504007, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11914, "largest_seqno": 13463, "table_properties": {"data_size": 1498763, "index_size": 2512, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13281, "raw_average_key_size": 20, "raw_value_size": 1487185, "raw_average_value_size": 2253, "num_data_blocks": 116, "num_entries": 660, "num_filter_entries": 660, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769706261, "oldest_key_time": 1769706261, "file_creation_time": 1769706435, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "affa2982-d59d-4189-b5dd-817a80fada55", "db_session_id": "3LVBT2JQJ5HZ0LRVKGW6", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Jan 29 12:07:15 np0005601226 ceph-mon[75233]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 7634 microseconds, and 3349 cpu microseconds.
Jan 29 12:07:15 np0005601226 ceph-mon[75233]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 29 12:07:15 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:07:15.132999) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 1504007 bytes OK
Jan 29 12:07:15 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:07:15.133020) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Jan 29 12:07:15 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:07:15.134859) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Jan 29 12:07:15 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:07:15.134875) EVENT_LOG_v1 {"time_micros": 1769706435134870, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 29 12:07:15 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:07:15.134893) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 29 12:07:15 np0005601226 ceph-mon[75233]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 2626650, prev total WAL file size 2626650, number of live WAL files 2.
Jan 29 12:07:15 np0005601226 ceph-mon[75233]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 12:07:15 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:07:15.135655) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323531' seq:72057594037927935, type:22 .. '6D67727374617400353034' seq:0, type:0; will stop at (end)
Jan 29 12:07:15 np0005601226 ceph-mon[75233]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 29 12:07:15 np0005601226 ceph-mon[75233]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(1468KB)], [29(8632KB)]
Jan 29 12:07:15 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769706435135729, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 10344122, "oldest_snapshot_seqno": -1}
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.183 239460 INFO nova.compute.provider_config [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.197 239460 DEBUG oslo_concurrency.lockutils [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.197 239460 DEBUG oslo_concurrency.lockutils [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.198 239460 DEBUG oslo_concurrency.lockutils [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.198 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.198 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.198 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.199 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.199 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.199 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.199 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.199 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.199 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.199 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.200 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.200 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.200 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.200 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.200 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.200 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.200 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.201 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.201 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.201 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.201 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.201 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.201 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.201 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.202 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.202 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.202 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.202 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.202 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.202 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.203 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.203 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.203 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.203 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.203 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.203 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.204 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.204 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.204 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.204 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.204 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.204 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.205 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.205 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.205 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.205 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.205 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.205 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.206 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.206 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.206 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.206 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.206 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.206 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.207 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.207 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.207 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.207 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.207 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.207 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.207 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.208 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.208 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.208 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.208 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.208 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.208 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.208 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.208 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.209 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.209 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.209 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.209 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.209 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.209 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.210 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.210 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.210 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.210 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.210 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.210 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.211 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.211 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.211 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.211 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.211 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.211 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.212 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.212 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.212 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.212 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.212 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.212 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.212 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.213 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.213 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.213 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.213 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.213 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.213 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.214 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.214 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.214 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.214 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.214 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.214 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.215 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.215 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.215 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.215 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.215 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.215 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.215 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.216 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.216 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.216 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.216 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.216 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.216 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.216 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.217 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.217 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.217 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.217 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.217 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.217 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.217 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.217 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.218 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.218 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.218 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.218 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.218 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.218 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.218 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.219 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.219 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.219 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.219 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.219 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.219 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.219 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.220 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.220 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.220 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.220 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.220 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.220 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.220 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.221 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.221 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.221 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.221 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.221 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.221 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.221 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.222 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.222 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.222 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.222 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.222 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.222 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.222 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.223 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.223 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.223 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.223 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.223 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.223 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.223 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.224 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.224 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.224 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.224 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.224 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.224 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.225 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.225 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.225 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.225 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.225 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.225 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.225 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.226 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.226 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.226 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.226 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.226 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.226 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.226 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.227 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.227 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.227 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.227 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.227 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.227 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.227 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.227 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.228 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.228 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.228 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.228 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.228 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.228 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.228 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.229 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.229 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.229 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.229 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.229 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.229 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.229 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.230 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.230 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.230 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.230 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.230 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.230 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.230 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.231 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.231 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.231 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.231 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.231 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.231 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.231 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.231 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.232 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.232 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.232 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.232 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.232 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.232 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.232 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.233 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.233 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.233 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.233 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.233 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.233 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.233 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.234 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.234 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.234 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.234 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.234 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.234 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.235 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.235 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.235 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.235 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.235 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.235 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.235 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.235 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.236 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.236 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.236 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.236 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.236 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.236 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.236 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.237 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.237 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.237 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.237 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.237 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.238 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.238 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.238 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.238 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.238 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.238 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.238 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.238 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.239 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.239 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.239 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.239 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.239 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.239 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.239 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.240 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.240 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.240 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.240 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.240 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.240 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.240 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.241 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.241 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.241 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.241 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 ceph-mon[75233]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4024 keys, 8000666 bytes, temperature: kUnknown
Jan 29 12:07:15 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769706435241650, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 8000666, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7971700, "index_size": 17784, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10117, "raw_key_size": 96338, "raw_average_key_size": 23, "raw_value_size": 7897105, "raw_average_value_size": 1962, "num_data_blocks": 769, "num_entries": 4024, "num_filter_entries": 4024, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769705351, "oldest_key_time": 0, "file_creation_time": 1769706435, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "affa2982-d59d-4189-b5dd-817a80fada55", "db_session_id": "3LVBT2JQJ5HZ0LRVKGW6", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.241 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.241 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 ceph-mon[75233]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.241 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.242 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.242 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.242 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.242 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.242 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.242 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.242 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.243 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.243 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.243 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.243 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.243 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.243 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.243 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.244 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.244 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.244 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.244 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.244 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.244 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.244 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.245 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.245 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.245 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.245 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.245 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.245 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.246 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.246 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.246 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.246 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.246 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.246 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.246 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.246 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.247 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.247 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.247 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.247 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.247 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.247 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.247 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.248 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.248 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.248 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.248 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.248 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.248 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.248 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.249 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.249 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.249 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.249 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.249 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.249 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.249 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.250 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.250 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.250 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.250 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.250 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.251 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.251 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.251 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.251 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.251 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.251 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.251 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.252 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.252 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.252 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.252 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.252 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.252 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.252 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.253 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.253 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.253 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.253 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.253 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.253 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.253 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.253 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.254 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.254 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.254 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.254 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.254 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.254 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.254 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.255 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.255 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.255 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.255 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.255 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.255 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.256 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.256 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.256 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.256 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.256 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.256 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.257 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.257 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.257 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.257 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.257 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.257 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.258 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.258 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.258 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.258 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.258 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.258 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.258 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.259 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.259 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.259 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.259 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.259 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.259 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.260 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.260 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.260 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.260 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.260 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:07:15.241919) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 8000666 bytes
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.260 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:07:15.260776) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 97.6 rd, 75.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 8.4 +0.0 blob) out(7.6 +0.0 blob), read-write-amplify(12.2) write-amplify(5.3) OK, records in: 4461, records dropped: 437 output_compression: NoCompression
Jan 29 12:07:15 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:07:15.260820) EVENT_LOG_v1 {"time_micros": 1769706435260802, "job": 12, "event": "compaction_finished", "compaction_time_micros": 105999, "compaction_time_cpu_micros": 17393, "output_level": 6, "num_output_files": 1, "total_output_size": 8000666, "num_input_records": 4461, "num_output_records": 4024, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.261 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.261 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 ceph-mon[75233]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 12:07:15 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769706435261412, "job": 12, "event": "table_file_deletion", "file_number": 31}
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.261 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.261 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.261 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.261 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.262 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.262 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 ceph-mon[75233]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.262 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769706435262580, "job": 12, "event": "table_file_deletion", "file_number": 29}
Jan 29 12:07:15 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:07:15.135335) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:07:15 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:07:15.262677) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:07:15 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:07:15.262681) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:07:15 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:07:15.262683) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:07:15 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:07:15.262684) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:07:15 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:07:15.262686) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.262 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.262 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.263 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.263 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.263 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.263 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.263 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.263 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.264 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.264 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.264 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.264 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.264 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.264 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.265 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.265 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.265 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.265 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.265 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.265 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.265 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.266 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.266 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.266 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.266 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.266 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.266 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.266 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.267 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.267 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.267 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.267 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.267 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.267 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.267 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.267 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.268 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.268 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.268 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.268 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.268 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.268 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.268 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.269 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.269 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.269 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.269 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.269 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.270 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.270 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.270 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.270 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.270 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.270 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.271 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.271 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.271 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.271 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.271 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.271 239460 WARNING oslo_config.cfg [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 29 12:07:15 np0005601226 nova_compute[239456]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 29 12:07:15 np0005601226 nova_compute[239456]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 29 12:07:15 np0005601226 nova_compute[239456]: and ``live_migration_inbound_addr`` respectively.
Jan 29 12:07:15 np0005601226 nova_compute[239456]: ).  Its value may be silently ignored in the future.#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.272 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.272 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.272 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.272 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.272 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.272 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.273 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.273 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.273 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.273 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.273 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.273 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.274 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.274 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.274 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.274 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.274 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.274 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.275 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.rbd_secret_uuid        = cc5c72e3-31e0-58b9-8731-456117d38f4a log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.275 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.275 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.275 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.275 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.275 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.275 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.275 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.276 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.276 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.276 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.276 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.276 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.276 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.277 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.277 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.277 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.277 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.277 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.277 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.277 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.278 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.278 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.278 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.278 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.278 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.278 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.279 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.279 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.279 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.279 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.279 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.279 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.280 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.280 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.280 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.280 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.280 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.281 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.281 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.281 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.281 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.281 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.281 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.282 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.282 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.282 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.282 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.282 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.282 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.283 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.283 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.283 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.284 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.284 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.284 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.284 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.284 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.285 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.285 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.285 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.285 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.285 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.286 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.286 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.286 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.286 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.286 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.287 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.287 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.287 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.287 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.287 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.288 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.288 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.288 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.288 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.288 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.288 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.289 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.289 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.289 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.289 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.289 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.290 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.290 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.290 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.290 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.290 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.290 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.291 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.291 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.291 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.291 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.291 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.292 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.292 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.292 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.292 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.292 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.292 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.293 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.293 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.293 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.293 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.293 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.293 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.294 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.294 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.294 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.294 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.294 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.295 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.295 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.295 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.295 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.295 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.296 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.296 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.296 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.296 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.297 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.297 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.297 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.297 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.297 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.298 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.298 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.298 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.298 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.298 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.299 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.299 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.299 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.299 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.299 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.300 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.300 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.300 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.300 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.300 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.300 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.301 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.301 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.301 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.301 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.301 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.301 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.302 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.302 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.302 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.302 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.302 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.302 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.302 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.303 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.303 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.303 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.303 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.303 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.303 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.304 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.304 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.304 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.304 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.304 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.305 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.305 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.305 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.305 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.305 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.305 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.306 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.306 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.306 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.306 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.306 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.306 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.307 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.307 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.307 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.307 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.307 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.308 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.308 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.308 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.308 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.308 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.308 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.308 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.309 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.309 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.309 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.309 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.309 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.309 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.309 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.310 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.310 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.310 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.310 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.310 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.310 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.310 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.311 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.311 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.311 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.311 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.311 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.311 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.311 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.312 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.312 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.312 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.312 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.312 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.312 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.312 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.313 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.313 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.313 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.313 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.313 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.313 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.313 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.314 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.314 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.314 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.314 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.314 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.314 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.315 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.315 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.315 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.315 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.315 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.316 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.316 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.316 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.316 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.316 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.316 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.316 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.317 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.317 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.317 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.317 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.317 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.317 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.317 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.318 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.318 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.318 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.318 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.318 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.318 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.318 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.319 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.319 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.319 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.319 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.319 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.319 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.320 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.320 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.320 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.320 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.320 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.320 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.320 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.321 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.321 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.321 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.321 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.321 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.321 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.322 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.322 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.322 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.322 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.322 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.322 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.322 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.323 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.323 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.323 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.323 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.323 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.323 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.323 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.324 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.324 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.324 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.324 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.324 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.324 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.325 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.325 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.325 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.325 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.325 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.325 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.325 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.326 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.326 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.326 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.326 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.326 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.326 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.326 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.327 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.327 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.327 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.327 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.327 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.327 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.327 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.328 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.328 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.328 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.328 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.328 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.328 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.329 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.329 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.329 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.329 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.329 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.329 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.329 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.330 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.330 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.330 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.330 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.330 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.330 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.330 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.330 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.331 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.331 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.331 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.331 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.331 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.331 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.332 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.332 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.332 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.332 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.332 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.332 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.332 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.333 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.333 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.333 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.333 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.333 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.333 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.333 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.334 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.334 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.334 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.334 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.334 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.334 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.334 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.334 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.335 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.335 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.335 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.335 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.335 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.335 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.335 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.336 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.336 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.336 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.336 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.336 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.336 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.337 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.337 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.337 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.337 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.337 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.337 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.337 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.338 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.338 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.338 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.338 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.338 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.338 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.339 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.339 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.339 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.339 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.339 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.339 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.339 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.339 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.340 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.340 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.340 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.340 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.340 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.340 239460 DEBUG oslo_service.service [None req-4449c970-7abb-4188-b897-ac1212bc6613 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.341 239460 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.356 239460 DEBUG nova.virt.libvirt.host [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.357 239460 DEBUG nova.virt.libvirt.host [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.357 239460 DEBUG nova.virt.libvirt.host [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.357 239460 DEBUG nova.virt.libvirt.host [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.369 239460 DEBUG nova.virt.libvirt.host [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7eff02337460> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.372 239460 DEBUG nova.virt.libvirt.host [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7eff02337460> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.372 239460 INFO nova.virt.libvirt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Connection event '1' reason 'None'#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.377 239460 INFO nova.virt.libvirt.host [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Libvirt host capabilities <capabilities>
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 
Jan 29 12:07:15 np0005601226 nova_compute[239456]:  <host>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <uuid>3d58286e-1b14-486e-8cad-0bdb2d2969c4</uuid>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <cpu>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <arch>x86_64</arch>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model>EPYC-Rome-v4</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <vendor>AMD</vendor>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <microcode version='16777317'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <signature family='23' model='49' stepping='0'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <maxphysaddr mode='emulate' bits='40'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature name='x2apic'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature name='tsc-deadline'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature name='osxsave'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature name='hypervisor'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature name='tsc_adjust'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature name='spec-ctrl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature name='stibp'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature name='arch-capabilities'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature name='ssbd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature name='cmp_legacy'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature name='topoext'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature name='virt-ssbd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature name='lbrv'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature name='tsc-scale'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature name='vmcb-clean'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature name='pause-filter'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature name='pfthreshold'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature name='svme-addr-chk'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature name='rdctl-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature name='skip-l1dfl-vmentry'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature name='mds-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature name='pschange-mc-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <pages unit='KiB' size='4'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <pages unit='KiB' size='2048'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <pages unit='KiB' size='1048576'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </cpu>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <power_management>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <suspend_mem/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </power_management>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <iommu support='no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <migration_features>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <live/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <uri_transports>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <uri_transport>tcp</uri_transport>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <uri_transport>rdma</uri_transport>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </uri_transports>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </migration_features>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <topology>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <cells num='1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <cell id='0'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:          <memory unit='KiB'>7864300</memory>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:          <pages unit='KiB' size='4'>1966075</pages>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:          <pages unit='KiB' size='2048'>0</pages>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:          <pages unit='KiB' size='1048576'>0</pages>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:          <distances>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:            <sibling id='0' value='10'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:          </distances>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:          <cpus num='8'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:          </cpus>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        </cell>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </cells>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </topology>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <cache>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </cache>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <secmodel>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model>selinux</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <doi>0</doi>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </secmodel>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <secmodel>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model>dac</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <doi>0</doi>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <baselabel type='kvm'>+107:+107</baselabel>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <baselabel type='qemu'>+107:+107</baselabel>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </secmodel>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:  </host>
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 
Jan 29 12:07:15 np0005601226 nova_compute[239456]:  <guest>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <os_type>hvm</os_type>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <arch name='i686'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <wordsize>32</wordsize>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <domain type='qemu'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <domain type='kvm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </arch>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <features>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <pae/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <nonpae/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <acpi default='on' toggle='yes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <apic default='on' toggle='no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <cpuselection/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <deviceboot/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <disksnapshot default='on' toggle='no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <externalSnapshot/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </features>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:  </guest>
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 
Jan 29 12:07:15 np0005601226 nova_compute[239456]:  <guest>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <os_type>hvm</os_type>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <arch name='x86_64'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <wordsize>64</wordsize>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <domain type='qemu'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <domain type='kvm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </arch>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <features>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <acpi default='on' toggle='yes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <apic default='on' toggle='no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <cpuselection/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <deviceboot/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <disksnapshot default='on' toggle='no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <externalSnapshot/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </features>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:  </guest>
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 
Jan 29 12:07:15 np0005601226 nova_compute[239456]: </capabilities>
Jan 29 12:07:15 np0005601226 nova_compute[239456]: #033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.383 239460 DEBUG nova.virt.libvirt.host [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.401 239460 WARNING nova.virt.libvirt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.402 239460 DEBUG nova.virt.libvirt.volume.mount [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.427 239460 DEBUG nova.virt.libvirt.host [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Jan 29 12:07:15 np0005601226 nova_compute[239456]: <domainCapabilities>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:  <path>/usr/libexec/qemu-kvm</path>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:  <domain>kvm</domain>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:  <machine>pc-q35-rhel9.8.0</machine>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:  <arch>i686</arch>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:  <vcpu max='4096'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:  <iothreads supported='yes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:  <os supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <enum name='firmware'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <loader supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='type'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>rom</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>pflash</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='readonly'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>yes</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>no</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='secure'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>no</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </loader>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:  </os>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:  <cpu>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <mode name='host-passthrough' supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='hostPassthroughMigratable'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>on</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>off</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </mode>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <mode name='maximum' supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='maximumMigratable'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>on</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>off</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </mode>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <mode name='host-model' supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <vendor>AMD</vendor>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='x2apic'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='tsc-deadline'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='hypervisor'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='tsc_adjust'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='spec-ctrl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='stibp'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='ssbd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='cmp_legacy'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='overflow-recov'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='succor'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='ibrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='amd-ssbd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='virt-ssbd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='lbrv'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='tsc-scale'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='vmcb-clean'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='flushbyasid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='pause-filter'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='pfthreshold'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='svme-addr-chk'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='disable' name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </mode>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <mode name='custom' supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Broadwell'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Broadwell-IBRS'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Broadwell-noTSX'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Broadwell-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Broadwell-v2'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Broadwell-v3'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Broadwell-v4'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Cascadelake-Server'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Cascadelake-Server-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Cascadelake-Server-v2'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Cascadelake-Server-v3'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Cascadelake-Server-v4'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Cascadelake-Server-v5'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='ClearwaterForest'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-ne-convert'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni-int16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni-int8'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bhi-ctrl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bhi-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bus-lock-detect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='cldemote'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='cmpccxadd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ddpd-u'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fbsdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='intel-psfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ipred-ctrl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='lam'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='mcdt-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdir64b'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdiri'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pbrsb-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='prefetchiti'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='psdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rrsba-ctrl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sbdr-ssdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='serialize'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sha512'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sm3'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sm4'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ss'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='ClearwaterForest-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-ne-convert'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni-int16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni-int8'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bhi-ctrl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bhi-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bus-lock-detect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='cldemote'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='cmpccxadd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ddpd-u'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fbsdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='intel-psfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ipred-ctrl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='lam'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='mcdt-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdir64b'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdiri'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pbrsb-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='prefetchiti'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='psdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rrsba-ctrl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sbdr-ssdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='serialize'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sha512'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sm3'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sm4'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ss'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Cooperlake'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='taa-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Cooperlake-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='taa-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Cooperlake-v2'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='taa-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Denverton'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='mpx'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Denverton-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='mpx'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Denverton-v2'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Denverton-v3'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Dhyana-v2'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='EPYC-Genoa'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amd-psfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='auto-ibrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='no-nested-data-bp'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='null-sel-clr-base'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='stibp-always-on'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='EPYC-Genoa-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amd-psfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='auto-ibrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='no-nested-data-bp'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='null-sel-clr-base'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='stibp-always-on'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='EPYC-Genoa-v2'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amd-psfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='auto-ibrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fs-gs-base-ns'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='no-nested-data-bp'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='null-sel-clr-base'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='perfmon-v2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='stibp-always-on'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='EPYC-Milan'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='EPYC-Milan-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='EPYC-Milan-v2'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amd-psfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='no-nested-data-bp'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='null-sel-clr-base'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='stibp-always-on'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='EPYC-Milan-v3'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amd-psfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='no-nested-data-bp'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='null-sel-clr-base'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='stibp-always-on'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='EPYC-Rome'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='EPYC-Rome-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='EPYC-Rome-v2'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='EPYC-Rome-v3'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='EPYC-Turin'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amd-psfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='auto-ibrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vp2intersect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fs-gs-base-ns'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibpb-brtype'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdir64b'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdiri'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='no-nested-data-bp'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='null-sel-clr-base'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='perfmon-v2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='prefetchi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sbpb'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='srso-user-kernel-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='stibp-always-on'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='EPYC-Turin-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amd-psfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='auto-ibrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vp2intersect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fs-gs-base-ns'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibpb-brtype'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdir64b'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdiri'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='no-nested-data-bp'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='null-sel-clr-base'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='perfmon-v2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='prefetchi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sbpb'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='srso-user-kernel-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='stibp-always-on'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='EPYC-v3'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='EPYC-v4'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='EPYC-v5'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='GraniteRapids'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-fp16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-int8'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-tile'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-fp16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bus-lock-detect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fbsdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrc'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fzrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='mcdt-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pbrsb-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='prefetchiti'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='psdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sbdr-ssdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='serialize'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='taa-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='tsx-ldtrk'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='GraniteRapids-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-fp16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-int8'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-tile'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-fp16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bus-lock-detect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fbsdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrc'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fzrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='mcdt-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pbrsb-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='prefetchiti'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='psdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sbdr-ssdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='serialize'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='taa-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='tsx-ldtrk'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='GraniteRapids-v2'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-fp16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-int8'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-tile'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx10'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx10-128'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx10-256'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx10-512'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-fp16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bus-lock-detect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='cldemote'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fbsdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrc'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fzrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='mcdt-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdir64b'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdiri'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pbrsb-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='prefetchiti'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='psdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sbdr-ssdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='serialize'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ss'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='taa-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='tsx-ldtrk'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='GraniteRapids-v3'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-fp16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-int8'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-tile'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx10'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx10-128'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx10-256'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx10-512'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-fp16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bus-lock-detect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='cldemote'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fbsdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrc'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fzrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='mcdt-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdir64b'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdiri'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pbrsb-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='prefetchiti'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='psdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sbdr-ssdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='serialize'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ss'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='taa-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='tsx-ldtrk'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Haswell'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Haswell-IBRS'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Haswell-noTSX'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Haswell-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Haswell-v2'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Haswell-v3'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Haswell-v4'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Icelake-Server'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Icelake-Server-noTSX'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Icelake-Server-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Icelake-Server-v2'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Icelake-Server-v3'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='taa-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Icelake-Server-v4'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='taa-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Icelake-Server-v5'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='taa-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Icelake-Server-v6'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='taa-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Icelake-Server-v7'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='taa-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='IvyBridge'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='IvyBridge-IBRS'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='IvyBridge-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='IvyBridge-v2'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='KnightsMill'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-4fmaps'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-4vnniw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512er'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512pf'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ss'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='KnightsMill-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-4fmaps'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-4vnniw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512er'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512pf'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ss'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Opteron_G4'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fma4'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xop'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Opteron_G4-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fma4'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xop'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Opteron_G5'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fma4'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='tbm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xop'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Opteron_G5-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fma4'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='tbm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xop'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='SapphireRapids'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-int8'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-tile'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-fp16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bus-lock-detect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrc'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fzrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='serialize'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='taa-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='tsx-ldtrk'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='SapphireRapids-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-int8'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-tile'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-fp16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bus-lock-detect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrc'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fzrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='serialize'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='taa-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='tsx-ldtrk'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='SapphireRapids-v2'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-int8'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-tile'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-fp16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bus-lock-detect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fbsdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrc'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fzrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='psdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sbdr-ssdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='serialize'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='taa-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='tsx-ldtrk'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='SapphireRapids-v3'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-int8'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-tile'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-fp16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bus-lock-detect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='cldemote'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fbsdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrc'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fzrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdir64b'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdiri'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='psdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sbdr-ssdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='serialize'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ss'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='taa-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='tsx-ldtrk'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='SapphireRapids-v4'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-int8'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-tile'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-fp16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bus-lock-detect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='cldemote'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fbsdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrc'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fzrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdir64b'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdiri'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='psdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sbdr-ssdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='serialize'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ss'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='taa-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='tsx-ldtrk'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='SierraForest'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-ne-convert'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni-int8'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bus-lock-detect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='cmpccxadd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fbsdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='mcdt-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pbrsb-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='psdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sbdr-ssdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='serialize'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='SierraForest-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-ne-convert'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni-int8'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bus-lock-detect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='cmpccxadd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fbsdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='mcdt-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pbrsb-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='psdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sbdr-ssdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='serialize'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='SierraForest-v2'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-ne-convert'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni-int8'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bhi-ctrl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bus-lock-detect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='cldemote'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='cmpccxadd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fbsdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='intel-psfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ipred-ctrl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='lam'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='mcdt-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdir64b'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdiri'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pbrsb-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='psdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rrsba-ctrl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sbdr-ssdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='serialize'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ss'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='SierraForest-v3'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-ne-convert'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni-int8'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bhi-ctrl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bus-lock-detect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='cldemote'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='cmpccxadd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fbsdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='intel-psfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ipred-ctrl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='lam'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='mcdt-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdir64b'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdiri'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pbrsb-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='psdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rrsba-ctrl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sbdr-ssdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='serialize'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ss'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Skylake-Client'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Skylake-Client-IBRS'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Skylake-Client-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Skylake-Client-v2'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Skylake-Client-v3'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Skylake-Client-v4'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Skylake-Server'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Skylake-Server-IBRS'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Skylake-Server-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Skylake-Server-v2'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Skylake-Server-v3'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Skylake-Server-v4'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Skylake-Server-v5'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Snowridge'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='cldemote'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='core-capability'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdir64b'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdiri'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='mpx'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='split-lock-detect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Snowridge-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='cldemote'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='core-capability'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdir64b'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdiri'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='mpx'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='split-lock-detect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Snowridge-v2'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='cldemote'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='core-capability'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdir64b'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdiri'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='split-lock-detect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Snowridge-v3'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='cldemote'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='core-capability'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdir64b'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdiri'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='split-lock-detect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Snowridge-v4'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='cldemote'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdir64b'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdiri'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='athlon'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='3dnow'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='3dnowext'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='athlon-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='3dnow'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='3dnowext'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='core2duo'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ss'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='core2duo-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ss'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='coreduo'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ss'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='coreduo-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ss'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='n270'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ss'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='n270-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ss'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='phenom'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='3dnow'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='3dnowext'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='phenom-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='3dnow'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='3dnowext'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </mode>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:  </cpu>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:  <memoryBacking supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <enum name='sourceType'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <value>file</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <value>anonymous</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <value>memfd</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:  </memoryBacking>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:  <devices>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <disk supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='diskDevice'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>disk</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>cdrom</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>floppy</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>lun</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='bus'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>fdc</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>scsi</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>virtio</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>usb</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>sata</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='model'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>virtio</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>virtio-transitional</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>virtio-non-transitional</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </disk>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <graphics supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='type'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>vnc</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>egl-headless</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>dbus</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </graphics>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <video supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='modelType'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>vga</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>cirrus</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>virtio</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>none</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>bochs</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>ramfb</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </video>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <hostdev supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='mode'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>subsystem</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='startupPolicy'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>default</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>mandatory</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>requisite</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>optional</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='subsysType'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>usb</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>pci</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>scsi</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='capsType'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='pciBackend'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </hostdev>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <rng supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='model'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>virtio</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>virtio-transitional</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>virtio-non-transitional</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='backendModel'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>random</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>egd</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>builtin</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </rng>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <filesystem supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='driverType'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>path</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>handle</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>virtiofs</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </filesystem>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <tpm supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='model'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>tpm-tis</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>tpm-crb</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='backendModel'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>emulator</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>external</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='backendVersion'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>2.0</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </tpm>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <redirdev supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='bus'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>usb</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </redirdev>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <channel supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='type'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>pty</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>unix</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </channel>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <crypto supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='model'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='type'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>qemu</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='backendModel'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>builtin</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </crypto>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <interface supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='backendType'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>default</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>passt</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </interface>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <panic supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='model'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>isa</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>hyperv</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </panic>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <console supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='type'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>null</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>vc</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>pty</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>dev</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>file</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>pipe</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>stdio</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>udp</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>tcp</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>unix</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>qemu-vdagent</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>dbus</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </console>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:  </devices>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:  <features>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <gic supported='no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <vmcoreinfo supported='yes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <genid supported='yes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <backingStoreInput supported='yes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <backup supported='yes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <async-teardown supported='yes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <s390-pv supported='no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <ps2 supported='yes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <tdx supported='no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <sev supported='no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <sgx supported='no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <hyperv supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='features'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>relaxed</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>vapic</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>spinlocks</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>vpindex</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>runtime</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>synic</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>stimer</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>reset</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>vendor_id</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>frequencies</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>reenlightenment</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>tlbflush</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>ipi</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>avic</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>emsr_bitmap</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>xmm_input</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <defaults>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <spinlocks>4095</spinlocks>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <stimer_direct>on</stimer_direct>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <tlbflush_direct>on</tlbflush_direct>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <tlbflush_extended>on</tlbflush_extended>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </defaults>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </hyperv>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <launchSecurity supported='no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:  </features>
Jan 29 12:07:15 np0005601226 nova_compute[239456]: </domainCapabilities>
Jan 29 12:07:15 np0005601226 nova_compute[239456]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.432 239460 DEBUG nova.virt.libvirt.host [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Jan 29 12:07:15 np0005601226 nova_compute[239456]: <domainCapabilities>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:  <path>/usr/libexec/qemu-kvm</path>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:  <domain>kvm</domain>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:  <machine>pc-i440fx-rhel7.6.0</machine>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:  <arch>i686</arch>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:  <vcpu max='240'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:  <iothreads supported='yes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:  <os supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <enum name='firmware'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <loader supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='type'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>rom</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>pflash</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='readonly'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>yes</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>no</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='secure'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>no</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </loader>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:  </os>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:  <cpu>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <mode name='host-passthrough' supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='hostPassthroughMigratable'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>on</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>off</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </mode>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <mode name='maximum' supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='maximumMigratable'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>on</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>off</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </mode>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <mode name='host-model' supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <vendor>AMD</vendor>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='x2apic'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='tsc-deadline'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='hypervisor'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='tsc_adjust'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='spec-ctrl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='stibp'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='ssbd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='cmp_legacy'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='overflow-recov'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='succor'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='ibrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='amd-ssbd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='virt-ssbd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='lbrv'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='tsc-scale'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='vmcb-clean'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='flushbyasid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='pause-filter'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='pfthreshold'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='svme-addr-chk'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='disable' name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </mode>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <mode name='custom' supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Broadwell'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Broadwell-IBRS'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Broadwell-noTSX'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Broadwell-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Broadwell-v2'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Broadwell-v3'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Broadwell-v4'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Cascadelake-Server'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Cascadelake-Server-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Cascadelake-Server-v2'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Cascadelake-Server-v3'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Cascadelake-Server-v4'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Cascadelake-Server-v5'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='ClearwaterForest'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-ne-convert'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni-int16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni-int8'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bhi-ctrl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bhi-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bus-lock-detect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='cldemote'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='cmpccxadd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ddpd-u'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fbsdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='intel-psfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ipred-ctrl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='lam'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='mcdt-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdir64b'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdiri'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pbrsb-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='prefetchiti'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='psdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rrsba-ctrl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sbdr-ssdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='serialize'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sha512'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sm3'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sm4'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ss'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='ClearwaterForest-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-ne-convert'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni-int16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni-int8'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bhi-ctrl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bhi-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bus-lock-detect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='cldemote'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='cmpccxadd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ddpd-u'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fbsdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='intel-psfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ipred-ctrl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='lam'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='mcdt-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdir64b'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdiri'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pbrsb-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='prefetchiti'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='psdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rrsba-ctrl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sbdr-ssdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='serialize'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sha512'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sm3'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sm4'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ss'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Cooperlake'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='taa-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Cooperlake-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='taa-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Cooperlake-v2'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='taa-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Denverton'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='mpx'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Denverton-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='mpx'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Denverton-v2'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Denverton-v3'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Dhyana-v2'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='EPYC-Genoa'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amd-psfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='auto-ibrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='no-nested-data-bp'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='null-sel-clr-base'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='stibp-always-on'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='EPYC-Genoa-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amd-psfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='auto-ibrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='no-nested-data-bp'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='null-sel-clr-base'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='stibp-always-on'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='EPYC-Genoa-v2'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amd-psfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='auto-ibrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fs-gs-base-ns'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='no-nested-data-bp'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='null-sel-clr-base'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='perfmon-v2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='stibp-always-on'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='EPYC-Milan'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='EPYC-Milan-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='EPYC-Milan-v2'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amd-psfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='no-nested-data-bp'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='null-sel-clr-base'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='stibp-always-on'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='EPYC-Milan-v3'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amd-psfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='no-nested-data-bp'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='null-sel-clr-base'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='stibp-always-on'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='EPYC-Rome'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='EPYC-Rome-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='EPYC-Rome-v2'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='EPYC-Rome-v3'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='EPYC-Turin'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amd-psfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='auto-ibrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vp2intersect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fs-gs-base-ns'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibpb-brtype'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdir64b'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdiri'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='no-nested-data-bp'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='null-sel-clr-base'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='perfmon-v2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='prefetchi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sbpb'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='srso-user-kernel-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='stibp-always-on'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='EPYC-Turin-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amd-psfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='auto-ibrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vp2intersect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fs-gs-base-ns'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibpb-brtype'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdir64b'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdiri'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='no-nested-data-bp'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='null-sel-clr-base'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='perfmon-v2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='prefetchi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sbpb'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='srso-user-kernel-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='stibp-always-on'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='EPYC-v3'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='EPYC-v4'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='EPYC-v5'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='GraniteRapids'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-fp16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-int8'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-tile'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-fp16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bus-lock-detect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fbsdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrc'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fzrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='mcdt-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pbrsb-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='prefetchiti'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='psdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sbdr-ssdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='serialize'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='taa-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='tsx-ldtrk'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='GraniteRapids-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-fp16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-int8'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-tile'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-fp16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bus-lock-detect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fbsdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrc'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fzrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='mcdt-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pbrsb-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='prefetchiti'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='psdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sbdr-ssdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='serialize'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='taa-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='tsx-ldtrk'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='GraniteRapids-v2'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-fp16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-int8'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-tile'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx10'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx10-128'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx10-256'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx10-512'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-fp16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bus-lock-detect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='cldemote'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fbsdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrc'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fzrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='mcdt-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdir64b'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdiri'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pbrsb-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='prefetchiti'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='psdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sbdr-ssdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='serialize'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ss'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='taa-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='tsx-ldtrk'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='GraniteRapids-v3'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-fp16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-int8'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-tile'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx10'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx10-128'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx10-256'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx10-512'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-fp16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bus-lock-detect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='cldemote'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fbsdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrc'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fzrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='mcdt-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdir64b'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdiri'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pbrsb-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='prefetchiti'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='psdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sbdr-ssdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='serialize'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ss'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='taa-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='tsx-ldtrk'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Haswell'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Haswell-IBRS'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Haswell-noTSX'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Haswell-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Haswell-v2'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Haswell-v3'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Haswell-v4'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Icelake-Server'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Icelake-Server-noTSX'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Icelake-Server-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Icelake-Server-v2'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Icelake-Server-v3'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='taa-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Icelake-Server-v4'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='taa-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Icelake-Server-v5'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='taa-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Icelake-Server-v6'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='taa-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Icelake-Server-v7'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='taa-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='IvyBridge'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='IvyBridge-IBRS'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='IvyBridge-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='IvyBridge-v2'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='KnightsMill'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-4fmaps'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-4vnniw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512er'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512pf'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ss'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='KnightsMill-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-4fmaps'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-4vnniw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512er'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512pf'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ss'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Opteron_G4'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fma4'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xop'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Opteron_G4-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fma4'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xop'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Opteron_G5'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fma4'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='tbm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xop'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Opteron_G5-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fma4'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='tbm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xop'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='SapphireRapids'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-int8'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-tile'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-fp16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bus-lock-detect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrc'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fzrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='serialize'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='taa-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='tsx-ldtrk'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='SapphireRapids-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-int8'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-tile'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-fp16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bus-lock-detect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrc'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fzrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='serialize'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='taa-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='tsx-ldtrk'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='SapphireRapids-v2'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-int8'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-tile'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-fp16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bus-lock-detect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fbsdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrc'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fzrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='psdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sbdr-ssdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='serialize'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='taa-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='tsx-ldtrk'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='SapphireRapids-v3'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-int8'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-tile'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-fp16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bus-lock-detect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='cldemote'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fbsdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrc'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fzrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdir64b'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdiri'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='psdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sbdr-ssdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='serialize'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ss'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='taa-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='tsx-ldtrk'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='SapphireRapids-v4'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-int8'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-tile'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-fp16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bus-lock-detect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='cldemote'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fbsdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrc'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fzrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdir64b'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdiri'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='psdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sbdr-ssdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='serialize'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ss'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='taa-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='tsx-ldtrk'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='SierraForest'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-ne-convert'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni-int8'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bus-lock-detect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='cmpccxadd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fbsdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='mcdt-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pbrsb-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='psdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sbdr-ssdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='serialize'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='SierraForest-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-ne-convert'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni-int8'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bus-lock-detect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='cmpccxadd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fbsdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='mcdt-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pbrsb-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='psdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sbdr-ssdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='serialize'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='SierraForest-v2'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-ne-convert'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni-int8'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bhi-ctrl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bus-lock-detect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='cldemote'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='cmpccxadd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fbsdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='intel-psfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ipred-ctrl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='lam'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='mcdt-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdir64b'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdiri'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pbrsb-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='psdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rrsba-ctrl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sbdr-ssdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='serialize'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ss'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='SierraForest-v3'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-ne-convert'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni-int8'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bhi-ctrl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bus-lock-detect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='cldemote'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='cmpccxadd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fbsdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='intel-psfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ipred-ctrl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='lam'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='mcdt-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdir64b'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdiri'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pbrsb-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='psdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rrsba-ctrl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sbdr-ssdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='serialize'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ss'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Skylake-Client'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Skylake-Client-IBRS'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Skylake-Client-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Skylake-Client-v2'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Skylake-Client-v3'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Skylake-Client-v4'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Skylake-Server'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Skylake-Server-IBRS'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Skylake-Server-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Skylake-Server-v2'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Skylake-Server-v3'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Skylake-Server-v4'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Skylake-Server-v5'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Snowridge'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='cldemote'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='core-capability'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdir64b'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdiri'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='mpx'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='split-lock-detect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Snowridge-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='cldemote'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='core-capability'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdir64b'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdiri'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='mpx'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='split-lock-detect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Snowridge-v2'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='cldemote'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='core-capability'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdir64b'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdiri'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='split-lock-detect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Snowridge-v3'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='cldemote'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='core-capability'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdir64b'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdiri'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='split-lock-detect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Snowridge-v4'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='cldemote'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdir64b'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdiri'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='athlon'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='3dnow'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='3dnowext'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='athlon-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='3dnow'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='3dnowext'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='core2duo'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ss'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='core2duo-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ss'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='coreduo'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ss'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='coreduo-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ss'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='n270'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ss'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='n270-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ss'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='phenom'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='3dnow'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='3dnowext'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='phenom-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='3dnow'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='3dnowext'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </mode>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:  </cpu>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:  <memoryBacking supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <enum name='sourceType'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <value>file</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <value>anonymous</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <value>memfd</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:  </memoryBacking>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:  <devices>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <disk supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='diskDevice'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>disk</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>cdrom</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>floppy</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>lun</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='bus'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>ide</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>fdc</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>scsi</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>virtio</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>usb</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>sata</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='model'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>virtio</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>virtio-transitional</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>virtio-non-transitional</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </disk>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <graphics supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='type'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>vnc</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>egl-headless</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>dbus</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </graphics>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <video supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='modelType'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>vga</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>cirrus</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>virtio</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>none</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>bochs</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>ramfb</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </video>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <hostdev supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='mode'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>subsystem</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='startupPolicy'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>default</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>mandatory</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>requisite</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>optional</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='subsysType'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>usb</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>pci</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>scsi</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='capsType'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='pciBackend'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </hostdev>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <rng supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='model'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>virtio</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>virtio-transitional</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>virtio-non-transitional</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='backendModel'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>random</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>egd</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>builtin</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </rng>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <filesystem supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='driverType'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>path</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>handle</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>virtiofs</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </filesystem>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <tpm supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='model'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>tpm-tis</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>tpm-crb</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='backendModel'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>emulator</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>external</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='backendVersion'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>2.0</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </tpm>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <redirdev supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='bus'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>usb</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </redirdev>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <channel supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='type'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>pty</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>unix</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </channel>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <crypto supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='model'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='type'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>qemu</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='backendModel'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>builtin</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </crypto>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <interface supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='backendType'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>default</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>passt</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </interface>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <panic supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='model'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>isa</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>hyperv</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </panic>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <console supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='type'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>null</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>vc</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>pty</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>dev</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>file</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>pipe</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>stdio</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>udp</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>tcp</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>unix</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>qemu-vdagent</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>dbus</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </console>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:  </devices>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:  <features>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <gic supported='no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <vmcoreinfo supported='yes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <genid supported='yes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <backingStoreInput supported='yes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <backup supported='yes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <async-teardown supported='yes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <s390-pv supported='no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <ps2 supported='yes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <tdx supported='no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <sev supported='no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <sgx supported='no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <hyperv supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='features'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>relaxed</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>vapic</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>spinlocks</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>vpindex</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>runtime</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>synic</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>stimer</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>reset</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>vendor_id</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>frequencies</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>reenlightenment</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>tlbflush</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>ipi</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>avic</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>emsr_bitmap</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>xmm_input</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <defaults>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <spinlocks>4095</spinlocks>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <stimer_direct>on</stimer_direct>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <tlbflush_direct>on</tlbflush_direct>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <tlbflush_extended>on</tlbflush_extended>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </defaults>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </hyperv>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <launchSecurity supported='no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:  </features>
Jan 29 12:07:15 np0005601226 nova_compute[239456]: </domainCapabilities>
Jan 29 12:07:15 np0005601226 nova_compute[239456]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.478 239460 DEBUG nova.virt.libvirt.host [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.482 239460 DEBUG nova.virt.libvirt.host [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Jan 29 12:07:15 np0005601226 nova_compute[239456]: <domainCapabilities>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:  <path>/usr/libexec/qemu-kvm</path>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:  <domain>kvm</domain>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:  <machine>pc-q35-rhel9.8.0</machine>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:  <arch>x86_64</arch>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:  <vcpu max='4096'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:  <iothreads supported='yes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:  <os supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <enum name='firmware'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <value>efi</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <loader supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='type'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>rom</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>pflash</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='readonly'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>yes</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>no</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='secure'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>yes</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>no</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </loader>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:  </os>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:  <cpu>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <mode name='host-passthrough' supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='hostPassthroughMigratable'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>on</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>off</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </mode>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <mode name='maximum' supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='maximumMigratable'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>on</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>off</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </mode>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <mode name='host-model' supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <vendor>AMD</vendor>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='x2apic'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='tsc-deadline'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='hypervisor'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='tsc_adjust'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='spec-ctrl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='stibp'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='ssbd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='cmp_legacy'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='overflow-recov'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='succor'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='ibrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='amd-ssbd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='virt-ssbd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='lbrv'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='tsc-scale'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='vmcb-clean'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='flushbyasid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='pause-filter'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='pfthreshold'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='svme-addr-chk'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='disable' name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </mode>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <mode name='custom' supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Broadwell'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Broadwell-IBRS'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Broadwell-noTSX'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Broadwell-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Broadwell-v2'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Broadwell-v3'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Broadwell-v4'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Cascadelake-Server'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Cascadelake-Server-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Cascadelake-Server-v2'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Cascadelake-Server-v3'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Cascadelake-Server-v4'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Cascadelake-Server-v5'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='ClearwaterForest'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-ne-convert'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni-int16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni-int8'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bhi-ctrl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bhi-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bus-lock-detect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='cldemote'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='cmpccxadd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ddpd-u'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fbsdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='intel-psfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ipred-ctrl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='lam'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='mcdt-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdir64b'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdiri'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pbrsb-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='prefetchiti'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='psdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rrsba-ctrl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sbdr-ssdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='serialize'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sha512'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sm3'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sm4'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ss'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='ClearwaterForest-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-ne-convert'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni-int16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni-int8'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bhi-ctrl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bhi-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bus-lock-detect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='cldemote'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='cmpccxadd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ddpd-u'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fbsdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='intel-psfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ipred-ctrl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='lam'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='mcdt-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdir64b'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdiri'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pbrsb-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='prefetchiti'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='psdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rrsba-ctrl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sbdr-ssdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='serialize'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sha512'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sm3'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sm4'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ss'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Cooperlake'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='taa-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Cooperlake-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='taa-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Cooperlake-v2'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='taa-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Denverton'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='mpx'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Denverton-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='mpx'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Denverton-v2'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Denverton-v3'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Dhyana-v2'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='EPYC-Genoa'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amd-psfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='auto-ibrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='no-nested-data-bp'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='null-sel-clr-base'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='stibp-always-on'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='EPYC-Genoa-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amd-psfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='auto-ibrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='no-nested-data-bp'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='null-sel-clr-base'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='stibp-always-on'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='EPYC-Genoa-v2'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amd-psfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='auto-ibrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fs-gs-base-ns'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='no-nested-data-bp'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='null-sel-clr-base'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='perfmon-v2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='stibp-always-on'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='EPYC-Milan'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='EPYC-Milan-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='EPYC-Milan-v2'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amd-psfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='no-nested-data-bp'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='null-sel-clr-base'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='stibp-always-on'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='EPYC-Milan-v3'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amd-psfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='no-nested-data-bp'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='null-sel-clr-base'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='stibp-always-on'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='EPYC-Rome'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='EPYC-Rome-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='EPYC-Rome-v2'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='EPYC-Rome-v3'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='EPYC-Turin'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amd-psfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='auto-ibrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vp2intersect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fs-gs-base-ns'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibpb-brtype'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdir64b'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdiri'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='no-nested-data-bp'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='null-sel-clr-base'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='perfmon-v2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='prefetchi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sbpb'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='srso-user-kernel-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='stibp-always-on'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='EPYC-Turin-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amd-psfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='auto-ibrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vp2intersect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fs-gs-base-ns'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibpb-brtype'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdir64b'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdiri'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='no-nested-data-bp'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='null-sel-clr-base'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='perfmon-v2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='prefetchi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sbpb'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='srso-user-kernel-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='stibp-always-on'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='EPYC-v3'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='EPYC-v4'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='EPYC-v5'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='GraniteRapids'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-fp16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-int8'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-tile'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-fp16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bus-lock-detect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fbsdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrc'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fzrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='mcdt-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pbrsb-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='prefetchiti'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='psdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sbdr-ssdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='serialize'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='taa-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='tsx-ldtrk'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='GraniteRapids-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-fp16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-int8'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-tile'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-fp16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bus-lock-detect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fbsdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrc'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fzrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='mcdt-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pbrsb-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='prefetchiti'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='psdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sbdr-ssdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='serialize'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='taa-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='tsx-ldtrk'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='GraniteRapids-v2'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-fp16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-int8'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-tile'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx10'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx10-128'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx10-256'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx10-512'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-fp16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bus-lock-detect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='cldemote'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fbsdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrc'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fzrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='mcdt-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdir64b'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdiri'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pbrsb-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='prefetchiti'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='psdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sbdr-ssdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='serialize'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ss'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='taa-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='tsx-ldtrk'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='GraniteRapids-v3'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-fp16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-int8'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-tile'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx10'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx10-128'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx10-256'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx10-512'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-fp16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bus-lock-detect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='cldemote'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fbsdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrc'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fzrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='mcdt-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdir64b'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdiri'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pbrsb-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='prefetchiti'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='psdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sbdr-ssdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='serialize'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ss'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='taa-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='tsx-ldtrk'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Haswell'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Haswell-IBRS'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Haswell-noTSX'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Haswell-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Haswell-v2'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Haswell-v3'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Haswell-v4'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Icelake-Server'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Icelake-Server-noTSX'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Icelake-Server-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Icelake-Server-v2'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Icelake-Server-v3'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='taa-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Icelake-Server-v4'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='taa-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Icelake-Server-v5'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='taa-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Icelake-Server-v6'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='taa-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Icelake-Server-v7'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='taa-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='IvyBridge'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='IvyBridge-IBRS'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='IvyBridge-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='IvyBridge-v2'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='KnightsMill'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-4fmaps'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-4vnniw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512er'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512pf'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ss'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='KnightsMill-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-4fmaps'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-4vnniw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512er'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512pf'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ss'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Opteron_G4'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fma4'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xop'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Opteron_G4-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fma4'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xop'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Opteron_G5'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fma4'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='tbm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xop'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Opteron_G5-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fma4'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='tbm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xop'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='SapphireRapids'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-int8'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-tile'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-fp16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bus-lock-detect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrc'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fzrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='serialize'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='taa-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='tsx-ldtrk'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='SapphireRapids-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-int8'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-tile'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-fp16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bus-lock-detect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrc'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fzrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='serialize'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='taa-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='tsx-ldtrk'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='SapphireRapids-v2'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-int8'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-tile'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-fp16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bus-lock-detect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fbsdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrc'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fzrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='psdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sbdr-ssdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='serialize'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='taa-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='tsx-ldtrk'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='SapphireRapids-v3'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-int8'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-tile'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-fp16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bus-lock-detect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='cldemote'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fbsdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrc'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fzrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdir64b'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdiri'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='psdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sbdr-ssdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='serialize'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ss'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='taa-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='tsx-ldtrk'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='SapphireRapids-v4'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-int8'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-tile'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-fp16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bus-lock-detect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='cldemote'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fbsdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrc'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fzrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdir64b'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdiri'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='psdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sbdr-ssdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='serialize'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ss'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='taa-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='tsx-ldtrk'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='SierraForest'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-ne-convert'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni-int8'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bus-lock-detect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='cmpccxadd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fbsdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='mcdt-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pbrsb-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='psdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sbdr-ssdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='serialize'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='SierraForest-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-ne-convert'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni-int8'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bus-lock-detect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='cmpccxadd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fbsdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='mcdt-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pbrsb-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='psdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sbdr-ssdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='serialize'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='SierraForest-v2'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-ne-convert'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni-int8'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bhi-ctrl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bus-lock-detect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='cldemote'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='cmpccxadd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fbsdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='intel-psfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ipred-ctrl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='lam'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='mcdt-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdir64b'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdiri'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pbrsb-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='psdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rrsba-ctrl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sbdr-ssdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='serialize'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ss'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='SierraForest-v3'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-ne-convert'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni-int8'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bhi-ctrl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bus-lock-detect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='cldemote'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='cmpccxadd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fbsdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='intel-psfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ipred-ctrl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='lam'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='mcdt-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdir64b'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdiri'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pbrsb-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='psdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rrsba-ctrl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sbdr-ssdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='serialize'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ss'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Skylake-Client'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Skylake-Client-IBRS'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Skylake-Client-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Skylake-Client-v2'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Skylake-Client-v3'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Skylake-Client-v4'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Skylake-Server'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Skylake-Server-IBRS'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Skylake-Server-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Skylake-Server-v2'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Skylake-Server-v3'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Skylake-Server-v4'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Skylake-Server-v5'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Snowridge'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='cldemote'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='core-capability'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdir64b'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdiri'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='mpx'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='split-lock-detect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Snowridge-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='cldemote'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='core-capability'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdir64b'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdiri'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='mpx'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='split-lock-detect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Snowridge-v2'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='cldemote'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='core-capability'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdir64b'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdiri'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='split-lock-detect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Snowridge-v3'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='cldemote'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='core-capability'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdir64b'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdiri'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='split-lock-detect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Snowridge-v4'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='cldemote'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdir64b'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdiri'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='athlon'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='3dnow'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='3dnowext'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='athlon-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='3dnow'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='3dnowext'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='core2duo'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ss'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='core2duo-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ss'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='coreduo'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ss'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='coreduo-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ss'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='n270'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ss'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='n270-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ss'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='phenom'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='3dnow'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='3dnowext'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='phenom-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='3dnow'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='3dnowext'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </mode>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:  </cpu>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:  <memoryBacking supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <enum name='sourceType'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <value>file</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <value>anonymous</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <value>memfd</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:  </memoryBacking>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:  <devices>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <disk supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='diskDevice'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>disk</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>cdrom</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>floppy</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>lun</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='bus'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>fdc</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>scsi</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>virtio</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>usb</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>sata</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='model'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>virtio</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>virtio-transitional</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>virtio-non-transitional</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </disk>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <graphics supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='type'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>vnc</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>egl-headless</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>dbus</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </graphics>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <video supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='modelType'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>vga</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>cirrus</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>virtio</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>none</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>bochs</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>ramfb</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </video>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <hostdev supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='mode'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>subsystem</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='startupPolicy'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>default</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>mandatory</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>requisite</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>optional</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='subsysType'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>usb</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>pci</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>scsi</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='capsType'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='pciBackend'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </hostdev>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <rng supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='model'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>virtio</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>virtio-transitional</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>virtio-non-transitional</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='backendModel'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>random</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>egd</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>builtin</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </rng>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <filesystem supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='driverType'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>path</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>handle</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>virtiofs</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </filesystem>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <tpm supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='model'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>tpm-tis</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>tpm-crb</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='backendModel'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>emulator</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>external</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='backendVersion'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>2.0</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </tpm>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <redirdev supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='bus'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>usb</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </redirdev>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <channel supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='type'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>pty</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>unix</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </channel>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <crypto supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='model'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='type'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>qemu</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='backendModel'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>builtin</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </crypto>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <interface supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='backendType'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>default</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>passt</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </interface>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <panic supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='model'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>isa</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>hyperv</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </panic>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <console supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='type'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>null</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>vc</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>pty</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>dev</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>file</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>pipe</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>stdio</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>udp</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>tcp</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>unix</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>qemu-vdagent</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>dbus</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </console>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:  </devices>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:  <features>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <gic supported='no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <vmcoreinfo supported='yes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <genid supported='yes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <backingStoreInput supported='yes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <backup supported='yes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <async-teardown supported='yes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <s390-pv supported='no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <ps2 supported='yes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <tdx supported='no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <sev supported='no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <sgx supported='no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <hyperv supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='features'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>relaxed</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>vapic</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>spinlocks</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>vpindex</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>runtime</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>synic</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>stimer</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>reset</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>vendor_id</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>frequencies</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>reenlightenment</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>tlbflush</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>ipi</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>avic</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>emsr_bitmap</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>xmm_input</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <defaults>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <spinlocks>4095</spinlocks>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <stimer_direct>on</stimer_direct>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <tlbflush_direct>on</tlbflush_direct>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <tlbflush_extended>on</tlbflush_extended>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </defaults>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </hyperv>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <launchSecurity supported='no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:  </features>
Jan 29 12:07:15 np0005601226 nova_compute[239456]: </domainCapabilities>
Jan 29 12:07:15 np0005601226 nova_compute[239456]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.544 239460 DEBUG nova.virt.libvirt.host [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Jan 29 12:07:15 np0005601226 nova_compute[239456]: <domainCapabilities>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:  <path>/usr/libexec/qemu-kvm</path>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:  <domain>kvm</domain>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:  <machine>pc-i440fx-rhel7.6.0</machine>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:  <arch>x86_64</arch>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:  <vcpu max='240'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:  <iothreads supported='yes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:  <os supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <enum name='firmware'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <loader supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='type'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>rom</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>pflash</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='readonly'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>yes</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>no</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='secure'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>no</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </loader>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:  </os>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:  <cpu>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <mode name='host-passthrough' supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='hostPassthroughMigratable'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>on</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>off</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </mode>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <mode name='maximum' supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='maximumMigratable'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>on</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>off</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </mode>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <mode name='host-model' supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <vendor>AMD</vendor>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='x2apic'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='tsc-deadline'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='hypervisor'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='tsc_adjust'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='spec-ctrl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='stibp'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='ssbd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='cmp_legacy'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='overflow-recov'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='succor'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='ibrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='amd-ssbd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='virt-ssbd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='lbrv'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='tsc-scale'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='vmcb-clean'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='flushbyasid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='pause-filter'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='pfthreshold'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='svme-addr-chk'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <feature policy='disable' name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </mode>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <mode name='custom' supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Broadwell'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Broadwell-IBRS'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Broadwell-noTSX'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Broadwell-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Broadwell-v2'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Broadwell-v3'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Broadwell-v4'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Cascadelake-Server'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Cascadelake-Server-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Cascadelake-Server-v2'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Cascadelake-Server-v3'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Cascadelake-Server-v4'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Cascadelake-Server-v5'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='ClearwaterForest'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-ne-convert'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni-int16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni-int8'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bhi-ctrl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bhi-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bus-lock-detect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='cldemote'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='cmpccxadd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ddpd-u'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fbsdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='intel-psfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ipred-ctrl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='lam'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='mcdt-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdir64b'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdiri'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pbrsb-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='prefetchiti'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='psdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rrsba-ctrl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sbdr-ssdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='serialize'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sha512'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sm3'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sm4'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ss'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='ClearwaterForest-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-ne-convert'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni-int16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni-int8'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bhi-ctrl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bhi-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bus-lock-detect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='cldemote'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='cmpccxadd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ddpd-u'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fbsdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='intel-psfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ipred-ctrl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='lam'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='mcdt-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdir64b'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdiri'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pbrsb-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='prefetchiti'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='psdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rrsba-ctrl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sbdr-ssdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='serialize'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sha512'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sm3'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sm4'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ss'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Cooperlake'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='taa-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Cooperlake-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='taa-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Cooperlake-v2'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='taa-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Denverton'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='mpx'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Denverton-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='mpx'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Denverton-v2'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Denverton-v3'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Dhyana-v2'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='EPYC-Genoa'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amd-psfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='auto-ibrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='no-nested-data-bp'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='null-sel-clr-base'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='stibp-always-on'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='EPYC-Genoa-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amd-psfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='auto-ibrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='no-nested-data-bp'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='null-sel-clr-base'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='stibp-always-on'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='EPYC-Genoa-v2'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amd-psfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='auto-ibrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fs-gs-base-ns'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='no-nested-data-bp'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='null-sel-clr-base'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='perfmon-v2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='stibp-always-on'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='EPYC-Milan'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='EPYC-Milan-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='EPYC-Milan-v2'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amd-psfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='no-nested-data-bp'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='null-sel-clr-base'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='stibp-always-on'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='EPYC-Milan-v3'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amd-psfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='no-nested-data-bp'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='null-sel-clr-base'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='stibp-always-on'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='EPYC-Rome'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='EPYC-Rome-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='EPYC-Rome-v2'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='EPYC-Rome-v3'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='EPYC-Turin'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amd-psfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='auto-ibrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vp2intersect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fs-gs-base-ns'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibpb-brtype'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdir64b'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdiri'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='no-nested-data-bp'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='null-sel-clr-base'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='perfmon-v2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='prefetchi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sbpb'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='srso-user-kernel-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='stibp-always-on'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='EPYC-Turin-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amd-psfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='auto-ibrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vp2intersect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fs-gs-base-ns'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibpb-brtype'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdir64b'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdiri'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='no-nested-data-bp'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='null-sel-clr-base'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='perfmon-v2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='prefetchi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sbpb'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='srso-user-kernel-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='stibp-always-on'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='EPYC-v3'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='EPYC-v4'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='EPYC-v5'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='GraniteRapids'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-fp16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-int8'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-tile'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-fp16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bus-lock-detect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fbsdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrc'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fzrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='mcdt-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pbrsb-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='prefetchiti'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='psdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sbdr-ssdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='serialize'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='taa-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='tsx-ldtrk'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='GraniteRapids-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-fp16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-int8'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-tile'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-fp16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bus-lock-detect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fbsdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrc'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fzrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='mcdt-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pbrsb-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='prefetchiti'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='psdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sbdr-ssdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='serialize'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='taa-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='tsx-ldtrk'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='GraniteRapids-v2'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-fp16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-int8'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-tile'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx10'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx10-128'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx10-256'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx10-512'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-fp16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bus-lock-detect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='cldemote'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fbsdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrc'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fzrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='mcdt-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdir64b'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdiri'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pbrsb-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='prefetchiti'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='psdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sbdr-ssdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='serialize'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ss'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='taa-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='tsx-ldtrk'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='GraniteRapids-v3'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-fp16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-int8'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-tile'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx10'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx10-128'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx10-256'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx10-512'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-fp16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bus-lock-detect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='cldemote'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fbsdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrc'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fzrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='mcdt-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdir64b'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdiri'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pbrsb-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='prefetchiti'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='psdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sbdr-ssdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='serialize'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ss'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='taa-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='tsx-ldtrk'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Haswell'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Haswell-IBRS'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Haswell-noTSX'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Haswell-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Haswell-v2'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Haswell-v3'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Haswell-v4'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Icelake-Server'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Icelake-Server-noTSX'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Icelake-Server-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Icelake-Server-v2'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Icelake-Server-v3'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='taa-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Icelake-Server-v4'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='taa-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Icelake-Server-v5'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='taa-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Icelake-Server-v6'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='taa-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Icelake-Server-v7'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='taa-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='IvyBridge'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='IvyBridge-IBRS'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='IvyBridge-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='IvyBridge-v2'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='KnightsMill'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-4fmaps'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-4vnniw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512er'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512pf'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ss'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='KnightsMill-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-4fmaps'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-4vnniw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512er'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512pf'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ss'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Opteron_G4'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fma4'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xop'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Opteron_G4-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fma4'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xop'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Opteron_G5'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fma4'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='tbm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xop'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Opteron_G5-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fma4'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='tbm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xop'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='SapphireRapids'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-int8'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-tile'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-fp16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bus-lock-detect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrc'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fzrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='serialize'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='taa-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='tsx-ldtrk'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='SapphireRapids-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-int8'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-tile'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-fp16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bus-lock-detect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrc'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fzrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='serialize'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='taa-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='tsx-ldtrk'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='SapphireRapids-v2'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-int8'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-tile'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-fp16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bus-lock-detect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fbsdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrc'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fzrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='psdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sbdr-ssdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='serialize'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='taa-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='tsx-ldtrk'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='SapphireRapids-v3'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-int8'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-tile'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-fp16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bus-lock-detect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='cldemote'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fbsdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrc'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fzrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdir64b'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdiri'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='psdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sbdr-ssdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='serialize'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ss'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='taa-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='tsx-ldtrk'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='SapphireRapids-v4'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-int8'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='amx-tile'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-bf16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-fp16'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512-vpopcntdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bitalg'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vbmi2'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bus-lock-detect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='cldemote'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fbsdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrc'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fzrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='la57'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdir64b'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdiri'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='psdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sbdr-ssdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='serialize'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ss'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='taa-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='tsx-ldtrk'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='SierraForest'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-ne-convert'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni-int8'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bus-lock-detect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='cmpccxadd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fbsdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='mcdt-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pbrsb-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='psdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sbdr-ssdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='serialize'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='SierraForest-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-ne-convert'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni-int8'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bus-lock-detect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='cmpccxadd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fbsdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='mcdt-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pbrsb-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='psdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sbdr-ssdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='serialize'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='SierraForest-v2'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-ne-convert'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni-int8'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bhi-ctrl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bus-lock-detect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='cldemote'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='cmpccxadd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fbsdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='intel-psfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ipred-ctrl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='lam'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='mcdt-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdir64b'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdiri'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pbrsb-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='psdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rrsba-ctrl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sbdr-ssdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='serialize'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ss'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='SierraForest-v3'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-ifma'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-ne-convert'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx-vnni-int8'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bhi-ctrl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='bus-lock-detect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='cldemote'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='cmpccxadd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fbsdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='fsrs'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ibrs-all'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='intel-psfd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ipred-ctrl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='lam'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='mcdt-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdir64b'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdiri'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pbrsb-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='psdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rrsba-ctrl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='sbdr-ssdp-no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='serialize'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ss'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vaes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='vpclmulqdq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Skylake-Client'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Skylake-Client-IBRS'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Skylake-Client-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Skylake-Client-v2'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Skylake-Client-v3'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Skylake-Client-v4'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Skylake-Server'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Skylake-Server-IBRS'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Skylake-Server-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Skylake-Server-v2'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='hle'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='rtm'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Skylake-Server-v3'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Skylake-Server-v4'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Skylake-Server-v5'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512bw'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512cd'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512dq'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512f'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='avx512vl'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='invpcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pcid'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='pku'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Snowridge'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='cldemote'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='core-capability'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdir64b'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdiri'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='mpx'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='split-lock-detect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Snowridge-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='cldemote'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='core-capability'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdir64b'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdiri'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='mpx'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='split-lock-detect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Snowridge-v2'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='cldemote'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='core-capability'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdir64b'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdiri'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='split-lock-detect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Snowridge-v3'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='cldemote'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='core-capability'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdir64b'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdiri'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='split-lock-detect'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='Snowridge-v4'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='cldemote'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='erms'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='gfni'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdir64b'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='movdiri'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='xsaves'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='athlon'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='3dnow'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='3dnowext'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='athlon-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='3dnow'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='3dnowext'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='core2duo'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ss'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='core2duo-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ss'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='coreduo'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ss'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='coreduo-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ss'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='n270'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ss'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='n270-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='ss'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='phenom'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='3dnow'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='3dnowext'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <blockers model='phenom-v1'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='3dnow'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <feature name='3dnowext'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </blockers>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </mode>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:  </cpu>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:  <memoryBacking supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <enum name='sourceType'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <value>file</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <value>anonymous</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <value>memfd</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:  </memoryBacking>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:  <devices>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <disk supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='diskDevice'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>disk</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>cdrom</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>floppy</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>lun</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='bus'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>ide</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>fdc</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>scsi</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>virtio</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>usb</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>sata</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='model'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>virtio</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>virtio-transitional</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>virtio-non-transitional</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </disk>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <graphics supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='type'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>vnc</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>egl-headless</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>dbus</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </graphics>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <video supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='modelType'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>vga</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>cirrus</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>virtio</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>none</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>bochs</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>ramfb</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </video>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <hostdev supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='mode'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>subsystem</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='startupPolicy'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>default</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>mandatory</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>requisite</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>optional</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='subsysType'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>usb</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>pci</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>scsi</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='capsType'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='pciBackend'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </hostdev>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <rng supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='model'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>virtio</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>virtio-transitional</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>virtio-non-transitional</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='backendModel'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>random</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>egd</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>builtin</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </rng>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <filesystem supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='driverType'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>path</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>handle</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>virtiofs</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </filesystem>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <tpm supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='model'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>tpm-tis</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>tpm-crb</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='backendModel'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>emulator</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>external</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='backendVersion'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>2.0</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </tpm>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <redirdev supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='bus'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>usb</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </redirdev>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <channel supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='type'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>pty</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>unix</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </channel>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <crypto supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='model'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='type'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>qemu</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='backendModel'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>builtin</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </crypto>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <interface supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='backendType'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>default</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>passt</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </interface>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <panic supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='model'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>isa</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>hyperv</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </panic>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <console supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='type'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>null</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>vc</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>pty</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>dev</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>file</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>pipe</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>stdio</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>udp</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>tcp</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>unix</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>qemu-vdagent</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>dbus</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </console>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:  </devices>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:  <features>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <gic supported='no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <vmcoreinfo supported='yes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <genid supported='yes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <backingStoreInput supported='yes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <backup supported='yes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <async-teardown supported='yes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <s390-pv supported='no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <ps2 supported='yes'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <tdx supported='no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <sev supported='no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <sgx supported='no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <hyperv supported='yes'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <enum name='features'>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>relaxed</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>vapic</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>spinlocks</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>vpindex</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>runtime</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>synic</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>stimer</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>reset</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>vendor_id</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>frequencies</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>reenlightenment</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>tlbflush</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>ipi</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>avic</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>emsr_bitmap</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <value>xmm_input</value>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </enum>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      <defaults>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <spinlocks>4095</spinlocks>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <stimer_direct>on</stimer_direct>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <tlbflush_direct>on</tlbflush_direct>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <tlbflush_extended>on</tlbflush_extended>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:      </defaults>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    </hyperv>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:    <launchSecurity supported='no'/>
Jan 29 12:07:15 np0005601226 nova_compute[239456]:  </features>
Jan 29 12:07:15 np0005601226 nova_compute[239456]: </domainCapabilities>
Jan 29 12:07:15 np0005601226 nova_compute[239456]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.606 239460 DEBUG nova.virt.libvirt.host [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.606 239460 INFO nova.virt.libvirt.host [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Secure Boot support detected#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.609 239460 INFO nova.virt.libvirt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.609 239460 INFO nova.virt.libvirt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.618 239460 DEBUG nova.virt.libvirt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.666 239460 INFO nova.virt.node [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Determined node identity 79259295-532c-4a51-8f50-027529735b0c from /var/lib/nova/compute_id#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.690 239460 WARNING nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Compute nodes ['79259295-532c-4a51-8f50-027529735b0c'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.735 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.776 239460 WARNING nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.776 239460 DEBUG oslo_concurrency.lockutils [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.777 239460 DEBUG oslo_concurrency.lockutils [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.777 239460 DEBUG oslo_concurrency.lockutils [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.777 239460 DEBUG nova.compute.resource_tracker [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 29 12:07:15 np0005601226 nova_compute[239456]: 2026-01-29 17:07:15.778 239460 DEBUG oslo_concurrency.processutils [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:07:16 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:07:16 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2004337712' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:07:16 np0005601226 nova_compute[239456]: 2026-01-29 17:07:16.301 239460 DEBUG oslo_concurrency.processutils [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:07:16 np0005601226 systemd[1]: Starting libvirt nodedev daemon...
Jan 29 12:07:16 np0005601226 systemd[1]: Started libvirt nodedev daemon.
Jan 29 12:07:16 np0005601226 nova_compute[239456]: 2026-01-29 17:07:16.564 239460 WARNING nova.virt.libvirt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 29 12:07:16 np0005601226 nova_compute[239456]: 2026-01-29 17:07:16.566 239460 DEBUG nova.compute.resource_tracker [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5087MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 29 12:07:16 np0005601226 nova_compute[239456]: 2026-01-29 17:07:16.566 239460 DEBUG oslo_concurrency.lockutils [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:07:16 np0005601226 nova_compute[239456]: 2026-01-29 17:07:16.566 239460 DEBUG oslo_concurrency.lockutils [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:07:16 np0005601226 nova_compute[239456]: 2026-01-29 17:07:16.584 239460 WARNING nova.compute.resource_tracker [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] No compute node record for compute-0.ctlplane.example.com:79259295-532c-4a51-8f50-027529735b0c: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 79259295-532c-4a51-8f50-027529735b0c could not be found.#033[00m
Jan 29 12:07:16 np0005601226 nova_compute[239456]: 2026-01-29 17:07:16.601 239460 INFO nova.compute.resource_tracker [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 79259295-532c-4a51-8f50-027529735b0c#033[00m
Jan 29 12:07:16 np0005601226 nova_compute[239456]: 2026-01-29 17:07:16.661 239460 DEBUG nova.compute.resource_tracker [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 29 12:07:16 np0005601226 nova_compute[239456]: 2026-01-29 17:07:16.661 239460 DEBUG nova.compute.resource_tracker [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 29 12:07:16 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v650: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:07:17 np0005601226 nova_compute[239456]: 2026-01-29 17:07:17.540 239460 INFO nova.scheduler.client.report [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [req-47907ada-183e-4cd2-82d8-f9798d2e5a71] Created resource provider record via placement API for resource provider with UUID 79259295-532c-4a51-8f50-027529735b0c and name compute-0.ctlplane.example.com.#033[00m
Jan 29 12:07:17 np0005601226 nova_compute[239456]: 2026-01-29 17:07:17.939 239460 DEBUG oslo_concurrency.processutils [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:07:18 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:07:18 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/27391312' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:07:18 np0005601226 nova_compute[239456]: 2026-01-29 17:07:18.496 239460 DEBUG oslo_concurrency.processutils [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:07:18 np0005601226 nova_compute[239456]: 2026-01-29 17:07:18.502 239460 DEBUG nova.virt.libvirt.host [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Jan 29 12:07:18 np0005601226 nova_compute[239456]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m
Jan 29 12:07:18 np0005601226 nova_compute[239456]: 2026-01-29 17:07:18.503 239460 INFO nova.virt.libvirt.host [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] kernel doesn't support AMD SEV#033[00m
Jan 29 12:07:18 np0005601226 nova_compute[239456]: 2026-01-29 17:07:18.504 239460 DEBUG nova.compute.provider_tree [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Updating inventory in ProviderTree for provider 79259295-532c-4a51-8f50-027529735b0c with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 29 12:07:18 np0005601226 nova_compute[239456]: 2026-01-29 17:07:18.504 239460 DEBUG nova.virt.libvirt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 29 12:07:18 np0005601226 nova_compute[239456]: 2026-01-29 17:07:18.565 239460 DEBUG nova.scheduler.client.report [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Updated inventory for provider 79259295-532c-4a51-8f50-027529735b0c with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Jan 29 12:07:18 np0005601226 nova_compute[239456]: 2026-01-29 17:07:18.566 239460 DEBUG nova.compute.provider_tree [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Updating resource provider 79259295-532c-4a51-8f50-027529735b0c generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Jan 29 12:07:18 np0005601226 nova_compute[239456]: 2026-01-29 17:07:18.566 239460 DEBUG nova.compute.provider_tree [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Updating inventory in ProviderTree for provider 79259295-532c-4a51-8f50-027529735b0c with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 29 12:07:18 np0005601226 nova_compute[239456]: 2026-01-29 17:07:18.672 239460 DEBUG nova.compute.provider_tree [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Updating resource provider 79259295-532c-4a51-8f50-027529735b0c generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Jan 29 12:07:18 np0005601226 nova_compute[239456]: 2026-01-29 17:07:18.704 239460 DEBUG nova.compute.resource_tracker [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 29 12:07:18 np0005601226 nova_compute[239456]: 2026-01-29 17:07:18.704 239460 DEBUG oslo_concurrency.lockutils [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.138s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:07:18 np0005601226 nova_compute[239456]: 2026-01-29 17:07:18.705 239460 DEBUG nova.service [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m
Jan 29 12:07:18 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v651: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:07:18 np0005601226 nova_compute[239456]: 2026-01-29 17:07:18.822 239460 DEBUG nova.service [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m
Jan 29 12:07:18 np0005601226 nova_compute[239456]: 2026-01-29 17:07:18.823 239460 DEBUG nova.servicegroup.drivers.db [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m
Jan 29 12:07:18 np0005601226 podman[239827]: 2026-01-29 17:07:18.898957961 +0000 UTC m=+0.063711399 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 29 12:07:18 np0005601226 podman[239828]: 2026-01-29 17:07:18.923938988 +0000 UTC m=+0.087499513 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, 
org.label-schema.license=GPLv2, container_name=ovn_controller)
Jan 29 12:07:20 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:07:20 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v652: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:07:22 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v653: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:07:24 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v654: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:07:25 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:07:26 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v655: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:07:28 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v656: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:07:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:07:30 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v657: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:07:32 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v658: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:07:34 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v659: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:07:35 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:07:36 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v660: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:07:38 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v661: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:07:39 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:07:39 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3644092794' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:07:39 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:07:39 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3644092794' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:07:39 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:07:39 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1604622388' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:07:39 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:07:39 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1604622388' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:07:40 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:07:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:07:40.269 155625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:07:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:07:40.270 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:07:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:07:40.270 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:07:40 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:07:40 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3111830066' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:07:40 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:07:40 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3111830066' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:07:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2026-01-29_17:07:40
Jan 29 12:07:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 12:07:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] do_upmap
Jan 29 12:07:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] pools ['default.rgw.meta', 'backups', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.control', '.mgr', 'volumes', 'images', 'cephfs.cephfs.data', 'vms']
Jan 29 12:07:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 12:07:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:07:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:07:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:07:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:07:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:07:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:07:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 12:07:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 12:07:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:07:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:07:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:07:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:07:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:07:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:07:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:07:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:07:40 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v662: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:07:42 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v663: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:07:44 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v664: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:07:45 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:07:46 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v665: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:07:48 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v666: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:07:49 np0005601226 podman[239869]: 2026-01-29 17:07:49.869986719 +0000 UTC m=+0.045160089 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 29 12:07:49 np0005601226 podman[239870]: 2026-01-29 17:07:49.886961504 +0000 UTC m=+0.062374550 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, 
io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 29 12:07:50 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:07:50 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v667: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:07:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 12:07:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:07:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 12:07:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:07:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:07:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:07:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:07:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:07:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:07:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:07:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:07:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:07:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.8121643970586627e-06 of space, bias 4.0, pg target 0.002174597276470395 quantized to 16 (current 16)
Jan 29 12:07:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:07:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:07:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:07:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 29 12:07:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:07:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 29 12:07:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:07:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:07:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:07:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 29 12:07:52 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v668: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:07:54 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v669: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:07:55 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:07:56 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v670: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:07:58 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v671: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:08:00 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:08:00 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v672: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:08:02 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v673: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:08:04 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v674: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:08:05 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:08:05 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 12:08:05 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 12:08:05 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 12:08:05 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 12:08:05 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 12:08:05 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:08:05 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 12:08:05 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 12:08:05 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 12:08:05 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 12:08:05 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 12:08:05 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 12:08:05 np0005601226 podman[240061]: 2026-01-29 17:08:05.882484141 +0000 UTC m=+0.096131785 container create b736f1d6e772c10a82b19e346e00a05d5aa8bef271f801c0c0d2abd80eda3f94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:08:05 np0005601226 podman[240061]: 2026-01-29 17:08:05.805514903 +0000 UTC m=+0.019162577 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:08:05 np0005601226 systemd[1]: Started libpod-conmon-b736f1d6e772c10a82b19e346e00a05d5aa8bef271f801c0c0d2abd80eda3f94.scope.
Jan 29 12:08:06 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:08:06 np0005601226 podman[240061]: 2026-01-29 17:08:06.052802378 +0000 UTC m=+0.266450042 container init b736f1d6e772c10a82b19e346e00a05d5aa8bef271f801c0c0d2abd80eda3f94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:08:06 np0005601226 podman[240061]: 2026-01-29 17:08:06.059352186 +0000 UTC m=+0.272999830 container start b736f1d6e772c10a82b19e346e00a05d5aa8bef271f801c0c0d2abd80eda3f94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_bohr, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 29 12:08:06 np0005601226 festive_bohr[240077]: 167 167
Jan 29 12:08:06 np0005601226 systemd[1]: libpod-b736f1d6e772c10a82b19e346e00a05d5aa8bef271f801c0c0d2abd80eda3f94.scope: Deactivated successfully.
Jan 29 12:08:06 np0005601226 podman[240061]: 2026-01-29 17:08:06.082742088 +0000 UTC m=+0.296389762 container attach b736f1d6e772c10a82b19e346e00a05d5aa8bef271f801c0c0d2abd80eda3f94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_bohr, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:08:06 np0005601226 podman[240061]: 2026-01-29 17:08:06.083639752 +0000 UTC m=+0.297287426 container died b736f1d6e772c10a82b19e346e00a05d5aa8bef271f801c0c0d2abd80eda3f94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:08:06 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 12:08:06 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:08:06 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 12:08:06 np0005601226 systemd[1]: var-lib-containers-storage-overlay-2629a3a277e2dd9d5723730d183694093b1cb83e87ec4e2399252d5ffbcaf1b6-merged.mount: Deactivated successfully.
Jan 29 12:08:06 np0005601226 podman[240061]: 2026-01-29 17:08:06.341658661 +0000 UTC m=+0.555306305 container remove b736f1d6e772c10a82b19e346e00a05d5aa8bef271f801c0c0d2abd80eda3f94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 29 12:08:06 np0005601226 systemd[1]: libpod-conmon-b736f1d6e772c10a82b19e346e00a05d5aa8bef271f801c0c0d2abd80eda3f94.scope: Deactivated successfully.
Jan 29 12:08:06 np0005601226 podman[240102]: 2026-01-29 17:08:06.455798008 +0000 UTC m=+0.022143478 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:08:06 np0005601226 podman[240102]: 2026-01-29 17:08:06.565574805 +0000 UTC m=+0.131920255 container create e97b28ea13e8e7ae24db76b45f3e3e13f84f0e8eee42c09cf8b7baf954d132a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_kirch, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 29 12:08:06 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v675: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:08:06 np0005601226 systemd[1]: Started libpod-conmon-e97b28ea13e8e7ae24db76b45f3e3e13f84f0e8eee42c09cf8b7baf954d132a8.scope.
Jan 29 12:08:06 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:08:06 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88485831e6d875dd24b2720f205327b39a62b2cf4102ac1fd01fb874907a1127/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:08:06 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88485831e6d875dd24b2720f205327b39a62b2cf4102ac1fd01fb874907a1127/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:08:06 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88485831e6d875dd24b2720f205327b39a62b2cf4102ac1fd01fb874907a1127/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:08:06 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88485831e6d875dd24b2720f205327b39a62b2cf4102ac1fd01fb874907a1127/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:08:06 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88485831e6d875dd24b2720f205327b39a62b2cf4102ac1fd01fb874907a1127/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 12:08:06 np0005601226 podman[240102]: 2026-01-29 17:08:06.862872989 +0000 UTC m=+0.429218469 container init e97b28ea13e8e7ae24db76b45f3e3e13f84f0e8eee42c09cf8b7baf954d132a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_kirch, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 29 12:08:06 np0005601226 podman[240102]: 2026-01-29 17:08:06.867845676 +0000 UTC m=+0.434191126 container start e97b28ea13e8e7ae24db76b45f3e3e13f84f0e8eee42c09cf8b7baf954d132a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_kirch, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Jan 29 12:08:06 np0005601226 podman[240102]: 2026-01-29 17:08:06.912597942 +0000 UTC m=+0.478943422 container attach e97b28ea13e8e7ae24db76b45f3e3e13f84f0e8eee42c09cf8b7baf954d132a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:08:07 np0005601226 dazzling_kirch[240119]: --> passed data devices: 0 physical, 3 LVM
Jan 29 12:08:07 np0005601226 dazzling_kirch[240119]: --> All data devices are unavailable
Jan 29 12:08:07 np0005601226 systemd[1]: libpod-e97b28ea13e8e7ae24db76b45f3e3e13f84f0e8eee42c09cf8b7baf954d132a8.scope: Deactivated successfully.
Jan 29 12:08:07 np0005601226 podman[240139]: 2026-01-29 17:08:07.281035116 +0000 UTC m=+0.021424078 container died e97b28ea13e8e7ae24db76b45f3e3e13f84f0e8eee42c09cf8b7baf954d132a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:08:07 np0005601226 systemd[1]: var-lib-containers-storage-overlay-88485831e6d875dd24b2720f205327b39a62b2cf4102ac1fd01fb874907a1127-merged.mount: Deactivated successfully.
Jan 29 12:08:07 np0005601226 podman[240139]: 2026-01-29 17:08:07.470528257 +0000 UTC m=+0.210917199 container remove e97b28ea13e8e7ae24db76b45f3e3e13f84f0e8eee42c09cf8b7baf954d132a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_kirch, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:08:07 np0005601226 systemd[1]: libpod-conmon-e97b28ea13e8e7ae24db76b45f3e3e13f84f0e8eee42c09cf8b7baf954d132a8.scope: Deactivated successfully.
Jan 29 12:08:07 np0005601226 podman[240216]: 2026-01-29 17:08:07.84405765 +0000 UTC m=+0.017884421 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:08:07 np0005601226 podman[240216]: 2026-01-29 17:08:07.941070347 +0000 UTC m=+0.114897088 container create 1705f0442eb777ca2a0ee3837b0bc6d4fbe58c9496c3db78440625588eb32493 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_tu, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:08:08 np0005601226 systemd[1]: Started libpod-conmon-1705f0442eb777ca2a0ee3837b0bc6d4fbe58c9496c3db78440625588eb32493.scope.
Jan 29 12:08:08 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:08:08 np0005601226 podman[240216]: 2026-01-29 17:08:08.079936832 +0000 UTC m=+0.253763593 container init 1705f0442eb777ca2a0ee3837b0bc6d4fbe58c9496c3db78440625588eb32493 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 29 12:08:08 np0005601226 podman[240216]: 2026-01-29 17:08:08.085533595 +0000 UTC m=+0.259360336 container start 1705f0442eb777ca2a0ee3837b0bc6d4fbe58c9496c3db78440625588eb32493 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_tu, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 29 12:08:08 np0005601226 loving_tu[240231]: 167 167
Jan 29 12:08:08 np0005601226 systemd[1]: libpod-1705f0442eb777ca2a0ee3837b0bc6d4fbe58c9496c3db78440625588eb32493.scope: Deactivated successfully.
Jan 29 12:08:08 np0005601226 podman[240216]: 2026-01-29 17:08:08.094372598 +0000 UTC m=+0.268199339 container attach 1705f0442eb777ca2a0ee3837b0bc6d4fbe58c9496c3db78440625588eb32493 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_tu, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:08:08 np0005601226 podman[240216]: 2026-01-29 17:08:08.094712207 +0000 UTC m=+0.268538948 container died 1705f0442eb777ca2a0ee3837b0bc6d4fbe58c9496c3db78440625588eb32493 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_tu, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:08:08 np0005601226 systemd[1]: var-lib-containers-storage-overlay-2b9e1d156fa7de6cd4577bef1982bb5de93c539be9532e8dd31c9b73a16a8287-merged.mount: Deactivated successfully.
Jan 29 12:08:08 np0005601226 podman[240216]: 2026-01-29 17:08:08.347737199 +0000 UTC m=+0.521563930 container remove 1705f0442eb777ca2a0ee3837b0bc6d4fbe58c9496c3db78440625588eb32493 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_tu, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 29 12:08:08 np0005601226 systemd[1]: libpod-conmon-1705f0442eb777ca2a0ee3837b0bc6d4fbe58c9496c3db78440625588eb32493.scope: Deactivated successfully.
Jan 29 12:08:08 np0005601226 podman[240257]: 2026-01-29 17:08:08.497931263 +0000 UTC m=+0.073073103 container create edf4d92c1f4ff82dde121eb590568ae9e796948e8d4766c21db81c1b9a1f8f56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_bell, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle)
Jan 29 12:08:08 np0005601226 podman[240257]: 2026-01-29 17:08:08.443464871 +0000 UTC m=+0.018606731 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:08:08 np0005601226 systemd[1]: Started libpod-conmon-edf4d92c1f4ff82dde121eb590568ae9e796948e8d4766c21db81c1b9a1f8f56.scope.
Jan 29 12:08:08 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:08:08 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf2994044a208cef02035f665fa2dc677b9fddc77cd6f5b7954e8a446320d3c1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:08:08 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf2994044a208cef02035f665fa2dc677b9fddc77cd6f5b7954e8a446320d3c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:08:08 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf2994044a208cef02035f665fa2dc677b9fddc77cd6f5b7954e8a446320d3c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:08:08 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf2994044a208cef02035f665fa2dc677b9fddc77cd6f5b7954e8a446320d3c1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:08:08 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v676: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:08:08 np0005601226 podman[240257]: 2026-01-29 17:08:08.760695852 +0000 UTC m=+0.335837712 container init edf4d92c1f4ff82dde121eb590568ae9e796948e8d4766c21db81c1b9a1f8f56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_bell, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:08:08 np0005601226 podman[240257]: 2026-01-29 17:08:08.766004057 +0000 UTC m=+0.341145897 container start edf4d92c1f4ff82dde121eb590568ae9e796948e8d4766c21db81c1b9a1f8f56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_bell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 29 12:08:08 np0005601226 podman[240257]: 2026-01-29 17:08:08.892599965 +0000 UTC m=+0.467741805 container attach edf4d92c1f4ff82dde121eb590568ae9e796948e8d4766c21db81c1b9a1f8f56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_bell, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 29 12:08:09 np0005601226 clever_bell[240273]: {
Jan 29 12:08:09 np0005601226 clever_bell[240273]:    "0": [
Jan 29 12:08:09 np0005601226 clever_bell[240273]:        {
Jan 29 12:08:09 np0005601226 clever_bell[240273]:            "devices": [
Jan 29 12:08:09 np0005601226 clever_bell[240273]:                "/dev/loop3"
Jan 29 12:08:09 np0005601226 clever_bell[240273]:            ],
Jan 29 12:08:09 np0005601226 clever_bell[240273]:            "lv_name": "ceph_lv0",
Jan 29 12:08:09 np0005601226 clever_bell[240273]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:08:09 np0005601226 clever_bell[240273]:            "lv_size": "21470642176",
Jan 29 12:08:09 np0005601226 clever_bell[240273]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:08:09 np0005601226 clever_bell[240273]:            "lv_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 12:08:09 np0005601226 clever_bell[240273]:            "name": "ceph_lv0",
Jan 29 12:08:09 np0005601226 clever_bell[240273]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:08:09 np0005601226 clever_bell[240273]:            "tags": {
Jan 29 12:08:09 np0005601226 clever_bell[240273]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:08:09 np0005601226 clever_bell[240273]:                "ceph.block_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 12:08:09 np0005601226 clever_bell[240273]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:08:09 np0005601226 clever_bell[240273]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:08:09 np0005601226 clever_bell[240273]:                "ceph.cluster_name": "ceph",
Jan 29 12:08:09 np0005601226 clever_bell[240273]:                "ceph.crush_device_class": "",
Jan 29 12:08:09 np0005601226 clever_bell[240273]:                "ceph.encrypted": "0",
Jan 29 12:08:09 np0005601226 clever_bell[240273]:                "ceph.objectstore": "bluestore",
Jan 29 12:08:09 np0005601226 clever_bell[240273]:                "ceph.osd_fsid": "0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1",
Jan 29 12:08:09 np0005601226 clever_bell[240273]:                "ceph.osd_id": "0",
Jan 29 12:08:09 np0005601226 clever_bell[240273]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:08:09 np0005601226 clever_bell[240273]:                "ceph.type": "block",
Jan 29 12:08:09 np0005601226 clever_bell[240273]:                "ceph.vdo": "0",
Jan 29 12:08:09 np0005601226 clever_bell[240273]:                "ceph.with_tpm": "0"
Jan 29 12:08:09 np0005601226 clever_bell[240273]:            },
Jan 29 12:08:09 np0005601226 clever_bell[240273]:            "type": "block",
Jan 29 12:08:09 np0005601226 clever_bell[240273]:            "vg_name": "ceph_vg0"
Jan 29 12:08:09 np0005601226 clever_bell[240273]:        }
Jan 29 12:08:09 np0005601226 clever_bell[240273]:    ],
Jan 29 12:08:09 np0005601226 clever_bell[240273]:    "1": [
Jan 29 12:08:09 np0005601226 clever_bell[240273]:        {
Jan 29 12:08:09 np0005601226 clever_bell[240273]:            "devices": [
Jan 29 12:08:09 np0005601226 clever_bell[240273]:                "/dev/loop4"
Jan 29 12:08:09 np0005601226 clever_bell[240273]:            ],
Jan 29 12:08:09 np0005601226 clever_bell[240273]:            "lv_name": "ceph_lv1",
Jan 29 12:08:09 np0005601226 clever_bell[240273]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:08:09 np0005601226 clever_bell[240273]:            "lv_size": "21470642176",
Jan 29 12:08:09 np0005601226 clever_bell[240273]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b59b9ee3-7bef-4274-a8bf-0f9cce011ae7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:08:09 np0005601226 clever_bell[240273]:            "lv_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 12:08:09 np0005601226 clever_bell[240273]:            "name": "ceph_lv1",
Jan 29 12:08:09 np0005601226 clever_bell[240273]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:08:09 np0005601226 clever_bell[240273]:            "tags": {
Jan 29 12:08:09 np0005601226 clever_bell[240273]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:08:09 np0005601226 clever_bell[240273]:                "ceph.block_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 12:08:09 np0005601226 clever_bell[240273]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:08:09 np0005601226 clever_bell[240273]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:08:09 np0005601226 clever_bell[240273]:                "ceph.cluster_name": "ceph",
Jan 29 12:08:09 np0005601226 clever_bell[240273]:                "ceph.crush_device_class": "",
Jan 29 12:08:09 np0005601226 clever_bell[240273]:                "ceph.encrypted": "0",
Jan 29 12:08:09 np0005601226 clever_bell[240273]:                "ceph.objectstore": "bluestore",
Jan 29 12:08:09 np0005601226 clever_bell[240273]:                "ceph.osd_fsid": "b59b9ee3-7bef-4274-a8bf-0f9cce011ae7",
Jan 29 12:08:09 np0005601226 clever_bell[240273]:                "ceph.osd_id": "1",
Jan 29 12:08:09 np0005601226 clever_bell[240273]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:08:09 np0005601226 clever_bell[240273]:                "ceph.type": "block",
Jan 29 12:08:09 np0005601226 clever_bell[240273]:                "ceph.vdo": "0",
Jan 29 12:08:09 np0005601226 clever_bell[240273]:                "ceph.with_tpm": "0"
Jan 29 12:08:09 np0005601226 clever_bell[240273]:            },
Jan 29 12:08:09 np0005601226 clever_bell[240273]:            "type": "block",
Jan 29 12:08:09 np0005601226 clever_bell[240273]:            "vg_name": "ceph_vg1"
Jan 29 12:08:09 np0005601226 clever_bell[240273]:        }
Jan 29 12:08:09 np0005601226 clever_bell[240273]:    ],
Jan 29 12:08:09 np0005601226 clever_bell[240273]:    "2": [
Jan 29 12:08:09 np0005601226 clever_bell[240273]:        {
Jan 29 12:08:09 np0005601226 clever_bell[240273]:            "devices": [
Jan 29 12:08:09 np0005601226 clever_bell[240273]:                "/dev/loop5"
Jan 29 12:08:09 np0005601226 clever_bell[240273]:            ],
Jan 29 12:08:09 np0005601226 clever_bell[240273]:            "lv_name": "ceph_lv2",
Jan 29 12:08:09 np0005601226 clever_bell[240273]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:08:09 np0005601226 clever_bell[240273]:            "lv_size": "21470642176",
Jan 29 12:08:09 np0005601226 clever_bell[240273]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=791d3808-828d-4f85-a3de-28df49f6a6ef,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:08:09 np0005601226 clever_bell[240273]:            "lv_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 12:08:09 np0005601226 clever_bell[240273]:            "name": "ceph_lv2",
Jan 29 12:08:09 np0005601226 clever_bell[240273]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:08:09 np0005601226 clever_bell[240273]:            "tags": {
Jan 29 12:08:09 np0005601226 clever_bell[240273]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:08:09 np0005601226 clever_bell[240273]:                "ceph.block_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 12:08:09 np0005601226 clever_bell[240273]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:08:09 np0005601226 clever_bell[240273]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:08:09 np0005601226 clever_bell[240273]:                "ceph.cluster_name": "ceph",
Jan 29 12:08:09 np0005601226 clever_bell[240273]:                "ceph.crush_device_class": "",
Jan 29 12:08:09 np0005601226 clever_bell[240273]:                "ceph.encrypted": "0",
Jan 29 12:08:09 np0005601226 clever_bell[240273]:                "ceph.objectstore": "bluestore",
Jan 29 12:08:09 np0005601226 clever_bell[240273]:                "ceph.osd_fsid": "791d3808-828d-4f85-a3de-28df49f6a6ef",
Jan 29 12:08:09 np0005601226 clever_bell[240273]:                "ceph.osd_id": "2",
Jan 29 12:08:09 np0005601226 clever_bell[240273]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:08:09 np0005601226 clever_bell[240273]:                "ceph.type": "block",
Jan 29 12:08:09 np0005601226 clever_bell[240273]:                "ceph.vdo": "0",
Jan 29 12:08:09 np0005601226 clever_bell[240273]:                "ceph.with_tpm": "0"
Jan 29 12:08:09 np0005601226 clever_bell[240273]:            },
Jan 29 12:08:09 np0005601226 clever_bell[240273]:            "type": "block",
Jan 29 12:08:09 np0005601226 clever_bell[240273]:            "vg_name": "ceph_vg2"
Jan 29 12:08:09 np0005601226 clever_bell[240273]:        }
Jan 29 12:08:09 np0005601226 clever_bell[240273]:    ]
Jan 29 12:08:09 np0005601226 clever_bell[240273]: }
Jan 29 12:08:09 np0005601226 systemd[1]: libpod-edf4d92c1f4ff82dde121eb590568ae9e796948e8d4766c21db81c1b9a1f8f56.scope: Deactivated successfully.
Jan 29 12:08:09 np0005601226 podman[240257]: 2026-01-29 17:08:09.033694741 +0000 UTC m=+0.608836581 container died edf4d92c1f4ff82dde121eb590568ae9e796948e8d4766c21db81c1b9a1f8f56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_bell, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 29 12:08:09 np0005601226 systemd[1]: var-lib-containers-storage-overlay-cf2994044a208cef02035f665fa2dc677b9fddc77cd6f5b7954e8a446320d3c1-merged.mount: Deactivated successfully.
Jan 29 12:08:09 np0005601226 podman[240257]: 2026-01-29 17:08:09.85500078 +0000 UTC m=+1.430142620 container remove edf4d92c1f4ff82dde121eb590568ae9e796948e8d4766c21db81c1b9a1f8f56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_bell, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:08:09 np0005601226 systemd[1]: libpod-conmon-edf4d92c1f4ff82dde121eb590568ae9e796948e8d4766c21db81c1b9a1f8f56.scope: Deactivated successfully.
Jan 29 12:08:10 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:08:10 np0005601226 podman[240357]: 2026-01-29 17:08:10.276367974 +0000 UTC m=+0.026974000 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:08:10 np0005601226 podman[240357]: 2026-01-29 17:08:10.409917563 +0000 UTC m=+0.160523559 container create 760fdb8cda8161c125a51f84cd9d0b5dc17609a969409b80d50eee00d935d449 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_proskuriakova, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:08:10 np0005601226 systemd[1]: Started libpod-conmon-760fdb8cda8161c125a51f84cd9d0b5dc17609a969409b80d50eee00d935d449.scope.
Jan 29 12:08:10 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:08:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:08:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:08:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:08:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:08:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:08:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:08:10 np0005601226 podman[240357]: 2026-01-29 17:08:10.740695115 +0000 UTC m=+0.491301141 container init 760fdb8cda8161c125a51f84cd9d0b5dc17609a969409b80d50eee00d935d449 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 29 12:08:10 np0005601226 podman[240357]: 2026-01-29 17:08:10.747039858 +0000 UTC m=+0.497645844 container start 760fdb8cda8161c125a51f84cd9d0b5dc17609a969409b80d50eee00d935d449 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_proskuriakova, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 29 12:08:10 np0005601226 stupefied_proskuriakova[240373]: 167 167
Jan 29 12:08:10 np0005601226 systemd[1]: libpod-760fdb8cda8161c125a51f84cd9d0b5dc17609a969409b80d50eee00d935d449.scope: Deactivated successfully.
Jan 29 12:08:10 np0005601226 conmon[240373]: conmon 760fdb8cda8161c125a5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-760fdb8cda8161c125a51f84cd9d0b5dc17609a969409b80d50eee00d935d449.scope/container/memory.events
Jan 29 12:08:10 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v677: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:08:10 np0005601226 podman[240357]: 2026-01-29 17:08:10.872848115 +0000 UTC m=+0.623454111 container attach 760fdb8cda8161c125a51f84cd9d0b5dc17609a969409b80d50eee00d935d449 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_proskuriakova, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:08:10 np0005601226 podman[240357]: 2026-01-29 17:08:10.873557605 +0000 UTC m=+0.624163631 container died 760fdb8cda8161c125a51f84cd9d0b5dc17609a969409b80d50eee00d935d449 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_proskuriakova, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 29 12:08:11 np0005601226 systemd[1]: var-lib-containers-storage-overlay-6bed746a4b7541cea8f42c2bbbfe9cb1f933427ecca2f2f927eecef4e8a710b9-merged.mount: Deactivated successfully.
Jan 29 12:08:11 np0005601226 podman[240357]: 2026-01-29 17:08:11.299285278 +0000 UTC m=+1.049891314 container remove 760fdb8cda8161c125a51f84cd9d0b5dc17609a969409b80d50eee00d935d449 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 29 12:08:11 np0005601226 systemd[1]: libpod-conmon-760fdb8cda8161c125a51f84cd9d0b5dc17609a969409b80d50eee00d935d449.scope: Deactivated successfully.
Jan 29 12:08:11 np0005601226 podman[240397]: 2026-01-29 17:08:11.483463923 +0000 UTC m=+0.076570688 container create a9fcd7e0fb12507cc3ebd9934e26fdaef4145eb7b37e53bd84106765303593be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_banzai, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:08:11 np0005601226 systemd[1]: Started libpod-conmon-a9fcd7e0fb12507cc3ebd9934e26fdaef4145eb7b37e53bd84106765303593be.scope.
Jan 29 12:08:11 np0005601226 podman[240397]: 2026-01-29 17:08:11.432723893 +0000 UTC m=+0.025830758 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:08:11 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:08:11 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/427535e269f77e620e1197e46087679df1a737338bd61cb446e60a526783c63c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:08:11 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/427535e269f77e620e1197e46087679df1a737338bd61cb446e60a526783c63c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:08:11 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/427535e269f77e620e1197e46087679df1a737338bd61cb446e60a526783c63c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:08:11 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/427535e269f77e620e1197e46087679df1a737338bd61cb446e60a526783c63c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:08:11 np0005601226 podman[240397]: 2026-01-29 17:08:11.559020063 +0000 UTC m=+0.152126938 container init a9fcd7e0fb12507cc3ebd9934e26fdaef4145eb7b37e53bd84106765303593be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_banzai, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:08:11 np0005601226 podman[240397]: 2026-01-29 17:08:11.564948646 +0000 UTC m=+0.158055411 container start a9fcd7e0fb12507cc3ebd9934e26fdaef4145eb7b37e53bd84106765303593be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_banzai, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 29 12:08:11 np0005601226 podman[240397]: 2026-01-29 17:08:11.575000571 +0000 UTC m=+0.168107396 container attach a9fcd7e0fb12507cc3ebd9934e26fdaef4145eb7b37e53bd84106765303593be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_banzai, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:08:12 np0005601226 lvm[240492]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 12:08:12 np0005601226 lvm[240492]: VG ceph_vg0 finished
Jan 29 12:08:12 np0005601226 lvm[240493]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 12:08:12 np0005601226 lvm[240493]: VG ceph_vg1 finished
Jan 29 12:08:12 np0005601226 lvm[240495]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 12:08:12 np0005601226 lvm[240495]: VG ceph_vg2 finished
Jan 29 12:08:12 np0005601226 sharp_banzai[240414]: {}
Jan 29 12:08:12 np0005601226 systemd[1]: libpod-a9fcd7e0fb12507cc3ebd9934e26fdaef4145eb7b37e53bd84106765303593be.scope: Deactivated successfully.
Jan 29 12:08:12 np0005601226 systemd[1]: libpod-a9fcd7e0fb12507cc3ebd9934e26fdaef4145eb7b37e53bd84106765303593be.scope: Consumed 1.088s CPU time.
Jan 29 12:08:12 np0005601226 podman[240397]: 2026-01-29 17:08:12.311660382 +0000 UTC m=+0.904767147 container died a9fcd7e0fb12507cc3ebd9934e26fdaef4145eb7b37e53bd84106765303593be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:08:12 np0005601226 systemd[1]: var-lib-containers-storage-overlay-427535e269f77e620e1197e46087679df1a737338bd61cb446e60a526783c63c-merged.mount: Deactivated successfully.
Jan 29 12:08:12 np0005601226 podman[240397]: 2026-01-29 17:08:12.352872241 +0000 UTC m=+0.945979006 container remove a9fcd7e0fb12507cc3ebd9934e26fdaef4145eb7b37e53bd84106765303593be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:08:12 np0005601226 systemd[1]: libpod-conmon-a9fcd7e0fb12507cc3ebd9934e26fdaef4145eb7b37e53bd84106765303593be.scope: Deactivated successfully.
Jan 29 12:08:12 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 12:08:12 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:08:12 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 12:08:12 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:08:12 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v678: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:08:13 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:08:13 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:08:14 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v679: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:08:15 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:08:15 np0005601226 nova_compute[239456]: 2026-01-29 17:08:15.824 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:08:15 np0005601226 nova_compute[239456]: 2026-01-29 17:08:15.826 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:08:15 np0005601226 nova_compute[239456]: 2026-01-29 17:08:15.826 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 29 12:08:15 np0005601226 nova_compute[239456]: 2026-01-29 17:08:15.826 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 29 12:08:15 np0005601226 nova_compute[239456]: 2026-01-29 17:08:15.842 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 29 12:08:15 np0005601226 nova_compute[239456]: 2026-01-29 17:08:15.842 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:08:15 np0005601226 nova_compute[239456]: 2026-01-29 17:08:15.843 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:08:15 np0005601226 nova_compute[239456]: 2026-01-29 17:08:15.843 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:08:15 np0005601226 nova_compute[239456]: 2026-01-29 17:08:15.843 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:08:15 np0005601226 nova_compute[239456]: 2026-01-29 17:08:15.843 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:08:15 np0005601226 nova_compute[239456]: 2026-01-29 17:08:15.843 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:08:15 np0005601226 nova_compute[239456]: 2026-01-29 17:08:15.859 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:08:15 np0005601226 nova_compute[239456]: 2026-01-29 17:08:15.860 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 29 12:08:15 np0005601226 nova_compute[239456]: 2026-01-29 17:08:15.860 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:08:15 np0005601226 nova_compute[239456]: 2026-01-29 17:08:15.888 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:08:15 np0005601226 nova_compute[239456]: 2026-01-29 17:08:15.888 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:08:15 np0005601226 nova_compute[239456]: 2026-01-29 17:08:15.888 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:08:15 np0005601226 nova_compute[239456]: 2026-01-29 17:08:15.888 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 29 12:08:15 np0005601226 nova_compute[239456]: 2026-01-29 17:08:15.889 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:08:16 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:08:16 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2151565998' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:08:16 np0005601226 nova_compute[239456]: 2026-01-29 17:08:16.435 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:08:16 np0005601226 nova_compute[239456]: 2026-01-29 17:08:16.566 239460 WARNING nova.virt.libvirt.driver [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 29 12:08:16 np0005601226 nova_compute[239456]: 2026-01-29 17:08:16.567 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5086MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 29 12:08:16 np0005601226 nova_compute[239456]: 2026-01-29 17:08:16.567 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:08:16 np0005601226 nova_compute[239456]: 2026-01-29 17:08:16.568 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:08:16 np0005601226 nova_compute[239456]: 2026-01-29 17:08:16.636 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 29 12:08:16 np0005601226 nova_compute[239456]: 2026-01-29 17:08:16.637 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 29 12:08:16 np0005601226 nova_compute[239456]: 2026-01-29 17:08:16.656 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:08:16 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v680: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:08:17 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:08:17 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4219398026' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:08:17 np0005601226 nova_compute[239456]: 2026-01-29 17:08:17.162 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:08:17 np0005601226 nova_compute[239456]: 2026-01-29 17:08:17.167 239460 DEBUG nova.compute.provider_tree [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:08:17 np0005601226 nova_compute[239456]: 2026-01-29 17:08:17.234 239460 DEBUG nova.scheduler.client.report [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:08:17 np0005601226 nova_compute[239456]: 2026-01-29 17:08:17.236 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 29 12:08:17 np0005601226 nova_compute[239456]: 2026-01-29 17:08:17.236 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.669s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:08:17 np0005601226 nova_compute[239456]: 2026-01-29 17:08:17.237 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:08:18 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v681: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:08:20 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:08:20 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v682: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:08:20 np0005601226 podman[240580]: 2026-01-29 17:08:20.900028955 +0000 UTC m=+0.069214788 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent)
Jan 29 12:08:20 np0005601226 podman[240581]: 2026-01-29 17:08:20.900247031 +0000 UTC m=+0.069101984 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 29 12:08:22 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v683: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:08:24 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v684: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:08:25 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:08:26 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v685: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:08:28 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v686: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:08:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:08:30 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v687: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:08:32 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v688: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:08:34 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v689: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:08:35 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:08:36 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v690: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:08:38 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v691: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:08:40 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:08:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:08:40.270 155625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:08:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:08:40.271 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:08:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:08:40.271 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:08:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2026-01-29_17:08:40
Jan 29 12:08:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 12:08:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] do_upmap
Jan 29 12:08:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', 'images', 'default.rgw.control', 'backups', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.meta', '.mgr', 'vms']
Jan 29 12:08:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 12:08:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:08:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:08:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:08:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:08:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:08:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:08:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 12:08:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:08:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 12:08:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:08:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:08:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:08:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:08:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:08:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:08:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:08:40 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v692: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:08:42 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v693: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:08:44 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v694: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:08:45 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:08:46 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v695: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:08:48 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v696: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:08:50 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:08:50 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v697: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:08:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 12:08:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:08:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 12:08:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:08:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:08:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:08:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:08:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:08:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:08:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:08:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:08:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:08:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.8121643970586627e-06 of space, bias 4.0, pg target 0.002174597276470395 quantized to 16 (current 16)
Jan 29 12:08:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:08:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:08:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:08:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 29 12:08:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:08:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 29 12:08:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:08:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:08:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:08:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 29 12:08:51 np0005601226 podman[240625]: 2026-01-29 17:08:51.871025744 +0000 UTC m=+0.042970409 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 29 12:08:51 np0005601226 podman[240626]: 2026-01-29 17:08:51.92894965 +0000 UTC m=+0.096657679 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 29 12:08:52 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v698: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:08:54 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v699: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:08:55 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:08:56 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v700: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:08:58 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v701: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:09:00 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:09:00 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v702: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:09:02 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v703: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:09:03 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Jan 29 12:09:03 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/38777620' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Jan 29 12:09:03 np0005601226 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.14340 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 29 12:09:03 np0005601226 ceph-mgr[75527]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 29 12:09:03 np0005601226 ceph-mgr[75527]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 29 12:09:04 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v704: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:09:05 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:09:06 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v705: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:09:08 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v706: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:09:10 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:09:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:09:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:09:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:09:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:09:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:09:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:09:10 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v707: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:09:12 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v708: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:09:12 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 12:09:12 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 12:09:12 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 12:09:12 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 12:09:12 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 12:09:12 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:09:12 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 12:09:12 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 12:09:12 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 12:09:12 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 12:09:12 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 12:09:12 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 12:09:13 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 12:09:13 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:09:13 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 12:09:13 np0005601226 podman[240814]: 2026-01-29 17:09:13.318278911 +0000 UTC m=+0.042734341 container create 7a8892c5204ae74b540f3de17012324c31fead5016049b57e5fbe733e457e924 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:09:13 np0005601226 systemd[1]: Started libpod-conmon-7a8892c5204ae74b540f3de17012324c31fead5016049b57e5fbe733e457e924.scope.
Jan 29 12:09:13 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:09:13 np0005601226 podman[240814]: 2026-01-29 17:09:13.39049777 +0000 UTC m=+0.114953260 container init 7a8892c5204ae74b540f3de17012324c31fead5016049b57e5fbe733e457e924 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_cartwright, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:09:13 np0005601226 podman[240814]: 2026-01-29 17:09:13.293701818 +0000 UTC m=+0.018157278 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:09:13 np0005601226 podman[240814]: 2026-01-29 17:09:13.395787825 +0000 UTC m=+0.120243255 container start 7a8892c5204ae74b540f3de17012324c31fead5016049b57e5fbe733e457e924 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_cartwright, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 29 12:09:13 np0005601226 flamboyant_cartwright[240830]: 167 167
Jan 29 12:09:13 np0005601226 systemd[1]: libpod-7a8892c5204ae74b540f3de17012324c31fead5016049b57e5fbe733e457e924.scope: Deactivated successfully.
Jan 29 12:09:13 np0005601226 podman[240814]: 2026-01-29 17:09:13.400317609 +0000 UTC m=+0.124773059 container attach 7a8892c5204ae74b540f3de17012324c31fead5016049b57e5fbe733e457e924 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_cartwright, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 29 12:09:13 np0005601226 podman[240814]: 2026-01-29 17:09:13.400598606 +0000 UTC m=+0.125054036 container died 7a8892c5204ae74b540f3de17012324c31fead5016049b57e5fbe733e457e924 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 29 12:09:13 np0005601226 systemd[1]: var-lib-containers-storage-overlay-40f42962047cb0e3e200d56ae88d0094267ac9cb3a166f6b43bc3417f589448d-merged.mount: Deactivated successfully.
Jan 29 12:09:13 np0005601226 podman[240814]: 2026-01-29 17:09:13.447764689 +0000 UTC m=+0.172220119 container remove 7a8892c5204ae74b540f3de17012324c31fead5016049b57e5fbe733e457e924 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_cartwright, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 12:09:13 np0005601226 systemd[1]: libpod-conmon-7a8892c5204ae74b540f3de17012324c31fead5016049b57e5fbe733e457e924.scope: Deactivated successfully.
Jan 29 12:09:13 np0005601226 podman[240858]: 2026-01-29 17:09:13.611054033 +0000 UTC m=+0.073426453 container create 68f11d17f7ddc096aa72b1c44a3cec5510e495b9d5ecf9f128ad0f9cf073fa3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_edison, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:09:13 np0005601226 podman[240858]: 2026-01-29 17:09:13.559113749 +0000 UTC m=+0.021486189 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:09:13 np0005601226 systemd[1]: Started libpod-conmon-68f11d17f7ddc096aa72b1c44a3cec5510e495b9d5ecf9f128ad0f9cf073fa3d.scope.
Jan 29 12:09:13 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:09:13 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be8f79ffa6e6e85113f0a668c45431dc97031a627c27cc53d953a887cf150a22/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:09:13 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be8f79ffa6e6e85113f0a668c45431dc97031a627c27cc53d953a887cf150a22/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:09:13 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be8f79ffa6e6e85113f0a668c45431dc97031a627c27cc53d953a887cf150a22/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:09:13 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be8f79ffa6e6e85113f0a668c45431dc97031a627c27cc53d953a887cf150a22/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:09:13 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be8f79ffa6e6e85113f0a668c45431dc97031a627c27cc53d953a887cf150a22/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 12:09:13 np0005601226 podman[240858]: 2026-01-29 17:09:13.716172922 +0000 UTC m=+0.178545342 container init 68f11d17f7ddc096aa72b1c44a3cec5510e495b9d5ecf9f128ad0f9cf073fa3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_edison, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:09:13 np0005601226 podman[240858]: 2026-01-29 17:09:13.72267023 +0000 UTC m=+0.185042640 container start 68f11d17f7ddc096aa72b1c44a3cec5510e495b9d5ecf9f128ad0f9cf073fa3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_edison, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:09:13 np0005601226 podman[240858]: 2026-01-29 17:09:13.728015756 +0000 UTC m=+0.190388196 container attach 68f11d17f7ddc096aa72b1c44a3cec5510e495b9d5ecf9f128ad0f9cf073fa3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_edison, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:09:14 np0005601226 goofy_edison[240875]: --> passed data devices: 0 physical, 3 LVM
Jan 29 12:09:14 np0005601226 goofy_edison[240875]: --> All data devices are unavailable
Jan 29 12:09:14 np0005601226 systemd[1]: libpod-68f11d17f7ddc096aa72b1c44a3cec5510e495b9d5ecf9f128ad0f9cf073fa3d.scope: Deactivated successfully.
Jan 29 12:09:14 np0005601226 podman[240858]: 2026-01-29 17:09:14.119787239 +0000 UTC m=+0.582159669 container died 68f11d17f7ddc096aa72b1c44a3cec5510e495b9d5ecf9f128ad0f9cf073fa3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_edison, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:09:14 np0005601226 systemd[1]: var-lib-containers-storage-overlay-be8f79ffa6e6e85113f0a668c45431dc97031a627c27cc53d953a887cf150a22-merged.mount: Deactivated successfully.
Jan 29 12:09:14 np0005601226 podman[240858]: 2026-01-29 17:09:14.693918948 +0000 UTC m=+1.156291358 container remove 68f11d17f7ddc096aa72b1c44a3cec5510e495b9d5ecf9f128ad0f9cf073fa3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_edison, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:09:14 np0005601226 systemd[1]: libpod-conmon-68f11d17f7ddc096aa72b1c44a3cec5510e495b9d5ecf9f128ad0f9cf073fa3d.scope: Deactivated successfully.
Jan 29 12:09:14 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v709: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:09:14 np0005601226 ceph-mon[75233]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 29 12:09:14 np0005601226 ceph-mon[75233]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.0 total, 600.0 interval
Cumulative writes: 3249 writes, 14K keys, 3249 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
Cumulative WAL: 3249 writes, 3249 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1278 writes, 5560 keys, 1278 commit groups, 1.0 writes per commit group, ingest: 8.70 MB, 0.01 MB/s
Interval WAL: 1278 writes, 1278 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     13.6      1.14              0.03         6    0.190       0      0       0.0       0.0
  L6      1/0    7.63 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.4     33.4     27.7      1.38              0.08         5    0.275     19K   2212       0.0       0.0
 Sum      1/0    7.63 MB   0.0      0.0     0.0      0.0       0.1      0.0       0.0   3.4     18.3     21.3      2.51              0.11        11    0.229     19K   2212       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.6     64.2     65.3      0.45              0.05         6    0.074     12K   1465       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0     33.4     27.7      1.38              0.08         5    0.275     19K   2212       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.7      1.13              0.03         5    0.226       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      5.3      0.01              0.00         1    0.011       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.0 total, 600.0 interval
Flush(GB): cumulative 0.015, interval 0.006
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.05 GB write, 0.04 MB/s write, 0.04 GB read, 0.04 MB/s read, 2.5 seconds
Interval compaction: 0.03 GB write, 0.05 MB/s write, 0.03 GB read, 0.05 MB/s read, 0.4 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55d2b32758d0#2 capacity: 308.00 MB usage: 1.86 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 5.9e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(94,1.67 MB,0.542911%) FilterBlock(12,63.30 KB,0.0200693%) IndexBlock(12,130.41 KB,0.0413474%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Jan 29 12:09:15 np0005601226 podman[240969]: 2026-01-29 17:09:15.067445902 +0000 UTC m=+0.020714209 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:09:15 np0005601226 podman[240969]: 2026-01-29 17:09:15.286928855 +0000 UTC m=+0.240197182 container create 7cee5bf1dcae3793f788477741a694b29d277d04137ee90c0403d8839bdbd1d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_nightingale, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:09:15 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:09:15 np0005601226 systemd[1]: Started libpod-conmon-7cee5bf1dcae3793f788477741a694b29d277d04137ee90c0403d8839bdbd1d8.scope.
Jan 29 12:09:15 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:09:15 np0005601226 podman[240969]: 2026-01-29 17:09:15.430564319 +0000 UTC m=+0.383832656 container init 7cee5bf1dcae3793f788477741a694b29d277d04137ee90c0403d8839bdbd1d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 29 12:09:15 np0005601226 podman[240969]: 2026-01-29 17:09:15.435059942 +0000 UTC m=+0.388328229 container start 7cee5bf1dcae3793f788477741a694b29d277d04137ee90c0403d8839bdbd1d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_nightingale, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:09:15 np0005601226 nervous_nightingale[240985]: 167 167
Jan 29 12:09:15 np0005601226 systemd[1]: libpod-7cee5bf1dcae3793f788477741a694b29d277d04137ee90c0403d8839bdbd1d8.scope: Deactivated successfully.
Jan 29 12:09:15 np0005601226 conmon[240985]: conmon 7cee5bf1dcae3793f788 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7cee5bf1dcae3793f788477741a694b29d277d04137ee90c0403d8839bdbd1d8.scope/container/memory.events
Jan 29 12:09:15 np0005601226 podman[240969]: 2026-01-29 17:09:15.441939981 +0000 UTC m=+0.395208298 container attach 7cee5bf1dcae3793f788477741a694b29d277d04137ee90c0403d8839bdbd1d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_nightingale, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:09:15 np0005601226 podman[240969]: 2026-01-29 17:09:15.442339242 +0000 UTC m=+0.395607529 container died 7cee5bf1dcae3793f788477741a694b29d277d04137ee90c0403d8839bdbd1d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_nightingale, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:09:15 np0005601226 systemd[1]: var-lib-containers-storage-overlay-52bbf402f9ac5601bc16b15b88e2abbd5c9ae4b8d998281bc6273671eca047df-merged.mount: Deactivated successfully.
Jan 29 12:09:15 np0005601226 podman[240969]: 2026-01-29 17:09:15.492237019 +0000 UTC m=+0.445505306 container remove 7cee5bf1dcae3793f788477741a694b29d277d04137ee90c0403d8839bdbd1d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_nightingale, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:09:15 np0005601226 systemd[1]: libpod-conmon-7cee5bf1dcae3793f788477741a694b29d277d04137ee90c0403d8839bdbd1d8.scope: Deactivated successfully.
Jan 29 12:09:15 np0005601226 podman[241009]: 2026-01-29 17:09:15.617348407 +0000 UTC m=+0.048218863 container create a232673d0701cfa9b23a52aa2c18df7bdebaf065ceffb8fb9867190105092402 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_hamilton, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 29 12:09:15 np0005601226 systemd[1]: Started libpod-conmon-a232673d0701cfa9b23a52aa2c18df7bdebaf065ceffb8fb9867190105092402.scope.
Jan 29 12:09:15 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:09:15 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e2b5cbae636fb49b93748acf014d6f7f279fae8743418fcde967ca75ed93b4c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:09:15 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e2b5cbae636fb49b93748acf014d6f7f279fae8743418fcde967ca75ed93b4c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:09:15 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e2b5cbae636fb49b93748acf014d6f7f279fae8743418fcde967ca75ed93b4c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:09:15 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e2b5cbae636fb49b93748acf014d6f7f279fae8743418fcde967ca75ed93b4c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:09:15 np0005601226 podman[241009]: 2026-01-29 17:09:15.592965518 +0000 UTC m=+0.023836024 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:09:15 np0005601226 podman[241009]: 2026-01-29 17:09:15.693544894 +0000 UTC m=+0.124415370 container init a232673d0701cfa9b23a52aa2c18df7bdebaf065ceffb8fb9867190105092402 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_hamilton, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:09:15 np0005601226 podman[241009]: 2026-01-29 17:09:15.698674785 +0000 UTC m=+0.129545241 container start a232673d0701cfa9b23a52aa2c18df7bdebaf065ceffb8fb9867190105092402 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:09:15 np0005601226 podman[241009]: 2026-01-29 17:09:15.707242559 +0000 UTC m=+0.138113045 container attach a232673d0701cfa9b23a52aa2c18df7bdebaf065ceffb8fb9867190105092402 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]: {
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:    "0": [
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:        {
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:            "devices": [
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:                "/dev/loop3"
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:            ],
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:            "lv_name": "ceph_lv0",
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:            "lv_size": "21470642176",
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:            "lv_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:            "name": "ceph_lv0",
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:            "tags": {
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:                "ceph.block_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:                "ceph.cluster_name": "ceph",
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:                "ceph.crush_device_class": "",
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:                "ceph.encrypted": "0",
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:                "ceph.objectstore": "bluestore",
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:                "ceph.osd_fsid": "0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1",
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:                "ceph.osd_id": "0",
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:                "ceph.type": "block",
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:                "ceph.vdo": "0",
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:                "ceph.with_tpm": "0"
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:            },
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:            "type": "block",
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:            "vg_name": "ceph_vg0"
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:        }
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:    ],
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:    "1": [
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:        {
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:            "devices": [
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:                "/dev/loop4"
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:            ],
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:            "lv_name": "ceph_lv1",
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:            "lv_size": "21470642176",
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b59b9ee3-7bef-4274-a8bf-0f9cce011ae7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:            "lv_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:            "name": "ceph_lv1",
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:            "tags": {
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:                "ceph.block_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:                "ceph.cluster_name": "ceph",
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:                "ceph.crush_device_class": "",
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:                "ceph.encrypted": "0",
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:                "ceph.objectstore": "bluestore",
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:                "ceph.osd_fsid": "b59b9ee3-7bef-4274-a8bf-0f9cce011ae7",
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:                "ceph.osd_id": "1",
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:                "ceph.type": "block",
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:                "ceph.vdo": "0",
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:                "ceph.with_tpm": "0"
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:            },
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:            "type": "block",
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:            "vg_name": "ceph_vg1"
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:        }
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:    ],
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:    "2": [
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:        {
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:            "devices": [
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:                "/dev/loop5"
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:            ],
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:            "lv_name": "ceph_lv2",
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:            "lv_size": "21470642176",
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=791d3808-828d-4f85-a3de-28df49f6a6ef,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:            "lv_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:            "name": "ceph_lv2",
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:            "tags": {
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:                "ceph.block_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:                "ceph.cluster_name": "ceph",
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:                "ceph.crush_device_class": "",
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:                "ceph.encrypted": "0",
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:                "ceph.objectstore": "bluestore",
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:                "ceph.osd_fsid": "791d3808-828d-4f85-a3de-28df49f6a6ef",
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:                "ceph.osd_id": "2",
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:                "ceph.type": "block",
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:                "ceph.vdo": "0",
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:                "ceph.with_tpm": "0"
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:            },
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:            "type": "block",
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:            "vg_name": "ceph_vg2"
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:        }
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]:    ]
Jan 29 12:09:15 np0005601226 blissful_hamilton[241025]: }
Jan 29 12:09:15 np0005601226 systemd[1]: libpod-a232673d0701cfa9b23a52aa2c18df7bdebaf065ceffb8fb9867190105092402.scope: Deactivated successfully.
Jan 29 12:09:15 np0005601226 podman[241034]: 2026-01-29 17:09:15.988248497 +0000 UTC m=+0.021276754 container died a232673d0701cfa9b23a52aa2c18df7bdebaf065ceffb8fb9867190105092402 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_hamilton, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 29 12:09:16 np0005601226 nova_compute[239456]: 2026-01-29 17:09:16.011 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:09:16 np0005601226 systemd[1]: var-lib-containers-storage-overlay-2e2b5cbae636fb49b93748acf014d6f7f279fae8743418fcde967ca75ed93b4c-merged.mount: Deactivated successfully.
Jan 29 12:09:16 np0005601226 podman[241034]: 2026-01-29 17:09:16.043477781 +0000 UTC m=+0.076506008 container remove a232673d0701cfa9b23a52aa2c18df7bdebaf065ceffb8fb9867190105092402 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_hamilton, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 29 12:09:16 np0005601226 systemd[1]: libpod-conmon-a232673d0701cfa9b23a52aa2c18df7bdebaf065ceffb8fb9867190105092402.scope: Deactivated successfully.
Jan 29 12:09:16 np0005601226 podman[241111]: 2026-01-29 17:09:16.442849831 +0000 UTC m=+0.036588423 container create d3b8893a27a464387303534169032f46d2e890dd3a2cff3d4efb582cee8adb87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 29 12:09:16 np0005601226 systemd[1]: Started libpod-conmon-d3b8893a27a464387303534169032f46d2e890dd3a2cff3d4efb582cee8adb87.scope.
Jan 29 12:09:16 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:09:16 np0005601226 podman[241111]: 2026-01-29 17:09:16.498339162 +0000 UTC m=+0.092077764 container init d3b8893a27a464387303534169032f46d2e890dd3a2cff3d4efb582cee8adb87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_gauss, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 12:09:16 np0005601226 podman[241111]: 2026-01-29 17:09:16.506466585 +0000 UTC m=+0.100205167 container start d3b8893a27a464387303534169032f46d2e890dd3a2cff3d4efb582cee8adb87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_gauss, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:09:16 np0005601226 podman[241111]: 2026-01-29 17:09:16.510247048 +0000 UTC m=+0.103985650 container attach d3b8893a27a464387303534169032f46d2e890dd3a2cff3d4efb582cee8adb87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_gauss, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 29 12:09:16 np0005601226 dazzling_gauss[241128]: 167 167
Jan 29 12:09:16 np0005601226 systemd[1]: libpod-d3b8893a27a464387303534169032f46d2e890dd3a2cff3d4efb582cee8adb87.scope: Deactivated successfully.
Jan 29 12:09:16 np0005601226 conmon[241128]: conmon d3b8893a27a464387303 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d3b8893a27a464387303534169032f46d2e890dd3a2cff3d4efb582cee8adb87.scope/container/memory.events
Jan 29 12:09:16 np0005601226 podman[241111]: 2026-01-29 17:09:16.511874122 +0000 UTC m=+0.105612704 container died d3b8893a27a464387303534169032f46d2e890dd3a2cff3d4efb582cee8adb87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_gauss, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 29 12:09:16 np0005601226 podman[241111]: 2026-01-29 17:09:16.42600716 +0000 UTC m=+0.019745762 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:09:16 np0005601226 systemd[1]: var-lib-containers-storage-overlay-6c0f1854a8fcf55971b23b62f18111d28cd30a069aefc70a26c0a0636401ee0d-merged.mount: Deactivated successfully.
Jan 29 12:09:16 np0005601226 podman[241111]: 2026-01-29 17:09:16.611520633 +0000 UTC m=+0.205259215 container remove d3b8893a27a464387303534169032f46d2e890dd3a2cff3d4efb582cee8adb87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_gauss, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:09:16 np0005601226 systemd[1]: libpod-conmon-d3b8893a27a464387303534169032f46d2e890dd3a2cff3d4efb582cee8adb87.scope: Deactivated successfully.
Jan 29 12:09:16 np0005601226 podman[241151]: 2026-01-29 17:09:16.763552397 +0000 UTC m=+0.068980520 container create 2a86096cd0e064f78e23ef997bb07176c6640735595620e7ceb9c0d7b93d1ba6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_lumiere, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 29 12:09:16 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v710: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:09:16 np0005601226 systemd[1]: Started libpod-conmon-2a86096cd0e064f78e23ef997bb07176c6640735595620e7ceb9c0d7b93d1ba6.scope.
Jan 29 12:09:16 np0005601226 podman[241151]: 2026-01-29 17:09:16.714844952 +0000 UTC m=+0.020273095 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:09:16 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:09:16 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93e34c2a14ed96ed0d55e3b8a2f67297e459338720783496df6483f9d892da5a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:09:16 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93e34c2a14ed96ed0d55e3b8a2f67297e459338720783496df6483f9d892da5a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:09:16 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93e34c2a14ed96ed0d55e3b8a2f67297e459338720783496df6483f9d892da5a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:09:16 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93e34c2a14ed96ed0d55e3b8a2f67297e459338720783496df6483f9d892da5a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:09:16 np0005601226 podman[241151]: 2026-01-29 17:09:16.93958183 +0000 UTC m=+0.245010013 container init 2a86096cd0e064f78e23ef997bb07176c6640735595620e7ceb9c0d7b93d1ba6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_lumiere, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:09:16 np0005601226 podman[241151]: 2026-01-29 17:09:16.944476743 +0000 UTC m=+0.249904866 container start 2a86096cd0e064f78e23ef997bb07176c6640735595620e7ceb9c0d7b93d1ba6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_lumiere, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 29 12:09:16 np0005601226 podman[241151]: 2026-01-29 17:09:16.965767337 +0000 UTC m=+0.271195500 container attach 2a86096cd0e064f78e23ef997bb07176c6640735595620e7ceb9c0d7b93d1ba6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_lumiere, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 29 12:09:17 np0005601226 nova_compute[239456]: 2026-01-29 17:09:17.251 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:09:17 np0005601226 nova_compute[239456]: 2026-01-29 17:09:17.252 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 29 12:09:17 np0005601226 nova_compute[239456]: 2026-01-29 17:09:17.252 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 29 12:09:17 np0005601226 lvm[241247]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 12:09:17 np0005601226 lvm[241247]: VG ceph_vg0 finished
Jan 29 12:09:17 np0005601226 lvm[241248]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 12:09:17 np0005601226 lvm[241248]: VG ceph_vg1 finished
Jan 29 12:09:17 np0005601226 lvm[241250]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 12:09:17 np0005601226 lvm[241250]: VG ceph_vg2 finished
Jan 29 12:09:17 np0005601226 inspiring_lumiere[241169]: {}
Jan 29 12:09:17 np0005601226 systemd[1]: libpod-2a86096cd0e064f78e23ef997bb07176c6640735595620e7ceb9c0d7b93d1ba6.scope: Deactivated successfully.
Jan 29 12:09:17 np0005601226 podman[241151]: 2026-01-29 17:09:17.658903196 +0000 UTC m=+0.964331329 container died 2a86096cd0e064f78e23ef997bb07176c6640735595620e7ceb9c0d7b93d1ba6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_lumiere, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 29 12:09:17 np0005601226 systemd[1]: var-lib-containers-storage-overlay-93e34c2a14ed96ed0d55e3b8a2f67297e459338720783496df6483f9d892da5a-merged.mount: Deactivated successfully.
Jan 29 12:09:17 np0005601226 podman[241151]: 2026-01-29 17:09:17.697990777 +0000 UTC m=+1.003418900 container remove 2a86096cd0e064f78e23ef997bb07176c6640735595620e7ceb9c0d7b93d1ba6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_lumiere, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 12:09:17 np0005601226 systemd[1]: libpod-conmon-2a86096cd0e064f78e23ef997bb07176c6640735595620e7ceb9c0d7b93d1ba6.scope: Deactivated successfully.
Jan 29 12:09:17 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 12:09:17 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:09:17 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 12:09:17 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:09:17 np0005601226 nova_compute[239456]: 2026-01-29 17:09:17.897 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 29 12:09:17 np0005601226 nova_compute[239456]: 2026-01-29 17:09:17.898 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:09:17 np0005601226 nova_compute[239456]: 2026-01-29 17:09:17.899 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:09:17 np0005601226 nova_compute[239456]: 2026-01-29 17:09:17.899 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:09:17 np0005601226 nova_compute[239456]: 2026-01-29 17:09:17.899 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:09:17 np0005601226 nova_compute[239456]: 2026-01-29 17:09:17.899 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:09:17 np0005601226 nova_compute[239456]: 2026-01-29 17:09:17.899 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:09:17 np0005601226 nova_compute[239456]: 2026-01-29 17:09:17.899 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 29 12:09:17 np0005601226 nova_compute[239456]: 2026-01-29 17:09:17.899 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:09:18 np0005601226 nova_compute[239456]: 2026-01-29 17:09:18.284 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:09:18 np0005601226 nova_compute[239456]: 2026-01-29 17:09:18.285 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:09:18 np0005601226 nova_compute[239456]: 2026-01-29 17:09:18.285 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:09:18 np0005601226 nova_compute[239456]: 2026-01-29 17:09:18.285 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 29 12:09:18 np0005601226 nova_compute[239456]: 2026-01-29 17:09:18.286 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:09:18 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:09:18 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:09:18 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v711: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:09:18 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:09:18 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2636246712' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:09:18 np0005601226 nova_compute[239456]: 2026-01-29 17:09:18.809 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:09:18 np0005601226 nova_compute[239456]: 2026-01-29 17:09:18.952 239460 WARNING nova.virt.libvirt.driver [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 29 12:09:18 np0005601226 nova_compute[239456]: 2026-01-29 17:09:18.954 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5076MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 29 12:09:18 np0005601226 nova_compute[239456]: 2026-01-29 17:09:18.954 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:09:18 np0005601226 nova_compute[239456]: 2026-01-29 17:09:18.954 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:09:19 np0005601226 nova_compute[239456]: 2026-01-29 17:09:19.048 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 29 12:09:19 np0005601226 nova_compute[239456]: 2026-01-29 17:09:19.049 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 29 12:09:19 np0005601226 nova_compute[239456]: 2026-01-29 17:09:19.065 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:09:19 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:09:19 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3228850435' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:09:19 np0005601226 nova_compute[239456]: 2026-01-29 17:09:19.612 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:09:19 np0005601226 nova_compute[239456]: 2026-01-29 17:09:19.617 239460 DEBUG nova.compute.provider_tree [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:09:19 np0005601226 nova_compute[239456]: 2026-01-29 17:09:19.757 239460 DEBUG nova.scheduler.client.report [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:09:19 np0005601226 nova_compute[239456]: 2026-01-29 17:09:19.759 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 29 12:09:19 np0005601226 nova_compute[239456]: 2026-01-29 17:09:19.759 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.805s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:09:20 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:09:20 np0005601226 nova_compute[239456]: 2026-01-29 17:09:20.347 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:09:20 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v712: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:09:22 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v713: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:09:22 np0005601226 podman[241335]: 2026-01-29 17:09:22.894856436 +0000 UTC m=+0.065966998 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 29 12:09:22 np0005601226 podman[241334]: 2026-01-29 17:09:22.90086282 +0000 UTC m=+0.074897862 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 29 12:09:24 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v714: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:09:25 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:09:26 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v715: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:09:27 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Jan 29 12:09:27 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1232434429' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Jan 29 12:09:27 np0005601226 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.14346 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 29 12:09:27 np0005601226 ceph-mgr[75527]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 29 12:09:27 np0005601226 ceph-mgr[75527]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 29 12:09:28 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v716: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:09:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:09:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:09:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4289107052' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:09:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:09:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4289107052' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:09:30 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v717: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:09:32 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v718: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:09:34 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v719: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:09:35 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:09:36 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v720: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:09:38 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v721: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:09:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:09:40.271 155625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:09:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:09:40.271 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:09:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:09:40.271 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:09:40 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:09:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2026-01-29_17:09:40
Jan 29 12:09:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 12:09:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] do_upmap
Jan 29 12:09:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] pools ['backups', 'volumes', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.meta', '.mgr', 'default.rgw.log', 'cephfs.cephfs.meta', 'vms', '.rgw.root', 'images']
Jan 29 12:09:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 12:09:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:09:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:09:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:09:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:09:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:09:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:09:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 12:09:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:09:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 12:09:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:09:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:09:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:09:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:09:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:09:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:09:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:09:40 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v722: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:09:42 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v723: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:09:44 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v724: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:09:45 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:09:46 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v725: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:09:48 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v726: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:09:50 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:09:50 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v727: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:09:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 12:09:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:09:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 12:09:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:09:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:09:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:09:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:09:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:09:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:09:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:09:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:09:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:09:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.8121643970586627e-06 of space, bias 4.0, pg target 0.002174597276470395 quantized to 16 (current 16)
Jan 29 12:09:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:09:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:09:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:09:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 29 12:09:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:09:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 29 12:09:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:09:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:09:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:09:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 29 12:09:52 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v728: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:09:53 np0005601226 podman[241379]: 2026-01-29 17:09:53.875023017 +0000 UTC m=+0.045525458 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 29 12:09:53 np0005601226 podman[241380]: 2026-01-29 17:09:53.926952048 +0000 UTC m=+0.096600075 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, 
container_name=ovn_controller, org.label-schema.license=GPLv2)
Jan 29 12:09:54 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v729: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:09:55 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:09:56 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v730: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:09:58 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v731: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:10:00 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:10:00 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v732: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:10:02 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v733: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:10:04 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v734: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:10:05 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:10:06 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v735: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:10:08 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v736: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:10:10 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:10:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:10:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:10:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:10:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:10:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:10:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:10:10 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v737: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:10:12 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v738: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:10:14 np0005601226 nova_compute[239456]: 2026-01-29 17:10:14.604 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:10:14 np0005601226 nova_compute[239456]: 2026-01-29 17:10:14.654 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:10:14 np0005601226 nova_compute[239456]: 2026-01-29 17:10:14.655 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:10:14 np0005601226 nova_compute[239456]: 2026-01-29 17:10:14.655 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:10:14 np0005601226 nova_compute[239456]: 2026-01-29 17:10:14.655 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 29 12:10:14 np0005601226 nova_compute[239456]: 2026-01-29 17:10:14.656 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:10:14 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v739: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:10:15 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:10:15 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2821574637' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:10:15 np0005601226 nova_compute[239456]: 2026-01-29 17:10:15.203 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:10:15 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:10:15 np0005601226 nova_compute[239456]: 2026-01-29 17:10:15.331 239460 WARNING nova.virt.libvirt.driver [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 29 12:10:15 np0005601226 nova_compute[239456]: 2026-01-29 17:10:15.332 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5121MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 29 12:10:15 np0005601226 nova_compute[239456]: 2026-01-29 17:10:15.332 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:10:15 np0005601226 nova_compute[239456]: 2026-01-29 17:10:15.332 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:10:15 np0005601226 ceph-mon[75233]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Jan 29 12:10:15 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:10:15.386849) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 29 12:10:15 np0005601226 ceph-mon[75233]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Jan 29 12:10:15 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769706615386891, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1903, "num_deletes": 506, "total_data_size": 2712834, "memory_usage": 2753088, "flush_reason": "Manual Compaction"}
Jan 29 12:10:15 np0005601226 ceph-mon[75233]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Jan 29 12:10:15 np0005601226 nova_compute[239456]: 2026-01-29 17:10:15.419 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 29 12:10:15 np0005601226 nova_compute[239456]: 2026-01-29 17:10:15.419 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 29 12:10:15 np0005601226 nova_compute[239456]: 2026-01-29 17:10:15.436 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:10:15 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769706615447233, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 2661776, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13464, "largest_seqno": 15366, "table_properties": {"data_size": 2653579, "index_size": 4499, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2629, "raw_key_size": 19139, "raw_average_key_size": 18, "raw_value_size": 2635198, "raw_average_value_size": 2546, "num_data_blocks": 205, "num_entries": 1035, "num_filter_entries": 1035, "num_deletions": 506, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769706436, "oldest_key_time": 1769706436, "file_creation_time": 1769706615, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "affa2982-d59d-4189-b5dd-817a80fada55", "db_session_id": "3LVBT2JQJ5HZ0LRVKGW6", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Jan 29 12:10:15 np0005601226 ceph-mon[75233]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 60463 microseconds, and 5105 cpu microseconds.
Jan 29 12:10:15 np0005601226 ceph-mon[75233]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 29 12:10:15 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:10:15.447304) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 2661776 bytes OK
Jan 29 12:10:15 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:10:15.447326) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Jan 29 12:10:15 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:10:15.453287) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Jan 29 12:10:15 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:10:15.453322) EVENT_LOG_v1 {"time_micros": 1769706615453314, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 29 12:10:15 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:10:15.453344) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 29 12:10:15 np0005601226 ceph-mon[75233]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 2703711, prev total WAL file size 2731912, number of live WAL files 2.
Jan 29 12:10:15 np0005601226 ceph-mon[75233]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 12:10:15 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:10:15.453935) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323532' seq:0, type:0; will stop at (end)
Jan 29 12:10:15 np0005601226 ceph-mon[75233]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 29 12:10:15 np0005601226 ceph-mon[75233]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(2599KB)], [32(7813KB)]
Jan 29 12:10:15 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769706615453975, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 10662442, "oldest_snapshot_seqno": -1}
Jan 29 12:10:15 np0005601226 ceph-mon[75233]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 4034 keys, 8547384 bytes, temperature: kUnknown
Jan 29 12:10:15 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769706615593211, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 8547384, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8517646, "index_size": 18556, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10117, "raw_key_size": 98532, "raw_average_key_size": 24, "raw_value_size": 8441932, "raw_average_value_size": 2092, "num_data_blocks": 784, "num_entries": 4034, "num_filter_entries": 4034, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769705351, "oldest_key_time": 0, "file_creation_time": 1769706615, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "affa2982-d59d-4189-b5dd-817a80fada55", "db_session_id": "3LVBT2JQJ5HZ0LRVKGW6", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 29 12:10:15 np0005601226 ceph-mon[75233]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 29 12:10:15 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:10:15.593430) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 8547384 bytes
Jan 29 12:10:15 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:10:15.595400) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 76.5 rd, 61.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 7.6 +0.0 blob) out(8.2 +0.0 blob), read-write-amplify(7.2) write-amplify(3.2) OK, records in: 5059, records dropped: 1025 output_compression: NoCompression
Jan 29 12:10:15 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:10:15.595417) EVENT_LOG_v1 {"time_micros": 1769706615595408, "job": 14, "event": "compaction_finished", "compaction_time_micros": 139315, "compaction_time_cpu_micros": 29192, "output_level": 6, "num_output_files": 1, "total_output_size": 8547384, "num_input_records": 5059, "num_output_records": 4034, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 29 12:10:15 np0005601226 ceph-mon[75233]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 12:10:15 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769706615595681, "job": 14, "event": "table_file_deletion", "file_number": 34}
Jan 29 12:10:15 np0005601226 ceph-mon[75233]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 12:10:15 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769706615596171, "job": 14, "event": "table_file_deletion", "file_number": 32}
Jan 29 12:10:15 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:10:15.453826) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:10:15 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:10:15.596214) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:10:15 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:10:15.596219) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:10:15 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:10:15.596220) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:10:15 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:10:15.596222) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:10:15 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:10:15.596223) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:10:15 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:10:15 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/541952172' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:10:15 np0005601226 nova_compute[239456]: 2026-01-29 17:10:15.979 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:10:15 np0005601226 nova_compute[239456]: 2026-01-29 17:10:15.983 239460 DEBUG nova.compute.provider_tree [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:10:16 np0005601226 nova_compute[239456]: 2026-01-29 17:10:16.000 239460 DEBUG nova.scheduler.client.report [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:10:16 np0005601226 nova_compute[239456]: 2026-01-29 17:10:16.002 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 29 12:10:16 np0005601226 nova_compute[239456]: 2026-01-29 17:10:16.002 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:10:16 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:10:16.228 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '7a:bb:ca', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ee:2a:91:86:08:da'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:10:16 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:10:16.229 155625 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 29 12:10:16 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:10:16.230 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ea6bcc65-2563-4fe6-9039-bca7261f4cf7, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:10:16 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v740: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:10:18 np0005601226 nova_compute[239456]: 2026-01-29 17:10:18.002 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:10:18 np0005601226 nova_compute[239456]: 2026-01-29 17:10:18.002 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 29 12:10:18 np0005601226 nova_compute[239456]: 2026-01-29 17:10:18.002 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 29 12:10:18 np0005601226 nova_compute[239456]: 2026-01-29 17:10:18.051 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 29 12:10:18 np0005601226 nova_compute[239456]: 2026-01-29 17:10:18.052 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:10:18 np0005601226 nova_compute[239456]: 2026-01-29 17:10:18.052 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:10:18 np0005601226 nova_compute[239456]: 2026-01-29 17:10:18.052 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:10:18 np0005601226 nova_compute[239456]: 2026-01-29 17:10:18.052 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:10:18 np0005601226 nova_compute[239456]: 2026-01-29 17:10:18.052 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:10:18 np0005601226 nova_compute[239456]: 2026-01-29 17:10:18.053 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 29 12:10:18 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 12:10:18 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:10:18 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 12:10:18 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:10:18 np0005601226 nova_compute[239456]: 2026-01-29 17:10:18.604 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:10:18 np0005601226 nova_compute[239456]: 2026-01-29 17:10:18.604 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:10:18 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v741: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:10:18 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 12:10:18 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 12:10:18 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 12:10:18 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 12:10:18 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 12:10:19 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:10:19 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 12:10:19 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 12:10:19 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 12:10:19 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 12:10:19 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 12:10:19 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 12:10:19 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:10:19 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:10:19 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 12:10:19 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:10:19 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 12:10:19 np0005601226 podman[241681]: 2026-01-29 17:10:19.367263109 +0000 UTC m=+0.020524718 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:10:19 np0005601226 podman[241681]: 2026-01-29 17:10:19.473547036 +0000 UTC m=+0.126808615 container create a22baecc83063172b75a46e54b96034f5d57989d01d2e024597e16911aef81e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:10:19 np0005601226 systemd[1]: Started libpod-conmon-a22baecc83063172b75a46e54b96034f5d57989d01d2e024597e16911aef81e5.scope.
Jan 29 12:10:19 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:10:19 np0005601226 podman[241681]: 2026-01-29 17:10:19.58563715 +0000 UTC m=+0.238898749 container init a22baecc83063172b75a46e54b96034f5d57989d01d2e024597e16911aef81e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 29 12:10:19 np0005601226 podman[241681]: 2026-01-29 17:10:19.594158712 +0000 UTC m=+0.247420291 container start a22baecc83063172b75a46e54b96034f5d57989d01d2e024597e16911aef81e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_goldstine, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:10:19 np0005601226 intelligent_goldstine[241698]: 167 167
Jan 29 12:10:19 np0005601226 systemd[1]: libpod-a22baecc83063172b75a46e54b96034f5d57989d01d2e024597e16911aef81e5.scope: Deactivated successfully.
Jan 29 12:10:19 np0005601226 podman[241681]: 2026-01-29 17:10:19.61430864 +0000 UTC m=+0.267570239 container attach a22baecc83063172b75a46e54b96034f5d57989d01d2e024597e16911aef81e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_goldstine, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 29 12:10:19 np0005601226 podman[241681]: 2026-01-29 17:10:19.614871035 +0000 UTC m=+0.268132614 container died a22baecc83063172b75a46e54b96034f5d57989d01d2e024597e16911aef81e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 29 12:10:19 np0005601226 systemd[1]: var-lib-containers-storage-overlay-d12840dfd94393d66ff9892babacda4c25668a5b4a7ece89fe7d2da379ec3c55-merged.mount: Deactivated successfully.
Jan 29 12:10:19 np0005601226 podman[241681]: 2026-01-29 17:10:19.718502309 +0000 UTC m=+0.371763878 container remove a22baecc83063172b75a46e54b96034f5d57989d01d2e024597e16911aef81e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_goldstine, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Jan 29 12:10:19 np0005601226 systemd[1]: libpod-conmon-a22baecc83063172b75a46e54b96034f5d57989d01d2e024597e16911aef81e5.scope: Deactivated successfully.
Jan 29 12:10:19 np0005601226 podman[241722]: 2026-01-29 17:10:19.824002325 +0000 UTC m=+0.020901408 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:10:19 np0005601226 podman[241722]: 2026-01-29 17:10:19.920542148 +0000 UTC m=+0.117441211 container create 96c421fec203ac79cc745d83f1eb1e76e5c6a2ea82973ccd7384b3f4f312e4bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_euclid, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 29 12:10:20 np0005601226 systemd[1]: Started libpod-conmon-96c421fec203ac79cc745d83f1eb1e76e5c6a2ea82973ccd7384b3f4f312e4bf.scope.
Jan 29 12:10:20 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:10:20 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10820d6056544c53d2ebd8e9b616d37f0abf34697c04ff2508fc9e2bc1f45c6c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:10:20 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10820d6056544c53d2ebd8e9b616d37f0abf34697c04ff2508fc9e2bc1f45c6c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:10:20 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10820d6056544c53d2ebd8e9b616d37f0abf34697c04ff2508fc9e2bc1f45c6c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:10:20 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10820d6056544c53d2ebd8e9b616d37f0abf34697c04ff2508fc9e2bc1f45c6c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:10:20 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10820d6056544c53d2ebd8e9b616d37f0abf34697c04ff2508fc9e2bc1f45c6c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 12:10:20 np0005601226 podman[241722]: 2026-01-29 17:10:20.077764498 +0000 UTC m=+0.274663581 container init 96c421fec203ac79cc745d83f1eb1e76e5c6a2ea82973ccd7384b3f4f312e4bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_euclid, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:10:20 np0005601226 podman[241722]: 2026-01-29 17:10:20.083795971 +0000 UTC m=+0.280695034 container start 96c421fec203ac79cc745d83f1eb1e76e5c6a2ea82973ccd7384b3f4f312e4bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_euclid, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:10:20 np0005601226 podman[241722]: 2026-01-29 17:10:20.115113302 +0000 UTC m=+0.312012365 container attach 96c421fec203ac79cc745d83f1eb1e76e5c6a2ea82973ccd7384b3f4f312e4bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 29 12:10:20 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:10:20 np0005601226 elated_euclid[241738]: --> passed data devices: 0 physical, 3 LVM
Jan 29 12:10:20 np0005601226 elated_euclid[241738]: --> All data devices are unavailable
Jan 29 12:10:20 np0005601226 systemd[1]: libpod-96c421fec203ac79cc745d83f1eb1e76e5c6a2ea82973ccd7384b3f4f312e4bf.scope: Deactivated successfully.
Jan 29 12:10:20 np0005601226 podman[241758]: 2026-01-29 17:10:20.565676399 +0000 UTC m=+0.026405767 container died 96c421fec203ac79cc745d83f1eb1e76e5c6a2ea82973ccd7384b3f4f312e4bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 12:10:20 np0005601226 systemd[1]: var-lib-containers-storage-overlay-10820d6056544c53d2ebd8e9b616d37f0abf34697c04ff2508fc9e2bc1f45c6c-merged.mount: Deactivated successfully.
Jan 29 12:10:20 np0005601226 podman[241758]: 2026-01-29 17:10:20.65665333 +0000 UTC m=+0.117382678 container remove 96c421fec203ac79cc745d83f1eb1e76e5c6a2ea82973ccd7384b3f4f312e4bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 29 12:10:20 np0005601226 systemd[1]: libpod-conmon-96c421fec203ac79cc745d83f1eb1e76e5c6a2ea82973ccd7384b3f4f312e4bf.scope: Deactivated successfully.
Jan 29 12:10:20 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v742: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:10:21 np0005601226 podman[241836]: 2026-01-29 17:10:21.069427582 +0000 UTC m=+0.054682766 container create 667ce2f083189a3c03fa0e5f0596817f1473544839d3e0c14244270066c58f9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True)
Jan 29 12:10:21 np0005601226 systemd[1]: Started libpod-conmon-667ce2f083189a3c03fa0e5f0596817f1473544839d3e0c14244270066c58f9f.scope.
Jan 29 12:10:21 np0005601226 podman[241836]: 2026-01-29 17:10:21.03400351 +0000 UTC m=+0.019258724 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:10:21 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:10:21 np0005601226 podman[241836]: 2026-01-29 17:10:21.177698362 +0000 UTC m=+0.162953566 container init 667ce2f083189a3c03fa0e5f0596817f1473544839d3e0c14244270066c58f9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 29 12:10:21 np0005601226 podman[241836]: 2026-01-29 17:10:21.183081589 +0000 UTC m=+0.168336773 container start 667ce2f083189a3c03fa0e5f0596817f1473544839d3e0c14244270066c58f9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_mcclintock, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 29 12:10:21 np0005601226 heuristic_mcclintock[241852]: 167 167
Jan 29 12:10:21 np0005601226 systemd[1]: libpod-667ce2f083189a3c03fa0e5f0596817f1473544839d3e0c14244270066c58f9f.scope: Deactivated successfully.
Jan 29 12:10:21 np0005601226 podman[241836]: 2026-01-29 17:10:21.196886344 +0000 UTC m=+0.182141548 container attach 667ce2f083189a3c03fa0e5f0596817f1473544839d3e0c14244270066c58f9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_mcclintock, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 29 12:10:21 np0005601226 podman[241836]: 2026-01-29 17:10:21.197608024 +0000 UTC m=+0.182863208 container died 667ce2f083189a3c03fa0e5f0596817f1473544839d3e0c14244270066c58f9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_mcclintock, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 29 12:10:21 np0005601226 systemd[1]: var-lib-containers-storage-overlay-91c2f0f7d52ebeeae09f2e756ea809bd32765b8c67d63cf94b02380739767de4-merged.mount: Deactivated successfully.
Jan 29 12:10:21 np0005601226 podman[241836]: 2026-01-29 17:10:21.311494717 +0000 UTC m=+0.296749901 container remove 667ce2f083189a3c03fa0e5f0596817f1473544839d3e0c14244270066c58f9f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_mcclintock, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 12:10:21 np0005601226 systemd[1]: libpod-conmon-667ce2f083189a3c03fa0e5f0596817f1473544839d3e0c14244270066c58f9f.scope: Deactivated successfully.
Jan 29 12:10:21 np0005601226 podman[241878]: 2026-01-29 17:10:21.492346848 +0000 UTC m=+0.087661991 container create cf748016300e950c431f33f0b371825579aed40856b38207870be0dd3e89592b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_lehmann, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:10:21 np0005601226 podman[241878]: 2026-01-29 17:10:21.427013185 +0000 UTC m=+0.022328358 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:10:21 np0005601226 systemd[1]: Started libpod-conmon-cf748016300e950c431f33f0b371825579aed40856b38207870be0dd3e89592b.scope.
Jan 29 12:10:21 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:10:21 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40d5ac5d8a22575cc684ead6b469242df9814791088c8ccffe62e273c05033b9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:10:21 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40d5ac5d8a22575cc684ead6b469242df9814791088c8ccffe62e273c05033b9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:10:21 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40d5ac5d8a22575cc684ead6b469242df9814791088c8ccffe62e273c05033b9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:10:21 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40d5ac5d8a22575cc684ead6b469242df9814791088c8ccffe62e273c05033b9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:10:21 np0005601226 podman[241878]: 2026-01-29 17:10:21.610604181 +0000 UTC m=+0.205919344 container init cf748016300e950c431f33f0b371825579aed40856b38207870be0dd3e89592b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 29 12:10:21 np0005601226 podman[241878]: 2026-01-29 17:10:21.618013012 +0000 UTC m=+0.213328155 container start cf748016300e950c431f33f0b371825579aed40856b38207870be0dd3e89592b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True)
Jan 29 12:10:21 np0005601226 podman[241878]: 2026-01-29 17:10:21.631491249 +0000 UTC m=+0.226806392 container attach cf748016300e950c431f33f0b371825579aed40856b38207870be0dd3e89592b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2)
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]: {
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:    "0": [
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:        {
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:            "devices": [
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:                "/dev/loop3"
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:            ],
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:            "lv_name": "ceph_lv0",
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:            "lv_size": "21470642176",
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:            "lv_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:            "name": "ceph_lv0",
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:            "tags": {
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:                "ceph.block_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:                "ceph.cluster_name": "ceph",
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:                "ceph.crush_device_class": "",
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:                "ceph.encrypted": "0",
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:                "ceph.objectstore": "bluestore",
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:                "ceph.osd_fsid": "0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1",
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:                "ceph.osd_id": "0",
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:                "ceph.type": "block",
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:                "ceph.vdo": "0",
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:                "ceph.with_tpm": "0"
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:            },
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:            "type": "block",
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:            "vg_name": "ceph_vg0"
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:        }
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:    ],
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:    "1": [
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:        {
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:            "devices": [
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:                "/dev/loop4"
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:            ],
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:            "lv_name": "ceph_lv1",
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:            "lv_size": "21470642176",
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b59b9ee3-7bef-4274-a8bf-0f9cce011ae7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:            "lv_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:            "name": "ceph_lv1",
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:            "tags": {
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:                "ceph.block_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:                "ceph.cluster_name": "ceph",
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:                "ceph.crush_device_class": "",
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:                "ceph.encrypted": "0",
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:                "ceph.objectstore": "bluestore",
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:                "ceph.osd_fsid": "b59b9ee3-7bef-4274-a8bf-0f9cce011ae7",
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:                "ceph.osd_id": "1",
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:                "ceph.type": "block",
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:                "ceph.vdo": "0",
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:                "ceph.with_tpm": "0"
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:            },
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:            "type": "block",
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:            "vg_name": "ceph_vg1"
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:        }
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:    ],
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:    "2": [
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:        {
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:            "devices": [
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:                "/dev/loop5"
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:            ],
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:            "lv_name": "ceph_lv2",
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:            "lv_size": "21470642176",
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=791d3808-828d-4f85-a3de-28df49f6a6ef,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:            "lv_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:            "name": "ceph_lv2",
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:            "tags": {
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:                "ceph.block_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:                "ceph.cluster_name": "ceph",
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:                "ceph.crush_device_class": "",
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:                "ceph.encrypted": "0",
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:                "ceph.objectstore": "bluestore",
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:                "ceph.osd_fsid": "791d3808-828d-4f85-a3de-28df49f6a6ef",
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:                "ceph.osd_id": "2",
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:                "ceph.type": "block",
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:                "ceph.vdo": "0",
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:                "ceph.with_tpm": "0"
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:            },
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:            "type": "block",
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:            "vg_name": "ceph_vg2"
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:        }
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]:    ]
Jan 29 12:10:21 np0005601226 suspicious_lehmann[241895]: }
Jan 29 12:10:21 np0005601226 systemd[1]: libpod-cf748016300e950c431f33f0b371825579aed40856b38207870be0dd3e89592b.scope: Deactivated successfully.
Jan 29 12:10:21 np0005601226 podman[241878]: 2026-01-29 17:10:21.930831768 +0000 UTC m=+0.526146921 container died cf748016300e950c431f33f0b371825579aed40856b38207870be0dd3e89592b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:10:22 np0005601226 systemd[1]: var-lib-containers-storage-overlay-40d5ac5d8a22575cc684ead6b469242df9814791088c8ccffe62e273c05033b9-merged.mount: Deactivated successfully.
Jan 29 12:10:22 np0005601226 podman[241878]: 2026-01-29 17:10:22.099735756 +0000 UTC m=+0.695050899 container remove cf748016300e950c431f33f0b371825579aed40856b38207870be0dd3e89592b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 29 12:10:22 np0005601226 systemd[1]: libpod-conmon-cf748016300e950c431f33f0b371825579aed40856b38207870be0dd3e89592b.scope: Deactivated successfully.
Jan 29 12:10:22 np0005601226 podman[241978]: 2026-01-29 17:10:22.499541756 +0000 UTC m=+0.042554348 container create 4d8f51f8a928bb6f7c04a09808b3f031ffbfa837f05e3aee3c6d41e2cd8d8e22 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_rubin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 29 12:10:22 np0005601226 systemd[1]: Started libpod-conmon-4d8f51f8a928bb6f7c04a09808b3f031ffbfa837f05e3aee3c6d41e2cd8d8e22.scope.
Jan 29 12:10:22 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:10:22 np0005601226 podman[241978]: 2026-01-29 17:10:22.571719315 +0000 UTC m=+0.114731807 container init 4d8f51f8a928bb6f7c04a09808b3f031ffbfa837f05e3aee3c6d41e2cd8d8e22 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:10:22 np0005601226 podman[241978]: 2026-01-29 17:10:22.481166206 +0000 UTC m=+0.024178708 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:10:22 np0005601226 podman[241978]: 2026-01-29 17:10:22.577122292 +0000 UTC m=+0.120134764 container start 4d8f51f8a928bb6f7c04a09808b3f031ffbfa837f05e3aee3c6d41e2cd8d8e22 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 29 12:10:22 np0005601226 blissful_rubin[241994]: 167 167
Jan 29 12:10:22 np0005601226 systemd[1]: libpod-4d8f51f8a928bb6f7c04a09808b3f031ffbfa837f05e3aee3c6d41e2cd8d8e22.scope: Deactivated successfully.
Jan 29 12:10:22 np0005601226 conmon[241994]: conmon 4d8f51f8a928bb6f7c04 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4d8f51f8a928bb6f7c04a09808b3f031ffbfa837f05e3aee3c6d41e2cd8d8e22.scope/container/memory.events
Jan 29 12:10:22 np0005601226 podman[241978]: 2026-01-29 17:10:22.585186882 +0000 UTC m=+0.128199374 container attach 4d8f51f8a928bb6f7c04a09808b3f031ffbfa837f05e3aee3c6d41e2cd8d8e22 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_rubin, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 29 12:10:22 np0005601226 podman[241978]: 2026-01-29 17:10:22.585547601 +0000 UTC m=+0.128560083 container died 4d8f51f8a928bb6f7c04a09808b3f031ffbfa837f05e3aee3c6d41e2cd8d8e22 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_rubin, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 29 12:10:22 np0005601226 systemd[1]: var-lib-containers-storage-overlay-7a02d1b3bb9ef68ccfdf0dba206d91f1c515d9e01c2a8496e9e67b822c102b96-merged.mount: Deactivated successfully.
Jan 29 12:10:22 np0005601226 podman[241978]: 2026-01-29 17:10:22.760706438 +0000 UTC m=+0.303718920 container remove 4d8f51f8a928bb6f7c04a09808b3f031ffbfa837f05e3aee3c6d41e2cd8d8e22 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 29 12:10:22 np0005601226 systemd[1]: libpod-conmon-4d8f51f8a928bb6f7c04a09808b3f031ffbfa837f05e3aee3c6d41e2cd8d8e22.scope: Deactivated successfully.
Jan 29 12:10:22 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v743: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:10:22 np0005601226 podman[242020]: 2026-01-29 17:10:22.895949462 +0000 UTC m=+0.052430505 container create b54f48190f6e97e034b9f44b0f1685eb9d3ac46f6143cacd1b61d34a4c8c0d20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:10:22 np0005601226 systemd[1]: Started libpod-conmon-b54f48190f6e97e034b9f44b0f1685eb9d3ac46f6143cacd1b61d34a4c8c0d20.scope.
Jan 29 12:10:22 np0005601226 podman[242020]: 2026-01-29 17:10:22.863260884 +0000 UTC m=+0.019741947 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:10:22 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:10:22 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8c01cf027d0c4dc70d350a157a5885a1eb8b0f1537414d96dbbb220497ed05f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:10:22 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8c01cf027d0c4dc70d350a157a5885a1eb8b0f1537414d96dbbb220497ed05f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:10:22 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8c01cf027d0c4dc70d350a157a5885a1eb8b0f1537414d96dbbb220497ed05f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:10:22 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8c01cf027d0c4dc70d350a157a5885a1eb8b0f1537414d96dbbb220497ed05f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:10:23 np0005601226 podman[242020]: 2026-01-29 17:10:23.013808823 +0000 UTC m=+0.170289886 container init b54f48190f6e97e034b9f44b0f1685eb9d3ac46f6143cacd1b61d34a4c8c0d20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_pasteur, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 12:10:23 np0005601226 podman[242020]: 2026-01-29 17:10:23.020116764 +0000 UTC m=+0.176597807 container start b54f48190f6e97e034b9f44b0f1685eb9d3ac46f6143cacd1b61d34a4c8c0d20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_pasteur, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:10:23 np0005601226 podman[242020]: 2026-01-29 17:10:23.02875268 +0000 UTC m=+0.185233743 container attach b54f48190f6e97e034b9f44b0f1685eb9d3ac46f6143cacd1b61d34a4c8c0d20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_pasteur, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 29 12:10:23 np0005601226 lvm[242116]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 12:10:23 np0005601226 lvm[242114]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 12:10:23 np0005601226 lvm[242116]: VG ceph_vg1 finished
Jan 29 12:10:23 np0005601226 lvm[242114]: VG ceph_vg0 finished
Jan 29 12:10:23 np0005601226 lvm[242118]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 12:10:23 np0005601226 lvm[242118]: VG ceph_vg2 finished
Jan 29 12:10:23 np0005601226 optimistic_pasteur[242036]: {}
Jan 29 12:10:23 np0005601226 systemd[1]: libpod-b54f48190f6e97e034b9f44b0f1685eb9d3ac46f6143cacd1b61d34a4c8c0d20.scope: Deactivated successfully.
Jan 29 12:10:23 np0005601226 systemd[1]: libpod-b54f48190f6e97e034b9f44b0f1685eb9d3ac46f6143cacd1b61d34a4c8c0d20.scope: Consumed 1.107s CPU time.
Jan 29 12:10:23 np0005601226 podman[242020]: 2026-01-29 17:10:23.806171635 +0000 UTC m=+0.962652698 container died b54f48190f6e97e034b9f44b0f1685eb9d3ac46f6143cacd1b61d34a4c8c0d20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_pasteur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 29 12:10:23 np0005601226 systemd[1]: var-lib-containers-storage-overlay-a8c01cf027d0c4dc70d350a157a5885a1eb8b0f1537414d96dbbb220497ed05f-merged.mount: Deactivated successfully.
Jan 29 12:10:23 np0005601226 podman[242020]: 2026-01-29 17:10:23.979993886 +0000 UTC m=+1.136474929 container remove b54f48190f6e97e034b9f44b0f1685eb9d3ac46f6143cacd1b61d34a4c8c0d20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_pasteur, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:10:24 np0005601226 systemd[1]: libpod-conmon-b54f48190f6e97e034b9f44b0f1685eb9d3ac46f6143cacd1b61d34a4c8c0d20.scope: Deactivated successfully.
Jan 29 12:10:24 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 12:10:24 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:10:24 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 12:10:24 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:10:24 np0005601226 podman[242134]: 2026-01-29 17:10:24.099729387 +0000 UTC m=+0.054008197 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 29 12:10:24 np0005601226 podman[242136]: 2026-01-29 17:10:24.148698457 +0000 UTC m=+0.102847614 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 29 12:10:24 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v744: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:10:25 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:10:25 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:10:25 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:10:26 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v745: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:10:27 np0005601226 ceph-osd[85858]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 29 12:10:27 np0005601226 ceph-osd[85858]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 5804 writes, 24K keys, 5804 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 5804 writes, 917 syncs, 6.33 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 204 writes, 306 keys, 204 commit groups, 1.0 writes per commit group, ingest: 0.11 MB, 0.00 MB/s#012Interval WAL: 204 writes, 102 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55d1d22818d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55d1d22818d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Jan 29 12:10:28 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v746: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:10:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:10:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:10:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3537634970' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:10:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:10:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3537634970' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:10:30 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v747: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:10:31 np0005601226 ceph-osd[86917]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 29 12:10:31 np0005601226 ceph-osd[86917]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.2 total, 600.0 interval#012Cumulative writes: 7017 writes, 29K keys, 7017 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 7017 writes, 1285 syncs, 5.46 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 224 writes, 336 keys, 224 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s#012Interval WAL: 224 writes, 112 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f5108558d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f5108558d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 
0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_
Jan 29 12:10:32 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v748: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:10:34 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v749: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:10:35 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:10:36 np0005601226 ceph-osd[87958]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 29 12:10:36 np0005601226 ceph-osd[87958]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.3 total, 600.0 interval
Cumulative writes: 5769 writes, 24K keys, 5769 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
Cumulative WAL: 5769 writes, 874 syncs, 6.60 writes per sync, written: 0.02 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 272 writes, 408 keys, 272 commit groups, 1.0 writes per commit group, ingest: 0.14 MB, 0.00 MB/s
Interval WAL: 272 writes, 136 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.028       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.028       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.028       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.3 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55a688dad8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.3 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55a688dad8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.3 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Jan 29 12:10:36 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v750: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:10:38 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v751: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
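The `pgmap` DBG records that recur every couple of seconds in this log have a fixed shape (version, PG count, PG state summary, then data/used/avail figures). A minimal sketch of pulling those fields out of one such line; the regex and the field names are ours, not Ceph's:

```python
import re

# Matches e.g. "pgmap v748: 305 pgs: 305 active+clean; 461 KiB data,
# 137 MiB used, 60 GiB / 60 GiB avail"
PGMAP_RE = re.compile(
    r"pgmap v(?P<version>\d+): (?P<pg_total>\d+) pgs: (?P<pg_states>.*?); "
    r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
    r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail"
)

def parse_pgmap(line: str) -> dict:
    """Return the pgmap fields from a ceph-mgr cluster-log line, or {}."""
    m = PGMAP_RE.search(line)
    return m.groupdict() if m else {}
```

Feeding the `pgmap v751` record above through `parse_pgmap` yields the version, the total PG count, and the human-readable capacity strings as separate fields.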
Jan 29 12:10:39 np0005601226 ceph-mgr[75527]: [devicehealth INFO root] Check health
Jan 29 12:10:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:10:40.272 155625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:10:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:10:40.272 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:10:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:10:40.272 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
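Many records in this export carry rsyslog/journald control-character escapes: the ceph-osd RocksDB dumps embed `#012` (octal 012, a newline) where the original message had line breaks, and the oslo.log lines above end in `#033[00m` (octal 033 is ESC, so this is the ANSI color-reset sequence). A minimal sketch of undoing that escaping when post-processing such a log; the function name is ours:

```python
import re

def unescape_syslog(text: str) -> str:
    """Convert rsyslog-style #ooo octal escapes (e.g. #012 newline,
    #011 tab, #033 ESC) back into the original control characters.
    Sequences like "#2" (fewer than three octal digits) are left alone."""
    return re.sub(r"#([0-7]{3})", lambda m: chr(int(m.group(1), 8)), text)
```

Running one of the `db_impl.cc:1111` records through this restores the multi-line RocksDB stats dump, while identifiers such as `BinnedLRUCache@0x55f5108558d0#2` are untouched because `#2` is not a three-digit octal escape.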
Jan 29 12:10:40 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:10:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2026-01-29_17:10:40
Jan 29 12:10:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 12:10:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] do_upmap
Jan 29 12:10:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] pools ['default.rgw.meta', 'images', 'default.rgw.log', '.rgw.root', 'backups', 'default.rgw.control', 'volumes', 'cephfs.cephfs.meta', '.mgr', 'cephfs.cephfs.data', 'vms']
Jan 29 12:10:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 12:10:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:10:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:10:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:10:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:10:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:10:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:10:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 12:10:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:10:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 12:10:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:10:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:10:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:10:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:10:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:10:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:10:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:10:40 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v752: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:10:42 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v753: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:10:44 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v754: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:10:45 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:10:46 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v755: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:10:48 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v756: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:10:50 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:10:50 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v757: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:10:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 12:10:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:10:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 12:10:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:10:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:10:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:10:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:10:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:10:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:10:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:10:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:10:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:10:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.8121643970586627e-06 of space, bias 4.0, pg target 0.002174597276470395 quantized to 16 (current 16)
Jan 29 12:10:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:10:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:10:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:10:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 29 12:10:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:10:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 29 12:10:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:10:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:10:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:10:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 29 12:10:52 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v758: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:10:54 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v759: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:10:54 np0005601226 podman[242206]: 2026-01-29 17:10:54.903899373 +0000 UTC m=+0.076262842 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 29 12:10:54 np0005601226 podman[242207]: 2026-01-29 17:10:54.943930981 +0000 UTC m=+0.116267189 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Jan 29 12:10:55 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:10:56 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v760: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:10:58 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v761: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:11:00 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:11:00 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v762: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:11:02 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v763: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:11:04 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v764: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:11:05 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:11:06 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v765: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:11:08 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v766: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:11:10 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:11:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:11:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:11:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:11:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:11:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:11:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:11:10 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v767: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:11:12 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v768: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:11:14 np0005601226 nova_compute[239456]: 2026-01-29 17:11:14.599 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:11:14 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v769: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:11:15 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:11:16 np0005601226 nova_compute[239456]: 2026-01-29 17:11:16.535 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:11:16 np0005601226 nova_compute[239456]: 2026-01-29 17:11:16.580 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:11:16 np0005601226 nova_compute[239456]: 2026-01-29 17:11:16.581 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:11:16 np0005601226 nova_compute[239456]: 2026-01-29 17:11:16.581 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:11:16 np0005601226 nova_compute[239456]: 2026-01-29 17:11:16.581 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 29 12:11:16 np0005601226 nova_compute[239456]: 2026-01-29 17:11:16.581 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:11:16 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v770: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:11:17 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:11:17 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/634827691' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:11:17 np0005601226 nova_compute[239456]: 2026-01-29 17:11:17.079 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:11:17 np0005601226 nova_compute[239456]: 2026-01-29 17:11:17.203 239460 WARNING nova.virt.libvirt.driver [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 29 12:11:17 np0005601226 nova_compute[239456]: 2026-01-29 17:11:17.204 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5143MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 29 12:11:17 np0005601226 nova_compute[239456]: 2026-01-29 17:11:17.205 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:11:17 np0005601226 nova_compute[239456]: 2026-01-29 17:11:17.205 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:11:17 np0005601226 nova_compute[239456]: 2026-01-29 17:11:17.535 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 29 12:11:17 np0005601226 nova_compute[239456]: 2026-01-29 17:11:17.536 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 29 12:11:17 np0005601226 nova_compute[239456]: 2026-01-29 17:11:17.563 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:11:18 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:11:18 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1725506042' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:11:18 np0005601226 nova_compute[239456]: 2026-01-29 17:11:18.130 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:11:18 np0005601226 nova_compute[239456]: 2026-01-29 17:11:18.135 239460 DEBUG nova.compute.provider_tree [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:11:18 np0005601226 nova_compute[239456]: 2026-01-29 17:11:18.203 239460 DEBUG nova.scheduler.client.report [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:11:18 np0005601226 nova_compute[239456]: 2026-01-29 17:11:18.206 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 29 12:11:18 np0005601226 nova_compute[239456]: 2026-01-29 17:11:18.206 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:11:18 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v771: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:11:19 np0005601226 nova_compute[239456]: 2026-01-29 17:11:19.276 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:11:19 np0005601226 nova_compute[239456]: 2026-01-29 17:11:19.276 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 29 12:11:19 np0005601226 nova_compute[239456]: 2026-01-29 17:11:19.276 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 29 12:11:19 np0005601226 nova_compute[239456]: 2026-01-29 17:11:19.427 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 29 12:11:19 np0005601226 nova_compute[239456]: 2026-01-29 17:11:19.428 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:11:19 np0005601226 nova_compute[239456]: 2026-01-29 17:11:19.428 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:11:19 np0005601226 nova_compute[239456]: 2026-01-29 17:11:19.428 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:11:19 np0005601226 nova_compute[239456]: 2026-01-29 17:11:19.429 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:11:19 np0005601226 nova_compute[239456]: 2026-01-29 17:11:19.429 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 29 12:11:19 np0005601226 nova_compute[239456]: 2026-01-29 17:11:19.604 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:11:19 np0005601226 nova_compute[239456]: 2026-01-29 17:11:19.605 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:11:20 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:11:20 np0005601226 nova_compute[239456]: 2026-01-29 17:11:20.603 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:11:20 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v772: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:11:22 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v773: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:11:24 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 29 12:11:24 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 29 12:11:24 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 12:11:24 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 12:11:24 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 12:11:24 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 12:11:24 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 12:11:24 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v774: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:11:25 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:11:25 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 12:11:25 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 12:11:25 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 12:11:25 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 12:11:25 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 12:11:25 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 12:11:25 np0005601226 podman[242398]: 2026-01-29 17:11:25.266939088 +0000 UTC m=+0.046870714 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 29 12:11:25 np0005601226 podman[242399]: 2026-01-29 17:11:25.288906154 +0000 UTC m=+0.067206686 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, 
org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 29 12:11:25 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:11:25 np0005601226 podman[242477]: 2026-01-29 17:11:25.487455157 +0000 UTC m=+0.017971219 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:11:26 np0005601226 podman[242477]: 2026-01-29 17:11:26.124316655 +0000 UTC m=+0.654832687 container create 45244dfad10bbb02a7e4dba07f2d964fa4598b195a6b380d0e314c509c2c3da6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_keller, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 29 12:11:26 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 29 12:11:26 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 12:11:26 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:11:26 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 12:11:26 np0005601226 systemd[1]: Started libpod-conmon-45244dfad10bbb02a7e4dba07f2d964fa4598b195a6b380d0e314c509c2c3da6.scope.
Jan 29 12:11:26 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:11:26 np0005601226 podman[242477]: 2026-01-29 17:11:26.449331172 +0000 UTC m=+0.979847234 container init 45244dfad10bbb02a7e4dba07f2d964fa4598b195a6b380d0e314c509c2c3da6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 29 12:11:26 np0005601226 podman[242477]: 2026-01-29 17:11:26.456544188 +0000 UTC m=+0.987060210 container start 45244dfad10bbb02a7e4dba07f2d964fa4598b195a6b380d0e314c509c2c3da6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_keller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 12:11:26 np0005601226 blissful_keller[242493]: 167 167
Jan 29 12:11:26 np0005601226 systemd[1]: libpod-45244dfad10bbb02a7e4dba07f2d964fa4598b195a6b380d0e314c509c2c3da6.scope: Deactivated successfully.
Jan 29 12:11:26 np0005601226 podman[242477]: 2026-01-29 17:11:26.787991411 +0000 UTC m=+1.318507433 container attach 45244dfad10bbb02a7e4dba07f2d964fa4598b195a6b380d0e314c509c2c3da6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_keller, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 12:11:26 np0005601226 podman[242477]: 2026-01-29 17:11:26.788469703 +0000 UTC m=+1.318985735 container died 45244dfad10bbb02a7e4dba07f2d964fa4598b195a6b380d0e314c509c2c3da6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_keller, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default)
Jan 29 12:11:26 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v775: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:11:27 np0005601226 systemd[1]: var-lib-containers-storage-overlay-762104fc4f1a3c1dfc22ebce4715f14e6f96ff398266603eece07b9a1c133fdc-merged.mount: Deactivated successfully.
Jan 29 12:11:27 np0005601226 podman[242477]: 2026-01-29 17:11:27.635405257 +0000 UTC m=+2.165921279 container remove 45244dfad10bbb02a7e4dba07f2d964fa4598b195a6b380d0e314c509c2c3da6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 12:11:27 np0005601226 systemd[1]: libpod-conmon-45244dfad10bbb02a7e4dba07f2d964fa4598b195a6b380d0e314c509c2c3da6.scope: Deactivated successfully.
Jan 29 12:11:27 np0005601226 podman[242518]: 2026-01-29 17:11:27.738468817 +0000 UTC m=+0.021134655 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:11:27 np0005601226 podman[242518]: 2026-01-29 17:11:27.875747675 +0000 UTC m=+0.158413493 container create 6c65f615c1537d9d3bf1e90eb9c1558ff4d47161c09ee21cd1c974a4b36a1fdb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_tu, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:11:27 np0005601226 systemd[1]: Started libpod-conmon-6c65f615c1537d9d3bf1e90eb9c1558ff4d47161c09ee21cd1c974a4b36a1fdb.scope.
Jan 29 12:11:27 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:11:27 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19363b3e89997502d306071665f1e51625db782d5fe605881fe365d81dd192e0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:11:27 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19363b3e89997502d306071665f1e51625db782d5fe605881fe365d81dd192e0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:11:27 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19363b3e89997502d306071665f1e51625db782d5fe605881fe365d81dd192e0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:11:27 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19363b3e89997502d306071665f1e51625db782d5fe605881fe365d81dd192e0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:11:27 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19363b3e89997502d306071665f1e51625db782d5fe605881fe365d81dd192e0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 12:11:28 np0005601226 podman[242518]: 2026-01-29 17:11:28.041736124 +0000 UTC m=+0.324401942 container init 6c65f615c1537d9d3bf1e90eb9c1558ff4d47161c09ee21cd1c974a4b36a1fdb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 29 12:11:28 np0005601226 podman[242518]: 2026-01-29 17:11:28.04970337 +0000 UTC m=+0.332369178 container start 6c65f615c1537d9d3bf1e90eb9c1558ff4d47161c09ee21cd1c974a4b36a1fdb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_tu, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:11:28 np0005601226 podman[242518]: 2026-01-29 17:11:28.100042277 +0000 UTC m=+0.382708135 container attach 6c65f615c1537d9d3bf1e90eb9c1558ff4d47161c09ee21cd1c974a4b36a1fdb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_tu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 29 12:11:28 np0005601226 trusting_tu[242534]: --> passed data devices: 0 physical, 3 LVM
Jan 29 12:11:28 np0005601226 trusting_tu[242534]: --> All data devices are unavailable
Jan 29 12:11:28 np0005601226 systemd[1]: libpod-6c65f615c1537d9d3bf1e90eb9c1558ff4d47161c09ee21cd1c974a4b36a1fdb.scope: Deactivated successfully.
Jan 29 12:11:28 np0005601226 podman[242518]: 2026-01-29 17:11:28.484607443 +0000 UTC m=+0.767273261 container died 6c65f615c1537d9d3bf1e90eb9c1558ff4d47161c09ee21cd1c974a4b36a1fdb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 29 12:11:28 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v776: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 B/s wr, 0 op/s
Jan 29 12:11:29 np0005601226 systemd[1]: var-lib-containers-storage-overlay-19363b3e89997502d306071665f1e51625db782d5fe605881fe365d81dd192e0-merged.mount: Deactivated successfully.
Jan 29 12:11:29 np0005601226 podman[242518]: 2026-01-29 17:11:29.940447515 +0000 UTC m=+2.223113313 container remove 6c65f615c1537d9d3bf1e90eb9c1558ff4d47161c09ee21cd1c974a4b36a1fdb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_tu, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 29 12:11:29 np0005601226 systemd[1]: libpod-conmon-6c65f615c1537d9d3bf1e90eb9c1558ff4d47161c09ee21cd1c974a4b36a1fdb.scope: Deactivated successfully.
Jan 29 12:11:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:11:30 np0005601226 podman[242628]: 2026-01-29 17:11:30.292519097 +0000 UTC m=+0.019449779 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:11:30 np0005601226 podman[242628]: 2026-01-29 17:11:30.63983381 +0000 UTC m=+0.366764492 container create c3815826c9cc8d368ba310db3a79a6590d5972f41bb1b68bad5b9d1092035b03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_ramanujan, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 29 12:11:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:11:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3737613251' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:11:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:11:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3737613251' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:11:30 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v777: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 2 op/s
Jan 29 12:11:31 np0005601226 systemd[1]: Started libpod-conmon-c3815826c9cc8d368ba310db3a79a6590d5972f41bb1b68bad5b9d1092035b03.scope.
Jan 29 12:11:31 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:11:31 np0005601226 podman[242628]: 2026-01-29 17:11:31.201087355 +0000 UTC m=+0.928018037 container init c3815826c9cc8d368ba310db3a79a6590d5972f41bb1b68bad5b9d1092035b03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:11:31 np0005601226 podman[242628]: 2026-01-29 17:11:31.20679404 +0000 UTC m=+0.933724702 container start c3815826c9cc8d368ba310db3a79a6590d5972f41bb1b68bad5b9d1092035b03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_ramanujan, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:11:31 np0005601226 eager_ramanujan[242645]: 167 167
Jan 29 12:11:31 np0005601226 systemd[1]: libpod-c3815826c9cc8d368ba310db3a79a6590d5972f41bb1b68bad5b9d1092035b03.scope: Deactivated successfully.
Jan 29 12:11:31 np0005601226 podman[242628]: 2026-01-29 17:11:31.338731153 +0000 UTC m=+1.065661815 container attach c3815826c9cc8d368ba310db3a79a6590d5972f41bb1b68bad5b9d1092035b03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_ramanujan, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 29 12:11:31 np0005601226 podman[242628]: 2026-01-29 17:11:31.339457323 +0000 UTC m=+1.066387995 container died c3815826c9cc8d368ba310db3a79a6590d5972f41bb1b68bad5b9d1092035b03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_ramanujan, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:11:31 np0005601226 systemd[1]: var-lib-containers-storage-overlay-0aac6f56aa3325435ad8dee71b3cbd0caa900b890cd6b3d0b96ff06d24143b7a-merged.mount: Deactivated successfully.
Jan 29 12:11:32 np0005601226 podman[242628]: 2026-01-29 17:11:32.273933924 +0000 UTC m=+2.000864586 container remove c3815826c9cc8d368ba310db3a79a6590d5972f41bb1b68bad5b9d1092035b03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 12:11:32 np0005601226 systemd[1]: libpod-conmon-c3815826c9cc8d368ba310db3a79a6590d5972f41bb1b68bad5b9d1092035b03.scope: Deactivated successfully.
Jan 29 12:11:32 np0005601226 podman[242668]: 2026-01-29 17:11:32.465555938 +0000 UTC m=+0.113814521 container create f06b8b6a365bb00bec3272b7e296baef232f83a3fd9c3cd9fe1f77d990a838e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_hodgkin, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:11:32 np0005601226 podman[242668]: 2026-01-29 17:11:32.374250669 +0000 UTC m=+0.022509282 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:11:32 np0005601226 systemd[1]: Started libpod-conmon-f06b8b6a365bb00bec3272b7e296baef232f83a3fd9c3cd9fe1f77d990a838e2.scope.
Jan 29 12:11:32 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:11:32 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f055e1ac2ba36dd4eb8e4da6f56858ac0b25804a5cf0e7428f8c6d00e6f8308b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:11:32 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f055e1ac2ba36dd4eb8e4da6f56858ac0b25804a5cf0e7428f8c6d00e6f8308b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:11:32 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f055e1ac2ba36dd4eb8e4da6f56858ac0b25804a5cf0e7428f8c6d00e6f8308b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:11:32 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f055e1ac2ba36dd4eb8e4da6f56858ac0b25804a5cf0e7428f8c6d00e6f8308b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:11:32 np0005601226 podman[242668]: 2026-01-29 17:11:32.784152012 +0000 UTC m=+0.432410595 container init f06b8b6a365bb00bec3272b7e296baef232f83a3fd9c3cd9fe1f77d990a838e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 12:11:32 np0005601226 podman[242668]: 2026-01-29 17:11:32.79107466 +0000 UTC m=+0.439333243 container start f06b8b6a365bb00bec3272b7e296baef232f83a3fd9c3cd9fe1f77d990a838e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_hodgkin, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 29 12:11:32 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v778: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 2 op/s
Jan 29 12:11:33 np0005601226 podman[242668]: 2026-01-29 17:11:33.037643077 +0000 UTC m=+0.685901680 container attach f06b8b6a365bb00bec3272b7e296baef232f83a3fd9c3cd9fe1f77d990a838e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]: {
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:    "0": [
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:        {
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:            "devices": [
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:                "/dev/loop3"
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:            ],
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:            "lv_name": "ceph_lv0",
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:            "lv_size": "21470642176",
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:            "lv_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:            "name": "ceph_lv0",
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:            "tags": {
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:                "ceph.block_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:                "ceph.cluster_name": "ceph",
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:                "ceph.crush_device_class": "",
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:                "ceph.encrypted": "0",
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:                "ceph.objectstore": "bluestore",
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:                "ceph.osd_fsid": "0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1",
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:                "ceph.osd_id": "0",
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:                "ceph.type": "block",
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:                "ceph.vdo": "0",
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:                "ceph.with_tpm": "0"
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:            },
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:            "type": "block",
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:            "vg_name": "ceph_vg0"
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:        }
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:    ],
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:    "1": [
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:        {
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:            "devices": [
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:                "/dev/loop4"
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:            ],
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:            "lv_name": "ceph_lv1",
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:            "lv_size": "21470642176",
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b59b9ee3-7bef-4274-a8bf-0f9cce011ae7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:            "lv_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:            "name": "ceph_lv1",
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:            "tags": {
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:                "ceph.block_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:                "ceph.cluster_name": "ceph",
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:                "ceph.crush_device_class": "",
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:                "ceph.encrypted": "0",
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:                "ceph.objectstore": "bluestore",
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:                "ceph.osd_fsid": "b59b9ee3-7bef-4274-a8bf-0f9cce011ae7",
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:                "ceph.osd_id": "1",
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:                "ceph.type": "block",
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:                "ceph.vdo": "0",
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:                "ceph.with_tpm": "0"
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:            },
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:            "type": "block",
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:            "vg_name": "ceph_vg1"
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:        }
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:    ],
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:    "2": [
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:        {
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:            "devices": [
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:                "/dev/loop5"
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:            ],
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:            "lv_name": "ceph_lv2",
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:            "lv_size": "21470642176",
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=791d3808-828d-4f85-a3de-28df49f6a6ef,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:            "lv_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:            "name": "ceph_lv2",
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:            "tags": {
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:                "ceph.block_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:                "ceph.cluster_name": "ceph",
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:                "ceph.crush_device_class": "",
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:                "ceph.encrypted": "0",
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:                "ceph.objectstore": "bluestore",
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:                "ceph.osd_fsid": "791d3808-828d-4f85-a3de-28df49f6a6ef",
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:                "ceph.osd_id": "2",
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:                "ceph.type": "block",
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:                "ceph.vdo": "0",
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:                "ceph.with_tpm": "0"
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:            },
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:            "type": "block",
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:            "vg_name": "ceph_vg2"
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:        }
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]:    ]
Jan 29 12:11:33 np0005601226 gifted_hodgkin[242685]: }
Jan 29 12:11:33 np0005601226 systemd[1]: libpod-f06b8b6a365bb00bec3272b7e296baef232f83a3fd9c3cd9fe1f77d990a838e2.scope: Deactivated successfully.
Jan 29 12:11:33 np0005601226 podman[242668]: 2026-01-29 17:11:33.072790042 +0000 UTC m=+0.721048645 container died f06b8b6a365bb00bec3272b7e296baef232f83a3fd9c3cd9fe1f77d990a838e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_hodgkin, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:11:34 np0005601226 systemd[1]: var-lib-containers-storage-overlay-f055e1ac2ba36dd4eb8e4da6f56858ac0b25804a5cf0e7428f8c6d00e6f8308b-merged.mount: Deactivated successfully.
Jan 29 12:11:34 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v779: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 0 B/s wr, 16 op/s
Jan 29 12:11:35 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:11:36 np0005601226 podman[242668]: 2026-01-29 17:11:36.225907732 +0000 UTC m=+3.874166325 container remove f06b8b6a365bb00bec3272b7e296baef232f83a3fd9c3cd9fe1f77d990a838e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 29 12:11:36 np0005601226 systemd[1]: libpod-conmon-f06b8b6a365bb00bec3272b7e296baef232f83a3fd9c3cd9fe1f77d990a838e2.scope: Deactivated successfully.
Jan 29 12:11:36 np0005601226 podman[242769]: 2026-01-29 17:11:36.588726797 +0000 UTC m=+0.025470202 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:11:36 np0005601226 podman[242769]: 2026-01-29 17:11:36.812847314 +0000 UTC m=+0.249590689 container create 7f5250344b58960a0a05401ad2ee125cbcba5a9807fadd18ab5ffb21e1692451 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_blackburn, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:11:36 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v780: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 0 B/s wr, 16 op/s
Jan 29 12:11:37 np0005601226 systemd[1]: Started libpod-conmon-7f5250344b58960a0a05401ad2ee125cbcba5a9807fadd18ab5ffb21e1692451.scope.
Jan 29 12:11:37 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:11:37 np0005601226 podman[242769]: 2026-01-29 17:11:37.177043367 +0000 UTC m=+0.613786742 container init 7f5250344b58960a0a05401ad2ee125cbcba5a9807fadd18ab5ffb21e1692451 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_blackburn, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 29 12:11:37 np0005601226 podman[242769]: 2026-01-29 17:11:37.18194134 +0000 UTC m=+0.618684705 container start 7f5250344b58960a0a05401ad2ee125cbcba5a9807fadd18ab5ffb21e1692451 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 29 12:11:37 np0005601226 cranky_blackburn[242785]: 167 167
Jan 29 12:11:37 np0005601226 systemd[1]: libpod-7f5250344b58960a0a05401ad2ee125cbcba5a9807fadd18ab5ffb21e1692451.scope: Deactivated successfully.
Jan 29 12:11:37 np0005601226 conmon[242785]: conmon 7f5250344b58960a0a05 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7f5250344b58960a0a05401ad2ee125cbcba5a9807fadd18ab5ffb21e1692451.scope/container/memory.events
Jan 29 12:11:37 np0005601226 podman[242769]: 2026-01-29 17:11:37.325398925 +0000 UTC m=+0.762142300 container attach 7f5250344b58960a0a05401ad2ee125cbcba5a9807fadd18ab5ffb21e1692451 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_blackburn, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:11:37 np0005601226 podman[242769]: 2026-01-29 17:11:37.325868098 +0000 UTC m=+0.762611453 container died 7f5250344b58960a0a05401ad2ee125cbcba5a9807fadd18ab5ffb21e1692451 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_blackburn, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:11:37 np0005601226 systemd[1]: var-lib-containers-storage-overlay-1e478732adcb6d9c019144e1441d0ba303d959a1d9b86c53af728f1fd4693232-merged.mount: Deactivated successfully.
Jan 29 12:11:38 np0005601226 podman[242769]: 2026-01-29 17:11:38.099736947 +0000 UTC m=+1.536480302 container remove 7f5250344b58960a0a05401ad2ee125cbcba5a9807fadd18ab5ffb21e1692451 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 29 12:11:38 np0005601226 systemd[1]: libpod-conmon-7f5250344b58960a0a05401ad2ee125cbcba5a9807fadd18ab5ffb21e1692451.scope: Deactivated successfully.
Jan 29 12:11:38 np0005601226 podman[242809]: 2026-01-29 17:11:38.226013727 +0000 UTC m=+0.021618398 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:11:38 np0005601226 podman[242809]: 2026-01-29 17:11:38.335772399 +0000 UTC m=+0.131377050 container create 004b27c15a98d1a149784d9dd5e937d996751bd568164f266673a4fd4a60b46b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_shannon, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 29 12:11:38 np0005601226 systemd[1]: Started libpod-conmon-004b27c15a98d1a149784d9dd5e937d996751bd568164f266673a4fd4a60b46b.scope.
Jan 29 12:11:38 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:11:38 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e8a2367908cdcdae0f3c44f67361965afe20b1c16a7b3703e8397cc656a16e3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:11:38 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e8a2367908cdcdae0f3c44f67361965afe20b1c16a7b3703e8397cc656a16e3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:11:38 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e8a2367908cdcdae0f3c44f67361965afe20b1c16a7b3703e8397cc656a16e3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:11:38 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e8a2367908cdcdae0f3c44f67361965afe20b1c16a7b3703e8397cc656a16e3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:11:38 np0005601226 podman[242809]: 2026-01-29 17:11:38.701558503 +0000 UTC m=+0.497163204 container init 004b27c15a98d1a149784d9dd5e937d996751bd568164f266673a4fd4a60b46b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_shannon, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:11:38 np0005601226 podman[242809]: 2026-01-29 17:11:38.706149238 +0000 UTC m=+0.501753919 container start 004b27c15a98d1a149784d9dd5e937d996751bd568164f266673a4fd4a60b46b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 29 12:11:38 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v781: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 0 B/s wr, 22 op/s
Jan 29 12:11:38 np0005601226 podman[242809]: 2026-01-29 17:11:38.961477954 +0000 UTC m=+0.757082605 container attach 004b27c15a98d1a149784d9dd5e937d996751bd568164f266673a4fd4a60b46b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:11:39 np0005601226 lvm[242906]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 12:11:39 np0005601226 lvm[242908]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 12:11:39 np0005601226 lvm[242906]: VG ceph_vg0 finished
Jan 29 12:11:39 np0005601226 lvm[242908]: VG ceph_vg1 finished
Jan 29 12:11:39 np0005601226 lvm[242910]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 12:11:39 np0005601226 lvm[242910]: VG ceph_vg2 finished
Jan 29 12:11:39 np0005601226 sad_shannon[242829]: {}
Jan 29 12:11:39 np0005601226 systemd[1]: libpod-004b27c15a98d1a149784d9dd5e937d996751bd568164f266673a4fd4a60b46b.scope: Deactivated successfully.
Jan 29 12:11:39 np0005601226 podman[242809]: 2026-01-29 17:11:39.448645596 +0000 UTC m=+1.244250257 container died 004b27c15a98d1a149784d9dd5e937d996751bd568164f266673a4fd4a60b46b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 29 12:11:39 np0005601226 systemd[1]: var-lib-containers-storage-overlay-8e8a2367908cdcdae0f3c44f67361965afe20b1c16a7b3703e8397cc656a16e3-merged.mount: Deactivated successfully.
Jan 29 12:11:39 np0005601226 podman[242809]: 2026-01-29 17:11:39.861624571 +0000 UTC m=+1.657229222 container remove 004b27c15a98d1a149784d9dd5e937d996751bd568164f266673a4fd4a60b46b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True)
Jan 29 12:11:39 np0005601226 systemd[1]: libpod-conmon-004b27c15a98d1a149784d9dd5e937d996751bd568164f266673a4fd4a60b46b.scope: Deactivated successfully.
Jan 29 12:11:39 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 12:11:40 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:11:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:11:40.273 155625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:11:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:11:40.274 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:11:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:11:40.274 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:11:40 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 12:11:40 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:11:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2026-01-29_17:11:40
Jan 29 12:11:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 12:11:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] do_upmap
Jan 29 12:11:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] pools ['default.rgw.control', 'images', 'vms', '.mgr', 'cephfs.cephfs.meta', 'backups', '.rgw.root', 'default.rgw.log', 'volumes', 'cephfs.cephfs.data', 'default.rgw.meta']
Jan 29 12:11:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 12:11:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:11:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:11:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:11:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:11:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:11:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:11:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 12:11:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:11:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 12:11:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:11:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:11:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:11:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:11:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:11:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:11:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:11:40 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:11:40 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v782: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 25 op/s
Jan 29 12:11:41 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:11:41 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:11:42 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v783: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 24 op/s
Jan 29 12:11:44 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v784: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 0 B/s wr, 44 op/s
Jan 29 12:11:45 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:11:46 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v785: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 30 op/s
Jan 29 12:11:48 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v786: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Jan 29 12:11:50 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:11:50 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v787: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 37 op/s
Jan 29 12:11:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 12:11:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:11:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 12:11:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:11:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:11:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:11:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:11:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:11:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:11:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:11:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:11:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:11:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.8121643970586627e-06 of space, bias 4.0, pg target 0.002174597276470395 quantized to 16 (current 16)
Jan 29 12:11:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:11:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:11:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:11:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 29 12:11:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:11:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 29 12:11:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:11:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:11:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:11:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 29 12:11:52 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v788: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 32 op/s
Jan 29 12:11:55 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:11:55 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v789: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 0 B/s wr, 31 op/s
Jan 29 12:11:55 np0005601226 podman[242952]: 2026-01-29 17:11:55.903135182 +0000 UTC m=+0.072713635 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202)
Jan 29 12:11:55 np0005601226 podman[242954]: 2026-01-29 17:11:55.92589591 +0000 UTC m=+0.095323319 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 29 12:11:57 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v790: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 0 B/s wr, 11 op/s
Jan 29 12:11:59 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v791: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 0 B/s wr, 11 op/s
Jan 29 12:12:00 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:12:01 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v792: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 0 B/s wr, 2 op/s
Jan 29 12:12:03 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v793: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:12:03 np0005601226 ceph-mon[75233]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Jan 29 12:12:03 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:12:03.944948) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 29 12:12:03 np0005601226 ceph-mon[75233]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Jan 29 12:12:03 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769706723945016, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1143, "num_deletes": 251, "total_data_size": 1760377, "memory_usage": 1793488, "flush_reason": "Manual Compaction"}
Jan 29 12:12:03 np0005601226 ceph-mon[75233]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Jan 29 12:12:03 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769706723992929, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 1713701, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 15367, "largest_seqno": 16509, "table_properties": {"data_size": 1708229, "index_size": 2867, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 11619, "raw_average_key_size": 19, "raw_value_size": 1697262, "raw_average_value_size": 2857, "num_data_blocks": 132, "num_entries": 594, "num_filter_entries": 594, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769706615, "oldest_key_time": 1769706615, "file_creation_time": 1769706723, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "affa2982-d59d-4189-b5dd-817a80fada55", "db_session_id": "3LVBT2JQJ5HZ0LRVKGW6", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 29 12:12:03 np0005601226 ceph-mon[75233]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 48033 microseconds, and 3949 cpu microseconds.
Jan 29 12:12:03 np0005601226 ceph-mon[75233]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 29 12:12:04 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:12:03.992984) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 1713701 bytes OK
Jan 29 12:12:04 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:12:03.993007) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Jan 29 12:12:04 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:12:04.096404) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Jan 29 12:12:04 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:12:04.096456) EVENT_LOG_v1 {"time_micros": 1769706724096447, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 29 12:12:04 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:12:04.096481) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 29 12:12:04 np0005601226 ceph-mon[75233]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 1755101, prev total WAL file size 1755101, number of live WAL files 2.
Jan 29 12:12:04 np0005601226 ceph-mon[75233]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 12:12:04 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:12:04.097375) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Jan 29 12:12:04 np0005601226 ceph-mon[75233]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 29 12:12:04 np0005601226 ceph-mon[75233]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(1673KB)], [35(8347KB)]
Jan 29 12:12:04 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769706724097450, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 10261085, "oldest_snapshot_seqno": -1}
Jan 29 12:12:04 np0005601226 ceph-mon[75233]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4114 keys, 8434838 bytes, temperature: kUnknown
Jan 29 12:12:04 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769706724218471, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 8434838, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8404767, "index_size": 18697, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10309, "raw_key_size": 100724, "raw_average_key_size": 24, "raw_value_size": 8327785, "raw_average_value_size": 2024, "num_data_blocks": 788, "num_entries": 4114, "num_filter_entries": 4114, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769705351, "oldest_key_time": 0, "file_creation_time": 1769706724, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "affa2982-d59d-4189-b5dd-817a80fada55", "db_session_id": "3LVBT2JQJ5HZ0LRVKGW6", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Jan 29 12:12:04 np0005601226 ceph-mon[75233]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 29 12:12:04 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:12:04.218741) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 8434838 bytes
Jan 29 12:12:04 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:12:04.278194) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 84.7 rd, 69.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 8.2 +0.0 blob) out(8.0 +0.0 blob), read-write-amplify(10.9) write-amplify(4.9) OK, records in: 4628, records dropped: 514 output_compression: NoCompression
Jan 29 12:12:04 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:12:04.278269) EVENT_LOG_v1 {"time_micros": 1769706724278254, "job": 16, "event": "compaction_finished", "compaction_time_micros": 121091, "compaction_time_cpu_micros": 13470, "output_level": 6, "num_output_files": 1, "total_output_size": 8434838, "num_input_records": 4628, "num_output_records": 4114, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 29 12:12:04 np0005601226 ceph-mon[75233]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 12:12:04 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769706724278659, "job": 16, "event": "table_file_deletion", "file_number": 37}
Jan 29 12:12:04 np0005601226 ceph-mon[75233]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 12:12:04 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769706724279545, "job": 16, "event": "table_file_deletion", "file_number": 35}
Jan 29 12:12:04 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:12:04.097162) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:12:04 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:12:04.279616) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:12:04 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:12:04.279622) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:12:04 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:12:04.279624) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:12:04 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:12:04.279626) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:12:04 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:12:04.279628) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:12:05 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:12:05 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v794: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:12:07 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v795: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:12:09 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v796: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:12:10 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:12:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:12:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:12:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:12:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:12:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:12:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:12:11 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v797: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:12:13 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v798: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:12:14 np0005601226 nova_compute[239456]: 2026-01-29 17:12:14.604 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:12:14 np0005601226 nova_compute[239456]: 2026-01-29 17:12:14.605 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 29 12:12:14 np0005601226 nova_compute[239456]: 2026-01-29 17:12:14.645 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 29 12:12:14 np0005601226 nova_compute[239456]: 2026-01-29 17:12:14.646 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:12:14 np0005601226 nova_compute[239456]: 2026-01-29 17:12:14.646 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 29 12:12:14 np0005601226 nova_compute[239456]: 2026-01-29 17:12:14.673 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:12:15 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:12:15 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v799: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:12:15 np0005601226 nova_compute[239456]: 2026-01-29 17:12:15.697 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:12:15 np0005601226 nova_compute[239456]: 2026-01-29 17:12:15.754 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:12:15 np0005601226 nova_compute[239456]: 2026-01-29 17:12:15.755 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:12:15 np0005601226 nova_compute[239456]: 2026-01-29 17:12:15.756 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:12:15 np0005601226 nova_compute[239456]: 2026-01-29 17:12:15.756 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 29 12:12:15 np0005601226 nova_compute[239456]: 2026-01-29 17:12:15.756 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:12:16 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:12:16 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3711098359' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:12:16 np0005601226 nova_compute[239456]: 2026-01-29 17:12:16.272 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:12:16 np0005601226 nova_compute[239456]: 2026-01-29 17:12:16.386 239460 WARNING nova.virt.libvirt.driver [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 29 12:12:16 np0005601226 nova_compute[239456]: 2026-01-29 17:12:16.387 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5131MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 29 12:12:16 np0005601226 nova_compute[239456]: 2026-01-29 17:12:16.388 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:12:16 np0005601226 nova_compute[239456]: 2026-01-29 17:12:16.388 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:12:16 np0005601226 nova_compute[239456]: 2026-01-29 17:12:16.674 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 29 12:12:16 np0005601226 nova_compute[239456]: 2026-01-29 17:12:16.674 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 29 12:12:16 np0005601226 nova_compute[239456]: 2026-01-29 17:12:16.692 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:12:17 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:12:17 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1021064188' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:12:17 np0005601226 nova_compute[239456]: 2026-01-29 17:12:17.176 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:12:17 np0005601226 nova_compute[239456]: 2026-01-29 17:12:17.180 239460 DEBUG nova.compute.provider_tree [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:12:17 np0005601226 nova_compute[239456]: 2026-01-29 17:12:17.247 239460 DEBUG nova.scheduler.client.report [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:12:17 np0005601226 nova_compute[239456]: 2026-01-29 17:12:17.249 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 29 12:12:17 np0005601226 nova_compute[239456]: 2026-01-29 17:12:17.249 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.861s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:12:17 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v800: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:12:19 np0005601226 nova_compute[239456]: 2026-01-29 17:12:19.156 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:12:19 np0005601226 nova_compute[239456]: 2026-01-29 17:12:19.157 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 29 12:12:19 np0005601226 nova_compute[239456]: 2026-01-29 17:12:19.157 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 29 12:12:19 np0005601226 nova_compute[239456]: 2026-01-29 17:12:19.172 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 29 12:12:19 np0005601226 nova_compute[239456]: 2026-01-29 17:12:19.172 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:12:19 np0005601226 nova_compute[239456]: 2026-01-29 17:12:19.172 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:12:19 np0005601226 nova_compute[239456]: 2026-01-29 17:12:19.172 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 29 12:12:19 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v801: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:12:19 np0005601226 nova_compute[239456]: 2026-01-29 17:12:19.603 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:12:19 np0005601226 nova_compute[239456]: 2026-01-29 17:12:19.604 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:12:20 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:12:20 np0005601226 nova_compute[239456]: 2026-01-29 17:12:20.599 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:12:20 np0005601226 nova_compute[239456]: 2026-01-29 17:12:20.602 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:12:21 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v802: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:12:22 np0005601226 nova_compute[239456]: 2026-01-29 17:12:22.604 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:12:23 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v803: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:12:25 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:12:25 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v804: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:12:26 np0005601226 podman[243042]: 2026-01-29 17:12:26.876986173 +0000 UTC m=+0.039975995 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 29 12:12:26 np0005601226 podman[243043]: 2026-01-29 17:12:26.914850301 +0000 UTC m=+0.079003864 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 29 12:12:27 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v805: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:12:29 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v806: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:12:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:12:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:12:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/387560087' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:12:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:12:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/387560087' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:12:31 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v807: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:12:33 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v808: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:12:35 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:12:35 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v809: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:12:37 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v810: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:12:39 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v811: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:12:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:12:40.274 155625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:12:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:12:40.274 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:12:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:12:40.274 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:12:40 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:12:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2026-01-29_17:12:40
Jan 29 12:12:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 12:12:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] do_upmap
Jan 29 12:12:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] pools ['default.rgw.control', '.rgw.root', 'default.rgw.meta', 'images', '.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'backups', 'default.rgw.log', 'vms', 'volumes']
Jan 29 12:12:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 12:12:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:12:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:12:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:12:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:12:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:12:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:12:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 12:12:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:12:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 12:12:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:12:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:12:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:12:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:12:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:12:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:12:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:12:41 np0005601226 podman[243182]: 2026-01-29 17:12:41.308178254 +0000 UTC m=+0.128724938 container exec 79fb58d438a044b744a5a22031968fb38c458a504a88d1d9ce361ec7f4a99a0b (image=quay.io/ceph/ceph:v20, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 29 12:12:41 np0005601226 podman[243182]: 2026-01-29 17:12:41.399663169 +0000 UTC m=+0.220209843 container exec_died 79fb58d438a044b744a5a22031968fb38c458a504a88d1d9ce361ec7f4a99a0b (image=quay.io/ceph/ceph:v20, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 29 12:12:41 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v812: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:12:42 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 12:12:42 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:12:42 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 12:12:42 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:12:42 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 12:12:42 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 12:12:42 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 12:12:42 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 12:12:42 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 12:12:42 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:12:42 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 12:12:42 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 12:12:42 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 12:12:42 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 12:12:42 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 12:12:42 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 12:12:43 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:12:43 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:12:43 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 12:12:43 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:12:43 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 12:12:43 np0005601226 podman[243506]: 2026-01-29 17:12:43.197177305 +0000 UTC m=+0.082823955 container create 5ff7eb3baa4a17947e491409dc7e979af8d0486272efd3193ceaed8c59350ce0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:12:43 np0005601226 podman[243506]: 2026-01-29 17:12:43.13498706 +0000 UTC m=+0.020633740 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:12:43 np0005601226 systemd[1]: Started libpod-conmon-5ff7eb3baa4a17947e491409dc7e979af8d0486272efd3193ceaed8c59350ce0.scope.
Jan 29 12:12:43 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:12:43 np0005601226 podman[243506]: 2026-01-29 17:12:43.307290687 +0000 UTC m=+0.192937367 container init 5ff7eb3baa4a17947e491409dc7e979af8d0486272efd3193ceaed8c59350ce0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_gauss, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:12:43 np0005601226 podman[243506]: 2026-01-29 17:12:43.314797717 +0000 UTC m=+0.200444377 container start 5ff7eb3baa4a17947e491409dc7e979af8d0486272efd3193ceaed8c59350ce0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:12:43 np0005601226 admiring_gauss[243523]: 167 167
Jan 29 12:12:43 np0005601226 systemd[1]: libpod-5ff7eb3baa4a17947e491409dc7e979af8d0486272efd3193ceaed8c59350ce0.scope: Deactivated successfully.
Jan 29 12:12:43 np0005601226 podman[243506]: 2026-01-29 17:12:43.3765072 +0000 UTC m=+0.262153880 container attach 5ff7eb3baa4a17947e491409dc7e979af8d0486272efd3193ceaed8c59350ce0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 29 12:12:43 np0005601226 podman[243506]: 2026-01-29 17:12:43.376949821 +0000 UTC m=+0.262596471 container died 5ff7eb3baa4a17947e491409dc7e979af8d0486272efd3193ceaed8c59350ce0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_gauss, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 29 12:12:43 np0005601226 systemd[1]: var-lib-containers-storage-overlay-da3fa862d07feeb02a60de3226457ecfff0f9ac1f0e6852afe26a8d0009516c7-merged.mount: Deactivated successfully.
Jan 29 12:12:43 np0005601226 podman[243506]: 2026-01-29 17:12:43.464139152 +0000 UTC m=+0.349785802 container remove 5ff7eb3baa4a17947e491409dc7e979af8d0486272efd3193ceaed8c59350ce0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 29 12:12:43 np0005601226 systemd[1]: libpod-conmon-5ff7eb3baa4a17947e491409dc7e979af8d0486272efd3193ceaed8c59350ce0.scope: Deactivated successfully.
Jan 29 12:12:43 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v813: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:12:43 np0005601226 podman[243549]: 2026-01-29 17:12:43.599168196 +0000 UTC m=+0.038952588 container create 70431a2231d33260d34243d66ec6a49fdbcb3387d6d2b3f9e33084fad1206764 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_einstein, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 29 12:12:43 np0005601226 systemd[1]: Started libpod-conmon-70431a2231d33260d34243d66ec6a49fdbcb3387d6d2b3f9e33084fad1206764.scope.
Jan 29 12:12:43 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:12:43 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2d14cad9f6d2c025609b2db4f9c7585169799f434fa9dfb33a57874e367cbcb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:12:43 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2d14cad9f6d2c025609b2db4f9c7585169799f434fa9dfb33a57874e367cbcb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:12:43 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2d14cad9f6d2c025609b2db4f9c7585169799f434fa9dfb33a57874e367cbcb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:12:43 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2d14cad9f6d2c025609b2db4f9c7585169799f434fa9dfb33a57874e367cbcb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:12:43 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2d14cad9f6d2c025609b2db4f9c7585169799f434fa9dfb33a57874e367cbcb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 12:12:43 np0005601226 podman[243549]: 2026-01-29 17:12:43.578743712 +0000 UTC m=+0.018528134 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:12:43 np0005601226 podman[243549]: 2026-01-29 17:12:43.701639894 +0000 UTC m=+0.141424306 container init 70431a2231d33260d34243d66ec6a49fdbcb3387d6d2b3f9e33084fad1206764 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3)
Jan 29 12:12:43 np0005601226 podman[243549]: 2026-01-29 17:12:43.707628793 +0000 UTC m=+0.147413185 container start 70431a2231d33260d34243d66ec6a49fdbcb3387d6d2b3f9e33084fad1206764 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:12:43 np0005601226 podman[243549]: 2026-01-29 17:12:43.717545597 +0000 UTC m=+0.157330019 container attach 70431a2231d33260d34243d66ec6a49fdbcb3387d6d2b3f9e33084fad1206764 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:12:44 np0005601226 fervent_einstein[243565]: --> passed data devices: 0 physical, 3 LVM
Jan 29 12:12:44 np0005601226 fervent_einstein[243565]: --> All data devices are unavailable
Jan 29 12:12:44 np0005601226 systemd[1]: libpod-70431a2231d33260d34243d66ec6a49fdbcb3387d6d2b3f9e33084fad1206764.scope: Deactivated successfully.
Jan 29 12:12:44 np0005601226 conmon[243565]: conmon 70431a2231d33260d342 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-70431a2231d33260d34243d66ec6a49fdbcb3387d6d2b3f9e33084fad1206764.scope/container/memory.events
Jan 29 12:12:44 np0005601226 podman[243549]: 2026-01-29 17:12:44.112972493 +0000 UTC m=+0.552756885 container died 70431a2231d33260d34243d66ec6a49fdbcb3387d6d2b3f9e33084fad1206764 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_einstein, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 29 12:12:44 np0005601226 systemd[1]: var-lib-containers-storage-overlay-c2d14cad9f6d2c025609b2db4f9c7585169799f434fa9dfb33a57874e367cbcb-merged.mount: Deactivated successfully.
Jan 29 12:12:44 np0005601226 podman[243549]: 2026-01-29 17:12:44.289280535 +0000 UTC m=+0.729064927 container remove 70431a2231d33260d34243d66ec6a49fdbcb3387d6d2b3f9e33084fad1206764 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_einstein, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 29 12:12:44 np0005601226 systemd[1]: libpod-conmon-70431a2231d33260d34243d66ec6a49fdbcb3387d6d2b3f9e33084fad1206764.scope: Deactivated successfully.
Jan 29 12:12:44 np0005601226 podman[243659]: 2026-01-29 17:12:44.642061545 +0000 UTC m=+0.015351289 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:12:44 np0005601226 podman[243659]: 2026-01-29 17:12:44.927137794 +0000 UTC m=+0.300427518 container create d422aae87825ac3b025be9c6c7e8ae13a46bab1e7cf3477b947a075808bb2aee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_villani, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 29 12:12:44 np0005601226 systemd[1]: Started libpod-conmon-d422aae87825ac3b025be9c6c7e8ae13a46bab1e7cf3477b947a075808bb2aee.scope.
Jan 29 12:12:45 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:12:45 np0005601226 podman[243659]: 2026-01-29 17:12:45.225349751 +0000 UTC m=+0.598639475 container init d422aae87825ac3b025be9c6c7e8ae13a46bab1e7cf3477b947a075808bb2aee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_villani, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 29 12:12:45 np0005601226 podman[243659]: 2026-01-29 17:12:45.230324394 +0000 UTC m=+0.603614118 container start d422aae87825ac3b025be9c6c7e8ae13a46bab1e7cf3477b947a075808bb2aee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_villani, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 12:12:45 np0005601226 blissful_villani[243674]: 167 167
Jan 29 12:12:45 np0005601226 systemd[1]: libpod-d422aae87825ac3b025be9c6c7e8ae13a46bab1e7cf3477b947a075808bb2aee.scope: Deactivated successfully.
Jan 29 12:12:45 np0005601226 podman[243659]: 2026-01-29 17:12:45.288987595 +0000 UTC m=+0.662277519 container attach d422aae87825ac3b025be9c6c7e8ae13a46bab1e7cf3477b947a075808bb2aee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_villani, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 29 12:12:45 np0005601226 podman[243659]: 2026-01-29 17:12:45.289339385 +0000 UTC m=+0.662629109 container died d422aae87825ac3b025be9c6c7e8ae13a46bab1e7cf3477b947a075808bb2aee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_villani, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 29 12:12:45 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:12:45 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v814: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:12:45 np0005601226 systemd[1]: var-lib-containers-storage-overlay-c3c9a60d5ba6d33c0e25f37a4288ae442c0f41cdf44aedfefcf92914c61cae80-merged.mount: Deactivated successfully.
Jan 29 12:12:46 np0005601226 podman[243659]: 2026-01-29 17:12:46.409582815 +0000 UTC m=+1.782872569 container remove d422aae87825ac3b025be9c6c7e8ae13a46bab1e7cf3477b947a075808bb2aee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_villani, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:12:46 np0005601226 systemd[1]: libpod-conmon-d422aae87825ac3b025be9c6c7e8ae13a46bab1e7cf3477b947a075808bb2aee.scope: Deactivated successfully.
Jan 29 12:12:46 np0005601226 podman[243698]: 2026-01-29 17:12:46.5608053 +0000 UTC m=+0.056129695 container create fd1fc7a1bcbecada7b145a974f8ffb195ac37a62da1f890d76b0f539882753f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 29 12:12:46 np0005601226 podman[243698]: 2026-01-29 17:12:46.529113176 +0000 UTC m=+0.024437591 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:12:46 np0005601226 systemd[1]: Started libpod-conmon-fd1fc7a1bcbecada7b145a974f8ffb195ac37a62da1f890d76b0f539882753f1.scope.
Jan 29 12:12:46 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:12:46 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/faa56ddbad908464474968bfe04946d92049f5cdb9792dd93037fc867db314e4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:12:46 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/faa56ddbad908464474968bfe04946d92049f5cdb9792dd93037fc867db314e4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:12:46 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/faa56ddbad908464474968bfe04946d92049f5cdb9792dd93037fc867db314e4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:12:46 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/faa56ddbad908464474968bfe04946d92049f5cdb9792dd93037fc867db314e4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:12:46 np0005601226 podman[243698]: 2026-01-29 17:12:46.69341577 +0000 UTC m=+0.188740185 container init fd1fc7a1bcbecada7b145a974f8ffb195ac37a62da1f890d76b0f539882753f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Jan 29 12:12:46 np0005601226 podman[243698]: 2026-01-29 17:12:46.69831213 +0000 UTC m=+0.193636525 container start fd1fc7a1bcbecada7b145a974f8ffb195ac37a62da1f890d76b0f539882753f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_rhodes, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 29 12:12:46 np0005601226 podman[243698]: 2026-01-29 17:12:46.754771972 +0000 UTC m=+0.250096397 container attach fd1fc7a1bcbecada7b145a974f8ffb195ac37a62da1f890d76b0f539882753f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_rhodes, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True)
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]: {
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:    "0": [
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:        {
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:            "devices": [
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:                "/dev/loop3"
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:            ],
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:            "lv_name": "ceph_lv0",
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:            "lv_size": "21470642176",
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:            "lv_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:            "name": "ceph_lv0",
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:            "tags": {
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:                "ceph.block_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:                "ceph.cluster_name": "ceph",
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:                "ceph.crush_device_class": "",
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:                "ceph.encrypted": "0",
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:                "ceph.objectstore": "bluestore",
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:                "ceph.osd_fsid": "0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1",
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:                "ceph.osd_id": "0",
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:                "ceph.type": "block",
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:                "ceph.vdo": "0",
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:                "ceph.with_tpm": "0"
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:            },
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:            "type": "block",
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:            "vg_name": "ceph_vg0"
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:        }
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:    ],
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:    "1": [
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:        {
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:            "devices": [
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:                "/dev/loop4"
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:            ],
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:            "lv_name": "ceph_lv1",
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:            "lv_size": "21470642176",
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b59b9ee3-7bef-4274-a8bf-0f9cce011ae7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:            "lv_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:            "name": "ceph_lv1",
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:            "tags": {
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:                "ceph.block_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:                "ceph.cluster_name": "ceph",
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:                "ceph.crush_device_class": "",
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:                "ceph.encrypted": "0",
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:                "ceph.objectstore": "bluestore",
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:                "ceph.osd_fsid": "b59b9ee3-7bef-4274-a8bf-0f9cce011ae7",
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:                "ceph.osd_id": "1",
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:                "ceph.type": "block",
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:                "ceph.vdo": "0",
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:                "ceph.with_tpm": "0"
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:            },
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:            "type": "block",
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:            "vg_name": "ceph_vg1"
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:        }
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:    ],
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:    "2": [
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:        {
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:            "devices": [
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:                "/dev/loop5"
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:            ],
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:            "lv_name": "ceph_lv2",
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:            "lv_size": "21470642176",
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=791d3808-828d-4f85-a3de-28df49f6a6ef,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:            "lv_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:            "name": "ceph_lv2",
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:            "tags": {
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:                "ceph.block_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:                "ceph.cluster_name": "ceph",
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:                "ceph.crush_device_class": "",
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:                "ceph.encrypted": "0",
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:                "ceph.objectstore": "bluestore",
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:                "ceph.osd_fsid": "791d3808-828d-4f85-a3de-28df49f6a6ef",
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:                "ceph.osd_id": "2",
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:                "ceph.type": "block",
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:                "ceph.vdo": "0",
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:                "ceph.with_tpm": "0"
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:            },
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:            "type": "block",
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:            "vg_name": "ceph_vg2"
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:        }
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]:    ]
Jan 29 12:12:46 np0005601226 priceless_rhodes[243715]: }
Jan 29 12:12:46 np0005601226 systemd[1]: libpod-fd1fc7a1bcbecada7b145a974f8ffb195ac37a62da1f890d76b0f539882753f1.scope: Deactivated successfully.
Jan 29 12:12:46 np0005601226 podman[243698]: 2026-01-29 17:12:46.967436663 +0000 UTC m=+0.462761058 container died fd1fc7a1bcbecada7b145a974f8ffb195ac37a62da1f890d76b0f539882753f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_rhodes, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 12:12:47 np0005601226 systemd[1]: var-lib-containers-storage-overlay-faa56ddbad908464474968bfe04946d92049f5cdb9792dd93037fc867db314e4-merged.mount: Deactivated successfully.
Jan 29 12:12:47 np0005601226 podman[243698]: 2026-01-29 17:12:47.273072309 +0000 UTC m=+0.768396704 container remove fd1fc7a1bcbecada7b145a974f8ffb195ac37a62da1f890d76b0f539882753f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_rhodes, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 29 12:12:47 np0005601226 systemd[1]: libpod-conmon-fd1fc7a1bcbecada7b145a974f8ffb195ac37a62da1f890d76b0f539882753f1.scope: Deactivated successfully.
Jan 29 12:12:47 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v815: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:12:47 np0005601226 podman[243802]: 2026-01-29 17:12:47.632328131 +0000 UTC m=+0.020183998 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:12:47 np0005601226 podman[243802]: 2026-01-29 17:12:47.744522178 +0000 UTC m=+0.132378045 container create f31428125b962f6ff1df44e09eb1db0d1028323cf1c112b940056bd041f6bc72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_elgamal, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 29 12:12:47 np0005601226 systemd[1]: Started libpod-conmon-f31428125b962f6ff1df44e09eb1db0d1028323cf1c112b940056bd041f6bc72.scope.
Jan 29 12:12:47 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:12:47 np0005601226 podman[243802]: 2026-01-29 17:12:47.901380193 +0000 UTC m=+0.289236050 container init f31428125b962f6ff1df44e09eb1db0d1028323cf1c112b940056bd041f6bc72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 29 12:12:47 np0005601226 podman[243802]: 2026-01-29 17:12:47.912698014 +0000 UTC m=+0.300553861 container start f31428125b962f6ff1df44e09eb1db0d1028323cf1c112b940056bd041f6bc72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:12:47 np0005601226 stoic_elgamal[243818]: 167 167
Jan 29 12:12:47 np0005601226 systemd[1]: libpod-f31428125b962f6ff1df44e09eb1db0d1028323cf1c112b940056bd041f6bc72.scope: Deactivated successfully.
Jan 29 12:12:47 np0005601226 conmon[243818]: conmon f31428125b962f6ff1df <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f31428125b962f6ff1df44e09eb1db0d1028323cf1c112b940056bd041f6bc72.scope/container/memory.events
Jan 29 12:12:47 np0005601226 podman[243802]: 2026-01-29 17:12:47.9568509 +0000 UTC m=+0.344706777 container attach f31428125b962f6ff1df44e09eb1db0d1028323cf1c112b940056bd041f6bc72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_elgamal, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:12:47 np0005601226 podman[243802]: 2026-01-29 17:12:47.95724084 +0000 UTC m=+0.345096687 container died f31428125b962f6ff1df44e09eb1db0d1028323cf1c112b940056bd041f6bc72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 12:12:48 np0005601226 systemd[1]: var-lib-containers-storage-overlay-e7fd948b30c56e49ab5c860c22a04efe5c7edb5597c1dbd6cb2841ac0b90bc54-merged.mount: Deactivated successfully.
Jan 29 12:12:48 np0005601226 podman[243802]: 2026-01-29 17:12:48.388307175 +0000 UTC m=+0.776163012 container remove f31428125b962f6ff1df44e09eb1db0d1028323cf1c112b940056bd041f6bc72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 29 12:12:48 np0005601226 systemd[1]: libpod-conmon-f31428125b962f6ff1df44e09eb1db0d1028323cf1c112b940056bd041f6bc72.scope: Deactivated successfully.
Jan 29 12:12:48 np0005601226 podman[243844]: 2026-01-29 17:12:48.534788714 +0000 UTC m=+0.065143155 container create 61112c464ad92648f2e8707b6d97b5f2f1f382df4e43cfb6c324bf574de43a3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_hopper, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default)
Jan 29 12:12:48 np0005601226 podman[243844]: 2026-01-29 17:12:48.488597005 +0000 UTC m=+0.018951476 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:12:48 np0005601226 systemd[1]: Started libpod-conmon-61112c464ad92648f2e8707b6d97b5f2f1f382df4e43cfb6c324bf574de43a3d.scope.
Jan 29 12:12:48 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:12:48 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e229bb17cd70e5a1e7f1eadf085def4666972fa047a307249aacaa479b82f75/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:12:48 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e229bb17cd70e5a1e7f1eadf085def4666972fa047a307249aacaa479b82f75/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:12:48 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e229bb17cd70e5a1e7f1eadf085def4666972fa047a307249aacaa479b82f75/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:12:48 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e229bb17cd70e5a1e7f1eadf085def4666972fa047a307249aacaa479b82f75/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:12:48 np0005601226 podman[243844]: 2026-01-29 17:12:48.771591108 +0000 UTC m=+0.301945579 container init 61112c464ad92648f2e8707b6d97b5f2f1f382df4e43cfb6c324bf574de43a3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_hopper, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True)
Jan 29 12:12:48 np0005601226 podman[243844]: 2026-01-29 17:12:48.776499458 +0000 UTC m=+0.306853899 container start 61112c464ad92648f2e8707b6d97b5f2f1f382df4e43cfb6c324bf574de43a3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:12:48 np0005601226 podman[243844]: 2026-01-29 17:12:48.909853768 +0000 UTC m=+0.440208239 container attach 61112c464ad92648f2e8707b6d97b5f2f1f382df4e43cfb6c324bf574de43a3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_hopper, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Jan 29 12:12:49 np0005601226 lvm[243940]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 12:12:49 np0005601226 lvm[243940]: VG ceph_vg0 finished
Jan 29 12:12:49 np0005601226 lvm[243941]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 12:12:49 np0005601226 lvm[243941]: VG ceph_vg1 finished
Jan 29 12:12:49 np0005601226 lvm[243943]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 12:12:49 np0005601226 lvm[243943]: VG ceph_vg2 finished
Jan 29 12:12:49 np0005601226 suspicious_hopper[243861]: {}
Jan 29 12:12:49 np0005601226 systemd[1]: libpod-61112c464ad92648f2e8707b6d97b5f2f1f382df4e43cfb6c324bf574de43a3d.scope: Deactivated successfully.
Jan 29 12:12:49 np0005601226 systemd[1]: libpod-61112c464ad92648f2e8707b6d97b5f2f1f382df4e43cfb6c324bf574de43a3d.scope: Consumed 1.017s CPU time.
Jan 29 12:12:49 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v816: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:12:49 np0005601226 podman[243946]: 2026-01-29 17:12:49.538977244 +0000 UTC m=+0.022263554 container died 61112c464ad92648f2e8707b6d97b5f2f1f382df4e43cfb6c324bf574de43a3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_hopper, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 12:12:49 np0005601226 systemd[1]: var-lib-containers-storage-overlay-1e229bb17cd70e5a1e7f1eadf085def4666972fa047a307249aacaa479b82f75-merged.mount: Deactivated successfully.
Jan 29 12:12:49 np0005601226 podman[243946]: 2026-01-29 17:12:49.875968544 +0000 UTC m=+0.359254824 container remove 61112c464ad92648f2e8707b6d97b5f2f1f382df4e43cfb6c324bf574de43a3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_hopper, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 29 12:12:49 np0005601226 systemd[1]: libpod-conmon-61112c464ad92648f2e8707b6d97b5f2f1f382df4e43cfb6c324bf574de43a3d.scope: Deactivated successfully.
Jan 29 12:12:49 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 12:12:49 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:12:49 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 12:12:50 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:12:50 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:12:51 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:12:51 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:12:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 12:12:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:12:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 12:12:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:12:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:12:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:12:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:12:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:12:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:12:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:12:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:12:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:12:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.8121643970586627e-06 of space, bias 4.0, pg target 0.002174597276470395 quantized to 16 (current 16)
Jan 29 12:12:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:12:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:12:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:12:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 29 12:12:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:12:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 29 12:12:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:12:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:12:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:12:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 29 12:12:51 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v817: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:12:53 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v818: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:12:55 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:12:55 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v819: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:12:57 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v820: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:12:57 np0005601226 podman[243987]: 2026-01-29 17:12:57.877026055 +0000 UTC m=+0.047408563 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 29 12:12:57 np0005601226 podman[243988]: 2026-01-29 17:12:57.901296821 +0000 UTC m=+0.070878467 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:12:59 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v821: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:13:00 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:13:01 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v822: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:13:03 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v823: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:13:05 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:13:05 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v824: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:13:07 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v825: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:13:09 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v826: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:13:10 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:13:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:13:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:13:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:13:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:13:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:13:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:13:11 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v827: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:13:13 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v828: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:13:15 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:13:15 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v829: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:13:17 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v830: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:13:17 np0005601226 nova_compute[239456]: 2026-01-29 17:13:17.599 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:13:17 np0005601226 nova_compute[239456]: 2026-01-29 17:13:17.620 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:13:17 np0005601226 nova_compute[239456]: 2026-01-29 17:13:17.652 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:13:17 np0005601226 nova_compute[239456]: 2026-01-29 17:13:17.652 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:13:17 np0005601226 nova_compute[239456]: 2026-01-29 17:13:17.653 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:13:17 np0005601226 nova_compute[239456]: 2026-01-29 17:13:17.653 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 29 12:13:17 np0005601226 nova_compute[239456]: 2026-01-29 17:13:17.653 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:13:18 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:13:18 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/592121332' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:13:18 np0005601226 nova_compute[239456]: 2026-01-29 17:13:18.199 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:13:18 np0005601226 nova_compute[239456]: 2026-01-29 17:13:18.326 239460 WARNING nova.virt.libvirt.driver [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 29 12:13:18 np0005601226 nova_compute[239456]: 2026-01-29 17:13:18.327 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5142MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 29 12:13:18 np0005601226 nova_compute[239456]: 2026-01-29 17:13:18.327 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:13:18 np0005601226 nova_compute[239456]: 2026-01-29 17:13:18.327 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:13:18 np0005601226 nova_compute[239456]: 2026-01-29 17:13:18.537 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 29 12:13:18 np0005601226 nova_compute[239456]: 2026-01-29 17:13:18.538 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 29 12:13:18 np0005601226 nova_compute[239456]: 2026-01-29 17:13:18.618 239460 DEBUG nova.scheduler.client.report [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Refreshing inventories for resource provider 79259295-532c-4a51-8f50-027529735b0c _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 29 12:13:18 np0005601226 nova_compute[239456]: 2026-01-29 17:13:18.715 239460 DEBUG nova.scheduler.client.report [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Updating ProviderTree inventory for provider 79259295-532c-4a51-8f50-027529735b0c from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 29 12:13:18 np0005601226 nova_compute[239456]: 2026-01-29 17:13:18.715 239460 DEBUG nova.compute.provider_tree [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Updating inventory in ProviderTree for provider 79259295-532c-4a51-8f50-027529735b0c with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 29 12:13:18 np0005601226 nova_compute[239456]: 2026-01-29 17:13:18.740 239460 DEBUG nova.scheduler.client.report [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Refreshing aggregate associations for resource provider 79259295-532c-4a51-8f50-027529735b0c, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 29 12:13:18 np0005601226 nova_compute[239456]: 2026-01-29 17:13:18.773 239460 DEBUG nova.scheduler.client.report [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Refreshing trait associations for resource provider 79259295-532c-4a51-8f50-027529735b0c, traits: HW_CPU_X86_SSE4A,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NODE,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AMD_SVM,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_ABM,HW_CPU_X86_MMX,HW_CPU_X86_AVX,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SVM,COMPUTE_RESCUE_BFV,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_BMI2,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE41,HW_CPU_X86_F16C,COMPUTE_DEVICE_TAGGING,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_ISO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 29 12:13:18 np0005601226 nova_compute[239456]: 2026-01-29 17:13:18.791 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:13:19 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:13:19 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1855425367' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:13:19 np0005601226 nova_compute[239456]: 2026-01-29 17:13:19.459 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.668s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:13:19 np0005601226 nova_compute[239456]: 2026-01-29 17:13:19.463 239460 DEBUG nova.compute.provider_tree [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:13:19 np0005601226 nova_compute[239456]: 2026-01-29 17:13:19.499 239460 DEBUG nova.scheduler.client.report [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:13:19 np0005601226 nova_compute[239456]: 2026-01-29 17:13:19.501 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 29 12:13:19 np0005601226 nova_compute[239456]: 2026-01-29 17:13:19.501 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.174s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:13:19 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v831: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:13:20 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:13:20 np0005601226 nova_compute[239456]: 2026-01-29 17:13:20.484 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:13:20 np0005601226 nova_compute[239456]: 2026-01-29 17:13:20.485 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:13:20 np0005601226 nova_compute[239456]: 2026-01-29 17:13:20.604 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:13:20 np0005601226 nova_compute[239456]: 2026-01-29 17:13:20.604 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 29 12:13:20 np0005601226 nova_compute[239456]: 2026-01-29 17:13:20.604 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 29 12:13:20 np0005601226 nova_compute[239456]: 2026-01-29 17:13:20.633 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 29 12:13:20 np0005601226 nova_compute[239456]: 2026-01-29 17:13:20.633 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:13:20 np0005601226 nova_compute[239456]: 2026-01-29 17:13:20.633 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 29 12:13:21 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v832: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:13:21 np0005601226 nova_compute[239456]: 2026-01-29 17:13:21.604 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 12:13:22 np0005601226 nova_compute[239456]: 2026-01-29 17:13:22.599 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 12:13:22 np0005601226 nova_compute[239456]: 2026-01-29 17:13:22.603 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 12:13:23 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v833: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:13:24 np0005601226 nova_compute[239456]: 2026-01-29 17:13:24.603 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 12:13:25 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 29 12:13:25 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v834: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:13:27 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v835: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:13:27 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Jan 29 12:13:27 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Jan 29 12:13:27 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Jan 29 12:13:28 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Jan 29 12:13:28 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Jan 29 12:13:28 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Jan 29 12:13:28 np0005601226 podman[244070]: 2026-01-29 17:13:28.896733209 +0000 UTC m=+0.059904295 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 29 12:13:28 np0005601226 podman[244071]: 2026-01-29 17:13:28.909029356 +0000 UTC m=+0.073734303 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 29 12:13:29 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v838: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 127 B/s rd, 255 B/s wr, 0 op/s
Jan 29 12:13:29 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Jan 29 12:13:29 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Jan 29 12:13:29 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Jan 29 12:13:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:13:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:13:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3759812187' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:13:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:13:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3759812187' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:13:31 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v840: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 341 B/s wr, 1 op/s
Jan 29 12:13:31 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Jan 29 12:13:31 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Jan 29 12:13:31 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Jan 29 12:13:33 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v842: 305 pgs: 305 active+clean; 29 MiB data, 165 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 4.8 MiB/s wr, 63 op/s
Jan 29 12:13:35 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:13:35 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Jan 29 12:13:35 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v843: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 5.9 MiB/s wr, 54 op/s
Jan 29 12:13:35 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Jan 29 12:13:35 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Jan 29 12:13:37 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v845: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.2 MiB/s wr, 48 op/s
Jan 29 12:13:39 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v846: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 46 op/s
Jan 29 12:13:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:13:40.274 155625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 12:13:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:13:40.275 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 12:13:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:13:40.275 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 12:13:40 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:13:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2026-01-29_17:13:40
Jan 29 12:13:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 12:13:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] do_upmap
Jan 29 12:13:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] pools ['volumes', 'default.rgw.meta', 'backups', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.log', 'images', '.mgr', 'vms', 'cephfs.cephfs.meta']
Jan 29 12:13:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 12:13:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:13:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:13:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:13:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:13:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:13:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:13:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 12:13:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:13:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 12:13:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:13:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:13:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:13:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:13:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:13:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:13:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:13:41 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v847: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 4.3 MiB/s wr, 38 op/s
Jan 29 12:13:43 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v848: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s wr, 0 op/s
Jan 29 12:13:45 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:13:45 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v849: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:13:47 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v850: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:13:49 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v851: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:13:50 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:13:50 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 12:13:50 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 12:13:50 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 12:13:50 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 12:13:50 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 12:13:50 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:13:50 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 12:13:50 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 12:13:50 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 12:13:50 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 12:13:50 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 12:13:50 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 12:13:50 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 12:13:50 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:13:50 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 12:13:50 np0005601226 podman[244256]: 2026-01-29 17:13:50.972297991 +0000 UTC m=+0.048041930 container create e188821dd87950d0810e4d343454b2d1ab5eddc5603b1113e4c9a07e592a1904 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_benz, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:13:51 np0005601226 systemd[1]: Started libpod-conmon-e188821dd87950d0810e4d343454b2d1ab5eddc5603b1113e4c9a07e592a1904.scope.
Jan 29 12:13:51 np0005601226 podman[244256]: 2026-01-29 17:13:50.943280529 +0000 UTC m=+0.019024478 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:13:51 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:13:51 np0005601226 podman[244256]: 2026-01-29 17:13:51.080899752 +0000 UTC m=+0.156643691 container init e188821dd87950d0810e4d343454b2d1ab5eddc5603b1113e4c9a07e592a1904 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_benz, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:13:51 np0005601226 podman[244256]: 2026-01-29 17:13:51.085862814 +0000 UTC m=+0.161606763 container start e188821dd87950d0810e4d343454b2d1ab5eddc5603b1113e4c9a07e592a1904 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 29 12:13:51 np0005601226 recursing_benz[244272]: 167 167
Jan 29 12:13:51 np0005601226 systemd[1]: libpod-e188821dd87950d0810e4d343454b2d1ab5eddc5603b1113e4c9a07e592a1904.scope: Deactivated successfully.
Jan 29 12:13:51 np0005601226 conmon[244272]: conmon e188821dd87950d0810e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e188821dd87950d0810e4d343454b2d1ab5eddc5603b1113e4c9a07e592a1904.scope/container/memory.events
Jan 29 12:13:51 np0005601226 podman[244256]: 2026-01-29 17:13:51.094948516 +0000 UTC m=+0.170692465 container attach e188821dd87950d0810e4d343454b2d1ab5eddc5603b1113e4c9a07e592a1904 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 29 12:13:51 np0005601226 podman[244256]: 2026-01-29 17:13:51.095328046 +0000 UTC m=+0.171071995 container died e188821dd87950d0810e4d343454b2d1ab5eddc5603b1113e4c9a07e592a1904 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_benz, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:13:51 np0005601226 systemd[1]: var-lib-containers-storage-overlay-677b94f9653df1c8e11a0e428d5741c9cca4dd0a7de541edc61dac19b7ae5ac6-merged.mount: Deactivated successfully.
Jan 29 12:13:51 np0005601226 podman[244256]: 2026-01-29 17:13:51.168335549 +0000 UTC m=+0.244079488 container remove e188821dd87950d0810e4d343454b2d1ab5eddc5603b1113e4c9a07e592a1904 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:13:51 np0005601226 systemd[1]: libpod-conmon-e188821dd87950d0810e4d343454b2d1ab5eddc5603b1113e4c9a07e592a1904.scope: Deactivated successfully.
Jan 29 12:13:51 np0005601226 podman[244300]: 2026-01-29 17:13:51.290897092 +0000 UTC m=+0.039781721 container create ee45eefc38dfeec76938826a9dc6a1dd1a7578859cba3df6723a2adbd176d499 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_mcclintock, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:13:51 np0005601226 systemd[1]: Started libpod-conmon-ee45eefc38dfeec76938826a9dc6a1dd1a7578859cba3df6723a2adbd176d499.scope.
Jan 29 12:13:51 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:13:51 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8ef2d760854e99b5fe46f6c4cf85df3e7df58e21af9fe5c79342486cb8fbcfd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:13:51 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8ef2d760854e99b5fe46f6c4cf85df3e7df58e21af9fe5c79342486cb8fbcfd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:13:51 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8ef2d760854e99b5fe46f6c4cf85df3e7df58e21af9fe5c79342486cb8fbcfd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:13:51 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8ef2d760854e99b5fe46f6c4cf85df3e7df58e21af9fe5c79342486cb8fbcfd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:13:51 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8ef2d760854e99b5fe46f6c4cf85df3e7df58e21af9fe5c79342486cb8fbcfd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 12:13:51 np0005601226 podman[244300]: 2026-01-29 17:13:51.26978517 +0000 UTC m=+0.018669829 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:13:51 np0005601226 podman[244300]: 2026-01-29 17:13:51.37611203 +0000 UTC m=+0.124996659 container init ee45eefc38dfeec76938826a9dc6a1dd1a7578859cba3df6723a2adbd176d499 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:13:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 12:13:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:13:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 12:13:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:13:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:13:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:13:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:13:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:13:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:13:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:13:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659023462270568 of space, bias 1.0, pg target 0.19977070386811704 quantized to 32 (current 32)
Jan 29 12:13:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:13:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.7956612421726198e-06 of space, bias 4.0, pg target 0.002154793490607144 quantized to 16 (current 16)
Jan 29 12:13:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:13:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:13:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:13:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 29 12:13:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:13:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 29 12:13:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:13:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:13:51 np0005601226 podman[244300]: 2026-01-29 17:13:51.381122323 +0000 UTC m=+0.130006952 container start ee45eefc38dfeec76938826a9dc6a1dd1a7578859cba3df6723a2adbd176d499 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_mcclintock, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:13:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:13:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 29 12:13:51 np0005601226 podman[244300]: 2026-01-29 17:13:51.388262443 +0000 UTC m=+0.137147102 container attach ee45eefc38dfeec76938826a9dc6a1dd1a7578859cba3df6723a2adbd176d499 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_mcclintock, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:13:51 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v852: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:13:51 np0005601226 gifted_mcclintock[244317]: --> passed data devices: 0 physical, 3 LVM
Jan 29 12:13:51 np0005601226 gifted_mcclintock[244317]: --> All data devices are unavailable
Jan 29 12:13:51 np0005601226 systemd[1]: libpod-ee45eefc38dfeec76938826a9dc6a1dd1a7578859cba3df6723a2adbd176d499.scope: Deactivated successfully.
Jan 29 12:13:51 np0005601226 podman[244300]: 2026-01-29 17:13:51.768445023 +0000 UTC m=+0.517329652 container died ee45eefc38dfeec76938826a9dc6a1dd1a7578859cba3df6723a2adbd176d499 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_mcclintock, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 29 12:13:51 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Jan 29 12:13:51 np0005601226 systemd[1]: var-lib-containers-storage-overlay-b8ef2d760854e99b5fe46f6c4cf85df3e7df58e21af9fe5c79342486cb8fbcfd-merged.mount: Deactivated successfully.
Jan 29 12:13:51 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Jan 29 12:13:51 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Jan 29 12:13:51 np0005601226 podman[244300]: 2026-01-29 17:13:51.819290326 +0000 UTC m=+0.568174955 container remove ee45eefc38dfeec76938826a9dc6a1dd1a7578859cba3df6723a2adbd176d499 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 29 12:13:51 np0005601226 systemd[1]: libpod-conmon-ee45eefc38dfeec76938826a9dc6a1dd1a7578859cba3df6723a2adbd176d499.scope: Deactivated successfully.
Jan 29 12:13:52 np0005601226 podman[244409]: 2026-01-29 17:13:52.212991346 +0000 UTC m=+0.088242010 container create 34efcea1b07abb6ee3c65503e1a9f5aa240891dfa7fc4a7f1ba0da30c964c369 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 29 12:13:52 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:13:52.237 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '7a:bb:ca', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ee:2a:91:86:08:da'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:13:52 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:13:52.239 155625 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 29 12:13:52 np0005601226 podman[244409]: 2026-01-29 17:13:52.143166348 +0000 UTC m=+0.018417052 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:13:52 np0005601226 systemd[1]: Started libpod-conmon-34efcea1b07abb6ee3c65503e1a9f5aa240891dfa7fc4a7f1ba0da30c964c369.scope.
Jan 29 12:13:52 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:13:52 np0005601226 podman[244409]: 2026-01-29 17:13:52.336990587 +0000 UTC m=+0.212241281 container init 34efcea1b07abb6ee3c65503e1a9f5aa240891dfa7fc4a7f1ba0da30c964c369 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_hofstadter, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 29 12:13:52 np0005601226 podman[244409]: 2026-01-29 17:13:52.342018361 +0000 UTC m=+0.217269025 container start 34efcea1b07abb6ee3c65503e1a9f5aa240891dfa7fc4a7f1ba0da30c964c369 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 29 12:13:52 np0005601226 vibrant_hofstadter[244425]: 167 167
Jan 29 12:13:52 np0005601226 systemd[1]: libpod-34efcea1b07abb6ee3c65503e1a9f5aa240891dfa7fc4a7f1ba0da30c964c369.scope: Deactivated successfully.
Jan 29 12:13:52 np0005601226 podman[244409]: 2026-01-29 17:13:52.347247189 +0000 UTC m=+0.222497873 container attach 34efcea1b07abb6ee3c65503e1a9f5aa240891dfa7fc4a7f1ba0da30c964c369 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_hofstadter, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 29 12:13:52 np0005601226 podman[244409]: 2026-01-29 17:13:52.347796015 +0000 UTC m=+0.223046699 container died 34efcea1b07abb6ee3c65503e1a9f5aa240891dfa7fc4a7f1ba0da30c964c369 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_hofstadter, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:13:52 np0005601226 systemd[1]: var-lib-containers-storage-overlay-ceff4faa6df95aa3b7eeb96c238ffa62a2e716ba490806353f4a5f9afd066e7f-merged.mount: Deactivated successfully.
Jan 29 12:13:52 np0005601226 podman[244409]: 2026-01-29 17:13:52.395899985 +0000 UTC m=+0.271150649 container remove 34efcea1b07abb6ee3c65503e1a9f5aa240891dfa7fc4a7f1ba0da30c964c369 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:13:52 np0005601226 systemd[1]: libpod-conmon-34efcea1b07abb6ee3c65503e1a9f5aa240891dfa7fc4a7f1ba0da30c964c369.scope: Deactivated successfully.
Jan 29 12:13:52 np0005601226 podman[244448]: 2026-01-29 17:13:52.507067553 +0000 UTC m=+0.020142166 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:13:52 np0005601226 podman[244448]: 2026-01-29 17:13:52.673234937 +0000 UTC m=+0.186309520 container create 0d0b9c48ca4936923fb6b3681203948c062aedbb3e4d1c17f3659ca1b2e3e0d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:13:52 np0005601226 systemd[1]: Started libpod-conmon-0d0b9c48ca4936923fb6b3681203948c062aedbb3e4d1c17f3659ca1b2e3e0d0.scope.
Jan 29 12:13:52 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Jan 29 12:13:52 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Jan 29 12:13:52 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Jan 29 12:13:52 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:13:52 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76760a3a90a38bb73af727ada754edc05ab50e10f9540301aaefe8d4b55a1435/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:13:52 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76760a3a90a38bb73af727ada754edc05ab50e10f9540301aaefe8d4b55a1435/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:13:52 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76760a3a90a38bb73af727ada754edc05ab50e10f9540301aaefe8d4b55a1435/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:13:52 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76760a3a90a38bb73af727ada754edc05ab50e10f9540301aaefe8d4b55a1435/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:13:52 np0005601226 podman[244448]: 2026-01-29 17:13:52.869889682 +0000 UTC m=+0.382964315 container init 0d0b9c48ca4936923fb6b3681203948c062aedbb3e4d1c17f3659ca1b2e3e0d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 29 12:13:52 np0005601226 podman[244448]: 2026-01-29 17:13:52.878776468 +0000 UTC m=+0.391851051 container start 0d0b9c48ca4936923fb6b3681203948c062aedbb3e4d1c17f3659ca1b2e3e0d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_rhodes, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:13:52 np0005601226 podman[244448]: 2026-01-29 17:13:52.884668365 +0000 UTC m=+0.397742948 container attach 0d0b9c48ca4936923fb6b3681203948c062aedbb3e4d1c17f3659ca1b2e3e0d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_rhodes, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]: {
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:    "0": [
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:        {
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:            "devices": [
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:                "/dev/loop3"
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:            ],
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:            "lv_name": "ceph_lv0",
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:            "lv_size": "21470642176",
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:            "lv_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:            "name": "ceph_lv0",
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:            "tags": {
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:                "ceph.block_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:                "ceph.cluster_name": "ceph",
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:                "ceph.crush_device_class": "",
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:                "ceph.encrypted": "0",
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:                "ceph.objectstore": "bluestore",
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:                "ceph.osd_fsid": "0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1",
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:                "ceph.osd_id": "0",
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:                "ceph.type": "block",
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:                "ceph.vdo": "0",
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:                "ceph.with_tpm": "0"
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:            },
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:            "type": "block",
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:            "vg_name": "ceph_vg0"
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:        }
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:    ],
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:    "1": [
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:        {
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:            "devices": [
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:                "/dev/loop4"
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:            ],
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:            "lv_name": "ceph_lv1",
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:            "lv_size": "21470642176",
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b59b9ee3-7bef-4274-a8bf-0f9cce011ae7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:            "lv_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:            "name": "ceph_lv1",
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:            "tags": {
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:                "ceph.block_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:                "ceph.cluster_name": "ceph",
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:                "ceph.crush_device_class": "",
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:                "ceph.encrypted": "0",
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:                "ceph.objectstore": "bluestore",
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:                "ceph.osd_fsid": "b59b9ee3-7bef-4274-a8bf-0f9cce011ae7",
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:                "ceph.osd_id": "1",
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:                "ceph.type": "block",
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:                "ceph.vdo": "0",
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:                "ceph.with_tpm": "0"
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:            },
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:            "type": "block",
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:            "vg_name": "ceph_vg1"
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:        }
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:    ],
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:    "2": [
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:        {
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:            "devices": [
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:                "/dev/loop5"
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:            ],
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:            "lv_name": "ceph_lv2",
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:            "lv_size": "21470642176",
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=791d3808-828d-4f85-a3de-28df49f6a6ef,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:            "lv_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:            "name": "ceph_lv2",
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:            "tags": {
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:                "ceph.block_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:                "ceph.cluster_name": "ceph",
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:                "ceph.crush_device_class": "",
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:                "ceph.encrypted": "0",
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:                "ceph.objectstore": "bluestore",
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:                "ceph.osd_fsid": "791d3808-828d-4f85-a3de-28df49f6a6ef",
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:                "ceph.osd_id": "2",
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:                "ceph.type": "block",
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:                "ceph.vdo": "0",
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:                "ceph.with_tpm": "0"
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:            },
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:            "type": "block",
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:            "vg_name": "ceph_vg2"
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:        }
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]:    ]
Jan 29 12:13:53 np0005601226 clever_rhodes[244465]: }
Jan 29 12:13:53 np0005601226 systemd[1]: libpod-0d0b9c48ca4936923fb6b3681203948c062aedbb3e4d1c17f3659ca1b2e3e0d0.scope: Deactivated successfully.
Jan 29 12:13:53 np0005601226 podman[244448]: 2026-01-29 17:13:53.157408055 +0000 UTC m=+0.670482638 container died 0d0b9c48ca4936923fb6b3681203948c062aedbb3e4d1c17f3659ca1b2e3e0d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_rhodes, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:13:53 np0005601226 systemd[1]: var-lib-containers-storage-overlay-76760a3a90a38bb73af727ada754edc05ab50e10f9540301aaefe8d4b55a1435-merged.mount: Deactivated successfully.
Jan 29 12:13:53 np0005601226 podman[244448]: 2026-01-29 17:13:53.471396652 +0000 UTC m=+0.984471235 container remove 0d0b9c48ca4936923fb6b3681203948c062aedbb3e4d1c17f3659ca1b2e3e0d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 29 12:13:53 np0005601226 systemd[1]: libpod-conmon-0d0b9c48ca4936923fb6b3681203948c062aedbb3e4d1c17f3659ca1b2e3e0d0.scope: Deactivated successfully.
Jan 29 12:13:53 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v855: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 511 B/s wr, 0 op/s
Jan 29 12:13:53 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Jan 29 12:13:53 np0005601226 podman[244548]: 2026-01-29 17:13:53.901561102 +0000 UTC m=+0.074138325 container create 94b42971fe657e8dc9c766aec858d42de7cd3de4997b009576a83e61b0931fd3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_ritchie, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True)
Jan 29 12:13:53 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Jan 29 12:13:53 np0005601226 podman[244548]: 2026-01-29 17:13:53.845912811 +0000 UTC m=+0.018490054 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:13:53 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Jan 29 12:13:53 np0005601226 systemd[1]: Started libpod-conmon-94b42971fe657e8dc9c766aec858d42de7cd3de4997b009576a83e61b0931fd3.scope.
Jan 29 12:13:53 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:13:54 np0005601226 podman[244548]: 2026-01-29 17:13:54.056390244 +0000 UTC m=+0.228967497 container init 94b42971fe657e8dc9c766aec858d42de7cd3de4997b009576a83e61b0931fd3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_ritchie, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 29 12:13:54 np0005601226 podman[244548]: 2026-01-29 17:13:54.062294691 +0000 UTC m=+0.234871914 container start 94b42971fe657e8dc9c766aec858d42de7cd3de4997b009576a83e61b0931fd3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_ritchie, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 12:13:54 np0005601226 blissful_ritchie[244564]: 167 167
Jan 29 12:13:54 np0005601226 systemd[1]: libpod-94b42971fe657e8dc9c766aec858d42de7cd3de4997b009576a83e61b0931fd3.scope: Deactivated successfully.
Jan 29 12:13:54 np0005601226 podman[244548]: 2026-01-29 17:13:54.09608681 +0000 UTC m=+0.268664023 container attach 94b42971fe657e8dc9c766aec858d42de7cd3de4997b009576a83e61b0931fd3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_ritchie, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:13:54 np0005601226 podman[244548]: 2026-01-29 17:13:54.096625725 +0000 UTC m=+0.269202968 container died 94b42971fe657e8dc9c766aec858d42de7cd3de4997b009576a83e61b0931fd3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_ritchie, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:13:54 np0005601226 systemd[1]: var-lib-containers-storage-overlay-9f1aa9dd602d16df318d8b23a45d3fed05a45719180f19f4a55d5eb429a01569-merged.mount: Deactivated successfully.
Jan 29 12:13:54 np0005601226 podman[244548]: 2026-01-29 17:13:54.418424001 +0000 UTC m=+0.591001224 container remove 94b42971fe657e8dc9c766aec858d42de7cd3de4997b009576a83e61b0931fd3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 29 12:13:54 np0005601226 systemd[1]: libpod-conmon-94b42971fe657e8dc9c766aec858d42de7cd3de4997b009576a83e61b0931fd3.scope: Deactivated successfully.
Jan 29 12:13:54 np0005601226 podman[244587]: 2026-01-29 17:13:54.527963736 +0000 UTC m=+0.025114990 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:13:54 np0005601226 podman[244587]: 2026-01-29 17:13:54.676470589 +0000 UTC m=+0.173621863 container create 36b7fca230040fb791b73df48c90fdf2d1798d7fda059aba302af02e47f7a3f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_gates, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:13:54 np0005601226 systemd[1]: Started libpod-conmon-36b7fca230040fb791b73df48c90fdf2d1798d7fda059aba302af02e47f7a3f2.scope.
Jan 29 12:13:54 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:13:54 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13cb3b7646b8a17f538334bfae74884cbf85187ac3363627c58e10c76822f23b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:13:54 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13cb3b7646b8a17f538334bfae74884cbf85187ac3363627c58e10c76822f23b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:13:54 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13cb3b7646b8a17f538334bfae74884cbf85187ac3363627c58e10c76822f23b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:13:54 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13cb3b7646b8a17f538334bfae74884cbf85187ac3363627c58e10c76822f23b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:13:54 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Jan 29 12:13:55 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Jan 29 12:13:55 np0005601226 podman[244587]: 2026-01-29 17:13:55.073946139 +0000 UTC m=+0.571097393 container init 36b7fca230040fb791b73df48c90fdf2d1798d7fda059aba302af02e47f7a3f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_gates, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 12:13:55 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Jan 29 12:13:55 np0005601226 podman[244587]: 2026-01-29 17:13:55.079671391 +0000 UTC m=+0.576822615 container start 36b7fca230040fb791b73df48c90fdf2d1798d7fda059aba302af02e47f7a3f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:13:55 np0005601226 podman[244587]: 2026-01-29 17:13:55.10254301 +0000 UTC m=+0.599694244 container attach 36b7fca230040fb791b73df48c90fdf2d1798d7fda059aba302af02e47f7a3f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_gates, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:13:55 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:13:55 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v858: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 2.0 KiB/s wr, 4 op/s
Jan 29 12:13:55 np0005601226 lvm[244681]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 12:13:55 np0005601226 lvm[244681]: VG ceph_vg0 finished
Jan 29 12:13:55 np0005601226 lvm[244682]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 12:13:55 np0005601226 lvm[244682]: VG ceph_vg1 finished
Jan 29 12:13:55 np0005601226 lvm[244684]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 12:13:55 np0005601226 lvm[244684]: VG ceph_vg2 finished
Jan 29 12:13:55 np0005601226 agitated_gates[244603]: {}
Jan 29 12:13:55 np0005601226 systemd[1]: libpod-36b7fca230040fb791b73df48c90fdf2d1798d7fda059aba302af02e47f7a3f2.scope: Deactivated successfully.
Jan 29 12:13:55 np0005601226 podman[244587]: 2026-01-29 17:13:55.776942631 +0000 UTC m=+1.274093865 container died 36b7fca230040fb791b73df48c90fdf2d1798d7fda059aba302af02e47f7a3f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_gates, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 29 12:13:55 np0005601226 systemd[1]: var-lib-containers-storage-overlay-13cb3b7646b8a17f538334bfae74884cbf85187ac3363627c58e10c76822f23b-merged.mount: Deactivated successfully.
Jan 29 12:13:56 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Jan 29 12:13:56 np0005601226 podman[244587]: 2026-01-29 17:13:56.279729725 +0000 UTC m=+1.776880979 container remove 36b7fca230040fb791b73df48c90fdf2d1798d7fda059aba302af02e47f7a3f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_gates, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 29 12:13:56 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 12:13:56 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Jan 29 12:13:56 np0005601226 systemd[1]: libpod-conmon-36b7fca230040fb791b73df48c90fdf2d1798d7fda059aba302af02e47f7a3f2.scope: Deactivated successfully.
Jan 29 12:13:56 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Jan 29 12:13:56 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:13:56 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 12:13:56 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:13:57 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:13:57 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:13:57 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v860: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1.7 KiB/s wr, 4 op/s
Jan 29 12:13:59 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v861: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 2.8 KiB/s wr, 47 op/s
Jan 29 12:13:59 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Jan 29 12:13:59 np0005601226 podman[244727]: 2026-01-29 17:13:59.895368916 +0000 UTC m=+0.054963923 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 29 12:13:59 np0005601226 podman[244728]: 2026-01-29 17:13:59.948970784 +0000 UTC m=+0.108463469 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Jan 29 12:14:00 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Jan 29 12:14:00 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Jan 29 12:14:00 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:14:00 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Jan 29 12:14:00 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Jan 29 12:14:00 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Jan 29 12:14:01 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v864: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 2.2 KiB/s wr, 45 op/s
Jan 29 12:14:02 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:14:02.241 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ea6bcc65-2563-4fe6-9039-bca7261f4cf7, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:14:03 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Jan 29 12:14:03 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v865: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 3.1 KiB/s wr, 56 op/s
Jan 29 12:14:03 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Jan 29 12:14:03 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Jan 29 12:14:05 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v867: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 4.0 KiB/s wr, 66 op/s
Jan 29 12:14:05 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:14:06 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:14:06 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1723278634' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:14:06 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:14:06 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1723278634' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:14:07 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v868: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 3.3 KiB/s wr, 55 op/s
Jan 29 12:14:08 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:14:08 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/436392936' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:14:08 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:14:08 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/436392936' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:14:09 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Jan 29 12:14:09 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v869: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 81 KiB/s rd, 5.6 KiB/s wr, 108 op/s
Jan 29 12:14:09 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Jan 29 12:14:09 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Jan 29 12:14:10 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:14:10 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4257345370' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:14:10 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:14:10 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4257345370' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:14:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:14:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:14:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:14:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:14:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:14:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:14:10 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:14:10 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Jan 29 12:14:10 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Jan 29 12:14:10 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Jan 29 12:14:11 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v872: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 5.1 KiB/s wr, 103 op/s
Jan 29 12:14:11 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:14:11 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1561288268' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:14:11 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:14:11 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1561288268' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:14:12 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:14:12 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2417153436' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:14:12 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:14:12 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2417153436' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:14:13 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v873: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 3.4 KiB/s wr, 86 op/s
Jan 29 12:14:15 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v874: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 88 KiB/s rd, 4.6 KiB/s wr, 117 op/s
Jan 29 12:14:16 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:14:17 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v875: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 1.6 KiB/s wr, 51 op/s
Jan 29 12:14:19 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v876: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 1.8 KiB/s wr, 43 op/s
Jan 29 12:14:19 np0005601226 nova_compute[239456]: 2026-01-29 17:14:19.604 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:14:19 np0005601226 nova_compute[239456]: 2026-01-29 17:14:19.604 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:14:19 np0005601226 nova_compute[239456]: 2026-01-29 17:14:19.604 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:14:19 np0005601226 nova_compute[239456]: 2026-01-29 17:14:19.781 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:14:19 np0005601226 nova_compute[239456]: 2026-01-29 17:14:19.781 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:14:19 np0005601226 nova_compute[239456]: 2026-01-29 17:14:19.781 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:14:19 np0005601226 nova_compute[239456]: 2026-01-29 17:14:19.781 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 29 12:14:19 np0005601226 nova_compute[239456]: 2026-01-29 17:14:19.782 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:14:20 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:14:20 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/25838753' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:14:20 np0005601226 nova_compute[239456]: 2026-01-29 17:14:20.438 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.656s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:14:20 np0005601226 nova_compute[239456]: 2026-01-29 17:14:20.572 239460 WARNING nova.virt.libvirt.driver [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 29 12:14:20 np0005601226 nova_compute[239456]: 2026-01-29 17:14:20.573 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5124MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 29 12:14:20 np0005601226 nova_compute[239456]: 2026-01-29 17:14:20.573 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:14:20 np0005601226 nova_compute[239456]: 2026-01-29 17:14:20.573 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:14:21 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:14:21 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Jan 29 12:14:21 np0005601226 nova_compute[239456]: 2026-01-29 17:14:21.362 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 29 12:14:21 np0005601226 nova_compute[239456]: 2026-01-29 17:14:21.362 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 29 12:14:21 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Jan 29 12:14:21 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v877: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 1.7 KiB/s wr, 39 op/s
Jan 29 12:14:21 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Jan 29 12:14:21 np0005601226 nova_compute[239456]: 2026-01-29 17:14:21.965 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:14:22 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:14:22 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2207410964' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:14:22 np0005601226 nova_compute[239456]: 2026-01-29 17:14:22.472 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:14:22 np0005601226 nova_compute[239456]: 2026-01-29 17:14:22.477 239460 DEBUG nova.compute.provider_tree [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:14:22 np0005601226 nova_compute[239456]: 2026-01-29 17:14:22.531 239460 DEBUG nova.scheduler.client.report [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:14:22 np0005601226 nova_compute[239456]: 2026-01-29 17:14:22.532 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 29 12:14:22 np0005601226 nova_compute[239456]: 2026-01-29 17:14:22.532 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.959s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:14:23 np0005601226 nova_compute[239456]: 2026-01-29 17:14:23.534 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:14:23 np0005601226 nova_compute[239456]: 2026-01-29 17:14:23.535 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:14:23 np0005601226 nova_compute[239456]: 2026-01-29 17:14:23.535 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 29 12:14:23 np0005601226 nova_compute[239456]: 2026-01-29 17:14:23.535 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 29 12:14:23 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v879: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.7 KiB/s wr, 27 op/s
Jan 29 12:14:23 np0005601226 nova_compute[239456]: 2026-01-29 17:14:23.576 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 29 12:14:23 np0005601226 nova_compute[239456]: 2026-01-29 17:14:23.577 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:14:23 np0005601226 nova_compute[239456]: 2026-01-29 17:14:23.577 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:14:23 np0005601226 nova_compute[239456]: 2026-01-29 17:14:23.577 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:14:23 np0005601226 nova_compute[239456]: 2026-01-29 17:14:23.577 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 29 12:14:23 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Jan 29 12:14:24 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Jan 29 12:14:24 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Jan 29 12:14:25 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v881: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 9.9 KiB/s rd, 1.1 KiB/s wr, 14 op/s
Jan 29 12:14:25 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Jan 29 12:14:25 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Jan 29 12:14:25 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Jan 29 12:14:26 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:14:26 np0005601226 nova_compute[239456]: 2026-01-29 17:14:26.604 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:14:26 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:14:26 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3149162833' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:14:26 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:14:26 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3149162833' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:14:27 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:14:27 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3068446716' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:14:27 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:14:27 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3068446716' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:14:27 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v883: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 682 B/s wr, 17 op/s
Jan 29 12:14:27 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Jan 29 12:14:28 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Jan 29 12:14:28 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
Jan 29 12:14:29 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e149 do_prune osdmap full prune enabled
Jan 29 12:14:29 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v885: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 4.3 KiB/s wr, 87 op/s
Jan 29 12:14:29 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e150 e150: 3 total, 3 up, 3 in
Jan 29 12:14:29 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e150: 3 total, 3 up, 3 in
Jan 29 12:14:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:14:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1177449935' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:14:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:14:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1177449935' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:14:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e150 do_prune osdmap full prune enabled
Jan 29 12:14:30 np0005601226 podman[244820]: 2026-01-29 17:14:30.869945065 +0000 UTC m=+0.044983877 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 29 12:14:30 np0005601226 podman[244821]: 2026-01-29 17:14:30.889662852 +0000 UTC m=+0.062581607 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Jan 29 12:14:31 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e151 e151: 3 total, 3 up, 3 in
Jan 29 12:14:31 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Jan 29 12:14:31 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:14:31 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v888: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 4.0 KiB/s wr, 73 op/s
Jan 29 12:14:33 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e151 do_prune osdmap full prune enabled
Jan 29 12:14:33 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e152 e152: 3 total, 3 up, 3 in
Jan 29 12:14:33 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v889: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 70 KiB/s rd, 6.0 KiB/s wr, 97 op/s
Jan 29 12:14:33 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e152: 3 total, 3 up, 3 in
Jan 29 12:14:34 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e152 do_prune osdmap full prune enabled
Jan 29 12:14:34 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e153 e153: 3 total, 3 up, 3 in
Jan 29 12:14:34 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e153: 3 total, 3 up, 3 in
Jan 29 12:14:35 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v892: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 4.1 KiB/s wr, 56 op/s
Jan 29 12:14:36 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:14:36 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e153 do_prune osdmap full prune enabled
Jan 29 12:14:36 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e154 e154: 3 total, 3 up, 3 in
Jan 29 12:14:36 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e154: 3 total, 3 up, 3 in
Jan 29 12:14:37 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e154 do_prune osdmap full prune enabled
Jan 29 12:14:37 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e155 e155: 3 total, 3 up, 3 in
Jan 29 12:14:37 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v894: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 3.8 KiB/s wr, 52 op/s
Jan 29 12:14:37 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e155: 3 total, 3 up, 3 in
Jan 29 12:14:37 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:14:37 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4174728871' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:14:37 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:14:37 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4174728871' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:14:38 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:14:38 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1447016272' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:14:38 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:14:38 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1447016272' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:14:39 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v896: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 130 KiB/s rd, 5.7 KiB/s wr, 172 op/s
Jan 29 12:14:39 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e155 do_prune osdmap full prune enabled
Jan 29 12:14:39 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e156 e156: 3 total, 3 up, 3 in
Jan 29 12:14:39 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e156: 3 total, 3 up, 3 in
Jan 29 12:14:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:14:40.275 155625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:14:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:14:40.276 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:14:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:14:40.276 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:14:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2026-01-29_17:14:40
Jan 29 12:14:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 12:14:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] do_upmap
Jan 29 12:14:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] pools ['default.rgw.log', 'images', 'vms', 'volumes', '.rgw.root', '.mgr', 'cephfs.cephfs.meta', 'backups', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data']
Jan 29 12:14:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 12:14:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:14:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:14:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:14:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:14:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:14:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:14:40 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e156 do_prune osdmap full prune enabled
Jan 29 12:14:40 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e157 e157: 3 total, 3 up, 3 in
Jan 29 12:14:40 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e157: 3 total, 3 up, 3 in
Jan 29 12:14:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 12:14:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:14:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 12:14:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:14:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:14:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:14:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:14:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:14:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:14:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:14:41 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:14:41 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e157 do_prune osdmap full prune enabled
Jan 29 12:14:41 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e158 e158: 3 total, 3 up, 3 in
Jan 29 12:14:41 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e158: 3 total, 3 up, 3 in
Jan 29 12:14:41 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v900: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 163 KiB/s rd, 5.7 KiB/s wr, 215 op/s
Jan 29 12:14:42 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e158 do_prune osdmap full prune enabled
Jan 29 12:14:42 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e159 e159: 3 total, 3 up, 3 in
Jan 29 12:14:42 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e159: 3 total, 3 up, 3 in
Jan 29 12:14:43 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v902: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 2.0 KiB/s wr, 34 op/s
Jan 29 12:14:43 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e159 do_prune osdmap full prune enabled
Jan 29 12:14:43 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e160 e160: 3 total, 3 up, 3 in
Jan 29 12:14:44 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e160: 3 total, 3 up, 3 in
Jan 29 12:14:45 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v904: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 3.9 KiB/s wr, 68 op/s
Jan 29 12:14:46 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:14:46 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e160 do_prune osdmap full prune enabled
Jan 29 12:14:46 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e161 e161: 3 total, 3 up, 3 in
Jan 29 12:14:46 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e161: 3 total, 3 up, 3 in
Jan 29 12:14:47 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v906: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 3.2 KiB/s wr, 55 op/s
Jan 29 12:14:48 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e161 do_prune osdmap full prune enabled
Jan 29 12:14:48 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e162 e162: 3 total, 3 up, 3 in
Jan 29 12:14:48 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e162: 3 total, 3 up, 3 in
Jan 29 12:14:48 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:14:48 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1291810175' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:14:48 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:14:48 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1291810175' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:14:49 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v908: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 4.0 KiB/s wr, 90 op/s
Jan 29 12:14:51 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e162 do_prune osdmap full prune enabled
Jan 29 12:14:51 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e163 e163: 3 total, 3 up, 3 in
Jan 29 12:14:51 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e163: 3 total, 3 up, 3 in
Jan 29 12:14:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 12:14:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:14:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 12:14:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:14:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:14:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:14:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 3.1920951768244555e-06 of space, bias 1.0, pg target 0.0009576285530473366 quantized to 32 (current 32)
Jan 29 12:14:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:14:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:14:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:14:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659280712765828 of space, bias 1.0, pg target 0.19977842138297486 quantized to 32 (current 32)
Jan 29 12:14:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:14:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.2466169594398753e-06 of space, bias 4.0, pg target 0.0014959403513278503 quantized to 16 (current 16)
Jan 29 12:14:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:14:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:14:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:14:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 29 12:14:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:14:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 29 12:14:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:14:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:14:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:14:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 29 12:14:51 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:14:51 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e163 do_prune osdmap full prune enabled
Jan 29 12:14:51 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e164 e164: 3 total, 3 up, 3 in
Jan 29 12:14:51 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v910: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 2.2 KiB/s wr, 58 op/s
Jan 29 12:14:51 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e164: 3 total, 3 up, 3 in
Jan 29 12:14:53 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v912: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 2.2 KiB/s wr, 65 op/s
Jan 29 12:14:54 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e164 do_prune osdmap full prune enabled
Jan 29 12:14:54 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e165 e165: 3 total, 3 up, 3 in
Jan 29 12:14:54 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e165: 3 total, 3 up, 3 in
Jan 29 12:14:55 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v914: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.8 KiB/s wr, 38 op/s
Jan 29 12:14:56 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:14:56 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e165 do_prune osdmap full prune enabled
Jan 29 12:14:56 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e166 e166: 3 total, 3 up, 3 in
Jan 29 12:14:56 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e166: 3 total, 3 up, 3 in
Jan 29 12:14:57 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:14:57 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/939050437' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:14:57 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:14:57 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/939050437' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:14:57 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:14:57.114 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '7a:bb:ca', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ee:2a:91:86:08:da'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:14:57 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:14:57.115 155625 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 29 12:14:57 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 12:14:57 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 12:14:57 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 12:14:57 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 12:14:57 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 12:14:57 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v916: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.8 KiB/s wr, 38 op/s
Jan 29 12:14:57 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:14:57 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 12:14:57 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 12:14:57 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 12:14:57 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 12:14:57 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 12:14:57 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 12:14:57 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 12:14:58 np0005601226 podman[245010]: 2026-01-29 17:14:57.96168668 +0000 UTC m=+0.017155718 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:14:58 np0005601226 podman[245010]: 2026-01-29 17:14:58.103482415 +0000 UTC m=+0.158951423 container create 3ddc60945a43cd9a5ace8c3302d60f554e377e122889e337e9308278c4af60a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_haslett, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 29 12:14:58 np0005601226 systemd[1]: Started libpod-conmon-3ddc60945a43cd9a5ace8c3302d60f554e377e122889e337e9308278c4af60a2.scope.
Jan 29 12:14:58 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:14:58 np0005601226 podman[245010]: 2026-01-29 17:14:58.329088254 +0000 UTC m=+0.384557292 container init 3ddc60945a43cd9a5ace8c3302d60f554e377e122889e337e9308278c4af60a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_haslett, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 29 12:14:58 np0005601226 podman[245010]: 2026-01-29 17:14:58.338503051 +0000 UTC m=+0.393972069 container start 3ddc60945a43cd9a5ace8c3302d60f554e377e122889e337e9308278c4af60a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:14:58 np0005601226 brave_haslett[245027]: 167 167
Jan 29 12:14:58 np0005601226 systemd[1]: libpod-3ddc60945a43cd9a5ace8c3302d60f554e377e122889e337e9308278c4af60a2.scope: Deactivated successfully.
Jan 29 12:14:58 np0005601226 podman[245010]: 2026-01-29 17:14:58.352860622 +0000 UTC m=+0.408329650 container attach 3ddc60945a43cd9a5ace8c3302d60f554e377e122889e337e9308278c4af60a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_haslett, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 29 12:14:58 np0005601226 podman[245010]: 2026-01-29 17:14:58.354294431 +0000 UTC m=+0.409763469 container died 3ddc60945a43cd9a5ace8c3302d60f554e377e122889e337e9308278c4af60a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_haslett, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3)
Jan 29 12:14:58 np0005601226 systemd[1]: var-lib-containers-storage-overlay-fc76e3571f1b248312cf553dafc14db5017a79b83c16335b711712e6bd750505-merged.mount: Deactivated successfully.
Jan 29 12:14:58 np0005601226 podman[245010]: 2026-01-29 17:14:58.523677778 +0000 UTC m=+0.579146796 container remove 3ddc60945a43cd9a5ace8c3302d60f554e377e122889e337e9308278c4af60a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_haslett, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 29 12:14:58 np0005601226 systemd[1]: libpod-conmon-3ddc60945a43cd9a5ace8c3302d60f554e377e122889e337e9308278c4af60a2.scope: Deactivated successfully.
Jan 29 12:14:58 np0005601226 podman[245050]: 2026-01-29 17:14:58.650935087 +0000 UTC m=+0.052596234 container create e0eb67f2cd52ff131a5d41324a500b5ebfba8ff46b0495907d8b47afb59151f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_fermat, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 29 12:14:58 np0005601226 systemd[1]: Started libpod-conmon-e0eb67f2cd52ff131a5d41324a500b5ebfba8ff46b0495907d8b47afb59151f0.scope.
Jan 29 12:14:58 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:14:58 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82ab806084c5921fccaf74f3ee8d5178018c655adfd4fe61db15b65557980d33/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:14:58 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82ab806084c5921fccaf74f3ee8d5178018c655adfd4fe61db15b65557980d33/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:14:58 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82ab806084c5921fccaf74f3ee8d5178018c655adfd4fe61db15b65557980d33/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:14:58 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82ab806084c5921fccaf74f3ee8d5178018c655adfd4fe61db15b65557980d33/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:14:58 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82ab806084c5921fccaf74f3ee8d5178018c655adfd4fe61db15b65557980d33/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 12:14:58 np0005601226 podman[245050]: 2026-01-29 17:14:58.617325731 +0000 UTC m=+0.018986898 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:14:58 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:14:58 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 12:14:58 np0005601226 podman[245050]: 2026-01-29 17:14:58.776093699 +0000 UTC m=+0.177754866 container init e0eb67f2cd52ff131a5d41324a500b5ebfba8ff46b0495907d8b47afb59151f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_fermat, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 29 12:14:58 np0005601226 podman[245050]: 2026-01-29 17:14:58.780863069 +0000 UTC m=+0.182524216 container start e0eb67f2cd52ff131a5d41324a500b5ebfba8ff46b0495907d8b47afb59151f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_fermat, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:14:58 np0005601226 podman[245050]: 2026-01-29 17:14:58.804663787 +0000 UTC m=+0.206324964 container attach e0eb67f2cd52ff131a5d41324a500b5ebfba8ff46b0495907d8b47afb59151f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_fermat, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:14:59 np0005601226 quizzical_fermat[245067]: --> passed data devices: 0 physical, 3 LVM
Jan 29 12:14:59 np0005601226 quizzical_fermat[245067]: --> All data devices are unavailable
Jan 29 12:14:59 np0005601226 systemd[1]: libpod-e0eb67f2cd52ff131a5d41324a500b5ebfba8ff46b0495907d8b47afb59151f0.scope: Deactivated successfully.
Jan 29 12:14:59 np0005601226 podman[245087]: 2026-01-29 17:14:59.195679115 +0000 UTC m=+0.021157257 container died e0eb67f2cd52ff131a5d41324a500b5ebfba8ff46b0495907d8b47afb59151f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_fermat, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 29 12:14:59 np0005601226 systemd[1]: var-lib-containers-storage-overlay-82ab806084c5921fccaf74f3ee8d5178018c655adfd4fe61db15b65557980d33-merged.mount: Deactivated successfully.
Jan 29 12:14:59 np0005601226 podman[245087]: 2026-01-29 17:14:59.500802852 +0000 UTC m=+0.326280974 container remove e0eb67f2cd52ff131a5d41324a500b5ebfba8ff46b0495907d8b47afb59151f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_fermat, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 29 12:14:59 np0005601226 systemd[1]: libpod-conmon-e0eb67f2cd52ff131a5d41324a500b5ebfba8ff46b0495907d8b47afb59151f0.scope: Deactivated successfully.
Jan 29 12:14:59 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v917: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 1.9 KiB/s wr, 52 op/s
Jan 29 12:15:00 np0005601226 podman[245163]: 2026-01-29 17:15:00.019770578 +0000 UTC m=+0.116057394 container create 8b90ac05f1d195a54d86feb4aebb57e2007ab850b5f61dc4d535d10422d579c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_feistel, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 29 12:15:00 np0005601226 podman[245163]: 2026-01-29 17:14:59.925809507 +0000 UTC m=+0.022096353 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:15:00 np0005601226 systemd[1]: Started libpod-conmon-8b90ac05f1d195a54d86feb4aebb57e2007ab850b5f61dc4d535d10422d579c9.scope.
Jan 29 12:15:00 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:15:00 np0005601226 podman[245163]: 2026-01-29 17:15:00.122629271 +0000 UTC m=+0.218916087 container init 8b90ac05f1d195a54d86feb4aebb57e2007ab850b5f61dc4d535d10422d579c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 29 12:15:00 np0005601226 podman[245163]: 2026-01-29 17:15:00.131101972 +0000 UTC m=+0.227388778 container start 8b90ac05f1d195a54d86feb4aebb57e2007ab850b5f61dc4d535d10422d579c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_feistel, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:15:00 np0005601226 compassionate_feistel[245179]: 167 167
Jan 29 12:15:00 np0005601226 systemd[1]: libpod-8b90ac05f1d195a54d86feb4aebb57e2007ab850b5f61dc4d535d10422d579c9.scope: Deactivated successfully.
Jan 29 12:15:00 np0005601226 podman[245163]: 2026-01-29 17:15:00.195245671 +0000 UTC m=+0.291532507 container attach 8b90ac05f1d195a54d86feb4aebb57e2007ab850b5f61dc4d535d10422d579c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 29 12:15:00 np0005601226 podman[245163]: 2026-01-29 17:15:00.196445913 +0000 UTC m=+0.292732729 container died 8b90ac05f1d195a54d86feb4aebb57e2007ab850b5f61dc4d535d10422d579c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3)
Jan 29 12:15:00 np0005601226 systemd[1]: var-lib-containers-storage-overlay-167fa269399b255995c6885ddd525260b77eaf2add3b7153bd55ebf79fc39793-merged.mount: Deactivated successfully.
Jan 29 12:15:00 np0005601226 podman[245163]: 2026-01-29 17:15:00.387421959 +0000 UTC m=+0.483708775 container remove 8b90ac05f1d195a54d86feb4aebb57e2007ab850b5f61dc4d535d10422d579c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_feistel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:15:00 np0005601226 systemd[1]: libpod-conmon-8b90ac05f1d195a54d86feb4aebb57e2007ab850b5f61dc4d535d10422d579c9.scope: Deactivated successfully.
Jan 29 12:15:00 np0005601226 podman[245206]: 2026-01-29 17:15:00.539981077 +0000 UTC m=+0.070412300 container create 18e55f48a45631eb4a60e3af5e8aa61ba23c5801308c51eae847726b15a8bf4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_rhodes, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 29 12:15:00 np0005601226 podman[245206]: 2026-01-29 17:15:00.49164138 +0000 UTC m=+0.022072623 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:15:00 np0005601226 systemd[1]: Started libpod-conmon-18e55f48a45631eb4a60e3af5e8aa61ba23c5801308c51eae847726b15a8bf4b.scope.
Jan 29 12:15:00 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:15:00 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58d1ac974cb4a1219ae306d68350c55303f0d1a54ded4d132734cd7b3981b2c5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:15:00 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58d1ac974cb4a1219ae306d68350c55303f0d1a54ded4d132734cd7b3981b2c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:15:00 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58d1ac974cb4a1219ae306d68350c55303f0d1a54ded4d132734cd7b3981b2c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:15:00 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58d1ac974cb4a1219ae306d68350c55303f0d1a54ded4d132734cd7b3981b2c5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:15:00 np0005601226 podman[245206]: 2026-01-29 17:15:00.686283925 +0000 UTC m=+0.216715158 container init 18e55f48a45631eb4a60e3af5e8aa61ba23c5801308c51eae847726b15a8bf4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 29 12:15:00 np0005601226 podman[245206]: 2026-01-29 17:15:00.690955292 +0000 UTC m=+0.221386515 container start 18e55f48a45631eb4a60e3af5e8aa61ba23c5801308c51eae847726b15a8bf4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_rhodes, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 29 12:15:00 np0005601226 podman[245206]: 2026-01-29 17:15:00.697520151 +0000 UTC m=+0.227951374 container attach 18e55f48a45631eb4a60e3af5e8aa61ba23c5801308c51eae847726b15a8bf4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]: {
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:    "0": [
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:        {
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:            "devices": [
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:                "/dev/loop3"
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:            ],
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:            "lv_name": "ceph_lv0",
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:            "lv_size": "21470642176",
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:            "lv_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:            "name": "ceph_lv0",
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:            "tags": {
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:                "ceph.block_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:                "ceph.cluster_name": "ceph",
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:                "ceph.crush_device_class": "",
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:                "ceph.encrypted": "0",
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:                "ceph.objectstore": "bluestore",
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:                "ceph.osd_fsid": "0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1",
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:                "ceph.osd_id": "0",
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:                "ceph.type": "block",
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:                "ceph.vdo": "0",
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:                "ceph.with_tpm": "0"
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:            },
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:            "type": "block",
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:            "vg_name": "ceph_vg0"
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:        }
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:    ],
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:    "1": [
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:        {
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:            "devices": [
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:                "/dev/loop4"
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:            ],
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:            "lv_name": "ceph_lv1",
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:            "lv_size": "21470642176",
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b59b9ee3-7bef-4274-a8bf-0f9cce011ae7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:            "lv_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:            "name": "ceph_lv1",
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:            "tags": {
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:                "ceph.block_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:                "ceph.cluster_name": "ceph",
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:                "ceph.crush_device_class": "",
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:                "ceph.encrypted": "0",
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:                "ceph.objectstore": "bluestore",
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:                "ceph.osd_fsid": "b59b9ee3-7bef-4274-a8bf-0f9cce011ae7",
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:                "ceph.osd_id": "1",
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:                "ceph.type": "block",
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:                "ceph.vdo": "0",
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:                "ceph.with_tpm": "0"
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:            },
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:            "type": "block",
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:            "vg_name": "ceph_vg1"
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:        }
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:    ],
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:    "2": [
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:        {
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:            "devices": [
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:                "/dev/loop5"
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:            ],
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:            "lv_name": "ceph_lv2",
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:            "lv_size": "21470642176",
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=791d3808-828d-4f85-a3de-28df49f6a6ef,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:            "lv_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:            "name": "ceph_lv2",
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:            "tags": {
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:                "ceph.block_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:                "ceph.cluster_name": "ceph",
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:                "ceph.crush_device_class": "",
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:                "ceph.encrypted": "0",
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:                "ceph.objectstore": "bluestore",
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:                "ceph.osd_fsid": "791d3808-828d-4f85-a3de-28df49f6a6ef",
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:                "ceph.osd_id": "2",
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:                "ceph.type": "block",
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:                "ceph.vdo": "0",
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:                "ceph.with_tpm": "0"
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:            },
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:            "type": "block",
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:            "vg_name": "ceph_vg2"
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:        }
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]:    ]
Jan 29 12:15:00 np0005601226 youthful_rhodes[245223]: }
Jan 29 12:15:00 np0005601226 systemd[1]: libpod-18e55f48a45631eb4a60e3af5e8aa61ba23c5801308c51eae847726b15a8bf4b.scope: Deactivated successfully.
Jan 29 12:15:00 np0005601226 podman[245206]: 2026-01-29 17:15:00.959336708 +0000 UTC m=+0.489768021 container died 18e55f48a45631eb4a60e3af5e8aa61ba23c5801308c51eae847726b15a8bf4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_rhodes, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 29 12:15:01 np0005601226 systemd[1]: var-lib-containers-storage-overlay-58d1ac974cb4a1219ae306d68350c55303f0d1a54ded4d132734cd7b3981b2c5-merged.mount: Deactivated successfully.
Jan 29 12:15:01 np0005601226 podman[245206]: 2026-01-29 17:15:01.086956366 +0000 UTC m=+0.617387589 container remove 18e55f48a45631eb4a60e3af5e8aa61ba23c5801308c51eae847726b15a8bf4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_rhodes, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 29 12:15:01 np0005601226 systemd[1]: libpod-conmon-18e55f48a45631eb4a60e3af5e8aa61ba23c5801308c51eae847726b15a8bf4b.scope: Deactivated successfully.
Jan 29 12:15:01 np0005601226 podman[245232]: 2026-01-29 17:15:01.124422418 +0000 UTC m=+0.137347675 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 29 12:15:01 np0005601226 podman[245240]: 2026-01-29 17:15:01.186676204 +0000 UTC m=+0.193861954 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202)
Jan 29 12:15:01 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:15:01 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e166 do_prune osdmap full prune enabled
Jan 29 12:15:01 np0005601226 podman[245351]: 2026-01-29 17:15:01.535367998 +0000 UTC m=+0.077245946 container create 9e5b6bdbda6b4b721a08f3f05461a21e796cff3667b37f0d3b74ea9c2bc63a96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_spence, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 29 12:15:01 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v918: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 1.9 KiB/s wr, 46 op/s
Jan 29 12:15:01 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e167 e167: 3 total, 3 up, 3 in
Jan 29 12:15:01 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e167: 3 total, 3 up, 3 in
Jan 29 12:15:01 np0005601226 podman[245351]: 2026-01-29 17:15:01.479654931 +0000 UTC m=+0.021532899 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:15:01 np0005601226 systemd[1]: Started libpod-conmon-9e5b6bdbda6b4b721a08f3f05461a21e796cff3667b37f0d3b74ea9c2bc63a96.scope.
Jan 29 12:15:01 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:15:01 np0005601226 podman[245351]: 2026-01-29 17:15:01.64987767 +0000 UTC m=+0.191755638 container init 9e5b6bdbda6b4b721a08f3f05461a21e796cff3667b37f0d3b74ea9c2bc63a96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_spence, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:15:01 np0005601226 podman[245351]: 2026-01-29 17:15:01.65649235 +0000 UTC m=+0.198370298 container start 9e5b6bdbda6b4b721a08f3f05461a21e796cff3667b37f0d3b74ea9c2bc63a96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_spence, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 29 12:15:01 np0005601226 charming_spence[245367]: 167 167
Jan 29 12:15:01 np0005601226 systemd[1]: libpod-9e5b6bdbda6b4b721a08f3f05461a21e796cff3667b37f0d3b74ea9c2bc63a96.scope: Deactivated successfully.
Jan 29 12:15:01 np0005601226 podman[245351]: 2026-01-29 17:15:01.681337788 +0000 UTC m=+0.223215736 container attach 9e5b6bdbda6b4b721a08f3f05461a21e796cff3667b37f0d3b74ea9c2bc63a96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_spence, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:15:01 np0005601226 podman[245351]: 2026-01-29 17:15:01.681614375 +0000 UTC m=+0.223492323 container died 9e5b6bdbda6b4b721a08f3f05461a21e796cff3667b37f0d3b74ea9c2bc63a96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_spence, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:15:01 np0005601226 systemd[1]: var-lib-containers-storage-overlay-bc00ad6f927df362a5da9d608790824d63734861c76b74b8dc97e96edb624b1c-merged.mount: Deactivated successfully.
Jan 29 12:15:01 np0005601226 podman[245351]: 2026-01-29 17:15:01.801372809 +0000 UTC m=+0.343250757 container remove 9e5b6bdbda6b4b721a08f3f05461a21e796cff3667b37f0d3b74ea9c2bc63a96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_spence, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:15:01 np0005601226 systemd[1]: libpod-conmon-9e5b6bdbda6b4b721a08f3f05461a21e796cff3667b37f0d3b74ea9c2bc63a96.scope: Deactivated successfully.
Jan 29 12:15:01 np0005601226 podman[245393]: 2026-01-29 17:15:01.929903673 +0000 UTC m=+0.049446249 container create 338a453172cd3aa10d77e99664f6a987ebb902422cd699c4dad23468a9fd2b95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_brown, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True)
Jan 29 12:15:01 np0005601226 systemd[1]: Started libpod-conmon-338a453172cd3aa10d77e99664f6a987ebb902422cd699c4dad23468a9fd2b95.scope.
Jan 29 12:15:01 np0005601226 podman[245393]: 2026-01-29 17:15:01.903048141 +0000 UTC m=+0.022590737 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:15:02 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:15:02 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e7be93ffb8c2f8e25d3103797989e27e6b64de8e8eee729d2e1675d49e1e19c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:15:02 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e7be93ffb8c2f8e25d3103797989e27e6b64de8e8eee729d2e1675d49e1e19c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:15:02 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e7be93ffb8c2f8e25d3103797989e27e6b64de8e8eee729d2e1675d49e1e19c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:15:02 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e7be93ffb8c2f8e25d3103797989e27e6b64de8e8eee729d2e1675d49e1e19c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:15:02 np0005601226 podman[245393]: 2026-01-29 17:15:02.032235002 +0000 UTC m=+0.151777598 container init 338a453172cd3aa10d77e99664f6a987ebb902422cd699c4dad23468a9fd2b95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 29 12:15:02 np0005601226 podman[245393]: 2026-01-29 17:15:02.040627561 +0000 UTC m=+0.160170147 container start 338a453172cd3aa10d77e99664f6a987ebb902422cd699c4dad23468a9fd2b95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_brown, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:15:02 np0005601226 podman[245393]: 2026-01-29 17:15:02.048805354 +0000 UTC m=+0.168347930 container attach 338a453172cd3aa10d77e99664f6a987ebb902422cd699c4dad23468a9fd2b95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 29 12:15:02 np0005601226 lvm[245489]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 12:15:02 np0005601226 lvm[245489]: VG ceph_vg1 finished
Jan 29 12:15:02 np0005601226 lvm[245488]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 12:15:02 np0005601226 lvm[245488]: VG ceph_vg0 finished
Jan 29 12:15:02 np0005601226 lvm[245491]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 12:15:02 np0005601226 lvm[245491]: VG ceph_vg2 finished
Jan 29 12:15:02 np0005601226 gallant_brown[245410]: {}
Jan 29 12:15:02 np0005601226 systemd[1]: libpod-338a453172cd3aa10d77e99664f6a987ebb902422cd699c4dad23468a9fd2b95.scope: Deactivated successfully.
Jan 29 12:15:02 np0005601226 systemd[1]: libpod-338a453172cd3aa10d77e99664f6a987ebb902422cd699c4dad23468a9fd2b95.scope: Consumed 1.130s CPU time.
Jan 29 12:15:02 np0005601226 podman[245393]: 2026-01-29 17:15:02.83195099 +0000 UTC m=+0.951493566 container died 338a453172cd3aa10d77e99664f6a987ebb902422cd699c4dad23468a9fd2b95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_brown, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 29 12:15:02 np0005601226 systemd[1]: var-lib-containers-storage-overlay-1e7be93ffb8c2f8e25d3103797989e27e6b64de8e8eee729d2e1675d49e1e19c-merged.mount: Deactivated successfully.
Jan 29 12:15:02 np0005601226 podman[245393]: 2026-01-29 17:15:02.89176725 +0000 UTC m=+1.011309836 container remove 338a453172cd3aa10d77e99664f6a987ebb902422cd699c4dad23468a9fd2b95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_brown, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 29 12:15:02 np0005601226 systemd[1]: libpod-conmon-338a453172cd3aa10d77e99664f6a987ebb902422cd699c4dad23468a9fd2b95.scope: Deactivated successfully.
Jan 29 12:15:02 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 12:15:02 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:15:02 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 12:15:02 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:15:03 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v920: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 511 B/s wr, 23 op/s
Jan 29 12:15:03 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:15:03 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:15:05 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:15:05.117 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ea6bcc65-2563-4fe6-9039-bca7261f4cf7, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:15:05 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v921: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 455 B/s wr, 20 op/s
Jan 29 12:15:06 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:15:07 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v922: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 409 B/s wr, 18 op/s
Jan 29 12:15:09 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v923: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:15:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:15:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:15:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:15:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:15:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:15:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:15:11 np0005601226 ceph-mon[75233]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Jan 29 12:15:11 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:15:11.224982) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 29 12:15:11 np0005601226 ceph-mon[75233]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Jan 29 12:15:11 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769706911225077, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 2250, "num_deletes": 258, "total_data_size": 3537563, "memory_usage": 3585520, "flush_reason": "Manual Compaction"}
Jan 29 12:15:11 np0005601226 ceph-mon[75233]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Jan 29 12:15:11 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769706911259774, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 3449721, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16510, "largest_seqno": 18759, "table_properties": {"data_size": 3439288, "index_size": 6735, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2629, "raw_key_size": 21164, "raw_average_key_size": 20, "raw_value_size": 3418481, "raw_average_value_size": 3328, "num_data_blocks": 298, "num_entries": 1027, "num_filter_entries": 1027, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769706725, "oldest_key_time": 1769706725, "file_creation_time": 1769706911, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "affa2982-d59d-4189-b5dd-817a80fada55", "db_session_id": "3LVBT2JQJ5HZ0LRVKGW6", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Jan 29 12:15:11 np0005601226 ceph-mon[75233]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 34831 microseconds, and 5241 cpu microseconds.
Jan 29 12:15:11 np0005601226 ceph-mon[75233]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 29 12:15:11 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:15:11.259821) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 3449721 bytes OK
Jan 29 12:15:11 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:15:11.259840) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Jan 29 12:15:11 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:15:11.270409) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Jan 29 12:15:11 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:15:11.270451) EVENT_LOG_v1 {"time_micros": 1769706911270443, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 29 12:15:11 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:15:11.270475) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 29 12:15:11 np0005601226 ceph-mon[75233]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 3528023, prev total WAL file size 3528023, number of live WAL files 2.
Jan 29 12:15:11 np0005601226 ceph-mon[75233]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 12:15:11 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:15:11.271224) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Jan 29 12:15:11 np0005601226 ceph-mon[75233]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 29 12:15:11 np0005601226 ceph-mon[75233]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(3368KB)], [38(8237KB)]
Jan 29 12:15:11 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769706911271303, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 11884559, "oldest_snapshot_seqno": -1}
Jan 29 12:15:11 np0005601226 ceph-mon[75233]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4616 keys, 10076119 bytes, temperature: kUnknown
Jan 29 12:15:11 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769706911372261, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 10076119, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10040700, "index_size": 22765, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11589, "raw_key_size": 112024, "raw_average_key_size": 24, "raw_value_size": 9952877, "raw_average_value_size": 2156, "num_data_blocks": 958, "num_entries": 4616, "num_filter_entries": 4616, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769705351, "oldest_key_time": 0, "file_creation_time": 1769706911, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "affa2982-d59d-4189-b5dd-817a80fada55", "db_session_id": "3LVBT2JQJ5HZ0LRVKGW6", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Jan 29 12:15:11 np0005601226 ceph-mon[75233]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 29 12:15:11 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:15:11.372471) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 10076119 bytes
Jan 29 12:15:11 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:15:11.378998) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 117.6 rd, 99.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 8.0 +0.0 blob) out(9.6 +0.0 blob), read-write-amplify(6.4) write-amplify(2.9) OK, records in: 5141, records dropped: 525 output_compression: NoCompression
Jan 29 12:15:11 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:15:11.379039) EVENT_LOG_v1 {"time_micros": 1769706911379022, "job": 18, "event": "compaction_finished", "compaction_time_micros": 101023, "compaction_time_cpu_micros": 16102, "output_level": 6, "num_output_files": 1, "total_output_size": 10076119, "num_input_records": 5141, "num_output_records": 4616, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 29 12:15:11 np0005601226 ceph-mon[75233]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 12:15:11 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769706911379572, "job": 18, "event": "table_file_deletion", "file_number": 40}
Jan 29 12:15:11 np0005601226 ceph-mon[75233]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 12:15:11 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769706911380645, "job": 18, "event": "table_file_deletion", "file_number": 38}
Jan 29 12:15:11 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:15:11.271110) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:15:11 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:15:11.380858) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:15:11 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:15:11.380864) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:15:11 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:15:11.380867) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:15:11 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:15:11.380870) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:15:11 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:15:11.380872) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:15:11 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:15:11 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v924: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:15:13 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v925: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:15:15 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v926: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:15:16 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:15:16 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3040990073' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:15:16 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:15:16 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3040990073' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:15:16 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:15:17 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v927: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail
Jan 29 12:15:18 np0005601226 nova_compute[239456]: 2026-01-29 17:15:18.599 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:15:19 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:15:19 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3428466361' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:15:19 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:15:19 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3428466361' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:15:19 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v928: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 938 B/s wr, 11 op/s
Jan 29 12:15:20 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:15:20 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1719743011' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:15:20 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:15:20 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1719743011' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:15:21 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:15:21 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v929: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 938 B/s wr, 11 op/s
Jan 29 12:15:21 np0005601226 nova_compute[239456]: 2026-01-29 17:15:21.603 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:15:21 np0005601226 nova_compute[239456]: 2026-01-29 17:15:21.604 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:15:21 np0005601226 nova_compute[239456]: 2026-01-29 17:15:21.604 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:15:21 np0005601226 nova_compute[239456]: 2026-01-29 17:15:21.635 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:15:21 np0005601226 nova_compute[239456]: 2026-01-29 17:15:21.636 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:15:21 np0005601226 nova_compute[239456]: 2026-01-29 17:15:21.636 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:15:21 np0005601226 nova_compute[239456]: 2026-01-29 17:15:21.636 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 29 12:15:21 np0005601226 nova_compute[239456]: 2026-01-29 17:15:21.636 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:15:22 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:15:22 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/478776761' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:15:22 np0005601226 nova_compute[239456]: 2026-01-29 17:15:22.151 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:15:22 np0005601226 nova_compute[239456]: 2026-01-29 17:15:22.284 239460 WARNING nova.virt.libvirt.driver [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 29 12:15:22 np0005601226 nova_compute[239456]: 2026-01-29 17:15:22.285 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5079MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 29 12:15:22 np0005601226 nova_compute[239456]: 2026-01-29 17:15:22.286 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:15:22 np0005601226 nova_compute[239456]: 2026-01-29 17:15:22.286 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:15:22 np0005601226 nova_compute[239456]: 2026-01-29 17:15:22.348 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 29 12:15:22 np0005601226 nova_compute[239456]: 2026-01-29 17:15:22.349 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 29 12:15:22 np0005601226 nova_compute[239456]: 2026-01-29 17:15:22.363 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:15:22 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:15:22 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1888102221' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:15:22 np0005601226 nova_compute[239456]: 2026-01-29 17:15:22.895 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:15:22 np0005601226 nova_compute[239456]: 2026-01-29 17:15:22.900 239460 DEBUG nova.compute.provider_tree [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:15:22 np0005601226 nova_compute[239456]: 2026-01-29 17:15:22.917 239460 DEBUG nova.scheduler.client.report [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:15:22 np0005601226 nova_compute[239456]: 2026-01-29 17:15:22.919 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 29 12:15:22 np0005601226 nova_compute[239456]: 2026-01-29 17:15:22.920 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.634s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:15:23 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v930: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 938 B/s wr, 27 op/s
Jan 29 12:15:23 np0005601226 nova_compute[239456]: 2026-01-29 17:15:23.920 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:15:23 np0005601226 nova_compute[239456]: 2026-01-29 17:15:23.921 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 29 12:15:23 np0005601226 nova_compute[239456]: 2026-01-29 17:15:23.921 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 29 12:15:23 np0005601226 nova_compute[239456]: 2026-01-29 17:15:23.980 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 29 12:15:23 np0005601226 nova_compute[239456]: 2026-01-29 17:15:23.981 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:15:23 np0005601226 nova_compute[239456]: 2026-01-29 17:15:23.982 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:15:23 np0005601226 nova_compute[239456]: 2026-01-29 17:15:23.982 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:15:23 np0005601226 nova_compute[239456]: 2026-01-29 17:15:23.982 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 29 12:15:24 np0005601226 nova_compute[239456]: 2026-01-29 17:15:24.660 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:15:25 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v931: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Jan 29 12:15:26 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:15:27 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v932: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Jan 29 12:15:27 np0005601226 nova_compute[239456]: 2026-01-29 17:15:27.604 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:15:28 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:15:28 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/678417565' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:15:28 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:15:28 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/678417565' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:15:29 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v933: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 1.7 KiB/s wr, 41 op/s
Jan 29 12:15:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:15:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1789967559' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:15:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:15:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1789967559' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:15:31 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:15:31 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v934: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 852 B/s wr, 30 op/s
Jan 29 12:15:31 np0005601226 podman[245576]: 2026-01-29 17:15:31.903969 +0000 UTC m=+0.073117433 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 29 12:15:31 np0005601226 podman[245575]: 2026-01-29 17:15:31.904001181 +0000 UTC m=+0.073249187 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 29 12:15:33 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v935: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 853 B/s wr, 31 op/s
Jan 29 12:15:35 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v936: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 852 B/s wr, 15 op/s
Jan 29 12:15:36 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:15:36 np0005601226 ceph-mon[75233]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Jan 29 12:15:36 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:15:36.624875) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 29 12:15:36 np0005601226 ceph-mon[75233]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Jan 29 12:15:36 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769706936624908, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 454, "num_deletes": 250, "total_data_size": 400854, "memory_usage": 410080, "flush_reason": "Manual Compaction"}
Jan 29 12:15:36 np0005601226 ceph-mon[75233]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Jan 29 12:15:36 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769706936634286, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 300977, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18760, "largest_seqno": 19213, "table_properties": {"data_size": 298528, "index_size": 549, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6449, "raw_average_key_size": 19, "raw_value_size": 293577, "raw_average_value_size": 892, "num_data_blocks": 25, "num_entries": 329, "num_filter_entries": 329, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769706912, "oldest_key_time": 1769706912, "file_creation_time": 1769706936, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "affa2982-d59d-4189-b5dd-817a80fada55", "db_session_id": "3LVBT2JQJ5HZ0LRVKGW6", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Jan 29 12:15:36 np0005601226 ceph-mon[75233]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 9453 microseconds, and 1336 cpu microseconds.
Jan 29 12:15:36 np0005601226 ceph-mon[75233]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 29 12:15:36 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:15:36.634326) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 300977 bytes OK
Jan 29 12:15:36 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:15:36.634343) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Jan 29 12:15:36 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:15:36.738324) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Jan 29 12:15:36 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:15:36.738382) EVENT_LOG_v1 {"time_micros": 1769706936738371, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 29 12:15:36 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:15:36.738410) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 29 12:15:36 np0005601226 ceph-mon[75233]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 398098, prev total WAL file size 398098, number of live WAL files 2.
Jan 29 12:15:36 np0005601226 ceph-mon[75233]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 12:15:36 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:15:36.738938) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353033' seq:72057594037927935, type:22 .. '6D67727374617400373534' seq:0, type:0; will stop at (end)
Jan 29 12:15:36 np0005601226 ceph-mon[75233]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 29 12:15:36 np0005601226 ceph-mon[75233]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(293KB)], [41(9839KB)]
Jan 29 12:15:36 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769706936738973, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 10377096, "oldest_snapshot_seqno": -1}
Jan 29 12:15:36 np0005601226 ceph-mon[75233]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 4445 keys, 7100085 bytes, temperature: kUnknown
Jan 29 12:15:36 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769706936822907, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 7100085, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7070158, "index_size": 17721, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11141, "raw_key_size": 108921, "raw_average_key_size": 24, "raw_value_size": 6989550, "raw_average_value_size": 1572, "num_data_blocks": 738, "num_entries": 4445, "num_filter_entries": 4445, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769705351, "oldest_key_time": 0, "file_creation_time": 1769706936, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "affa2982-d59d-4189-b5dd-817a80fada55", "db_session_id": "3LVBT2JQJ5HZ0LRVKGW6", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Jan 29 12:15:36 np0005601226 ceph-mon[75233]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 29 12:15:36 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:15:36.823136) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 7100085 bytes
Jan 29 12:15:36 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:15:36.847004) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 123.5 rd, 84.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 9.6 +0.0 blob) out(6.8 +0.0 blob), read-write-amplify(58.1) write-amplify(23.6) OK, records in: 4945, records dropped: 500 output_compression: NoCompression
Jan 29 12:15:36 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:15:36.847043) EVENT_LOG_v1 {"time_micros": 1769706936847028, "job": 20, "event": "compaction_finished", "compaction_time_micros": 84034, "compaction_time_cpu_micros": 15924, "output_level": 6, "num_output_files": 1, "total_output_size": 7100085, "num_input_records": 4945, "num_output_records": 4445, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 29 12:15:36 np0005601226 ceph-mon[75233]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 12:15:36 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769706936847306, "job": 20, "event": "table_file_deletion", "file_number": 43}
Jan 29 12:15:36 np0005601226 ceph-mon[75233]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 12:15:36 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769706936848308, "job": 20, "event": "table_file_deletion", "file_number": 41}
Jan 29 12:15:36 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:15:36.738852) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:15:36 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:15:36.848431) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:15:36 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:15:36.848438) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:15:36 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:15:36.848439) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:15:36 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:15:36.848441) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:15:36 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:15:36.848442) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:15:37 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e167 do_prune osdmap full prune enabled
Jan 29 12:15:37 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e168 e168: 3 total, 3 up, 3 in
Jan 29 12:15:37 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e168: 3 total, 3 up, 3 in
Jan 29 12:15:37 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v938: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 716 B/s wr, 17 op/s
Jan 29 12:15:39 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v939: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 6.9 KiB/s rd, 818 B/s wr, 10 op/s
Jan 29 12:15:40 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e168 do_prune osdmap full prune enabled
Jan 29 12:15:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:15:40.277 155625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:15:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:15:40.277 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:15:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:15:40.277 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:15:40 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e169 e169: 3 total, 3 up, 3 in
Jan 29 12:15:40 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e169: 3 total, 3 up, 3 in
Jan 29 12:15:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2026-01-29_17:15:40
Jan 29 12:15:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 12:15:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] do_upmap
Jan 29 12:15:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.log', '.rgw.root', '.mgr', 'backups', 'volumes', 'images', 'vms']
Jan 29 12:15:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 12:15:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:15:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:15:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:15:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:15:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:15:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:15:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 12:15:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:15:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 12:15:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:15:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:15:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:15:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:15:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:15:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:15:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:15:41 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v941: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s rd, 1023 B/s wr, 11 op/s
Jan 29 12:15:41 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:15:42 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:15:42 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/394722214' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:15:42 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:15:42 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/394722214' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:15:43 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v942: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 1.7 KiB/s wr, 43 op/s
Jan 29 12:15:45 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v943: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 2.0 KiB/s wr, 42 op/s
Jan 29 12:15:46 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:15:46 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e169 do_prune osdmap full prune enabled
Jan 29 12:15:46 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e170 e170: 3 total, 3 up, 3 in
Jan 29 12:15:46 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e170: 3 total, 3 up, 3 in
Jan 29 12:15:47 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v945: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.1 KiB/s wr, 33 op/s
Jan 29 12:15:49 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v946: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1003 B/s wr, 29 op/s
Jan 29 12:15:50 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:15:50 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1965372966' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:15:50 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:15:50 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1965372966' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:15:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 12:15:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:15:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 12:15:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:15:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:15:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:15:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 3.2853387781843554e-06 of space, bias 1.0, pg target 0.0009856016334553067 quantized to 32 (current 32)
Jan 29 12:15:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:15:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:15:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:15:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659266740198192 of space, bias 1.0, pg target 0.19977800220594574 quantized to 32 (current 32)
Jan 29 12:15:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:15:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.255233376148957e-06 of space, bias 4.0, pg target 0.0015062800513787483 quantized to 16 (current 16)
Jan 29 12:15:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:15:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:15:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:15:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 29 12:15:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:15:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 29 12:15:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:15:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:15:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:15:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 29 12:15:51 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v947: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 921 B/s wr, 27 op/s
Jan 29 12:15:51 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:15:52 np0005601226 nova_compute[239456]: 2026-01-29 17:15:52.451 239460 DEBUG oslo_concurrency.lockutils [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Acquiring lock "eaf7feb2-074e-4420-b260-76ed2274d174" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:15:52 np0005601226 nova_compute[239456]: 2026-01-29 17:15:52.451 239460 DEBUG oslo_concurrency.lockutils [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Lock "eaf7feb2-074e-4420-b260-76ed2274d174" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:15:52 np0005601226 nova_compute[239456]: 2026-01-29 17:15:52.470 239460 DEBUG nova.compute.manager [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] [instance: eaf7feb2-074e-4420-b260-76ed2274d174] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 29 12:15:52 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:15:52 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/775920117' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:15:52 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:15:52 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/775920117' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:15:52 np0005601226 nova_compute[239456]: 2026-01-29 17:15:52.569 239460 DEBUG oslo_concurrency.lockutils [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:15:52 np0005601226 nova_compute[239456]: 2026-01-29 17:15:52.569 239460 DEBUG oslo_concurrency.lockutils [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:15:52 np0005601226 nova_compute[239456]: 2026-01-29 17:15:52.577 239460 DEBUG nova.virt.hardware [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 29 12:15:52 np0005601226 nova_compute[239456]: 2026-01-29 17:15:52.577 239460 INFO nova.compute.claims [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] [instance: eaf7feb2-074e-4420-b260-76ed2274d174] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 29 12:15:52 np0005601226 nova_compute[239456]: 2026-01-29 17:15:52.665 239460 DEBUG oslo_concurrency.processutils [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:15:53 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:15:53 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4017884702' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:15:53 np0005601226 nova_compute[239456]: 2026-01-29 17:15:53.181 239460 DEBUG oslo_concurrency.processutils [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:15:53 np0005601226 nova_compute[239456]: 2026-01-29 17:15:53.186 239460 DEBUG nova.compute.provider_tree [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:15:53 np0005601226 nova_compute[239456]: 2026-01-29 17:15:53.201 239460 DEBUG nova.scheduler.client.report [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:15:53 np0005601226 nova_compute[239456]: 2026-01-29 17:15:53.229 239460 DEBUG oslo_concurrency.lockutils [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.660s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:15:53 np0005601226 nova_compute[239456]: 2026-01-29 17:15:53.230 239460 DEBUG nova.compute.manager [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] [instance: eaf7feb2-074e-4420-b260-76ed2274d174] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 29 12:15:53 np0005601226 nova_compute[239456]: 2026-01-29 17:15:53.278 239460 DEBUG nova.compute.manager [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] [instance: eaf7feb2-074e-4420-b260-76ed2274d174] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 29 12:15:53 np0005601226 nova_compute[239456]: 2026-01-29 17:15:53.278 239460 DEBUG nova.network.neutron [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] [instance: eaf7feb2-074e-4420-b260-76ed2274d174] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 29 12:15:53 np0005601226 nova_compute[239456]: 2026-01-29 17:15:53.301 239460 INFO nova.virt.libvirt.driver [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] [instance: eaf7feb2-074e-4420-b260-76ed2274d174] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 29 12:15:53 np0005601226 nova_compute[239456]: 2026-01-29 17:15:53.320 239460 DEBUG nova.compute.manager [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] [instance: eaf7feb2-074e-4420-b260-76ed2274d174] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 29 12:15:53 np0005601226 nova_compute[239456]: 2026-01-29 17:15:53.404 239460 DEBUG nova.compute.manager [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] [instance: eaf7feb2-074e-4420-b260-76ed2274d174] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 29 12:15:53 np0005601226 nova_compute[239456]: 2026-01-29 17:15:53.405 239460 DEBUG nova.virt.libvirt.driver [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] [instance: eaf7feb2-074e-4420-b260-76ed2274d174] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 29 12:15:53 np0005601226 nova_compute[239456]: 2026-01-29 17:15:53.406 239460 INFO nova.virt.libvirt.driver [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] [instance: eaf7feb2-074e-4420-b260-76ed2274d174] Creating image(s)#033[00m
Jan 29 12:15:53 np0005601226 nova_compute[239456]: 2026-01-29 17:15:53.422 239460 DEBUG nova.storage.rbd_utils [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] rbd image eaf7feb2-074e-4420-b260-76ed2274d174_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:15:53 np0005601226 nova_compute[239456]: 2026-01-29 17:15:53.439 239460 DEBUG nova.storage.rbd_utils [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] rbd image eaf7feb2-074e-4420-b260-76ed2274d174_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:15:53 np0005601226 nova_compute[239456]: 2026-01-29 17:15:53.456 239460 DEBUG nova.storage.rbd_utils [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] rbd image eaf7feb2-074e-4420-b260-76ed2274d174_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:15:53 np0005601226 nova_compute[239456]: 2026-01-29 17:15:53.458 239460 DEBUG oslo_concurrency.lockutils [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Acquiring lock "619359f3f53a439a222a6a2a89408201f4394e5d" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:15:53 np0005601226 nova_compute[239456]: 2026-01-29 17:15:53.459 239460 DEBUG oslo_concurrency.lockutils [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Lock "619359f3f53a439a222a6a2a89408201f4394e5d" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:15:53 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v948: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 716 B/s wr, 17 op/s
Jan 29 12:15:53 np0005601226 nova_compute[239456]: 2026-01-29 17:15:53.887 239460 DEBUG nova.virt.libvirt.imagebackend [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Image locations are: [{'url': 'rbd://cc5c72e3-31e0-58b9-8731-456117d38f4a/images/71879218-5462-43bb-aba6-6319695b24fd/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://cc5c72e3-31e0-58b9-8731-456117d38f4a/images/71879218-5462-43bb-aba6-6319695b24fd/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Jan 29 12:15:53 np0005601226 nova_compute[239456]: 2026-01-29 17:15:53.901 239460 WARNING oslo_policy.policy [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Jan 29 12:15:53 np0005601226 nova_compute[239456]: 2026-01-29 17:15:53.901 239460 WARNING oslo_policy.policy [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Jan 29 12:15:53 np0005601226 nova_compute[239456]: 2026-01-29 17:15:53.904 239460 DEBUG nova.policy [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'bfd4570e2b9e47b5b967bd52324ea676', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '431c31cb9de042e6bc53b16a4b0a84d6', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 29 12:15:54 np0005601226 nova_compute[239456]: 2026-01-29 17:15:54.726 239460 DEBUG nova.network.neutron [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] [instance: eaf7feb2-074e-4420-b260-76ed2274d174] Successfully created port: 3aa0a884-9877-40be-9e0e-295faf527bc3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 29 12:15:55 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v949: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 716 B/s wr, 17 op/s
Jan 29 12:15:56 np0005601226 nova_compute[239456]: 2026-01-29 17:15:56.049 239460 DEBUG nova.network.neutron [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] [instance: eaf7feb2-074e-4420-b260-76ed2274d174] Successfully updated port: 3aa0a884-9877-40be-9e0e-295faf527bc3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 29 12:15:56 np0005601226 nova_compute[239456]: 2026-01-29 17:15:56.068 239460 DEBUG oslo_concurrency.lockutils [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Acquiring lock "refresh_cache-eaf7feb2-074e-4420-b260-76ed2274d174" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:15:56 np0005601226 nova_compute[239456]: 2026-01-29 17:15:56.069 239460 DEBUG oslo_concurrency.lockutils [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Acquired lock "refresh_cache-eaf7feb2-074e-4420-b260-76ed2274d174" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:15:56 np0005601226 nova_compute[239456]: 2026-01-29 17:15:56.069 239460 DEBUG nova.network.neutron [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] [instance: eaf7feb2-074e-4420-b260-76ed2274d174] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 29 12:15:56 np0005601226 nova_compute[239456]: 2026-01-29 17:15:56.414 239460 DEBUG nova.network.neutron [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] [instance: eaf7feb2-074e-4420-b260-76ed2274d174] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 29 12:15:56 np0005601226 nova_compute[239456]: 2026-01-29 17:15:56.463 239460 DEBUG nova.compute.manager [req-143611ad-d2a2-4829-bbf1-ed61cc716ff6 req-130215ef-873e-4770-9533-039b720390d7 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: eaf7feb2-074e-4420-b260-76ed2274d174] Received event network-changed-3aa0a884-9877-40be-9e0e-295faf527bc3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:15:56 np0005601226 nova_compute[239456]: 2026-01-29 17:15:56.463 239460 DEBUG nova.compute.manager [req-143611ad-d2a2-4829-bbf1-ed61cc716ff6 req-130215ef-873e-4770-9533-039b720390d7 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: eaf7feb2-074e-4420-b260-76ed2274d174] Refreshing instance network info cache due to event network-changed-3aa0a884-9877-40be-9e0e-295faf527bc3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 29 12:15:56 np0005601226 nova_compute[239456]: 2026-01-29 17:15:56.464 239460 DEBUG oslo_concurrency.lockutils [req-143611ad-d2a2-4829-bbf1-ed61cc716ff6 req-130215ef-873e-4770-9533-039b720390d7 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "refresh_cache-eaf7feb2-074e-4420-b260-76ed2274d174" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:15:56 np0005601226 nova_compute[239456]: 2026-01-29 17:15:56.491 239460 DEBUG oslo_concurrency.processutils [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/619359f3f53a439a222a6a2a89408201f4394e5d.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:15:56 np0005601226 nova_compute[239456]: 2026-01-29 17:15:56.578 239460 DEBUG oslo_concurrency.processutils [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/619359f3f53a439a222a6a2a89408201f4394e5d.part --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:15:56 np0005601226 nova_compute[239456]: 2026-01-29 17:15:56.579 239460 DEBUG nova.virt.images [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] 71879218-5462-43bb-aba6-6319695b24fd was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Jan 29 12:15:56 np0005601226 nova_compute[239456]: 2026-01-29 17:15:56.593 239460 DEBUG nova.privsep.utils [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Jan 29 12:15:56 np0005601226 nova_compute[239456]: 2026-01-29 17:15:56.594 239460 DEBUG oslo_concurrency.processutils [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/619359f3f53a439a222a6a2a89408201f4394e5d.part /var/lib/nova/instances/_base/619359f3f53a439a222a6a2a89408201f4394e5d.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:15:56 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:15:57 np0005601226 nova_compute[239456]: 2026-01-29 17:15:57.460 239460 DEBUG nova.network.neutron [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] [instance: eaf7feb2-074e-4420-b260-76ed2274d174] Updating instance_info_cache with network_info: [{"id": "3aa0a884-9877-40be-9e0e-295faf527bc3", "address": "fa:16:3e:cd:e5:77", "network": {"id": "7bc173dd-5a11-45d7-bb3b-a3cabef29b05", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1265997241-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "431c31cb9de042e6bc53b16a4b0a84d6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3aa0a884-98", "ovs_interfaceid": "3aa0a884-9877-40be-9e0e-295faf527bc3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:15:57 np0005601226 nova_compute[239456]: 2026-01-29 17:15:57.481 239460 DEBUG oslo_concurrency.lockutils [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Releasing lock "refresh_cache-eaf7feb2-074e-4420-b260-76ed2274d174" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:15:57 np0005601226 nova_compute[239456]: 2026-01-29 17:15:57.481 239460 DEBUG nova.compute.manager [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] [instance: eaf7feb2-074e-4420-b260-76ed2274d174] Instance network_info: |[{"id": "3aa0a884-9877-40be-9e0e-295faf527bc3", "address": "fa:16:3e:cd:e5:77", "network": {"id": "7bc173dd-5a11-45d7-bb3b-a3cabef29b05", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1265997241-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "431c31cb9de042e6bc53b16a4b0a84d6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3aa0a884-98", "ovs_interfaceid": "3aa0a884-9877-40be-9e0e-295faf527bc3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 29 12:15:57 np0005601226 nova_compute[239456]: 2026-01-29 17:15:57.482 239460 DEBUG oslo_concurrency.lockutils [req-143611ad-d2a2-4829-bbf1-ed61cc716ff6 req-130215ef-873e-4770-9533-039b720390d7 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquired lock "refresh_cache-eaf7feb2-074e-4420-b260-76ed2274d174" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:15:57 np0005601226 nova_compute[239456]: 2026-01-29 17:15:57.482 239460 DEBUG nova.network.neutron [req-143611ad-d2a2-4829-bbf1-ed61cc716ff6 req-130215ef-873e-4770-9533-039b720390d7 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: eaf7feb2-074e-4420-b260-76ed2274d174] Refreshing network info cache for port 3aa0a884-9877-40be-9e0e-295faf527bc3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 29 12:15:57 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v950: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 656 B/s wr, 15 op/s
Jan 29 12:15:57 np0005601226 nova_compute[239456]: 2026-01-29 17:15:57.735 239460 DEBUG oslo_concurrency.processutils [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/619359f3f53a439a222a6a2a89408201f4394e5d.part /var/lib/nova/instances/_base/619359f3f53a439a222a6a2a89408201f4394e5d.converted" returned: 0 in 1.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:15:57 np0005601226 nova_compute[239456]: 2026-01-29 17:15:57.738 239460 DEBUG oslo_concurrency.processutils [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/619359f3f53a439a222a6a2a89408201f4394e5d.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:15:57 np0005601226 nova_compute[239456]: 2026-01-29 17:15:57.788 239460 DEBUG oslo_concurrency.processutils [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/619359f3f53a439a222a6a2a89408201f4394e5d.converted --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:15:57 np0005601226 nova_compute[239456]: 2026-01-29 17:15:57.789 239460 DEBUG oslo_concurrency.lockutils [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Lock "619359f3f53a439a222a6a2a89408201f4394e5d" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 4.330s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:15:57 np0005601226 nova_compute[239456]: 2026-01-29 17:15:57.808 239460 DEBUG nova.storage.rbd_utils [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] rbd image eaf7feb2-074e-4420-b260-76ed2274d174_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:15:57 np0005601226 nova_compute[239456]: 2026-01-29 17:15:57.811 239460 DEBUG oslo_concurrency.processutils [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/619359f3f53a439a222a6a2a89408201f4394e5d eaf7feb2-074e-4420-b260-76ed2274d174_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:15:58 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e170 do_prune osdmap full prune enabled
Jan 29 12:15:58 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e171 e171: 3 total, 3 up, 3 in
Jan 29 12:15:58 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e171: 3 total, 3 up, 3 in
Jan 29 12:15:59 np0005601226 nova_compute[239456]: 2026-01-29 17:15:59.101 239460 DEBUG nova.network.neutron [req-143611ad-d2a2-4829-bbf1-ed61cc716ff6 req-130215ef-873e-4770-9533-039b720390d7 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: eaf7feb2-074e-4420-b260-76ed2274d174] Updated VIF entry in instance network info cache for port 3aa0a884-9877-40be-9e0e-295faf527bc3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 29 12:15:59 np0005601226 nova_compute[239456]: 2026-01-29 17:15:59.102 239460 DEBUG nova.network.neutron [req-143611ad-d2a2-4829-bbf1-ed61cc716ff6 req-130215ef-873e-4770-9533-039b720390d7 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: eaf7feb2-074e-4420-b260-76ed2274d174] Updating instance_info_cache with network_info: [{"id": "3aa0a884-9877-40be-9e0e-295faf527bc3", "address": "fa:16:3e:cd:e5:77", "network": {"id": "7bc173dd-5a11-45d7-bb3b-a3cabef29b05", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1265997241-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "431c31cb9de042e6bc53b16a4b0a84d6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3aa0a884-98", "ovs_interfaceid": "3aa0a884-9877-40be-9e0e-295faf527bc3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:15:59 np0005601226 nova_compute[239456]: 2026-01-29 17:15:59.121 239460 DEBUG oslo_concurrency.lockutils [req-143611ad-d2a2-4829-bbf1-ed61cc716ff6 req-130215ef-873e-4770-9533-039b720390d7 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Releasing lock "refresh_cache-eaf7feb2-074e-4420-b260-76ed2274d174" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:15:59 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e171 do_prune osdmap full prune enabled
Jan 29 12:15:59 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e172 e172: 3 total, 3 up, 3 in
Jan 29 12:15:59 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e172: 3 total, 3 up, 3 in
Jan 29 12:15:59 np0005601226 ceph-osd[86917]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Jan 29 12:15:59 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v953: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 1023 B/s wr, 31 op/s
Jan 29 12:15:59 np0005601226 nova_compute[239456]: 2026-01-29 17:15:59.811 239460 DEBUG oslo_concurrency.processutils [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/619359f3f53a439a222a6a2a89408201f4394e5d eaf7feb2-074e-4420-b260-76ed2274d174_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.001s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:15:59 np0005601226 nova_compute[239456]: 2026-01-29 17:15:59.859 239460 DEBUG nova.storage.rbd_utils [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] resizing rbd image eaf7feb2-074e-4420-b260-76ed2274d174_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 29 12:15:59 np0005601226 nova_compute[239456]: 2026-01-29 17:15:59.933 239460 DEBUG nova.objects.instance [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Lazy-loading 'migration_context' on Instance uuid eaf7feb2-074e-4420-b260-76ed2274d174 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:15:59 np0005601226 nova_compute[239456]: 2026-01-29 17:15:59.948 239460 DEBUG nova.virt.libvirt.driver [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] [instance: eaf7feb2-074e-4420-b260-76ed2274d174] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 29 12:15:59 np0005601226 nova_compute[239456]: 2026-01-29 17:15:59.949 239460 DEBUG nova.virt.libvirt.driver [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] [instance: eaf7feb2-074e-4420-b260-76ed2274d174] Ensure instance console log exists: /var/lib/nova/instances/eaf7feb2-074e-4420-b260-76ed2274d174/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 29 12:15:59 np0005601226 nova_compute[239456]: 2026-01-29 17:15:59.949 239460 DEBUG oslo_concurrency.lockutils [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:15:59 np0005601226 nova_compute[239456]: 2026-01-29 17:15:59.950 239460 DEBUG oslo_concurrency.lockutils [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:15:59 np0005601226 nova_compute[239456]: 2026-01-29 17:15:59.950 239460 DEBUG oslo_concurrency.lockutils [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:15:59 np0005601226 nova_compute[239456]: 2026-01-29 17:15:59.953 239460 DEBUG nova.virt.libvirt.driver [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] [instance: eaf7feb2-074e-4420-b260-76ed2274d174] Start _get_guest_xml network_info=[{"id": "3aa0a884-9877-40be-9e0e-295faf527bc3", "address": "fa:16:3e:cd:e5:77", "network": {"id": "7bc173dd-5a11-45d7-bb3b-a3cabef29b05", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1265997241-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "431c31cb9de042e6bc53b16a4b0a84d6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3aa0a884-98", "ovs_interfaceid": "3aa0a884-9877-40be-9e0e-295faf527bc3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-29T17:13:26Z,direct_url=<?>,disk_format='qcow2',id=71879218-5462-43bb-aba6-6319695b24fd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='218d87653c0f4776a3f1900d36945229',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-29T17:13:29Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'boot_index': 0, 'encrypted': False, 'image_id': '71879218-5462-43bb-aba6-6319695b24fd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 29 12:15:59 np0005601226 nova_compute[239456]: 2026-01-29 17:15:59.957 239460 WARNING nova.virt.libvirt.driver [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 29 12:15:59 np0005601226 nova_compute[239456]: 2026-01-29 17:15:59.961 239460 DEBUG nova.virt.libvirt.host [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 29 12:15:59 np0005601226 nova_compute[239456]: 2026-01-29 17:15:59.962 239460 DEBUG nova.virt.libvirt.host [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 29 12:15:59 np0005601226 nova_compute[239456]: 2026-01-29 17:15:59.965 239460 DEBUG nova.virt.libvirt.host [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 29 12:15:59 np0005601226 nova_compute[239456]: 2026-01-29 17:15:59.966 239460 DEBUG nova.virt.libvirt.host [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 29 12:15:59 np0005601226 nova_compute[239456]: 2026-01-29 17:15:59.967 239460 DEBUG nova.virt.libvirt.driver [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 29 12:15:59 np0005601226 nova_compute[239456]: 2026-01-29 17:15:59.967 239460 DEBUG nova.virt.hardware [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-29T17:13:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='e367d44e-e23e-4b8e-90d1-56d09c8403b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-29T17:13:26Z,direct_url=<?>,disk_format='qcow2',id=71879218-5462-43bb-aba6-6319695b24fd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='218d87653c0f4776a3f1900d36945229',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-29T17:13:29Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 29 12:15:59 np0005601226 nova_compute[239456]: 2026-01-29 17:15:59.968 239460 DEBUG nova.virt.hardware [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 29 12:15:59 np0005601226 nova_compute[239456]: 2026-01-29 17:15:59.968 239460 DEBUG nova.virt.hardware [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 29 12:15:59 np0005601226 nova_compute[239456]: 2026-01-29 17:15:59.968 239460 DEBUG nova.virt.hardware [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 29 12:15:59 np0005601226 nova_compute[239456]: 2026-01-29 17:15:59.968 239460 DEBUG nova.virt.hardware [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 29 12:15:59 np0005601226 nova_compute[239456]: 2026-01-29 17:15:59.969 239460 DEBUG nova.virt.hardware [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 29 12:15:59 np0005601226 nova_compute[239456]: 2026-01-29 17:15:59.969 239460 DEBUG nova.virt.hardware [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 29 12:15:59 np0005601226 nova_compute[239456]: 2026-01-29 17:15:59.969 239460 DEBUG nova.virt.hardware [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 29 12:15:59 np0005601226 nova_compute[239456]: 2026-01-29 17:15:59.970 239460 DEBUG nova.virt.hardware [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 29 12:15:59 np0005601226 nova_compute[239456]: 2026-01-29 17:15:59.970 239460 DEBUG nova.virt.hardware [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 29 12:15:59 np0005601226 nova_compute[239456]: 2026-01-29 17:15:59.970 239460 DEBUG nova.virt.hardware [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 29 12:15:59 np0005601226 nova_compute[239456]: 2026-01-29 17:15:59.974 239460 DEBUG nova.privsep.utils [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Jan 29 12:15:59 np0005601226 nova_compute[239456]: 2026-01-29 17:15:59.975 239460 DEBUG oslo_concurrency.processutils [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:16:00 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:16:00 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/141904612' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:16:00 np0005601226 nova_compute[239456]: 2026-01-29 17:16:00.476 239460 DEBUG oslo_concurrency.processutils [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:16:00 np0005601226 nova_compute[239456]: 2026-01-29 17:16:00.494 239460 DEBUG nova.storage.rbd_utils [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] rbd image eaf7feb2-074e-4420-b260-76ed2274d174_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:16:00 np0005601226 nova_compute[239456]: 2026-01-29 17:16:00.498 239460 DEBUG oslo_concurrency.processutils [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:16:01 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:16:01 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1211090080' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:16:01 np0005601226 nova_compute[239456]: 2026-01-29 17:16:01.056 239460 DEBUG oslo_concurrency.processutils [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:16:01 np0005601226 nova_compute[239456]: 2026-01-29 17:16:01.058 239460 DEBUG nova.virt.libvirt.vif [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-29T17:15:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-905439640',display_name='tempest-VolumesActionsTest-instance-905439640',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-905439640',id=1,image_ref='71879218-5462-43bb-aba6-6319695b24fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='431c31cb9de042e6bc53b16a4b0a84d6',ramdisk_id='',reservation_id='r-zb47n151',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='71879218-5462-43bb-aba6-6319695b24fd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesActionsTest-1767480583',owner_user_name='tempest-VolumesActionsTest-1767480583
-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-29T17:15:53Z,user_data=None,user_id='bfd4570e2b9e47b5b967bd52324ea676',uuid=eaf7feb2-074e-4420-b260-76ed2274d174,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3aa0a884-9877-40be-9e0e-295faf527bc3", "address": "fa:16:3e:cd:e5:77", "network": {"id": "7bc173dd-5a11-45d7-bb3b-a3cabef29b05", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1265997241-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "431c31cb9de042e6bc53b16a4b0a84d6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3aa0a884-98", "ovs_interfaceid": "3aa0a884-9877-40be-9e0e-295faf527bc3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 29 12:16:01 np0005601226 nova_compute[239456]: 2026-01-29 17:16:01.059 239460 DEBUG nova.network.os_vif_util [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Converting VIF {"id": "3aa0a884-9877-40be-9e0e-295faf527bc3", "address": "fa:16:3e:cd:e5:77", "network": {"id": "7bc173dd-5a11-45d7-bb3b-a3cabef29b05", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1265997241-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "431c31cb9de042e6bc53b16a4b0a84d6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3aa0a884-98", "ovs_interfaceid": "3aa0a884-9877-40be-9e0e-295faf527bc3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 29 12:16:01 np0005601226 nova_compute[239456]: 2026-01-29 17:16:01.060 239460 DEBUG nova.network.os_vif_util [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cd:e5:77,bridge_name='br-int',has_traffic_filtering=True,id=3aa0a884-9877-40be-9e0e-295faf527bc3,network=Network(7bc173dd-5a11-45d7-bb3b-a3cabef29b05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3aa0a884-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 29 12:16:01 np0005601226 nova_compute[239456]: 2026-01-29 17:16:01.062 239460 DEBUG nova.objects.instance [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Lazy-loading 'pci_devices' on Instance uuid eaf7feb2-074e-4420-b260-76ed2274d174 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:16:01 np0005601226 nova_compute[239456]: 2026-01-29 17:16:01.083 239460 DEBUG nova.virt.libvirt.driver [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] [instance: eaf7feb2-074e-4420-b260-76ed2274d174] End _get_guest_xml xml=<domain type="kvm">
Jan 29 12:16:01 np0005601226 nova_compute[239456]:  <uuid>eaf7feb2-074e-4420-b260-76ed2274d174</uuid>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:  <name>instance-00000001</name>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:  <memory>131072</memory>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:  <vcpu>1</vcpu>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:  <metadata>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 29 12:16:01 np0005601226 nova_compute[239456]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:      <nova:name>tempest-VolumesActionsTest-instance-905439640</nova:name>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:      <nova:creationTime>2026-01-29 17:15:59</nova:creationTime>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:      <nova:flavor name="m1.nano">
Jan 29 12:16:01 np0005601226 nova_compute[239456]:        <nova:memory>128</nova:memory>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:        <nova:disk>1</nova:disk>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:        <nova:swap>0</nova:swap>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:        <nova:ephemeral>0</nova:ephemeral>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:        <nova:vcpus>1</nova:vcpus>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:      </nova:flavor>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:      <nova:owner>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:        <nova:user uuid="bfd4570e2b9e47b5b967bd52324ea676">tempest-VolumesActionsTest-1767480583-project-member</nova:user>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:        <nova:project uuid="431c31cb9de042e6bc53b16a4b0a84d6">tempest-VolumesActionsTest-1767480583</nova:project>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:      </nova:owner>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:      <nova:root type="image" uuid="71879218-5462-43bb-aba6-6319695b24fd"/>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:      <nova:ports>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:        <nova:port uuid="3aa0a884-9877-40be-9e0e-295faf527bc3">
Jan 29 12:16:01 np0005601226 nova_compute[239456]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:        </nova:port>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:      </nova:ports>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:    </nova:instance>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:  </metadata>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:  <sysinfo type="smbios">
Jan 29 12:16:01 np0005601226 nova_compute[239456]:    <system>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:      <entry name="manufacturer">RDO</entry>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:      <entry name="product">OpenStack Compute</entry>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:      <entry name="serial">eaf7feb2-074e-4420-b260-76ed2274d174</entry>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:      <entry name="uuid">eaf7feb2-074e-4420-b260-76ed2274d174</entry>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:      <entry name="family">Virtual Machine</entry>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:    </system>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:  </sysinfo>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:  <os>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:    <boot dev="hd"/>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:    <smbios mode="sysinfo"/>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:  </os>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:  <features>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:    <acpi/>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:    <apic/>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:    <vmcoreinfo/>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:  </features>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:  <clock offset="utc">
Jan 29 12:16:01 np0005601226 nova_compute[239456]:    <timer name="pit" tickpolicy="delay"/>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:    <timer name="hpet" present="no"/>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:  </clock>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:  <cpu mode="host-model" match="exact">
Jan 29 12:16:01 np0005601226 nova_compute[239456]:    <topology sockets="1" cores="1" threads="1"/>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:  </cpu>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:  <devices>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:    <disk type="network" device="disk">
Jan 29 12:16:01 np0005601226 nova_compute[239456]:      <driver type="raw" cache="none"/>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:      <source protocol="rbd" name="vms/eaf7feb2-074e-4420-b260-76ed2274d174_disk">
Jan 29 12:16:01 np0005601226 nova_compute[239456]:        <host name="192.168.122.100" port="6789"/>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:      </source>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:      <auth username="openstack">
Jan 29 12:16:01 np0005601226 nova_compute[239456]:        <secret type="ceph" uuid="cc5c72e3-31e0-58b9-8731-456117d38f4a"/>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:      </auth>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:      <target dev="vda" bus="virtio"/>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:    </disk>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:    <disk type="network" device="cdrom">
Jan 29 12:16:01 np0005601226 nova_compute[239456]:      <driver type="raw" cache="none"/>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:      <source protocol="rbd" name="vms/eaf7feb2-074e-4420-b260-76ed2274d174_disk.config">
Jan 29 12:16:01 np0005601226 nova_compute[239456]:        <host name="192.168.122.100" port="6789"/>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:      </source>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:      <auth username="openstack">
Jan 29 12:16:01 np0005601226 nova_compute[239456]:        <secret type="ceph" uuid="cc5c72e3-31e0-58b9-8731-456117d38f4a"/>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:      </auth>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:      <target dev="sda" bus="sata"/>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:    </disk>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:    <interface type="ethernet">
Jan 29 12:16:01 np0005601226 nova_compute[239456]:      <mac address="fa:16:3e:cd:e5:77"/>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:      <model type="virtio"/>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:      <driver name="vhost" rx_queue_size="512"/>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:      <mtu size="1442"/>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:      <target dev="tap3aa0a884-98"/>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:    </interface>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:    <serial type="pty">
Jan 29 12:16:01 np0005601226 nova_compute[239456]:      <log file="/var/lib/nova/instances/eaf7feb2-074e-4420-b260-76ed2274d174/console.log" append="off"/>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:    </serial>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:    <video>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:      <model type="virtio"/>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:    </video>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:    <input type="tablet" bus="usb"/>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:    <rng model="virtio">
Jan 29 12:16:01 np0005601226 nova_compute[239456]:      <backend model="random">/dev/urandom</backend>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:    </rng>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root"/>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:    <controller type="usb" index="0"/>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:    <memballoon model="virtio">
Jan 29 12:16:01 np0005601226 nova_compute[239456]:      <stats period="10"/>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:    </memballoon>
Jan 29 12:16:01 np0005601226 nova_compute[239456]:  </devices>
Jan 29 12:16:01 np0005601226 nova_compute[239456]: </domain>
Jan 29 12:16:01 np0005601226 nova_compute[239456]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 29 12:16:01 np0005601226 nova_compute[239456]: 2026-01-29 17:16:01.084 239460 DEBUG nova.compute.manager [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] [instance: eaf7feb2-074e-4420-b260-76ed2274d174] Preparing to wait for external event network-vif-plugged-3aa0a884-9877-40be-9e0e-295faf527bc3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 29 12:16:01 np0005601226 nova_compute[239456]: 2026-01-29 17:16:01.084 239460 DEBUG oslo_concurrency.lockutils [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Acquiring lock "eaf7feb2-074e-4420-b260-76ed2274d174-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:16:01 np0005601226 nova_compute[239456]: 2026-01-29 17:16:01.084 239460 DEBUG oslo_concurrency.lockutils [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Lock "eaf7feb2-074e-4420-b260-76ed2274d174-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:16:01 np0005601226 nova_compute[239456]: 2026-01-29 17:16:01.085 239460 DEBUG oslo_concurrency.lockutils [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Lock "eaf7feb2-074e-4420-b260-76ed2274d174-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:16:01 np0005601226 nova_compute[239456]: 2026-01-29 17:16:01.085 239460 DEBUG nova.virt.libvirt.vif [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-29T17:15:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-905439640',display_name='tempest-VolumesActionsTest-instance-905439640',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-905439640',id=1,image_ref='71879218-5462-43bb-aba6-6319695b24fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='431c31cb9de042e6bc53b16a4b0a84d6',ramdisk_id='',reservation_id='r-zb47n151',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='71879218-5462-43bb-aba6-6319695b24fd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesActionsTest-1767480583',owner_user_name='tempest-VolumesActionsTest-1767480583-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-29T17:15:53Z,user_data=None,user_id='bfd4570e2b9e47b5b967bd52324ea676',uuid=eaf7feb2-074e-4420-b260-76ed2274d174,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3aa0a884-9877-40be-9e0e-295faf527bc3", "address": "fa:16:3e:cd:e5:77", "network": {"id": "7bc173dd-5a11-45d7-bb3b-a3cabef29b05", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1265997241-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "431c31cb9de042e6bc53b16a4b0a84d6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3aa0a884-98", "ovs_interfaceid": "3aa0a884-9877-40be-9e0e-295faf527bc3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 29 12:16:01 np0005601226 nova_compute[239456]: 2026-01-29 17:16:01.086 239460 DEBUG nova.network.os_vif_util [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Converting VIF {"id": "3aa0a884-9877-40be-9e0e-295faf527bc3", "address": "fa:16:3e:cd:e5:77", "network": {"id": "7bc173dd-5a11-45d7-bb3b-a3cabef29b05", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1265997241-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "431c31cb9de042e6bc53b16a4b0a84d6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3aa0a884-98", "ovs_interfaceid": "3aa0a884-9877-40be-9e0e-295faf527bc3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 29 12:16:01 np0005601226 nova_compute[239456]: 2026-01-29 17:16:01.086 239460 DEBUG nova.network.os_vif_util [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cd:e5:77,bridge_name='br-int',has_traffic_filtering=True,id=3aa0a884-9877-40be-9e0e-295faf527bc3,network=Network(7bc173dd-5a11-45d7-bb3b-a3cabef29b05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3aa0a884-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 29 12:16:01 np0005601226 nova_compute[239456]: 2026-01-29 17:16:01.087 239460 DEBUG os_vif [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:e5:77,bridge_name='br-int',has_traffic_filtering=True,id=3aa0a884-9877-40be-9e0e-295faf527bc3,network=Network(7bc173dd-5a11-45d7-bb3b-a3cabef29b05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3aa0a884-98') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 29 12:16:01 np0005601226 nova_compute[239456]: 2026-01-29 17:16:01.121 239460 DEBUG ovsdbapp.backend.ovs_idl [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 29 12:16:01 np0005601226 nova_compute[239456]: 2026-01-29 17:16:01.121 239460 DEBUG ovsdbapp.backend.ovs_idl [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 29 12:16:01 np0005601226 nova_compute[239456]: 2026-01-29 17:16:01.122 239460 DEBUG ovsdbapp.backend.ovs_idl [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 29 12:16:01 np0005601226 nova_compute[239456]: 2026-01-29 17:16:01.122 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 29 12:16:01 np0005601226 nova_compute[239456]: 2026-01-29 17:16:01.123 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] [POLLOUT] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:16:01 np0005601226 nova_compute[239456]: 2026-01-29 17:16:01.123 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 29 12:16:01 np0005601226 nova_compute[239456]: 2026-01-29 17:16:01.124 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:16:01 np0005601226 nova_compute[239456]: 2026-01-29 17:16:01.125 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:16:01 np0005601226 nova_compute[239456]: 2026-01-29 17:16:01.127 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:16:01 np0005601226 nova_compute[239456]: 2026-01-29 17:16:01.137 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:16:01 np0005601226 nova_compute[239456]: 2026-01-29 17:16:01.137 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:16:01 np0005601226 nova_compute[239456]: 2026-01-29 17:16:01.138 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 29 12:16:01 np0005601226 nova_compute[239456]: 2026-01-29 17:16:01.139 239460 INFO oslo.privsep.daemon [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmp7etz4ipd/privsep.sock']#033[00m
Jan 29 12:16:01 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v954: 305 pgs: 305 active+clean; 41 MiB data, 179 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 511 B/s wr, 11 op/s
Jan 29 12:16:01 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e172 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:16:01 np0005601226 nova_compute[239456]: 2026-01-29 17:16:01.725 239460 INFO oslo.privsep.daemon [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Jan 29 12:16:01 np0005601226 nova_compute[239456]: 2026-01-29 17:16:01.616 245888 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Jan 29 12:16:01 np0005601226 nova_compute[239456]: 2026-01-29 17:16:01.619 245888 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Jan 29 12:16:01 np0005601226 nova_compute[239456]: 2026-01-29 17:16:01.621 245888 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none#033[00m
Jan 29 12:16:01 np0005601226 nova_compute[239456]: 2026-01-29 17:16:01.621 245888 INFO oslo.privsep.daemon [-] privsep daemon running as pid 245888#033[00m
Jan 29 12:16:02 np0005601226 nova_compute[239456]: 2026-01-29 17:16:02.208 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:16:02 np0005601226 nova_compute[239456]: 2026-01-29 17:16:02.208 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3aa0a884-98, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:16:02 np0005601226 nova_compute[239456]: 2026-01-29 17:16:02.209 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3aa0a884-98, col_values=(('external_ids', {'iface-id': '3aa0a884-9877-40be-9e0e-295faf527bc3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cd:e5:77', 'vm-uuid': 'eaf7feb2-074e-4420-b260-76ed2274d174'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:16:02 np0005601226 nova_compute[239456]: 2026-01-29 17:16:02.210 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:16:02 np0005601226 NetworkManager[49020]: <info>  [1769706962.2117] manager: (tap3aa0a884-98): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Jan 29 12:16:02 np0005601226 nova_compute[239456]: 2026-01-29 17:16:02.213 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 29 12:16:02 np0005601226 nova_compute[239456]: 2026-01-29 17:16:02.215 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:16:02 np0005601226 nova_compute[239456]: 2026-01-29 17:16:02.216 239460 INFO os_vif [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:e5:77,bridge_name='br-int',has_traffic_filtering=True,id=3aa0a884-9877-40be-9e0e-295faf527bc3,network=Network(7bc173dd-5a11-45d7-bb3b-a3cabef29b05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3aa0a884-98')#033[00m
Jan 29 12:16:02 np0005601226 nova_compute[239456]: 2026-01-29 17:16:02.335 239460 DEBUG nova.virt.libvirt.driver [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:16:02 np0005601226 nova_compute[239456]: 2026-01-29 17:16:02.336 239460 DEBUG nova.virt.libvirt.driver [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:16:02 np0005601226 nova_compute[239456]: 2026-01-29 17:16:02.336 239460 DEBUG nova.virt.libvirt.driver [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] No VIF found with MAC fa:16:3e:cd:e5:77, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 29 12:16:02 np0005601226 nova_compute[239456]: 2026-01-29 17:16:02.336 239460 INFO nova.virt.libvirt.driver [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] [instance: eaf7feb2-074e-4420-b260-76ed2274d174] Using config drive#033[00m
Jan 29 12:16:02 np0005601226 nova_compute[239456]: 2026-01-29 17:16:02.353 239460 DEBUG nova.storage.rbd_utils [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] rbd image eaf7feb2-074e-4420-b260-76ed2274d174_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:16:02 np0005601226 podman[245912]: 2026-01-29 17:16:02.867934473 +0000 UTC m=+0.038909841 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 29 12:16:02 np0005601226 podman[245913]: 2026-01-29 17:16:02.931888266 +0000 UTC m=+0.100721877 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 29 12:16:03 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 12:16:03 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 12:16:03 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 12:16:03 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 12:16:03 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 12:16:03 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:16:03 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v955: 305 pgs: 305 active+clean; 73 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.8 MiB/s wr, 30 op/s
Jan 29 12:16:03 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 12:16:03 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 12:16:03 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 12:16:03 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 12:16:03 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 12:16:03 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 12:16:03 np0005601226 podman[246099]: 2026-01-29 17:16:03.913108911 +0000 UTC m=+0.046346774 container create 4e3ef525a91a5e0c2bda4087f4bb25d52c97e0f571f48a5aaa86e9590bc4857b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_ishizaka, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:16:03 np0005601226 systemd[1]: Started libpod-conmon-4e3ef525a91a5e0c2bda4087f4bb25d52c97e0f571f48a5aaa86e9590bc4857b.scope.
Jan 29 12:16:03 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:16:03 np0005601226 podman[246099]: 2026-01-29 17:16:03.88441433 +0000 UTC m=+0.017652213 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:16:03 np0005601226 podman[246099]: 2026-01-29 17:16:03.989396411 +0000 UTC m=+0.122634294 container init 4e3ef525a91a5e0c2bda4087f4bb25d52c97e0f571f48a5aaa86e9590bc4857b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 12:16:03 np0005601226 podman[246099]: 2026-01-29 17:16:03.994957443 +0000 UTC m=+0.128195306 container start 4e3ef525a91a5e0c2bda4087f4bb25d52c97e0f571f48a5aaa86e9590bc4857b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_ishizaka, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:16:03 np0005601226 magical_ishizaka[246116]: 167 167
Jan 29 12:16:03 np0005601226 systemd[1]: libpod-4e3ef525a91a5e0c2bda4087f4bb25d52c97e0f571f48a5aaa86e9590bc4857b.scope: Deactivated successfully.
Jan 29 12:16:04 np0005601226 conmon[246116]: conmon 4e3ef525a91a5e0c2bda <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4e3ef525a91a5e0c2bda4087f4bb25d52c97e0f571f48a5aaa86e9590bc4857b.scope/container/memory.events
Jan 29 12:16:04 np0005601226 podman[246099]: 2026-01-29 17:16:04.000686949 +0000 UTC m=+0.133924902 container attach 4e3ef525a91a5e0c2bda4087f4bb25d52c97e0f571f48a5aaa86e9590bc4857b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:16:04 np0005601226 podman[246099]: 2026-01-29 17:16:04.001406888 +0000 UTC m=+0.134644781 container died 4e3ef525a91a5e0c2bda4087f4bb25d52c97e0f571f48a5aaa86e9590bc4857b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:16:04 np0005601226 systemd[1]: var-lib-containers-storage-overlay-40e29abf5c319139dddb0c9253382e0479a1384e0c199e33bcd009f381af2fb7-merged.mount: Deactivated successfully.
Jan 29 12:16:04 np0005601226 podman[246099]: 2026-01-29 17:16:04.060086068 +0000 UTC m=+0.193323931 container remove 4e3ef525a91a5e0c2bda4087f4bb25d52c97e0f571f48a5aaa86e9590bc4857b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 29 12:16:04 np0005601226 systemd[1]: libpod-conmon-4e3ef525a91a5e0c2bda4087f4bb25d52c97e0f571f48a5aaa86e9590bc4857b.scope: Deactivated successfully.
Jan 29 12:16:04 np0005601226 podman[246139]: 2026-01-29 17:16:04.182351041 +0000 UTC m=+0.040735942 container create af85be577b0f4435af672b393c0f1b50316a9ffbd219d9f90a0fc02b4c3b27ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_archimedes, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:16:04 np0005601226 systemd[1]: Started libpod-conmon-af85be577b0f4435af672b393c0f1b50316a9ffbd219d9f90a0fc02b4c3b27ed.scope.
Jan 29 12:16:04 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:16:04 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca75ce15a5765644937a5b48128d535146833693c99f37b20809c8dacc80526d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:16:04 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca75ce15a5765644937a5b48128d535146833693c99f37b20809c8dacc80526d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:16:04 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca75ce15a5765644937a5b48128d535146833693c99f37b20809c8dacc80526d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:16:04 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca75ce15a5765644937a5b48128d535146833693c99f37b20809c8dacc80526d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:16:04 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca75ce15a5765644937a5b48128d535146833693c99f37b20809c8dacc80526d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 12:16:04 np0005601226 nova_compute[239456]: 2026-01-29 17:16:04.251 239460 INFO nova.virt.libvirt.driver [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] [instance: eaf7feb2-074e-4420-b260-76ed2274d174] Creating config drive at /var/lib/nova/instances/eaf7feb2-074e-4420-b260-76ed2274d174/disk.config#033[00m
Jan 29 12:16:04 np0005601226 nova_compute[239456]: 2026-01-29 17:16:04.256 239460 DEBUG oslo_concurrency.processutils [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/eaf7feb2-074e-4420-b260-76ed2274d174/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmrxd8f_p execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:16:04 np0005601226 podman[246139]: 2026-01-29 17:16:04.162125729 +0000 UTC m=+0.020510650 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:16:04 np0005601226 podman[246139]: 2026-01-29 17:16:04.268499479 +0000 UTC m=+0.126884380 container init af85be577b0f4435af672b393c0f1b50316a9ffbd219d9f90a0fc02b4c3b27ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_archimedes, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 29 12:16:04 np0005601226 podman[246139]: 2026-01-29 17:16:04.275273223 +0000 UTC m=+0.133658124 container start af85be577b0f4435af672b393c0f1b50316a9ffbd219d9f90a0fc02b4c3b27ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_archimedes, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:16:04 np0005601226 podman[246139]: 2026-01-29 17:16:04.287581579 +0000 UTC m=+0.145966500 container attach af85be577b0f4435af672b393c0f1b50316a9ffbd219d9f90a0fc02b4c3b27ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_archimedes, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 29 12:16:04 np0005601226 nova_compute[239456]: 2026-01-29 17:16:04.375 239460 DEBUG oslo_concurrency.processutils [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/eaf7feb2-074e-4420-b260-76ed2274d174/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmrxd8f_p" returned: 0 in 0.118s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:16:04 np0005601226 nova_compute[239456]: 2026-01-29 17:16:04.395 239460 DEBUG nova.storage.rbd_utils [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] rbd image eaf7feb2-074e-4420-b260-76ed2274d174_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:16:04 np0005601226 nova_compute[239456]: 2026-01-29 17:16:04.399 239460 DEBUG oslo_concurrency.processutils [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/eaf7feb2-074e-4420-b260-76ed2274d174/disk.config eaf7feb2-074e-4420-b260-76ed2274d174_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:16:04 np0005601226 nova_compute[239456]: 2026-01-29 17:16:04.506 239460 DEBUG oslo_concurrency.processutils [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/eaf7feb2-074e-4420-b260-76ed2274d174/disk.config eaf7feb2-074e-4420-b260-76ed2274d174_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.107s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:16:04 np0005601226 nova_compute[239456]: 2026-01-29 17:16:04.508 239460 INFO nova.virt.libvirt.driver [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] [instance: eaf7feb2-074e-4420-b260-76ed2274d174] Deleting local config drive /var/lib/nova/instances/eaf7feb2-074e-4420-b260-76ed2274d174/disk.config because it was imported into RBD.#033[00m
Jan 29 12:16:04 np0005601226 systemd[1]: Starting libvirt secret daemon...
Jan 29 12:16:04 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 12:16:04 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:16:04 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 12:16:04 np0005601226 systemd[1]: Started libvirt secret daemon.
Jan 29 12:16:04 np0005601226 kernel: tun: Universal TUN/TAP device driver, 1.6
Jan 29 12:16:04 np0005601226 kernel: tap3aa0a884-98: entered promiscuous mode
Jan 29 12:16:04 np0005601226 NetworkManager[49020]: <info>  [1769706964.6010] manager: (tap3aa0a884-98): new Tun device (/org/freedesktop/NetworkManager/Devices/22)
Jan 29 12:16:04 np0005601226 ovn_controller[145556]: 2026-01-29T17:16:04Z|00027|binding|INFO|Claiming lport 3aa0a884-9877-40be-9e0e-295faf527bc3 for this chassis.
Jan 29 12:16:04 np0005601226 ovn_controller[145556]: 2026-01-29T17:16:04Z|00028|binding|INFO|3aa0a884-9877-40be-9e0e-295faf527bc3: Claiming fa:16:3e:cd:e5:77 10.100.0.9
Jan 29 12:16:04 np0005601226 nova_compute[239456]: 2026-01-29 17:16:04.603 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:16:04 np0005601226 nova_compute[239456]: 2026-01-29 17:16:04.605 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:16:04 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:16:04.617 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cd:e5:77 10.100.0.9'], port_security=['fa:16:3e:cd:e5:77 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'eaf7feb2-074e-4420-b260-76ed2274d174', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7bc173dd-5a11-45d7-bb3b-a3cabef29b05', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '431c31cb9de042e6bc53b16a4b0a84d6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '702c6dd2-5551-48db-acf5-3e72982f8852', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=040668ef-47e9-4391-8410-1ab4265110b7, chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], logical_port=3aa0a884-9877-40be-9e0e-295faf527bc3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:16:04 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:16:04.618 155625 INFO neutron.agent.ovn.metadata.agent [-] Port 3aa0a884-9877-40be-9e0e-295faf527bc3 in datapath 7bc173dd-5a11-45d7-bb3b-a3cabef29b05 bound to our chassis#033[00m
Jan 29 12:16:04 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:16:04.623 155625 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7bc173dd-5a11-45d7-bb3b-a3cabef29b05#033[00m
Jan 29 12:16:04 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:16:04.624 155625 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmp1sx5vi96/privsep.sock']#033[00m
Jan 29 12:16:04 np0005601226 systemd-udevd[246251]: Network interface NamePolicy= disabled on kernel command line.
Jan 29 12:16:04 np0005601226 NetworkManager[49020]: <info>  [1769706964.6419] device (tap3aa0a884-98): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 29 12:16:04 np0005601226 nova_compute[239456]: 2026-01-29 17:16:04.640 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:16:04 np0005601226 NetworkManager[49020]: <info>  [1769706964.6425] device (tap3aa0a884-98): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 29 12:16:04 np0005601226 ovn_controller[145556]: 2026-01-29T17:16:04Z|00029|binding|INFO|Setting lport 3aa0a884-9877-40be-9e0e-295faf527bc3 ovn-installed in OVS
Jan 29 12:16:04 np0005601226 ovn_controller[145556]: 2026-01-29T17:16:04Z|00030|binding|INFO|Setting lport 3aa0a884-9877-40be-9e0e-295faf527bc3 up in Southbound
Jan 29 12:16:04 np0005601226 awesome_archimedes[246155]: --> passed data devices: 0 physical, 3 LVM
Jan 29 12:16:04 np0005601226 awesome_archimedes[246155]: --> All data devices are unavailable
Jan 29 12:16:04 np0005601226 systemd-machined[207561]: New machine qemu-1-instance-00000001.
Jan 29 12:16:04 np0005601226 nova_compute[239456]: 2026-01-29 17:16:04.646 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:16:04 np0005601226 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Jan 29 12:16:04 np0005601226 systemd[1]: libpod-af85be577b0f4435af672b393c0f1b50316a9ffbd219d9f90a0fc02b4c3b27ed.scope: Deactivated successfully.
Jan 29 12:16:04 np0005601226 podman[246139]: 2026-01-29 17:16:04.67322193 +0000 UTC m=+0.531606831 container died af85be577b0f4435af672b393c0f1b50316a9ffbd219d9f90a0fc02b4c3b27ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_archimedes, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:16:04 np0005601226 systemd[1]: var-lib-containers-storage-overlay-ca75ce15a5765644937a5b48128d535146833693c99f37b20809c8dacc80526d-merged.mount: Deactivated successfully.
Jan 29 12:16:04 np0005601226 podman[246139]: 2026-01-29 17:16:04.724006834 +0000 UTC m=+0.582391725 container remove af85be577b0f4435af672b393c0f1b50316a9ffbd219d9f90a0fc02b4c3b27ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_archimedes, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 29 12:16:04 np0005601226 systemd[1]: libpod-conmon-af85be577b0f4435af672b393c0f1b50316a9ffbd219d9f90a0fc02b4c3b27ed.scope: Deactivated successfully.
Jan 29 12:16:05 np0005601226 podman[246343]: 2026-01-29 17:16:05.172780887 +0000 UTC m=+0.049878301 container create a9044455b6e11e8f62d0e23e9473aaaf3917d5df1017a5a63293e042e69899c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:16:05 np0005601226 systemd[1]: Started libpod-conmon-a9044455b6e11e8f62d0e23e9473aaaf3917d5df1017a5a63293e042e69899c3.scope.
Jan 29 12:16:05 np0005601226 podman[246343]: 2026-01-29 17:16:05.147341384 +0000 UTC m=+0.024438788 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:16:05 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:16:05 np0005601226 podman[246343]: 2026-01-29 17:16:05.258672818 +0000 UTC m=+0.135770232 container init a9044455b6e11e8f62d0e23e9473aaaf3917d5df1017a5a63293e042e69899c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 29 12:16:05 np0005601226 podman[246343]: 2026-01-29 17:16:05.265427912 +0000 UTC m=+0.142525316 container start a9044455b6e11e8f62d0e23e9473aaaf3917d5df1017a5a63293e042e69899c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_panini, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:16:05 np0005601226 friendly_panini[246360]: 167 167
Jan 29 12:16:05 np0005601226 systemd[1]: libpod-a9044455b6e11e8f62d0e23e9473aaaf3917d5df1017a5a63293e042e69899c3.scope: Deactivated successfully.
Jan 29 12:16:05 np0005601226 conmon[246360]: conmon a9044455b6e11e8f62d0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a9044455b6e11e8f62d0e23e9473aaaf3917d5df1017a5a63293e042e69899c3.scope/container/memory.events
Jan 29 12:16:05 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:16:05.272 155625 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Jan 29 12:16:05 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:16:05.273 155625 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp1sx5vi96/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Jan 29 12:16:05 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:16:05.135 246354 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Jan 29 12:16:05 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:16:05.137 246354 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Jan 29 12:16:05 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:16:05.139 246354 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none#033[00m
Jan 29 12:16:05 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:16:05.140 246354 INFO oslo.privsep.daemon [-] privsep daemon running as pid 246354#033[00m
Jan 29 12:16:05 np0005601226 podman[246343]: 2026-01-29 17:16:05.275114586 +0000 UTC m=+0.152212010 container attach a9044455b6e11e8f62d0e23e9473aaaf3917d5df1017a5a63293e042e69899c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 29 12:16:05 np0005601226 podman[246343]: 2026-01-29 17:16:05.275608269 +0000 UTC m=+0.152705673 container died a9044455b6e11e8f62d0e23e9473aaaf3917d5df1017a5a63293e042e69899c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_panini, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:16:05 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:16:05.275 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[ff323ecc-5db2-4b2a-80b1-b7c051e3b78e]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:16:05 np0005601226 systemd[1]: var-lib-containers-storage-overlay-587cbceb2485bb9d27ca90b5c739d468eab9e0bc7e5f47937b658c90ef6e84b7-merged.mount: Deactivated successfully.
Jan 29 12:16:05 np0005601226 podman[246343]: 2026-01-29 17:16:05.34200425 +0000 UTC m=+0.219101654 container remove a9044455b6e11e8f62d0e23e9473aaaf3917d5df1017a5a63293e042e69899c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_panini, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 29 12:16:05 np0005601226 systemd[1]: libpod-conmon-a9044455b6e11e8f62d0e23e9473aaaf3917d5df1017a5a63293e042e69899c3.scope: Deactivated successfully.
Jan 29 12:16:05 np0005601226 podman[246385]: 2026-01-29 17:16:05.456724617 +0000 UTC m=+0.018000752 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:16:05 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v956: 305 pgs: 305 active+clean; 88 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 50 op/s
Jan 29 12:16:05 np0005601226 nova_compute[239456]: 2026-01-29 17:16:05.637 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:16:05 np0005601226 podman[246385]: 2026-01-29 17:16:05.771762414 +0000 UTC m=+0.333038529 container create c0378ca44a3c9a57cd2dc68b096b6f34d0d242a41b86ef7a90d9b27ac1442f10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:16:05 np0005601226 nova_compute[239456]: 2026-01-29 17:16:05.771 239460 DEBUG nova.compute.manager [req-aca5a7fb-68ba-40e0-8d1c-59b47b196a3f req-02785d0e-44f3-4c5f-bac5-bab995b2282e 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: eaf7feb2-074e-4420-b260-76ed2274d174] Received event network-vif-plugged-3aa0a884-9877-40be-9e0e-295faf527bc3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:16:05 np0005601226 nova_compute[239456]: 2026-01-29 17:16:05.772 239460 DEBUG oslo_concurrency.lockutils [req-aca5a7fb-68ba-40e0-8d1c-59b47b196a3f req-02785d0e-44f3-4c5f-bac5-bab995b2282e 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "eaf7feb2-074e-4420-b260-76ed2274d174-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:16:05 np0005601226 nova_compute[239456]: 2026-01-29 17:16:05.772 239460 DEBUG oslo_concurrency.lockutils [req-aca5a7fb-68ba-40e0-8d1c-59b47b196a3f req-02785d0e-44f3-4c5f-bac5-bab995b2282e 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "eaf7feb2-074e-4420-b260-76ed2274d174-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:16:05 np0005601226 nova_compute[239456]: 2026-01-29 17:16:05.773 239460 DEBUG oslo_concurrency.lockutils [req-aca5a7fb-68ba-40e0-8d1c-59b47b196a3f req-02785d0e-44f3-4c5f-bac5-bab995b2282e 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "eaf7feb2-074e-4420-b260-76ed2274d174-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:16:05 np0005601226 nova_compute[239456]: 2026-01-29 17:16:05.773 239460 DEBUG nova.compute.manager [req-aca5a7fb-68ba-40e0-8d1c-59b47b196a3f req-02785d0e-44f3-4c5f-bac5-bab995b2282e 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: eaf7feb2-074e-4420-b260-76ed2274d174] Processing event network-vif-plugged-3aa0a884-9877-40be-9e0e-295faf527bc3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 29 12:16:06 np0005601226 systemd[1]: Started libpod-conmon-c0378ca44a3c9a57cd2dc68b096b6f34d0d242a41b86ef7a90d9b27ac1442f10.scope.
Jan 29 12:16:06 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:16:06 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69b6e07c8a9d459380601bfe592b51001cdbe19ad5cf8364bac02b59645036ce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:16:06 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69b6e07c8a9d459380601bfe592b51001cdbe19ad5cf8364bac02b59645036ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:16:06 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69b6e07c8a9d459380601bfe592b51001cdbe19ad5cf8364bac02b59645036ce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:16:06 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69b6e07c8a9d459380601bfe592b51001cdbe19ad5cf8364bac02b59645036ce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:16:06 np0005601226 podman[246385]: 2026-01-29 17:16:06.13523889 +0000 UTC m=+0.696515035 container init c0378ca44a3c9a57cd2dc68b096b6f34d0d242a41b86ef7a90d9b27ac1442f10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_feynman, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 29 12:16:06 np0005601226 podman[246385]: 2026-01-29 17:16:06.141720577 +0000 UTC m=+0.702996692 container start c0378ca44a3c9a57cd2dc68b096b6f34d0d242a41b86ef7a90d9b27ac1442f10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 12:16:06 np0005601226 podman[246385]: 2026-01-29 17:16:06.158000731 +0000 UTC m=+0.719276846 container attach c0378ca44a3c9a57cd2dc68b096b6f34d0d242a41b86ef7a90d9b27ac1442f10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 29 12:16:06 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:16:06.158 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '7a:bb:ca', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ee:2a:91:86:08:da'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:16:06 np0005601226 nova_compute[239456]: 2026-01-29 17:16:06.159 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:16:06 np0005601226 nova_compute[239456]: 2026-01-29 17:16:06.211 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769706966.2105632, eaf7feb2-074e-4420-b260-76ed2274d174 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:16:06 np0005601226 nova_compute[239456]: 2026-01-29 17:16:06.211 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: eaf7feb2-074e-4420-b260-76ed2274d174] VM Started (Lifecycle Event)#033[00m
Jan 29 12:16:06 np0005601226 nova_compute[239456]: 2026-01-29 17:16:06.213 239460 DEBUG nova.compute.manager [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] [instance: eaf7feb2-074e-4420-b260-76ed2274d174] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 29 12:16:06 np0005601226 nova_compute[239456]: 2026-01-29 17:16:06.216 239460 DEBUG nova.virt.libvirt.driver [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] [instance: eaf7feb2-074e-4420-b260-76ed2274d174] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 29 12:16:06 np0005601226 nova_compute[239456]: 2026-01-29 17:16:06.227 239460 INFO nova.virt.libvirt.driver [-] [instance: eaf7feb2-074e-4420-b260-76ed2274d174] Instance spawned successfully.#033[00m
Jan 29 12:16:06 np0005601226 nova_compute[239456]: 2026-01-29 17:16:06.228 239460 DEBUG nova.virt.libvirt.driver [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] [instance: eaf7feb2-074e-4420-b260-76ed2274d174] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 29 12:16:06 np0005601226 nova_compute[239456]: 2026-01-29 17:16:06.247 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: eaf7feb2-074e-4420-b260-76ed2274d174] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:16:06 np0005601226 nova_compute[239456]: 2026-01-29 17:16:06.252 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: eaf7feb2-074e-4420-b260-76ed2274d174] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 29 12:16:06 np0005601226 nova_compute[239456]: 2026-01-29 17:16:06.255 239460 DEBUG nova.virt.libvirt.driver [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] [instance: eaf7feb2-074e-4420-b260-76ed2274d174] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:16:06 np0005601226 nova_compute[239456]: 2026-01-29 17:16:06.255 239460 DEBUG nova.virt.libvirt.driver [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] [instance: eaf7feb2-074e-4420-b260-76ed2274d174] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:16:06 np0005601226 nova_compute[239456]: 2026-01-29 17:16:06.256 239460 DEBUG nova.virt.libvirt.driver [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] [instance: eaf7feb2-074e-4420-b260-76ed2274d174] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:16:06 np0005601226 nova_compute[239456]: 2026-01-29 17:16:06.256 239460 DEBUG nova.virt.libvirt.driver [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] [instance: eaf7feb2-074e-4420-b260-76ed2274d174] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:16:06 np0005601226 nova_compute[239456]: 2026-01-29 17:16:06.256 239460 DEBUG nova.virt.libvirt.driver [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] [instance: eaf7feb2-074e-4420-b260-76ed2274d174] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:16:06 np0005601226 nova_compute[239456]: 2026-01-29 17:16:06.257 239460 DEBUG nova.virt.libvirt.driver [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] [instance: eaf7feb2-074e-4420-b260-76ed2274d174] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:16:06 np0005601226 nova_compute[239456]: 2026-01-29 17:16:06.284 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: eaf7feb2-074e-4420-b260-76ed2274d174] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 29 12:16:06 np0005601226 nova_compute[239456]: 2026-01-29 17:16:06.285 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769706966.2115233, eaf7feb2-074e-4420-b260-76ed2274d174 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:16:06 np0005601226 nova_compute[239456]: 2026-01-29 17:16:06.285 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: eaf7feb2-074e-4420-b260-76ed2274d174] VM Paused (Lifecycle Event)#033[00m
Jan 29 12:16:06 np0005601226 nova_compute[239456]: 2026-01-29 17:16:06.309 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: eaf7feb2-074e-4420-b260-76ed2274d174] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:16:06 np0005601226 nova_compute[239456]: 2026-01-29 17:16:06.312 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769706966.2159386, eaf7feb2-074e-4420-b260-76ed2274d174 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:16:06 np0005601226 nova_compute[239456]: 2026-01-29 17:16:06.313 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: eaf7feb2-074e-4420-b260-76ed2274d174] VM Resumed (Lifecycle Event)#033[00m
Jan 29 12:16:06 np0005601226 nova_compute[239456]: 2026-01-29 17:16:06.331 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: eaf7feb2-074e-4420-b260-76ed2274d174] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:16:06 np0005601226 nova_compute[239456]: 2026-01-29 17:16:06.337 239460 INFO nova.compute.manager [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] [instance: eaf7feb2-074e-4420-b260-76ed2274d174] Took 12.93 seconds to spawn the instance on the hypervisor.#033[00m
Jan 29 12:16:06 np0005601226 nova_compute[239456]: 2026-01-29 17:16:06.339 239460 DEBUG nova.compute.manager [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] [instance: eaf7feb2-074e-4420-b260-76ed2274d174] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:16:06 np0005601226 nova_compute[239456]: 2026-01-29 17:16:06.340 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: eaf7feb2-074e-4420-b260-76ed2274d174] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 29 12:16:06 np0005601226 nova_compute[239456]: 2026-01-29 17:16:06.373 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: eaf7feb2-074e-4420-b260-76ed2274d174] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 29 12:16:06 np0005601226 kind_feynman[246428]: {
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:    "0": [
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:        {
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:            "devices": [
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:                "/dev/loop3"
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:            ],
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:            "lv_name": "ceph_lv0",
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:            "lv_size": "21470642176",
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:            "lv_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:            "name": "ceph_lv0",
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:            "tags": {
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:                "ceph.block_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:                "ceph.cluster_name": "ceph",
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:                "ceph.crush_device_class": "",
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:                "ceph.encrypted": "0",
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:                "ceph.objectstore": "bluestore",
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:                "ceph.osd_fsid": "0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1",
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:                "ceph.osd_id": "0",
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:                "ceph.type": "block",
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:                "ceph.vdo": "0",
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:                "ceph.with_tpm": "0"
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:            },
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:            "type": "block",
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:            "vg_name": "ceph_vg0"
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:        }
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:    ],
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:    "1": [
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:        {
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:            "devices": [
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:                "/dev/loop4"
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:            ],
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:            "lv_name": "ceph_lv1",
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:            "lv_size": "21470642176",
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b59b9ee3-7bef-4274-a8bf-0f9cce011ae7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:            "lv_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:            "name": "ceph_lv1",
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:            "tags": {
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:                "ceph.block_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:                "ceph.cluster_name": "ceph",
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:                "ceph.crush_device_class": "",
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:                "ceph.encrypted": "0",
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:                "ceph.objectstore": "bluestore",
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:                "ceph.osd_fsid": "b59b9ee3-7bef-4274-a8bf-0f9cce011ae7",
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:                "ceph.osd_id": "1",
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:                "ceph.type": "block",
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:                "ceph.vdo": "0",
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:                "ceph.with_tpm": "0"
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:            },
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:            "type": "block",
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:            "vg_name": "ceph_vg1"
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:        }
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:    ],
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:    "2": [
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:        {
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:            "devices": [
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:                "/dev/loop5"
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:            ],
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:            "lv_name": "ceph_lv2",
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:            "lv_size": "21470642176",
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=791d3808-828d-4f85-a3de-28df49f6a6ef,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:            "lv_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:            "name": "ceph_lv2",
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:            "tags": {
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:                "ceph.block_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:                "ceph.cluster_name": "ceph",
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:                "ceph.crush_device_class": "",
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:                "ceph.encrypted": "0",
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:                "ceph.objectstore": "bluestore",
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:                "ceph.osd_fsid": "791d3808-828d-4f85-a3de-28df49f6a6ef",
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:                "ceph.osd_id": "2",
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:                "ceph.type": "block",
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:                "ceph.vdo": "0",
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:                "ceph.with_tpm": "0"
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:            },
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:            "type": "block",
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:            "vg_name": "ceph_vg2"
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:        }
Jan 29 12:16:06 np0005601226 kind_feynman[246428]:    ]
Jan 29 12:16:06 np0005601226 kind_feynman[246428]: }
Jan 29 12:16:06 np0005601226 nova_compute[239456]: 2026-01-29 17:16:06.409 239460 INFO nova.compute.manager [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] [instance: eaf7feb2-074e-4420-b260-76ed2274d174] Took 13.87 seconds to build instance.#033[00m
Jan 29 12:16:06 np0005601226 systemd[1]: libpod-c0378ca44a3c9a57cd2dc68b096b6f34d0d242a41b86ef7a90d9b27ac1442f10.scope: Deactivated successfully.
Jan 29 12:16:06 np0005601226 podman[246385]: 2026-01-29 17:16:06.413583618 +0000 UTC m=+0.974859753 container died c0378ca44a3c9a57cd2dc68b096b6f34d0d242a41b86ef7a90d9b27ac1442f10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_feynman, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:16:06 np0005601226 nova_compute[239456]: 2026-01-29 17:16:06.426 239460 DEBUG oslo_concurrency.lockutils [None req-7f689ae7-a607-4dc3-9155-ac2301d117cc bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Lock "eaf7feb2-074e-4420-b260-76ed2274d174" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.975s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:16:06 np0005601226 systemd[1]: var-lib-containers-storage-overlay-69b6e07c8a9d459380601bfe592b51001cdbe19ad5cf8364bac02b59645036ce-merged.mount: Deactivated successfully.
Jan 29 12:16:06 np0005601226 podman[246385]: 2026-01-29 17:16:06.472145674 +0000 UTC m=+1.033421789 container remove c0378ca44a3c9a57cd2dc68b096b6f34d0d242a41b86ef7a90d9b27ac1442f10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_feynman, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 29 12:16:06 np0005601226 systemd[1]: libpod-conmon-c0378ca44a3c9a57cd2dc68b096b6f34d0d242a41b86ef7a90d9b27ac1442f10.scope: Deactivated successfully.
Jan 29 12:16:06 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:16:06.610 246354 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:16:06 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:16:06.611 246354 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:16:06 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:16:06.611 246354 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:16:06 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e172 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:16:06 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e172 do_prune osdmap full prune enabled
Jan 29 12:16:06 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e173 e173: 3 total, 3 up, 3 in
Jan 29 12:16:06 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e173: 3 total, 3 up, 3 in
Jan 29 12:16:06 np0005601226 podman[246527]: 2026-01-29 17:16:06.883329102 +0000 UTC m=+0.042676235 container create 3de3d0616b1ed83327defabd52fd6066e6bab0f7bc776a3ceb30407fabdebfed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_germain, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 29 12:16:06 np0005601226 systemd[1]: Started libpod-conmon-3de3d0616b1ed83327defabd52fd6066e6bab0f7bc776a3ceb30407fabdebfed.scope.
Jan 29 12:16:06 np0005601226 podman[246527]: 2026-01-29 17:16:06.860619812 +0000 UTC m=+0.019966965 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:16:06 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:16:06 np0005601226 podman[246527]: 2026-01-29 17:16:06.990692968 +0000 UTC m=+0.150040101 container init 3de3d0616b1ed83327defabd52fd6066e6bab0f7bc776a3ceb30407fabdebfed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 29 12:16:06 np0005601226 podman[246527]: 2026-01-29 17:16:06.9969995 +0000 UTC m=+0.156346633 container start 3de3d0616b1ed83327defabd52fd6066e6bab0f7bc776a3ceb30407fabdebfed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_germain, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:16:07 np0005601226 zealous_germain[246543]: 167 167
Jan 29 12:16:07 np0005601226 systemd[1]: libpod-3de3d0616b1ed83327defabd52fd6066e6bab0f7bc776a3ceb30407fabdebfed.scope: Deactivated successfully.
Jan 29 12:16:07 np0005601226 podman[246527]: 2026-01-29 17:16:07.014356283 +0000 UTC m=+0.173703416 container attach 3de3d0616b1ed83327defabd52fd6066e6bab0f7bc776a3ceb30407fabdebfed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_germain, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 29 12:16:07 np0005601226 podman[246527]: 2026-01-29 17:16:07.014880758 +0000 UTC m=+0.174227901 container died 3de3d0616b1ed83327defabd52fd6066e6bab0f7bc776a3ceb30407fabdebfed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_germain, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:16:07 np0005601226 systemd[1]: var-lib-containers-storage-overlay-39d567d637e613f76cfe43df862d99d7b0d3b60d244a430a6a90ac700f1b1162-merged.mount: Deactivated successfully.
Jan 29 12:16:07 np0005601226 podman[246527]: 2026-01-29 17:16:07.079590201 +0000 UTC m=+0.238937334 container remove 3de3d0616b1ed83327defabd52fd6066e6bab0f7bc776a3ceb30407fabdebfed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 29 12:16:07 np0005601226 systemd[1]: libpod-conmon-3de3d0616b1ed83327defabd52fd6066e6bab0f7bc776a3ceb30407fabdebfed.scope: Deactivated successfully.
Jan 29 12:16:07 np0005601226 nova_compute[239456]: 2026-01-29 17:16:07.212 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:16:07 np0005601226 podman[246566]: 2026-01-29 17:16:07.219610338 +0000 UTC m=+0.052151312 container create 2e59f8656e129889f9af242c49b6e5b011b4e78f023ab4774d7d9d3f642552e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_borg, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 29 12:16:07 np0005601226 systemd[1]: Started libpod-conmon-2e59f8656e129889f9af242c49b6e5b011b4e78f023ab4774d7d9d3f642552e5.scope.
Jan 29 12:16:07 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:16:07 np0005601226 podman[246566]: 2026-01-29 17:16:07.187976435 +0000 UTC m=+0.020517429 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:16:07 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35c45de872cb4c6cee408e0ab38c8582c926a8a0127ed23d47e7ef69c7880fe2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:16:07 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35c45de872cb4c6cee408e0ab38c8582c926a8a0127ed23d47e7ef69c7880fe2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:16:07 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35c45de872cb4c6cee408e0ab38c8582c926a8a0127ed23d47e7ef69c7880fe2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:16:07 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35c45de872cb4c6cee408e0ab38c8582c926a8a0127ed23d47e7ef69c7880fe2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:16:07 np0005601226 podman[246566]: 2026-01-29 17:16:07.312696645 +0000 UTC m=+0.145237639 container init 2e59f8656e129889f9af242c49b6e5b011b4e78f023ab4774d7d9d3f642552e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_borg, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 29 12:16:07 np0005601226 podman[246566]: 2026-01-29 17:16:07.317613069 +0000 UTC m=+0.150154043 container start 2e59f8656e129889f9af242c49b6e5b011b4e78f023ab4774d7d9d3f642552e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_borg, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 29 12:16:07 np0005601226 podman[246566]: 2026-01-29 17:16:07.330691125 +0000 UTC m=+0.163232119 container attach 2e59f8656e129889f9af242c49b6e5b011b4e78f023ab4774d7d9d3f642552e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_borg, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:16:07 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:16:07.349 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[622a5eb0-46e8-4261-b55e-c79205a82123]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:16:07 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:16:07.352 155625 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7bc173dd-51 in ovnmeta-7bc173dd-5a11-45d7-bb3b-a3cabef29b05 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 29 12:16:07 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:16:07.354 246354 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7bc173dd-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 29 12:16:07 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:16:07.354 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[2d64e16c-8516-4302-be3a-bb9f07721632]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:16:07 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:16:07.359 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[c0037806-7c30-45f5-88f9-1e52dfc0d4f4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:16:07 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:16:07.381 156164 DEBUG oslo.privsep.daemon [-] privsep: reply[a7720f08-d90f-4a09-96a7-3900522ccf15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:16:07 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:16:07.401 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[32ccbb7f-24d7-4780-bc1d-4ea3e92610c5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:16:07 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:16:07.403 155625 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmptxu3jhl3/privsep.sock']#033[00m
Jan 29 12:16:07 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v958: 305 pgs: 305 active+clean; 88 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 2.6 MiB/s wr, 40 op/s
Jan 29 12:16:07 np0005601226 nova_compute[239456]: 2026-01-29 17:16:07.920 239460 DEBUG nova.compute.manager [req-ea429946-5864-4049-81f4-4b668efe8542 req-3a81941d-a288-49bb-9570-ec356ff7394b 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: eaf7feb2-074e-4420-b260-76ed2274d174] Received event network-vif-plugged-3aa0a884-9877-40be-9e0e-295faf527bc3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:16:07 np0005601226 nova_compute[239456]: 2026-01-29 17:16:07.921 239460 DEBUG oslo_concurrency.lockutils [req-ea429946-5864-4049-81f4-4b668efe8542 req-3a81941d-a288-49bb-9570-ec356ff7394b 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "eaf7feb2-074e-4420-b260-76ed2274d174-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:16:07 np0005601226 nova_compute[239456]: 2026-01-29 17:16:07.921 239460 DEBUG oslo_concurrency.lockutils [req-ea429946-5864-4049-81f4-4b668efe8542 req-3a81941d-a288-49bb-9570-ec356ff7394b 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "eaf7feb2-074e-4420-b260-76ed2274d174-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:16:07 np0005601226 nova_compute[239456]: 2026-01-29 17:16:07.921 239460 DEBUG oslo_concurrency.lockutils [req-ea429946-5864-4049-81f4-4b668efe8542 req-3a81941d-a288-49bb-9570-ec356ff7394b 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "eaf7feb2-074e-4420-b260-76ed2274d174-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:16:07 np0005601226 nova_compute[239456]: 2026-01-29 17:16:07.921 239460 DEBUG nova.compute.manager [req-ea429946-5864-4049-81f4-4b668efe8542 req-3a81941d-a288-49bb-9570-ec356ff7394b 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: eaf7feb2-074e-4420-b260-76ed2274d174] No waiting events found dispatching network-vif-plugged-3aa0a884-9877-40be-9e0e-295faf527bc3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 29 12:16:07 np0005601226 nova_compute[239456]: 2026-01-29 17:16:07.922 239460 WARNING nova.compute.manager [req-ea429946-5864-4049-81f4-4b668efe8542 req-3a81941d-a288-49bb-9570-ec356ff7394b 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: eaf7feb2-074e-4420-b260-76ed2274d174] Received unexpected event network-vif-plugged-3aa0a884-9877-40be-9e0e-295faf527bc3 for instance with vm_state active and task_state None.#033[00m
Jan 29 12:16:07 np0005601226 lvm[246673]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 12:16:07 np0005601226 lvm[246673]: VG ceph_vg1 finished
Jan 29 12:16:08 np0005601226 lvm[246672]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 12:16:08 np0005601226 lvm[246672]: VG ceph_vg0 finished
Jan 29 12:16:08 np0005601226 lvm[246676]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 12:16:08 np0005601226 lvm[246676]: VG ceph_vg2 finished
Jan 29 12:16:08 np0005601226 wizardly_borg[246585]: {}
Jan 29 12:16:08 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:16:08.119 155625 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Jan 29 12:16:08 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:16:08.121 155625 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmptxu3jhl3/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Jan 29 12:16:08 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:16:07.984 246674 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Jan 29 12:16:08 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:16:07.988 246674 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Jan 29 12:16:08 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:16:07.992 246674 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Jan 29 12:16:08 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:16:07.992 246674 INFO oslo.privsep.daemon [-] privsep daemon running as pid 246674#033[00m
Jan 29 12:16:08 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:16:08.123 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[5474148a-211c-4860-badc-aee90c5c4857]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:16:08 np0005601226 systemd[1]: libpod-2e59f8656e129889f9af242c49b6e5b011b4e78f023ab4774d7d9d3f642552e5.scope: Deactivated successfully.
Jan 29 12:16:08 np0005601226 systemd[1]: libpod-2e59f8656e129889f9af242c49b6e5b011b4e78f023ab4774d7d9d3f642552e5.scope: Consumed 1.146s CPU time.
Jan 29 12:16:08 np0005601226 conmon[246585]: conmon 2e59f8656e129889f9af <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2e59f8656e129889f9af242c49b6e5b011b4e78f023ab4774d7d9d3f642552e5.scope/container/memory.events
Jan 29 12:16:08 np0005601226 podman[246566]: 2026-01-29 17:16:08.13687279 +0000 UTC m=+0.969413784 container died 2e59f8656e129889f9af242c49b6e5b011b4e78f023ab4774d7d9d3f642552e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_borg, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:16:08 np0005601226 systemd[1]: var-lib-containers-storage-overlay-35c45de872cb4c6cee408e0ab38c8582c926a8a0127ed23d47e7ef69c7880fe2-merged.mount: Deactivated successfully.
Jan 29 12:16:08 np0005601226 podman[246566]: 2026-01-29 17:16:08.195297073 +0000 UTC m=+1.027838047 container remove 2e59f8656e129889f9af242c49b6e5b011b4e78f023ab4774d7d9d3f642552e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_borg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:16:08 np0005601226 systemd[1]: libpod-conmon-2e59f8656e129889f9af242c49b6e5b011b4e78f023ab4774d7d9d3f642552e5.scope: Deactivated successfully.
Jan 29 12:16:08 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 12:16:08 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:16:08 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 12:16:08 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:16:08 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:16:08.638 246674 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:16:08 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:16:08.639 246674 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:16:08 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:16:08.639 246674 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:16:09 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:16:09.182 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[b60bd587-8b31-40f5-b0b1-a14b78232a63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:16:09 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:16:09.199 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[c91f7cb2-688a-44c0-bed5-9d17b943f7cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:16:09 np0005601226 systemd-udevd[246671]: Network interface NamePolicy= disabled on kernel command line.
Jan 29 12:16:09 np0005601226 NetworkManager[49020]: <info>  [1769706969.2010] manager: (tap7bc173dd-50): new Veth device (/org/freedesktop/NetworkManager/Devices/23)
Jan 29 12:16:09 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:16:09.232 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[7ea46fd1-2596-4433-bf66-98c9451ff004]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:16:09 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:16:09.235 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[5006c347-e66c-4cb1-b0d3-0ec7da43f773]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:16:09 np0005601226 NetworkManager[49020]: <info>  [1769706969.2493] device (tap7bc173dd-50): carrier: link connected
Jan 29 12:16:09 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:16:09.253 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[59899f38-209c-40a8-87d7-2d22fca64506]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:16:09 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:16:09.265 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[129a7004-6f6b-4e39-ab9d-12e38d806107]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7bc173dd-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:78:e3:63'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 432275, 'reachable_time': 22112, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 246743, 'error': None, 'target': 'ovnmeta-7bc173dd-5a11-45d7-bb3b-a3cabef29b05', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:16:09 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:16:09.277 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[01a153c3-9d81-47f8-b255-767f78f2bef4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe78:e363'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 432275, 'tstamp': 432275}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 246744, 'error': None, 'target': 'ovnmeta-7bc173dd-5a11-45d7-bb3b-a3cabef29b05', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:16:09 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:16:09.290 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[d92832e3-6d46-45a2-9dcb-8d285db02e5a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7bc173dd-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:78:e3:63'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 432275, 'reachable_time': 22112, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 246745, 'error': None, 'target': 'ovnmeta-7bc173dd-5a11-45d7-bb3b-a3cabef29b05', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:16:09 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:16:09 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:16:09 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:16:09.316 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[d324da8e-a88b-4d23-9949-0f1f6740a368]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:16:09 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:16:09.352 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[fd7ca8f1-90d6-416a-89a4-17faef0a409d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:16:09 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:16:09.353 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7bc173dd-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:16:09 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:16:09.354 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 29 12:16:09 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:16:09.354 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7bc173dd-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:16:09 np0005601226 nova_compute[239456]: 2026-01-29 17:16:09.356 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:16:09 np0005601226 kernel: tap7bc173dd-50: entered promiscuous mode
Jan 29 12:16:09 np0005601226 NetworkManager[49020]: <info>  [1769706969.3568] manager: (tap7bc173dd-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/24)
Jan 29 12:16:09 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:16:09.359 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7bc173dd-50, col_values=(('external_ids', {'iface-id': '0229337d-946d-4a7e-b065-7d30bfb16658'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:16:09 np0005601226 ovn_controller[145556]: 2026-01-29T17:16:09Z|00031|binding|INFO|Releasing lport 0229337d-946d-4a7e-b065-7d30bfb16658 from this chassis (sb_readonly=0)
Jan 29 12:16:09 np0005601226 nova_compute[239456]: 2026-01-29 17:16:09.360 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:16:09 np0005601226 nova_compute[239456]: 2026-01-29 17:16:09.361 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:16:09 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:16:09.361 155625 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7bc173dd-5a11-45d7-bb3b-a3cabef29b05.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7bc173dd-5a11-45d7-bb3b-a3cabef29b05.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 29 12:16:09 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:16:09.362 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[5c14fb85-7c6a-4a45-874c-f3eb84620796]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:16:09 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:16:09.363 155625 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 29 12:16:09 np0005601226 ovn_metadata_agent[155620]: global
Jan 29 12:16:09 np0005601226 ovn_metadata_agent[155620]:    log         /dev/log local0 debug
Jan 29 12:16:09 np0005601226 ovn_metadata_agent[155620]:    log-tag     haproxy-metadata-proxy-7bc173dd-5a11-45d7-bb3b-a3cabef29b05
Jan 29 12:16:09 np0005601226 ovn_metadata_agent[155620]:    user        root
Jan 29 12:16:09 np0005601226 ovn_metadata_agent[155620]:    group       root
Jan 29 12:16:09 np0005601226 ovn_metadata_agent[155620]:    maxconn     1024
Jan 29 12:16:09 np0005601226 ovn_metadata_agent[155620]:    pidfile     /var/lib/neutron/external/pids/7bc173dd-5a11-45d7-bb3b-a3cabef29b05.pid.haproxy
Jan 29 12:16:09 np0005601226 ovn_metadata_agent[155620]:    daemon
Jan 29 12:16:09 np0005601226 ovn_metadata_agent[155620]: 
Jan 29 12:16:09 np0005601226 ovn_metadata_agent[155620]: defaults
Jan 29 12:16:09 np0005601226 ovn_metadata_agent[155620]:    log global
Jan 29 12:16:09 np0005601226 ovn_metadata_agent[155620]:    mode http
Jan 29 12:16:09 np0005601226 ovn_metadata_agent[155620]:    option httplog
Jan 29 12:16:09 np0005601226 ovn_metadata_agent[155620]:    option dontlognull
Jan 29 12:16:09 np0005601226 ovn_metadata_agent[155620]:    option http-server-close
Jan 29 12:16:09 np0005601226 ovn_metadata_agent[155620]:    option forwardfor
Jan 29 12:16:09 np0005601226 ovn_metadata_agent[155620]:    retries                 3
Jan 29 12:16:09 np0005601226 ovn_metadata_agent[155620]:    timeout http-request    30s
Jan 29 12:16:09 np0005601226 ovn_metadata_agent[155620]:    timeout connect         30s
Jan 29 12:16:09 np0005601226 ovn_metadata_agent[155620]:    timeout client          32s
Jan 29 12:16:09 np0005601226 ovn_metadata_agent[155620]:    timeout server          32s
Jan 29 12:16:09 np0005601226 ovn_metadata_agent[155620]:    timeout http-keep-alive 30s
Jan 29 12:16:09 np0005601226 ovn_metadata_agent[155620]: 
Jan 29 12:16:09 np0005601226 ovn_metadata_agent[155620]: 
Jan 29 12:16:09 np0005601226 ovn_metadata_agent[155620]: listen listener
Jan 29 12:16:09 np0005601226 ovn_metadata_agent[155620]:    bind 169.254.169.254:80
Jan 29 12:16:09 np0005601226 ovn_metadata_agent[155620]:    server metadata /var/lib/neutron/metadata_proxy
Jan 29 12:16:09 np0005601226 ovn_metadata_agent[155620]:    http-request add-header X-OVN-Network-ID 7bc173dd-5a11-45d7-bb3b-a3cabef29b05
Jan 29 12:16:09 np0005601226 ovn_metadata_agent[155620]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 29 12:16:09 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:16:09.364 155625 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7bc173dd-5a11-45d7-bb3b-a3cabef29b05', 'env', 'PROCESS_TAG=haproxy-7bc173dd-5a11-45d7-bb3b-a3cabef29b05', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7bc173dd-5a11-45d7-bb3b-a3cabef29b05.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 29 12:16:09 np0005601226 nova_compute[239456]: 2026-01-29 17:16:09.366 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:16:09 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v959: 305 pgs: 305 active+clean; 88 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 121 op/s
Jan 29 12:16:09 np0005601226 podman[246776]: 2026-01-29 17:16:09.641404069 +0000 UTC m=+0.016574352 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 29 12:16:09 np0005601226 podman[246776]: 2026-01-29 17:16:09.952682014 +0000 UTC m=+0.327852317 container create 58d54a3c8ac557fd0cd4e0186d01c2646403f4eb3188e758fe27c2e7062079cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7bc173dd-5a11-45d7-bb3b-a3cabef29b05, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 29 12:16:10 np0005601226 systemd[1]: Started libpod-conmon-58d54a3c8ac557fd0cd4e0186d01c2646403f4eb3188e758fe27c2e7062079cb.scope.
Jan 29 12:16:10 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:16:10 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acd356513286ae3533bccfaa8d5ce890fd550f9af8ce96a8ac74f9d921676de7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 29 12:16:10 np0005601226 podman[246776]: 2026-01-29 17:16:10.15066188 +0000 UTC m=+0.525832163 container init 58d54a3c8ac557fd0cd4e0186d01c2646403f4eb3188e758fe27c2e7062079cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7bc173dd-5a11-45d7-bb3b-a3cabef29b05, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:16:10 np0005601226 podman[246776]: 2026-01-29 17:16:10.155635966 +0000 UTC m=+0.530806219 container start 58d54a3c8ac557fd0cd4e0186d01c2646403f4eb3188e758fe27c2e7062079cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7bc173dd-5a11-45d7-bb3b-a3cabef29b05, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 29 12:16:10 np0005601226 neutron-haproxy-ovnmeta-7bc173dd-5a11-45d7-bb3b-a3cabef29b05[246790]: [NOTICE]   (246794) : New worker (246796) forked
Jan 29 12:16:10 np0005601226 neutron-haproxy-ovnmeta-7bc173dd-5a11-45d7-bb3b-a3cabef29b05[246790]: [NOTICE]   (246794) : Loading success.
Jan 29 12:16:10 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:16:10.262 155625 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 29 12:16:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:16:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:16:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:16:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:16:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:16:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:16:10 np0005601226 nova_compute[239456]: 2026-01-29 17:16:10.638 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:16:11 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v960: 305 pgs: 305 active+clean; 88 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 121 op/s
Jan 29 12:16:11 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:16:11 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:16:11 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1904816962' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:16:11 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:16:11 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1904816962' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:16:12 np0005601226 nova_compute[239456]: 2026-01-29 17:16:12.215 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:16:12 np0005601226 nova_compute[239456]: 2026-01-29 17:16:12.466 239460 DEBUG oslo_concurrency.lockutils [None req-412b4732-813a-462e-bedb-08ee9aead784 bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Acquiring lock "eaf7feb2-074e-4420-b260-76ed2274d174" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:16:12 np0005601226 nova_compute[239456]: 2026-01-29 17:16:12.466 239460 DEBUG oslo_concurrency.lockutils [None req-412b4732-813a-462e-bedb-08ee9aead784 bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Lock "eaf7feb2-074e-4420-b260-76ed2274d174" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:16:12 np0005601226 nova_compute[239456]: 2026-01-29 17:16:12.468 239460 DEBUG oslo_concurrency.lockutils [None req-412b4732-813a-462e-bedb-08ee9aead784 bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Acquiring lock "eaf7feb2-074e-4420-b260-76ed2274d174-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:16:12 np0005601226 nova_compute[239456]: 2026-01-29 17:16:12.468 239460 DEBUG oslo_concurrency.lockutils [None req-412b4732-813a-462e-bedb-08ee9aead784 bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Lock "eaf7feb2-074e-4420-b260-76ed2274d174-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:16:12 np0005601226 nova_compute[239456]: 2026-01-29 17:16:12.468 239460 DEBUG oslo_concurrency.lockutils [None req-412b4732-813a-462e-bedb-08ee9aead784 bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Lock "eaf7feb2-074e-4420-b260-76ed2274d174-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:16:12 np0005601226 nova_compute[239456]: 2026-01-29 17:16:12.469 239460 INFO nova.compute.manager [None req-412b4732-813a-462e-bedb-08ee9aead784 bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] [instance: eaf7feb2-074e-4420-b260-76ed2274d174] Terminating instance#033[00m
Jan 29 12:16:12 np0005601226 nova_compute[239456]: 2026-01-29 17:16:12.470 239460 DEBUG nova.compute.manager [None req-412b4732-813a-462e-bedb-08ee9aead784 bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] [instance: eaf7feb2-074e-4420-b260-76ed2274d174] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 29 12:16:12 np0005601226 kernel: tap3aa0a884-98 (unregistering): left promiscuous mode
Jan 29 12:16:12 np0005601226 NetworkManager[49020]: <info>  [1769706972.5934] device (tap3aa0a884-98): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 29 12:16:12 np0005601226 nova_compute[239456]: 2026-01-29 17:16:12.594 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:16:12 np0005601226 ovn_controller[145556]: 2026-01-29T17:16:12Z|00032|binding|INFO|Releasing lport 3aa0a884-9877-40be-9e0e-295faf527bc3 from this chassis (sb_readonly=0)
Jan 29 12:16:12 np0005601226 ovn_controller[145556]: 2026-01-29T17:16:12Z|00033|binding|INFO|Setting lport 3aa0a884-9877-40be-9e0e-295faf527bc3 down in Southbound
Jan 29 12:16:12 np0005601226 nova_compute[239456]: 2026-01-29 17:16:12.601 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:16:12 np0005601226 ovn_controller[145556]: 2026-01-29T17:16:12Z|00034|binding|INFO|Removing iface tap3aa0a884-98 ovn-installed in OVS
Jan 29 12:16:12 np0005601226 nova_compute[239456]: 2026-01-29 17:16:12.607 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:16:12 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:16:12.607 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cd:e5:77 10.100.0.9'], port_security=['fa:16:3e:cd:e5:77 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'eaf7feb2-074e-4420-b260-76ed2274d174', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7bc173dd-5a11-45d7-bb3b-a3cabef29b05', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '431c31cb9de042e6bc53b16a4b0a84d6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '702c6dd2-5551-48db-acf5-3e72982f8852', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=040668ef-47e9-4391-8410-1ab4265110b7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], logical_port=3aa0a884-9877-40be-9e0e-295faf527bc3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:16:12 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:16:12.609 155625 INFO neutron.agent.ovn.metadata.agent [-] Port 3aa0a884-9877-40be-9e0e-295faf527bc3 in datapath 7bc173dd-5a11-45d7-bb3b-a3cabef29b05 unbound from our chassis#033[00m
Jan 29 12:16:12 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:16:12.610 155625 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7bc173dd-5a11-45d7-bb3b-a3cabef29b05, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 29 12:16:12 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:16:12.611 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[bd4cf93b-0aed-487e-adba-a1bd88851a95]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:16:12 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:16:12.611 155625 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7bc173dd-5a11-45d7-bb3b-a3cabef29b05 namespace which is not needed anymore#033[00m
Jan 29 12:16:12 np0005601226 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Jan 29 12:16:12 np0005601226 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 7.199s CPU time.
Jan 29 12:16:12 np0005601226 systemd-machined[207561]: Machine qemu-1-instance-00000001 terminated.
Jan 29 12:16:12 np0005601226 nova_compute[239456]: 2026-01-29 17:16:12.764 239460 DEBUG nova.compute.manager [req-40dc5452-4e51-4e71-b03d-be989f6e162f req-7013bda9-954b-4d45-9f62-6035eb05c8ad 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: eaf7feb2-074e-4420-b260-76ed2274d174] Received event network-vif-unplugged-3aa0a884-9877-40be-9e0e-295faf527bc3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:16:12 np0005601226 nova_compute[239456]: 2026-01-29 17:16:12.764 239460 DEBUG oslo_concurrency.lockutils [req-40dc5452-4e51-4e71-b03d-be989f6e162f req-7013bda9-954b-4d45-9f62-6035eb05c8ad 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "eaf7feb2-074e-4420-b260-76ed2274d174-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:16:12 np0005601226 nova_compute[239456]: 2026-01-29 17:16:12.764 239460 DEBUG oslo_concurrency.lockutils [req-40dc5452-4e51-4e71-b03d-be989f6e162f req-7013bda9-954b-4d45-9f62-6035eb05c8ad 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "eaf7feb2-074e-4420-b260-76ed2274d174-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:16:12 np0005601226 nova_compute[239456]: 2026-01-29 17:16:12.764 239460 DEBUG oslo_concurrency.lockutils [req-40dc5452-4e51-4e71-b03d-be989f6e162f req-7013bda9-954b-4d45-9f62-6035eb05c8ad 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "eaf7feb2-074e-4420-b260-76ed2274d174-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:16:12 np0005601226 nova_compute[239456]: 2026-01-29 17:16:12.765 239460 DEBUG nova.compute.manager [req-40dc5452-4e51-4e71-b03d-be989f6e162f req-7013bda9-954b-4d45-9f62-6035eb05c8ad 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: eaf7feb2-074e-4420-b260-76ed2274d174] No waiting events found dispatching network-vif-unplugged-3aa0a884-9877-40be-9e0e-295faf527bc3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 29 12:16:12 np0005601226 nova_compute[239456]: 2026-01-29 17:16:12.765 239460 DEBUG nova.compute.manager [req-40dc5452-4e51-4e71-b03d-be989f6e162f req-7013bda9-954b-4d45-9f62-6035eb05c8ad 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: eaf7feb2-074e-4420-b260-76ed2274d174] Received event network-vif-unplugged-3aa0a884-9877-40be-9e0e-295faf527bc3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 29 12:16:12 np0005601226 neutron-haproxy-ovnmeta-7bc173dd-5a11-45d7-bb3b-a3cabef29b05[246790]: [NOTICE]   (246794) : haproxy version is 2.8.14-c23fe91
Jan 29 12:16:12 np0005601226 neutron-haproxy-ovnmeta-7bc173dd-5a11-45d7-bb3b-a3cabef29b05[246790]: [NOTICE]   (246794) : path to executable is /usr/sbin/haproxy
Jan 29 12:16:12 np0005601226 neutron-haproxy-ovnmeta-7bc173dd-5a11-45d7-bb3b-a3cabef29b05[246790]: [WARNING]  (246794) : Exiting Master process...
Jan 29 12:16:12 np0005601226 neutron-haproxy-ovnmeta-7bc173dd-5a11-45d7-bb3b-a3cabef29b05[246790]: [ALERT]    (246794) : Current worker (246796) exited with code 143 (Terminated)
Jan 29 12:16:12 np0005601226 neutron-haproxy-ovnmeta-7bc173dd-5a11-45d7-bb3b-a3cabef29b05[246790]: [WARNING]  (246794) : All workers exited. Exiting... (0)
Jan 29 12:16:12 np0005601226 systemd[1]: libpod-58d54a3c8ac557fd0cd4e0186d01c2646403f4eb3188e758fe27c2e7062079cb.scope: Deactivated successfully.
Jan 29 12:16:12 np0005601226 podman[246827]: 2026-01-29 17:16:12.792302704 +0000 UTC m=+0.073568907 container died 58d54a3c8ac557fd0cd4e0186d01c2646403f4eb3188e758fe27c2e7062079cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7bc173dd-5a11-45d7-bb3b-a3cabef29b05, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 29 12:16:12 np0005601226 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-58d54a3c8ac557fd0cd4e0186d01c2646403f4eb3188e758fe27c2e7062079cb-userdata-shm.mount: Deactivated successfully.
Jan 29 12:16:12 np0005601226 systemd[1]: var-lib-containers-storage-overlay-acd356513286ae3533bccfaa8d5ce890fd550f9af8ce96a8ac74f9d921676de7-merged.mount: Deactivated successfully.
Jan 29 12:16:12 np0005601226 podman[246827]: 2026-01-29 17:16:12.886347407 +0000 UTC m=+0.167613600 container cleanup 58d54a3c8ac557fd0cd4e0186d01c2646403f4eb3188e758fe27c2e7062079cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7bc173dd-5a11-45d7-bb3b-a3cabef29b05, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 29 12:16:12 np0005601226 systemd[1]: libpod-conmon-58d54a3c8ac557fd0cd4e0186d01c2646403f4eb3188e758fe27c2e7062079cb.scope: Deactivated successfully.
Jan 29 12:16:12 np0005601226 nova_compute[239456]: 2026-01-29 17:16:12.903 239460 INFO nova.virt.libvirt.driver [-] [instance: eaf7feb2-074e-4420-b260-76ed2274d174] Instance destroyed successfully.#033[00m
Jan 29 12:16:12 np0005601226 nova_compute[239456]: 2026-01-29 17:16:12.905 239460 DEBUG nova.objects.instance [None req-412b4732-813a-462e-bedb-08ee9aead784 bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Lazy-loading 'resources' on Instance uuid eaf7feb2-074e-4420-b260-76ed2274d174 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:16:12 np0005601226 nova_compute[239456]: 2026-01-29 17:16:12.920 239460 DEBUG nova.virt.libvirt.vif [None req-412b4732-813a-462e-bedb-08ee9aead784 bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-29T17:15:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-905439640',display_name='tempest-VolumesActionsTest-instance-905439640',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-905439640',id=1,image_ref='71879218-5462-43bb-aba6-6319695b24fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-29T17:16:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='431c31cb9de042e6bc53b16a4b0a84d6',ramdisk_id='',reservation_id='r-zb47n151',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='71879218-5462-43bb-aba6-6319695b24fd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_dis
k='1',image_min_ram='0',owner_project_name='tempest-VolumesActionsTest-1767480583',owner_user_name='tempest-VolumesActionsTest-1767480583-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-29T17:16:06Z,user_data=None,user_id='bfd4570e2b9e47b5b967bd52324ea676',uuid=eaf7feb2-074e-4420-b260-76ed2274d174,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3aa0a884-9877-40be-9e0e-295faf527bc3", "address": "fa:16:3e:cd:e5:77", "network": {"id": "7bc173dd-5a11-45d7-bb3b-a3cabef29b05", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1265997241-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "431c31cb9de042e6bc53b16a4b0a84d6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3aa0a884-98", "ovs_interfaceid": "3aa0a884-9877-40be-9e0e-295faf527bc3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 29 12:16:12 np0005601226 nova_compute[239456]: 2026-01-29 17:16:12.920 239460 DEBUG nova.network.os_vif_util [None req-412b4732-813a-462e-bedb-08ee9aead784 bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Converting VIF {"id": "3aa0a884-9877-40be-9e0e-295faf527bc3", "address": "fa:16:3e:cd:e5:77", "network": {"id": "7bc173dd-5a11-45d7-bb3b-a3cabef29b05", "bridge": "br-int", "label": "tempest-VolumesActionsTest-1265997241-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "431c31cb9de042e6bc53b16a4b0a84d6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3aa0a884-98", "ovs_interfaceid": "3aa0a884-9877-40be-9e0e-295faf527bc3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 29 12:16:12 np0005601226 nova_compute[239456]: 2026-01-29 17:16:12.921 239460 DEBUG nova.network.os_vif_util [None req-412b4732-813a-462e-bedb-08ee9aead784 bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cd:e5:77,bridge_name='br-int',has_traffic_filtering=True,id=3aa0a884-9877-40be-9e0e-295faf527bc3,network=Network(7bc173dd-5a11-45d7-bb3b-a3cabef29b05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3aa0a884-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 29 12:16:12 np0005601226 nova_compute[239456]: 2026-01-29 17:16:12.921 239460 DEBUG os_vif [None req-412b4732-813a-462e-bedb-08ee9aead784 bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:e5:77,bridge_name='br-int',has_traffic_filtering=True,id=3aa0a884-9877-40be-9e0e-295faf527bc3,network=Network(7bc173dd-5a11-45d7-bb3b-a3cabef29b05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3aa0a884-98') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 29 12:16:12 np0005601226 nova_compute[239456]: 2026-01-29 17:16:12.923 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:16:12 np0005601226 nova_compute[239456]: 2026-01-29 17:16:12.924 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3aa0a884-98, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:16:12 np0005601226 nova_compute[239456]: 2026-01-29 17:16:12.957 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:16:12 np0005601226 nova_compute[239456]: 2026-01-29 17:16:12.960 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 29 12:16:12 np0005601226 nova_compute[239456]: 2026-01-29 17:16:12.963 239460 INFO os_vif [None req-412b4732-813a-462e-bedb-08ee9aead784 bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:e5:77,bridge_name='br-int',has_traffic_filtering=True,id=3aa0a884-9877-40be-9e0e-295faf527bc3,network=Network(7bc173dd-5a11-45d7-bb3b-a3cabef29b05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3aa0a884-98')#033[00m
Jan 29 12:16:12 np0005601226 podman[246864]: 2026-01-29 17:16:12.976392481 +0000 UTC m=+0.071379876 container remove 58d54a3c8ac557fd0cd4e0186d01c2646403f4eb3188e758fe27c2e7062079cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7bc173dd-5a11-45d7-bb3b-a3cabef29b05, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 29 12:16:12 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:16:12.980 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[9d96900c-7de0-452e-bd4c-65776aec142c]: (4, ('Thu Jan 29 05:16:12 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-7bc173dd-5a11-45d7-bb3b-a3cabef29b05 (58d54a3c8ac557fd0cd4e0186d01c2646403f4eb3188e758fe27c2e7062079cb)\n58d54a3c8ac557fd0cd4e0186d01c2646403f4eb3188e758fe27c2e7062079cb\nThu Jan 29 05:16:12 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-7bc173dd-5a11-45d7-bb3b-a3cabef29b05 (58d54a3c8ac557fd0cd4e0186d01c2646403f4eb3188e758fe27c2e7062079cb)\n58d54a3c8ac557fd0cd4e0186d01c2646403f4eb3188e758fe27c2e7062079cb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:16:12 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:16:12.982 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[363c2382-af01-4997-96ec-de0aad8157d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:16:12 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:16:12.984 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7bc173dd-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:16:12 np0005601226 kernel: tap7bc173dd-50: left promiscuous mode
Jan 29 12:16:12 np0005601226 nova_compute[239456]: 2026-01-29 17:16:12.989 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:16:12 np0005601226 nova_compute[239456]: 2026-01-29 17:16:12.992 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:16:12 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:16:12.997 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[897d7f70-86af-4530-98f8-1248136cffe9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:16:13 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:16:13.009 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[09bba0bb-695f-4520-8a52-90b95c5ec5e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:16:13 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:16:13.011 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[f6ab0366-8a7c-43cb-8b99-f67b2bdacb40]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:16:13 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:16:13.027 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[e0e70c9c-2ad7-4fe5-a053-0d4d4626ac79]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 432268, 'reachable_time': 27950, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 246901, 'error': None, 'target': 'ovnmeta-7bc173dd-5a11-45d7-bb3b-a3cabef29b05', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:16:13 np0005601226 systemd[1]: run-netns-ovnmeta\x2d7bc173dd\x2d5a11\x2d45d7\x2dbb3b\x2da3cabef29b05.mount: Deactivated successfully.
Jan 29 12:16:13 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:16:13.043 156164 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7bc173dd-5a11-45d7-bb3b-a3cabef29b05 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 29 12:16:13 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:16:13.044 156164 DEBUG oslo.privsep.daemon [-] privsep: reply[ed5ac1ef-8295-42f1-83ed-c84b9ae4d005]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:16:13 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v961: 305 pgs: 305 active+clean; 88 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 756 KiB/s wr, 121 op/s
Jan 29 12:16:14 np0005601226 nova_compute[239456]: 2026-01-29 17:16:14.600 239460 INFO nova.virt.libvirt.driver [None req-412b4732-813a-462e-bedb-08ee9aead784 bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] [instance: eaf7feb2-074e-4420-b260-76ed2274d174] Deleting instance files /var/lib/nova/instances/eaf7feb2-074e-4420-b260-76ed2274d174_del#033[00m
Jan 29 12:16:14 np0005601226 nova_compute[239456]: 2026-01-29 17:16:14.600 239460 INFO nova.virt.libvirt.driver [None req-412b4732-813a-462e-bedb-08ee9aead784 bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] [instance: eaf7feb2-074e-4420-b260-76ed2274d174] Deletion of /var/lib/nova/instances/eaf7feb2-074e-4420-b260-76ed2274d174_del complete#033[00m
Jan 29 12:16:14 np0005601226 nova_compute[239456]: 2026-01-29 17:16:14.660 239460 DEBUG nova.virt.libvirt.host [None req-412b4732-813a-462e-bedb-08ee9aead784 bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754#033[00m
Jan 29 12:16:14 np0005601226 nova_compute[239456]: 2026-01-29 17:16:14.661 239460 INFO nova.virt.libvirt.host [None req-412b4732-813a-462e-bedb-08ee9aead784 bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] UEFI support detected#033[00m
Jan 29 12:16:14 np0005601226 nova_compute[239456]: 2026-01-29 17:16:14.663 239460 INFO nova.compute.manager [None req-412b4732-813a-462e-bedb-08ee9aead784 bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] [instance: eaf7feb2-074e-4420-b260-76ed2274d174] Took 2.19 seconds to destroy the instance on the hypervisor.#033[00m
Jan 29 12:16:14 np0005601226 nova_compute[239456]: 2026-01-29 17:16:14.664 239460 DEBUG oslo.service.loopingcall [None req-412b4732-813a-462e-bedb-08ee9aead784 bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 29 12:16:14 np0005601226 nova_compute[239456]: 2026-01-29 17:16:14.664 239460 DEBUG nova.compute.manager [-] [instance: eaf7feb2-074e-4420-b260-76ed2274d174] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 29 12:16:14 np0005601226 nova_compute[239456]: 2026-01-29 17:16:14.664 239460 DEBUG nova.network.neutron [-] [instance: eaf7feb2-074e-4420-b260-76ed2274d174] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 29 12:16:14 np0005601226 nova_compute[239456]: 2026-01-29 17:16:14.872 239460 DEBUG nova.compute.manager [req-47ebd0d4-717b-4159-879a-b5775f394080 req-e968a9de-4ab4-4a29-a5c4-41f2cbd04403 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: eaf7feb2-074e-4420-b260-76ed2274d174] Received event network-vif-plugged-3aa0a884-9877-40be-9e0e-295faf527bc3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:16:14 np0005601226 nova_compute[239456]: 2026-01-29 17:16:14.873 239460 DEBUG oslo_concurrency.lockutils [req-47ebd0d4-717b-4159-879a-b5775f394080 req-e968a9de-4ab4-4a29-a5c4-41f2cbd04403 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "eaf7feb2-074e-4420-b260-76ed2274d174-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:16:14 np0005601226 nova_compute[239456]: 2026-01-29 17:16:14.873 239460 DEBUG oslo_concurrency.lockutils [req-47ebd0d4-717b-4159-879a-b5775f394080 req-e968a9de-4ab4-4a29-a5c4-41f2cbd04403 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "eaf7feb2-074e-4420-b260-76ed2274d174-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:16:14 np0005601226 nova_compute[239456]: 2026-01-29 17:16:14.873 239460 DEBUG oslo_concurrency.lockutils [req-47ebd0d4-717b-4159-879a-b5775f394080 req-e968a9de-4ab4-4a29-a5c4-41f2cbd04403 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "eaf7feb2-074e-4420-b260-76ed2274d174-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:16:14 np0005601226 nova_compute[239456]: 2026-01-29 17:16:14.873 239460 DEBUG nova.compute.manager [req-47ebd0d4-717b-4159-879a-b5775f394080 req-e968a9de-4ab4-4a29-a5c4-41f2cbd04403 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: eaf7feb2-074e-4420-b260-76ed2274d174] No waiting events found dispatching network-vif-plugged-3aa0a884-9877-40be-9e0e-295faf527bc3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 29 12:16:14 np0005601226 nova_compute[239456]: 2026-01-29 17:16:14.874 239460 WARNING nova.compute.manager [req-47ebd0d4-717b-4159-879a-b5775f394080 req-e968a9de-4ab4-4a29-a5c4-41f2cbd04403 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: eaf7feb2-074e-4420-b260-76ed2274d174] Received unexpected event network-vif-plugged-3aa0a884-9877-40be-9e0e-295faf527bc3 for instance with vm_state active and task_state deleting.#033[00m
Jan 29 12:16:15 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v962: 305 pgs: 305 active+clean; 73 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 118 op/s
Jan 29 12:16:15 np0005601226 nova_compute[239456]: 2026-01-29 17:16:15.641 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:16:15 np0005601226 nova_compute[239456]: 2026-01-29 17:16:15.665 239460 DEBUG nova.network.neutron [-] [instance: eaf7feb2-074e-4420-b260-76ed2274d174] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:16:15 np0005601226 nova_compute[239456]: 2026-01-29 17:16:15.684 239460 INFO nova.compute.manager [-] [instance: eaf7feb2-074e-4420-b260-76ed2274d174] Took 1.02 seconds to deallocate network for instance.#033[00m
Jan 29 12:16:15 np0005601226 nova_compute[239456]: 2026-01-29 17:16:15.728 239460 DEBUG oslo_concurrency.lockutils [None req-412b4732-813a-462e-bedb-08ee9aead784 bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:16:15 np0005601226 nova_compute[239456]: 2026-01-29 17:16:15.729 239460 DEBUG oslo_concurrency.lockutils [None req-412b4732-813a-462e-bedb-08ee9aead784 bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:16:15 np0005601226 nova_compute[239456]: 2026-01-29 17:16:15.784 239460 DEBUG oslo_concurrency.processutils [None req-412b4732-813a-462e-bedb-08ee9aead784 bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:16:16 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:16:16.264 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ea6bcc65-2563-4fe6-9039-bca7261f4cf7, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:16:16 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:16:16 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1590757780' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:16:16 np0005601226 nova_compute[239456]: 2026-01-29 17:16:16.307 239460 DEBUG oslo_concurrency.processutils [None req-412b4732-813a-462e-bedb-08ee9aead784 bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:16:16 np0005601226 nova_compute[239456]: 2026-01-29 17:16:16.315 239460 DEBUG nova.compute.provider_tree [None req-412b4732-813a-462e-bedb-08ee9aead784 bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Updating inventory in ProviderTree for provider 79259295-532c-4a51-8f50-027529735b0c with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 29 12:16:16 np0005601226 nova_compute[239456]: 2026-01-29 17:16:16.366 239460 ERROR nova.scheduler.client.report [None req-412b4732-813a-462e-bedb-08ee9aead784 bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] [req-ce982611-fd5d-46a0-a767-f9f620c89127] Failed to update inventory to [{'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID 79259295-532c-4a51-8f50-027529735b0c.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-ce982611-fd5d-46a0-a767-f9f620c89127"}]}#033[00m
Jan 29 12:16:16 np0005601226 nova_compute[239456]: 2026-01-29 17:16:16.398 239460 DEBUG nova.scheduler.client.report [None req-412b4732-813a-462e-bedb-08ee9aead784 bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Refreshing inventories for resource provider 79259295-532c-4a51-8f50-027529735b0c _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 29 12:16:16 np0005601226 nova_compute[239456]: 2026-01-29 17:16:16.412 239460 DEBUG nova.scheduler.client.report [None req-412b4732-813a-462e-bedb-08ee9aead784 bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Updating ProviderTree inventory for provider 79259295-532c-4a51-8f50-027529735b0c from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 29 12:16:16 np0005601226 nova_compute[239456]: 2026-01-29 17:16:16.412 239460 DEBUG nova.compute.provider_tree [None req-412b4732-813a-462e-bedb-08ee9aead784 bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Updating inventory in ProviderTree for provider 79259295-532c-4a51-8f50-027529735b0c with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 29 12:16:16 np0005601226 nova_compute[239456]: 2026-01-29 17:16:16.425 239460 DEBUG nova.scheduler.client.report [None req-412b4732-813a-462e-bedb-08ee9aead784 bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Refreshing aggregate associations for resource provider 79259295-532c-4a51-8f50-027529735b0c, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 29 12:16:16 np0005601226 nova_compute[239456]: 2026-01-29 17:16:16.446 239460 DEBUG nova.scheduler.client.report [None req-412b4732-813a-462e-bedb-08ee9aead784 bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Refreshing trait associations for resource provider 79259295-532c-4a51-8f50-027529735b0c, traits: HW_CPU_X86_SSE4A,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NODE,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AMD_SVM,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_ABM,HW_CPU_X86_MMX,HW_CPU_X86_AVX,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SVM,COMPUTE_RESCUE_BFV,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_BMI2,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE41,HW_CPU_X86_F16C,COMPUTE_DEVICE_TAGGING,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_ISO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 29 12:16:16 np0005601226 nova_compute[239456]: 2026-01-29 17:16:16.477 239460 DEBUG oslo_concurrency.processutils [None req-412b4732-813a-462e-bedb-08ee9aead784 bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:16:16 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:16:16 np0005601226 nova_compute[239456]: 2026-01-29 17:16:16.974 239460 DEBUG nova.compute.manager [req-ab3f41c7-e6ec-4594-a8da-2860a77376f0 req-eb04d29e-d00c-4e98-ab77-33069d6ccc80 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: eaf7feb2-074e-4420-b260-76ed2274d174] Received event network-vif-deleted-3aa0a884-9877-40be-9e0e-295faf527bc3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:16:16 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:16:17 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1254257324' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:16:17 np0005601226 nova_compute[239456]: 2026-01-29 17:16:17.021 239460 DEBUG oslo_concurrency.processutils [None req-412b4732-813a-462e-bedb-08ee9aead784 bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:16:17 np0005601226 nova_compute[239456]: 2026-01-29 17:16:17.026 239460 DEBUG nova.compute.provider_tree [None req-412b4732-813a-462e-bedb-08ee9aead784 bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Updating inventory in ProviderTree for provider 79259295-532c-4a51-8f50-027529735b0c with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 29 12:16:17 np0005601226 nova_compute[239456]: 2026-01-29 17:16:17.076 239460 DEBUG nova.scheduler.client.report [None req-412b4732-813a-462e-bedb-08ee9aead784 bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Updated inventory for provider 79259295-532c-4a51-8f50-027529735b0c with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Jan 29 12:16:17 np0005601226 nova_compute[239456]: 2026-01-29 17:16:17.077 239460 DEBUG nova.compute.provider_tree [None req-412b4732-813a-462e-bedb-08ee9aead784 bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Updating resource provider 79259295-532c-4a51-8f50-027529735b0c generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Jan 29 12:16:17 np0005601226 nova_compute[239456]: 2026-01-29 17:16:17.077 239460 DEBUG nova.compute.provider_tree [None req-412b4732-813a-462e-bedb-08ee9aead784 bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Updating inventory in ProviderTree for provider 79259295-532c-4a51-8f50-027529735b0c with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 29 12:16:17 np0005601226 nova_compute[239456]: 2026-01-29 17:16:17.099 239460 DEBUG oslo_concurrency.lockutils [None req-412b4732-813a-462e-bedb-08ee9aead784 bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.371s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:16:17 np0005601226 nova_compute[239456]: 2026-01-29 17:16:17.125 239460 INFO nova.scheduler.client.report [None req-412b4732-813a-462e-bedb-08ee9aead784 bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Deleted allocations for instance eaf7feb2-074e-4420-b260-76ed2274d174#033[00m
Jan 29 12:16:17 np0005601226 nova_compute[239456]: 2026-01-29 17:16:17.207 239460 DEBUG oslo_concurrency.lockutils [None req-412b4732-813a-462e-bedb-08ee9aead784 bfd4570e2b9e47b5b967bd52324ea676 431c31cb9de042e6bc53b16a4b0a84d6 - - default default] Lock "eaf7feb2-074e-4420-b260-76ed2274d174" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.740s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:16:17 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v963: 305 pgs: 305 active+clean; 73 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 14 KiB/s wr, 108 op/s
Jan 29 12:16:17 np0005601226 nova_compute[239456]: 2026-01-29 17:16:17.958 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:16:19 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v964: 305 pgs: 305 active+clean; 41 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 114 op/s
Jan 29 12:16:20 np0005601226 nova_compute[239456]: 2026-01-29 17:16:20.642 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:16:20 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:16:20 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/756050115' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:16:20 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:16:20 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/756050115' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:16:21 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v965: 305 pgs: 305 active+clean; 41 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Jan 29 12:16:21 np0005601226 nova_compute[239456]: 2026-01-29 17:16:21.603 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:16:21 np0005601226 nova_compute[239456]: 2026-01-29 17:16:21.604 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:16:21 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:16:21 np0005601226 nova_compute[239456]: 2026-01-29 17:16:21.635 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:16:21 np0005601226 nova_compute[239456]: 2026-01-29 17:16:21.635 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:16:21 np0005601226 nova_compute[239456]: 2026-01-29 17:16:21.635 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:16:21 np0005601226 nova_compute[239456]: 2026-01-29 17:16:21.635 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 29 12:16:21 np0005601226 nova_compute[239456]: 2026-01-29 17:16:21.636 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:16:22 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:16:22 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2472912805' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:16:22 np0005601226 nova_compute[239456]: 2026-01-29 17:16:22.166 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:16:22 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:16:22 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/181102139' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:16:22 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:16:22 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/181102139' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:16:22 np0005601226 nova_compute[239456]: 2026-01-29 17:16:22.326 239460 WARNING nova.virt.libvirt.driver [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 29 12:16:22 np0005601226 nova_compute[239456]: 2026-01-29 17:16:22.327 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4825MB free_disk=59.988272646442056GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 29 12:16:22 np0005601226 nova_compute[239456]: 2026-01-29 17:16:22.328 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:16:22 np0005601226 nova_compute[239456]: 2026-01-29 17:16:22.328 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:16:22 np0005601226 nova_compute[239456]: 2026-01-29 17:16:22.388 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 29 12:16:22 np0005601226 nova_compute[239456]: 2026-01-29 17:16:22.389 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 29 12:16:22 np0005601226 nova_compute[239456]: 2026-01-29 17:16:22.406 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:16:22 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:16:22 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3209486568' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:16:22 np0005601226 nova_compute[239456]: 2026-01-29 17:16:22.932 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:16:22 np0005601226 nova_compute[239456]: 2026-01-29 17:16:22.938 239460 DEBUG nova.compute.provider_tree [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:16:22 np0005601226 nova_compute[239456]: 2026-01-29 17:16:22.953 239460 DEBUG nova.scheduler.client.report [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:16:22 np0005601226 nova_compute[239456]: 2026-01-29 17:16:22.963 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:16:22 np0005601226 nova_compute[239456]: 2026-01-29 17:16:22.975 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 29 12:16:22 np0005601226 nova_compute[239456]: 2026-01-29 17:16:22.975 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.647s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:16:23 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v966: 305 pgs: 305 active+clean; 41 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 2.4 KiB/s wr, 56 op/s
Jan 29 12:16:23 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:16:23 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1111655621' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:16:23 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:16:23 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1111655621' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:16:23 np0005601226 nova_compute[239456]: 2026-01-29 17:16:23.975 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:16:24 np0005601226 nova_compute[239456]: 2026-01-29 17:16:24.603 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:16:24 np0005601226 nova_compute[239456]: 2026-01-29 17:16:24.604 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 29 12:16:24 np0005601226 nova_compute[239456]: 2026-01-29 17:16:24.604 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 29 12:16:24 np0005601226 nova_compute[239456]: 2026-01-29 17:16:24.621 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 29 12:16:24 np0005601226 nova_compute[239456]: 2026-01-29 17:16:24.621 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:16:24 np0005601226 nova_compute[239456]: 2026-01-29 17:16:24.622 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:16:25 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v967: 305 pgs: 305 active+clean; 41 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 3.3 KiB/s wr, 68 op/s
Jan 29 12:16:25 np0005601226 nova_compute[239456]: 2026-01-29 17:16:25.603 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:16:25 np0005601226 nova_compute[239456]: 2026-01-29 17:16:25.604 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 29 12:16:25 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:16:25 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3136956609' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:16:25 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:16:25 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3136956609' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:16:25 np0005601226 nova_compute[239456]: 2026-01-29 17:16:25.645 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:16:26 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:16:26 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1722539789' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:16:26 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:16:26 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1722539789' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:16:26 np0005601226 nova_compute[239456]: 2026-01-29 17:16:26.600 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:16:26 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:16:27 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v968: 305 pgs: 305 active+clean; 41 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 3.1 KiB/s wr, 56 op/s
Jan 29 12:16:27 np0005601226 nova_compute[239456]: 2026-01-29 17:16:27.901 239460 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769706972.899546, eaf7feb2-074e-4420-b260-76ed2274d174 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:16:27 np0005601226 nova_compute[239456]: 2026-01-29 17:16:27.901 239460 INFO nova.compute.manager [-] [instance: eaf7feb2-074e-4420-b260-76ed2274d174] VM Stopped (Lifecycle Event)#033[00m
Jan 29 12:16:27 np0005601226 nova_compute[239456]: 2026-01-29 17:16:27.921 239460 DEBUG nova.compute.manager [None req-8ca7c8cc-0414-45dc-b872-1795ab46e327 - - - - - -] [instance: eaf7feb2-074e-4420-b260-76ed2274d174] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:16:27 np0005601226 nova_compute[239456]: 2026-01-29 17:16:27.967 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:16:29 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v969: 305 pgs: 305 active+clean; 41 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 63 KiB/s rd, 4.4 KiB/s wr, 88 op/s
Jan 29 12:16:29 np0005601226 nova_compute[239456]: 2026-01-29 17:16:29.603 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:16:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:16:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3682949052' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:16:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:16:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3682949052' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:16:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:16:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/831876004' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:16:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:16:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/831876004' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:16:30 np0005601226 nova_compute[239456]: 2026-01-29 17:16:30.647 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:16:31 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v970: 305 pgs: 305 active+clean; 41 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 3.1 KiB/s wr, 72 op/s
Jan 29 12:16:31 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:16:32 np0005601226 nova_compute[239456]: 2026-01-29 17:16:32.970 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:16:33 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v971: 305 pgs: 305 active+clean; 41 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 58 KiB/s rd, 3.3 KiB/s wr, 79 op/s
Jan 29 12:16:33 np0005601226 podman[246994]: 2026-01-29 17:16:33.871030794 +0000 UTC m=+0.043151949 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 29 12:16:33 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:16:33 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2116956387' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:16:33 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:16:33 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2116956387' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:16:33 np0005601226 podman[246995]: 2026-01-29 17:16:33.902716287 +0000 UTC m=+0.072904849 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 29 12:16:35 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v972: 305 pgs: 305 active+clean; 41 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 3.2 KiB/s wr, 78 op/s
Jan 29 12:16:35 np0005601226 nova_compute[239456]: 2026-01-29 17:16:35.648 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:16:36 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:16:36 np0005601226 nova_compute[239456]: 2026-01-29 17:16:36.896 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:16:37 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:16:37 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3429889258' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:16:37 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:16:37 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3429889258' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:16:37 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v973: 305 pgs: 305 active+clean; 41 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 2.2 KiB/s wr, 53 op/s
Jan 29 12:16:37 np0005601226 nova_compute[239456]: 2026-01-29 17:16:37.973 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:16:39 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v974: 305 pgs: 305 active+clean; 41 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 2.9 KiB/s wr, 73 op/s
Jan 29 12:16:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:16:40.277 155625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:16:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:16:40.278 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:16:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:16:40.278 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:16:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2026-01-29_17:16:40
Jan 29 12:16:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 12:16:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] do_upmap
Jan 29 12:16:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] pools ['.mgr', 'volumes', 'default.rgw.log', 'default.rgw.meta', 'backups', 'cephfs.cephfs.data', 'default.rgw.control', 'cephfs.cephfs.meta', '.rgw.root', 'vms', 'images']
Jan 29 12:16:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 12:16:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:16:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:16:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:16:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:16:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:16:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:16:40 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:16:40 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4273745015' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:16:40 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:16:40 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4273745015' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:16:40 np0005601226 nova_compute[239456]: 2026-01-29 17:16:40.649 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:16:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 12:16:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 12:16:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:16:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:16:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:16:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:16:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:16:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:16:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:16:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:16:41 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v975: 305 pgs: 305 active+clean; 41 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 1.6 KiB/s wr, 42 op/s
Jan 29 12:16:41 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:16:42 np0005601226 nova_compute[239456]: 2026-01-29 17:16:42.974 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:16:43 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v976: 305 pgs: 305 active+clean; 41 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 1.7 KiB/s wr, 53 op/s
Jan 29 12:16:45 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v977: 305 pgs: 305 active+clean; 41 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 1.7 KiB/s wr, 49 op/s
Jan 29 12:16:45 np0005601226 nova_compute[239456]: 2026-01-29 17:16:45.652 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:16:46 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:16:47 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v978: 305 pgs: 305 active+clean; 41 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.2 KiB/s wr, 34 op/s
Jan 29 12:16:47 np0005601226 nova_compute[239456]: 2026-01-29 17:16:47.977 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:16:49 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:16:49 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2754053445' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:16:49 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:16:49 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2754053445' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:16:49 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v979: 305 pgs: 305 active+clean; 41 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 1.7 KiB/s wr, 42 op/s
Jan 29 12:16:50 np0005601226 nova_compute[239456]: 2026-01-29 17:16:50.654 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:16:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 12:16:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:16:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 12:16:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:16:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.377384667440947e-07 of space, bias 1.0, pg target 4.132154002322841e-05 quantized to 32 (current 32)
Jan 29 12:16:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:16:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 3.7279431456784264e-06 of space, bias 1.0, pg target 0.0011183829437035279 quantized to 32 (current 32)
Jan 29 12:16:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:16:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:16:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:16:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659284438783865 of space, bias 1.0, pg target 0.19977853316351596 quantized to 32 (current 32)
Jan 29 12:16:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:16:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.2324270407514056e-06 of space, bias 4.0, pg target 0.0014789124489016866 quantized to 16 (current 16)
Jan 29 12:16:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:16:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:16:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:16:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 29 12:16:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:16:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 29 12:16:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:16:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:16:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:16:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 29 12:16:51 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v980: 305 pgs: 305 active+clean; 41 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 1023 B/s wr, 21 op/s
Jan 29 12:16:51 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:16:52 np0005601226 nova_compute[239456]: 2026-01-29 17:16:52.981 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:16:53 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v981: 305 pgs: 305 active+clean; 41 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.2 KiB/s wr, 35 op/s
Jan 29 12:16:55 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v982: 305 pgs: 305 active+clean; 41 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.2 KiB/s wr, 24 op/s
Jan 29 12:16:55 np0005601226 nova_compute[239456]: 2026-01-29 17:16:55.657 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:16:56 np0005601226 ceph-osd[85858]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Jan 29 12:16:56 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:16:56 np0005601226 ceph-osd[87958]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Jan 29 12:16:57 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v983: 305 pgs: 305 active+clean; 41 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 938 B/s wr, 22 op/s
Jan 29 12:16:57 np0005601226 nova_compute[239456]: 2026-01-29 17:16:57.984 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:16:58 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:16:58 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2787122704' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:16:58 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:16:58 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2787122704' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:16:59 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v984: 305 pgs: 305 active+clean; 49 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 71 op/s
Jan 29 12:17:00 np0005601226 nova_compute[239456]: 2026-01-29 17:17:00.667 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:17:01 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v985: 305 pgs: 305 active+clean; 49 MiB data, 246 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 63 op/s
Jan 29 12:17:01 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:17:02 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e173 do_prune osdmap full prune enabled
Jan 29 12:17:03 np0005601226 nova_compute[239456]: 2026-01-29 17:17:03.032 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:17:03 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e174 e174: 3 total, 3 up, 3 in
Jan 29 12:17:03 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e174: 3 total, 3 up, 3 in
Jan 29 12:17:03 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v987: 305 pgs: 305 active+clean; 41 MiB data, 240 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Jan 29 12:17:04 np0005601226 podman[247042]: 2026-01-29 17:17:04.895857387 +0000 UTC m=+0.061092368 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true)
Jan 29 12:17:04 np0005601226 podman[247041]: 2026-01-29 17:17:04.898302384 +0000 UTC m=+0.065114058 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 29 12:17:05 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:17:05 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/774711866' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:17:05 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:17:05 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v988: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Jan 29 12:17:05 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/774711866' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:17:05 np0005601226 nova_compute[239456]: 2026-01-29 17:17:05.668 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:17:06 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:17:06 np0005601226 ceph-mon[75233]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Jan 29 12:17:06 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:17:06.648052) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 29 12:17:06 np0005601226 ceph-mon[75233]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Jan 29 12:17:06 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769707026648125, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 1123, "num_deletes": 257, "total_data_size": 1528247, "memory_usage": 1561328, "flush_reason": "Manual Compaction"}
Jan 29 12:17:06 np0005601226 ceph-mon[75233]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Jan 29 12:17:06 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769707026668103, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 1505531, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19214, "largest_seqno": 20336, "table_properties": {"data_size": 1500158, "index_size": 2769, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 11856, "raw_average_key_size": 19, "raw_value_size": 1488974, "raw_average_value_size": 2428, "num_data_blocks": 125, "num_entries": 613, "num_filter_entries": 613, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769706937, "oldest_key_time": 1769706937, "file_creation_time": 1769707026, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "affa2982-d59d-4189-b5dd-817a80fada55", "db_session_id": "3LVBT2JQJ5HZ0LRVKGW6", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Jan 29 12:17:06 np0005601226 ceph-mon[75233]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 20120 microseconds, and 3199 cpu microseconds.
Jan 29 12:17:06 np0005601226 ceph-mon[75233]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 29 12:17:06 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:17:06.668178) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 1505531 bytes OK
Jan 29 12:17:06 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:17:06.668227) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Jan 29 12:17:06 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:17:06.676657) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Jan 29 12:17:06 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:17:06.676692) EVENT_LOG_v1 {"time_micros": 1769707026676685, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 29 12:17:06 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:17:06.676714) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 29 12:17:06 np0005601226 ceph-mon[75233]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 1522949, prev total WAL file size 1522949, number of live WAL files 2.
Jan 29 12:17:06 np0005601226 ceph-mon[75233]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 12:17:06 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:17:06.677453) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323531' seq:72057594037927935, type:22 .. '6C6F676D00353033' seq:0, type:0; will stop at (end)
Jan 29 12:17:06 np0005601226 ceph-mon[75233]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 29 12:17:06 np0005601226 ceph-mon[75233]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(1470KB)], [44(6933KB)]
Jan 29 12:17:06 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769707026677490, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 8605616, "oldest_snapshot_seqno": -1}
Jan 29 12:17:06 np0005601226 ceph-mon[75233]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 4528 keys, 8472517 bytes, temperature: kUnknown
Jan 29 12:17:06 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769707026787382, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 8472517, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8440134, "index_size": 19985, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11333, "raw_key_size": 111943, "raw_average_key_size": 24, "raw_value_size": 8356181, "raw_average_value_size": 1845, "num_data_blocks": 833, "num_entries": 4528, "num_filter_entries": 4528, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769705351, "oldest_key_time": 0, "file_creation_time": 1769707026, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "affa2982-d59d-4189-b5dd-817a80fada55", "db_session_id": "3LVBT2JQJ5HZ0LRVKGW6", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Jan 29 12:17:06 np0005601226 ceph-mon[75233]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 29 12:17:06 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:17:06.787577) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 8472517 bytes
Jan 29 12:17:06 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:17:06.816076) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 78.3 rd, 77.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 6.8 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(11.3) write-amplify(5.6) OK, records in: 5058, records dropped: 530 output_compression: NoCompression
Jan 29 12:17:06 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:17:06.816114) EVENT_LOG_v1 {"time_micros": 1769707026816099, "job": 22, "event": "compaction_finished", "compaction_time_micros": 109866, "compaction_time_cpu_micros": 17022, "output_level": 6, "num_output_files": 1, "total_output_size": 8472517, "num_input_records": 5058, "num_output_records": 4528, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 29 12:17:06 np0005601226 ceph-mon[75233]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 12:17:06 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769707026816492, "job": 22, "event": "table_file_deletion", "file_number": 46}
Jan 29 12:17:06 np0005601226 ceph-mon[75233]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 12:17:06 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769707026817252, "job": 22, "event": "table_file_deletion", "file_number": 44}
Jan 29 12:17:06 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:17:06.677338) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:17:06 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:17:06.817292) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:17:06 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:17:06.817297) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:17:06 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:17:06.817298) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:17:06 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:17:06.817300) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:17:06 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:17:06.817302) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:17:06 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:17:06 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1536100416' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:17:07 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v989: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Jan 29 12:17:07 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e174 do_prune osdmap full prune enabled
Jan 29 12:17:07 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e175 e175: 3 total, 3 up, 3 in
Jan 29 12:17:07 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e175: 3 total, 3 up, 3 in
Jan 29 12:17:08 np0005601226 nova_compute[239456]: 2026-01-29 17:17:08.034 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:17:08 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e175 do_prune osdmap full prune enabled
Jan 29 12:17:08 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e176 e176: 3 total, 3 up, 3 in
Jan 29 12:17:08 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e176: 3 total, 3 up, 3 in
Jan 29 12:17:08 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 12:17:08 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 12:17:08 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 12:17:08 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 12:17:08 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 12:17:08 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:17:08 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 12:17:08 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 12:17:08 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 12:17:08 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 12:17:08 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 12:17:08 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 12:17:09 np0005601226 podman[247228]: 2026-01-29 17:17:09.265390038 +0000 UTC m=+0.021974381 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:17:09 np0005601226 podman[247228]: 2026-01-29 17:17:09.374899955 +0000 UTC m=+0.131484258 container create 4bd941d936e047dbeaaecd9f98b03fa5b054492d323923dd0c90f6d30266e01e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:17:09 np0005601226 systemd[1]: Started libpod-conmon-4bd941d936e047dbeaaecd9f98b03fa5b054492d323923dd0c90f6d30266e01e.scope.
Jan 29 12:17:09 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v992: 305 pgs: 305 active+clean; 41 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 2.4 KiB/s wr, 41 op/s
Jan 29 12:17:09 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:17:09 np0005601226 podman[247228]: 2026-01-29 17:17:09.760934725 +0000 UTC m=+0.517519068 container init 4bd941d936e047dbeaaecd9f98b03fa5b054492d323923dd0c90f6d30266e01e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 29 12:17:09 np0005601226 podman[247228]: 2026-01-29 17:17:09.768931063 +0000 UTC m=+0.525515366 container start 4bd941d936e047dbeaaecd9f98b03fa5b054492d323923dd0c90f6d30266e01e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_cannon, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 29 12:17:09 np0005601226 quizzical_cannon[247245]: 167 167
Jan 29 12:17:09 np0005601226 systemd[1]: libpod-4bd941d936e047dbeaaecd9f98b03fa5b054492d323923dd0c90f6d30266e01e.scope: Deactivated successfully.
Jan 29 12:17:09 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 12:17:09 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:17:09 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 12:17:09 np0005601226 podman[247228]: 2026-01-29 17:17:09.909422676 +0000 UTC m=+0.666007009 container attach 4bd941d936e047dbeaaecd9f98b03fa5b054492d323923dd0c90f6d30266e01e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_cannon, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 12:17:09 np0005601226 podman[247228]: 2026-01-29 17:17:09.910144185 +0000 UTC m=+0.666728528 container died 4bd941d936e047dbeaaecd9f98b03fa5b054492d323923dd0c90f6d30266e01e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_cannon, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:17:10 np0005601226 systemd[1]: var-lib-containers-storage-overlay-c9ad30fcfed51aabcc274913d12c5840d087e089ea3b4406d00056a5f4f62ebc-merged.mount: Deactivated successfully.
Jan 29 12:17:10 np0005601226 podman[247228]: 2026-01-29 17:17:10.297755959 +0000 UTC m=+1.054340282 container remove 4bd941d936e047dbeaaecd9f98b03fa5b054492d323923dd0c90f6d30266e01e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:17:10 np0005601226 systemd[1]: libpod-conmon-4bd941d936e047dbeaaecd9f98b03fa5b054492d323923dd0c90f6d30266e01e.scope: Deactivated successfully.
Jan 29 12:17:10 np0005601226 podman[247269]: 2026-01-29 17:17:10.406310429 +0000 UTC m=+0.022221956 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:17:10 np0005601226 podman[247269]: 2026-01-29 17:17:10.553274887 +0000 UTC m=+0.169186394 container create 89afd9b7af97fde2b2600fc4cb0cd89e8ad3df6087b0d8c2c9aa2ac42e9d8b8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_benz, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 29 12:17:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:17:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:17:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:17:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:17:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:17:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:17:10 np0005601226 nova_compute[239456]: 2026-01-29 17:17:10.708 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:17:11 np0005601226 systemd[1]: Started libpod-conmon-89afd9b7af97fde2b2600fc4cb0cd89e8ad3df6087b0d8c2c9aa2ac42e9d8b8e.scope.
Jan 29 12:17:11 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:17:11 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a91c3209c9123935972202945593e1ac7e1a33f497fe4ac4496afc9ac8e7a0e5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:17:11 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a91c3209c9123935972202945593e1ac7e1a33f497fe4ac4496afc9ac8e7a0e5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:17:11 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a91c3209c9123935972202945593e1ac7e1a33f497fe4ac4496afc9ac8e7a0e5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:17:11 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a91c3209c9123935972202945593e1ac7e1a33f497fe4ac4496afc9ac8e7a0e5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:17:11 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a91c3209c9123935972202945593e1ac7e1a33f497fe4ac4496afc9ac8e7a0e5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 12:17:11 np0005601226 ovn_controller[145556]: 2026-01-29T17:17:11Z|00035|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Jan 29 12:17:11 np0005601226 podman[247269]: 2026-01-29 17:17:11.398610508 +0000 UTC m=+1.014522035 container init 89afd9b7af97fde2b2600fc4cb0cd89e8ad3df6087b0d8c2c9aa2ac42e9d8b8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 29 12:17:11 np0005601226 podman[247269]: 2026-01-29 17:17:11.406571515 +0000 UTC m=+1.022483022 container start 89afd9b7af97fde2b2600fc4cb0cd89e8ad3df6087b0d8c2c9aa2ac42e9d8b8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 29 12:17:11 np0005601226 podman[247269]: 2026-01-29 17:17:11.606710234 +0000 UTC m=+1.222621771 container attach 89afd9b7af97fde2b2600fc4cb0cd89e8ad3df6087b0d8c2c9aa2ac42e9d8b8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_benz, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:17:11 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v993: 305 pgs: 305 active+clean; 53 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 62 KiB/s rd, 1.5 MiB/s wr, 90 op/s
Jan 29 12:17:11 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:17:11 np0005601226 exciting_benz[247286]: --> passed data devices: 0 physical, 3 LVM
Jan 29 12:17:11 np0005601226 exciting_benz[247286]: --> All data devices are unavailable
Jan 29 12:17:11 np0005601226 systemd[1]: libpod-89afd9b7af97fde2b2600fc4cb0cd89e8ad3df6087b0d8c2c9aa2ac42e9d8b8e.scope: Deactivated successfully.
Jan 29 12:17:11 np0005601226 podman[247306]: 2026-01-29 17:17:11.830877079 +0000 UTC m=+0.020813509 container died 89afd9b7af97fde2b2600fc4cb0cd89e8ad3df6087b0d8c2c9aa2ac42e9d8b8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_benz, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 12:17:12 np0005601226 systemd[1]: var-lib-containers-storage-overlay-a91c3209c9123935972202945593e1ac7e1a33f497fe4ac4496afc9ac8e7a0e5-merged.mount: Deactivated successfully.
Jan 29 12:17:13 np0005601226 nova_compute[239456]: 2026-01-29 17:17:13.068 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:17:13 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v994: 305 pgs: 305 active+clean; 161 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 58 KiB/s rd, 15 MiB/s wr, 87 op/s
Jan 29 12:17:14 np0005601226 podman[247306]: 2026-01-29 17:17:14.06200769 +0000 UTC m=+2.251944120 container remove 89afd9b7af97fde2b2600fc4cb0cd89e8ad3df6087b0d8c2c9aa2ac42e9d8b8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_benz, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Jan 29 12:17:14 np0005601226 systemd[1]: libpod-conmon-89afd9b7af97fde2b2600fc4cb0cd89e8ad3df6087b0d8c2c9aa2ac42e9d8b8e.scope: Deactivated successfully.
Jan 29 12:17:14 np0005601226 podman[247382]: 2026-01-29 17:17:14.41722406 +0000 UTC m=+0.018788923 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:17:14 np0005601226 podman[247382]: 2026-01-29 17:17:14.798549951 +0000 UTC m=+0.400114794 container create f5595c9b138e4f7d52ce3fcf08dafec6b3c8e780581f990d06c11527bcdff051 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:17:14 np0005601226 systemd[1]: Started libpod-conmon-f5595c9b138e4f7d52ce3fcf08dafec6b3c8e780581f990d06c11527bcdff051.scope.
Jan 29 12:17:15 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:17:15 np0005601226 podman[247382]: 2026-01-29 17:17:15.519186429 +0000 UTC m=+1.120751302 container init f5595c9b138e4f7d52ce3fcf08dafec6b3c8e780581f990d06c11527bcdff051 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_yalow, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 29 12:17:15 np0005601226 podman[247382]: 2026-01-29 17:17:15.525132501 +0000 UTC m=+1.126697344 container start f5595c9b138e4f7d52ce3fcf08dafec6b3c8e780581f990d06c11527bcdff051 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_yalow, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:17:15 np0005601226 jolly_yalow[247399]: 167 167
Jan 29 12:17:15 np0005601226 systemd[1]: libpod-f5595c9b138e4f7d52ce3fcf08dafec6b3c8e780581f990d06c11527bcdff051.scope: Deactivated successfully.
Jan 29 12:17:15 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v995: 305 pgs: 305 active+clean; 237 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 75 KiB/s rd, 25 MiB/s wr, 116 op/s
Jan 29 12:17:15 np0005601226 podman[247382]: 2026-01-29 17:17:15.639177423 +0000 UTC m=+1.240742286 container attach f5595c9b138e4f7d52ce3fcf08dafec6b3c8e780581f990d06c11527bcdff051 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_yalow, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:17:15 np0005601226 podman[247382]: 2026-01-29 17:17:15.640250312 +0000 UTC m=+1.241815175 container died f5595c9b138e4f7d52ce3fcf08dafec6b3c8e780581f990d06c11527bcdff051 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_yalow, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:17:15 np0005601226 nova_compute[239456]: 2026-01-29 17:17:15.711 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:17:16 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:17:16 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e176 do_prune osdmap full prune enabled
Jan 29 12:17:17 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v996: 305 pgs: 305 active+clean; 237 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 20 MiB/s wr, 94 op/s
Jan 29 12:17:17 np0005601226 systemd[1]: var-lib-containers-storage-overlay-c298b3c0365955f219db1b0c013274e0dba226437b746f036c72cee767810888-merged.mount: Deactivated successfully.
Jan 29 12:17:18 np0005601226 nova_compute[239456]: 2026-01-29 17:17:18.070 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:17:18 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e177 e177: 3 total, 3 up, 3 in
Jan 29 12:17:18 np0005601226 nova_compute[239456]: 2026-01-29 17:17:18.604 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:17:18 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e177: 3 total, 3 up, 3 in
Jan 29 12:17:18 np0005601226 podman[247382]: 2026-01-29 17:17:18.806792618 +0000 UTC m=+4.408357461 container remove f5595c9b138e4f7d52ce3fcf08dafec6b3c8e780581f990d06c11527bcdff051 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:17:18 np0005601226 systemd[1]: libpod-conmon-f5595c9b138e4f7d52ce3fcf08dafec6b3c8e780581f990d06c11527bcdff051.scope: Deactivated successfully.
Jan 29 12:17:19 np0005601226 podman[247421]: 2026-01-29 17:17:18.908024069 +0000 UTC m=+0.028317663 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:17:19 np0005601226 podman[247421]: 2026-01-29 17:17:19.417714823 +0000 UTC m=+0.538008397 container create e7783dd495850e412b0d2928d860520ca89dd1f964e1a6143f16890317c67bff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_kepler, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:17:19 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e177 do_prune osdmap full prune enabled
Jan 29 12:17:19 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v998: 305 pgs: 305 active+clean; 297 MiB data, 488 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 26 MiB/s wr, 82 op/s
Jan 29 12:17:19 np0005601226 systemd[1]: Started libpod-conmon-e7783dd495850e412b0d2928d860520ca89dd1f964e1a6143f16890317c67bff.scope.
Jan 29 12:17:19 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:17:19 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aeba741529a0a0cde9a37065fec1f0e666ea69f6451588c97099278da62b4f3a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:17:19 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aeba741529a0a0cde9a37065fec1f0e666ea69f6451588c97099278da62b4f3a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:17:19 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aeba741529a0a0cde9a37065fec1f0e666ea69f6451588c97099278da62b4f3a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:17:19 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aeba741529a0a0cde9a37065fec1f0e666ea69f6451588c97099278da62b4f3a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:17:20 np0005601226 podman[247421]: 2026-01-29 17:17:20.440688098 +0000 UTC m=+1.560981702 container init e7783dd495850e412b0d2928d860520ca89dd1f964e1a6143f16890317c67bff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_kepler, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:17:20 np0005601226 podman[247421]: 2026-01-29 17:17:20.44625679 +0000 UTC m=+1.566550364 container start e7783dd495850e412b0d2928d860520ca89dd1f964e1a6143f16890317c67bff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 29 12:17:20 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e178 e178: 3 total, 3 up, 3 in
Jan 29 12:17:20 np0005601226 nova_compute[239456]: 2026-01-29 17:17:20.613 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:17:20 np0005601226 podman[247421]: 2026-01-29 17:17:20.654337706 +0000 UTC m=+1.774631290 container attach e7783dd495850e412b0d2928d860520ca89dd1f964e1a6143f16890317c67bff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_kepler, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]: {
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:    "0": [
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:        {
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:            "devices": [
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:                "/dev/loop3"
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:            ],
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:            "lv_name": "ceph_lv0",
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:            "lv_size": "21470642176",
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:            "lv_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:            "name": "ceph_lv0",
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:            "tags": {
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:                "ceph.block_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:                "ceph.cluster_name": "ceph",
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:                "ceph.crush_device_class": "",
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:                "ceph.encrypted": "0",
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:                "ceph.objectstore": "bluestore",
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:                "ceph.osd_fsid": "0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1",
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:                "ceph.osd_id": "0",
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:                "ceph.type": "block",
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:                "ceph.vdo": "0",
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:                "ceph.with_tpm": "0"
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:            },
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:            "type": "block",
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:            "vg_name": "ceph_vg0"
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:        }
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:    ],
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:    "1": [
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:        {
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:            "devices": [
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:                "/dev/loop4"
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:            ],
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:            "lv_name": "ceph_lv1",
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:            "lv_size": "21470642176",
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b59b9ee3-7bef-4274-a8bf-0f9cce011ae7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:            "lv_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:            "name": "ceph_lv1",
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:            "tags": {
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:                "ceph.block_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:                "ceph.cluster_name": "ceph",
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:                "ceph.crush_device_class": "",
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:                "ceph.encrypted": "0",
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:                "ceph.objectstore": "bluestore",
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:                "ceph.osd_fsid": "b59b9ee3-7bef-4274-a8bf-0f9cce011ae7",
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:                "ceph.osd_id": "1",
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:                "ceph.type": "block",
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:                "ceph.vdo": "0",
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:                "ceph.with_tpm": "0"
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:            },
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:            "type": "block",
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:            "vg_name": "ceph_vg1"
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:        }
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:    ],
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:    "2": [
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:        {
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:            "devices": [
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:                "/dev/loop5"
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:            ],
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:            "lv_name": "ceph_lv2",
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:            "lv_size": "21470642176",
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=791d3808-828d-4f85-a3de-28df49f6a6ef,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:            "lv_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:            "name": "ceph_lv2",
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:            "tags": {
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:                "ceph.block_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:                "ceph.cluster_name": "ceph",
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:                "ceph.crush_device_class": "",
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:                "ceph.encrypted": "0",
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:                "ceph.objectstore": "bluestore",
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:                "ceph.osd_fsid": "791d3808-828d-4f85-a3de-28df49f6a6ef",
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:                "ceph.osd_id": "2",
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:                "ceph.type": "block",
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:                "ceph.vdo": "0",
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:                "ceph.with_tpm": "0"
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:            },
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:            "type": "block",
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:            "vg_name": "ceph_vg2"
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:        }
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]:    ]
Jan 29 12:17:20 np0005601226 vigilant_kepler[247438]: }
Jan 29 12:17:20 np0005601226 nova_compute[239456]: 2026-01-29 17:17:20.712 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:17:20 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e178: 3 total, 3 up, 3 in
Jan 29 12:17:20 np0005601226 systemd[1]: libpod-e7783dd495850e412b0d2928d860520ca89dd1f964e1a6143f16890317c67bff.scope: Deactivated successfully.
Jan 29 12:17:20 np0005601226 podman[247421]: 2026-01-29 17:17:20.733262569 +0000 UTC m=+1.853556143 container died e7783dd495850e412b0d2928d860520ca89dd1f964e1a6143f16890317c67bff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_kepler, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 29 12:17:21 np0005601226 nova_compute[239456]: 2026-01-29 17:17:21.604 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:17:21 np0005601226 nova_compute[239456]: 2026-01-29 17:17:21.605 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 29 12:17:21 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1000: 305 pgs: 305 active+clean; 301 MiB data, 496 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 18 MiB/s wr, 52 op/s
Jan 29 12:17:21 np0005601226 nova_compute[239456]: 2026-01-29 17:17:21.629 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 29 12:17:21 np0005601226 nova_compute[239456]: 2026-01-29 17:17:21.630 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:17:21 np0005601226 nova_compute[239456]: 2026-01-29 17:17:21.630 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 29 12:17:21 np0005601226 systemd[1]: var-lib-containers-storage-overlay-aeba741529a0a0cde9a37065fec1f0e666ea69f6451588c97099278da62b4f3a-merged.mount: Deactivated successfully.
Jan 29 12:17:21 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:17:22 np0005601226 nova_compute[239456]: 2026-01-29 17:17:22.661 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:17:22 np0005601226 nova_compute[239456]: 2026-01-29 17:17:22.661 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:17:22 np0005601226 podman[247421]: 2026-01-29 17:17:22.889527977 +0000 UTC m=+4.009821581 container remove e7783dd495850e412b0d2928d860520ca89dd1f964e1a6143f16890317c67bff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_kepler, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 29 12:17:22 np0005601226 systemd[1]: libpod-conmon-e7783dd495850e412b0d2928d860520ca89dd1f964e1a6143f16890317c67bff.scope: Deactivated successfully.
Jan 29 12:17:23 np0005601226 nova_compute[239456]: 2026-01-29 17:17:23.074 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:17:23 np0005601226 podman[247521]: 2026-01-29 17:17:23.239314909 +0000 UTC m=+0.018288250 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:17:23 np0005601226 podman[247521]: 2026-01-29 17:17:23.533553075 +0000 UTC m=+0.312526366 container create 5999b15106f9696f170222c23abf341094be4d7839bc6a68e5363bd26154693b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Jan 29 12:17:23 np0005601226 nova_compute[239456]: 2026-01-29 17:17:23.603 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:17:23 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1001: 305 pgs: 305 active+clean; 385 MiB data, 576 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 18 MiB/s wr, 31 op/s
Jan 29 12:17:23 np0005601226 nova_compute[239456]: 2026-01-29 17:17:23.629 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:17:23 np0005601226 nova_compute[239456]: 2026-01-29 17:17:23.629 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:17:23 np0005601226 nova_compute[239456]: 2026-01-29 17:17:23.630 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:17:23 np0005601226 nova_compute[239456]: 2026-01-29 17:17:23.630 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 29 12:17:23 np0005601226 nova_compute[239456]: 2026-01-29 17:17:23.630 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:17:23 np0005601226 systemd[1]: Started libpod-conmon-5999b15106f9696f170222c23abf341094be4d7839bc6a68e5363bd26154693b.scope.
Jan 29 12:17:23 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:17:24 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:17:24 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2607250075' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:17:24 np0005601226 nova_compute[239456]: 2026-01-29 17:17:24.148 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:17:24 np0005601226 nova_compute[239456]: 2026-01-29 17:17:24.293 239460 WARNING nova.virt.libvirt.driver [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 29 12:17:24 np0005601226 nova_compute[239456]: 2026-01-29 17:17:24.294 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4735MB free_disk=59.98827403783798GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 29 12:17:24 np0005601226 nova_compute[239456]: 2026-01-29 17:17:24.295 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:17:24 np0005601226 nova_compute[239456]: 2026-01-29 17:17:24.295 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:17:24 np0005601226 podman[247521]: 2026-01-29 17:17:24.31054185 +0000 UTC m=+1.089515171 container init 5999b15106f9696f170222c23abf341094be4d7839bc6a68e5363bd26154693b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True)
Jan 29 12:17:24 np0005601226 podman[247521]: 2026-01-29 17:17:24.315765233 +0000 UTC m=+1.094738534 container start 5999b15106f9696f170222c23abf341094be4d7839bc6a68e5363bd26154693b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_ramanujan, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 29 12:17:24 np0005601226 trusting_ramanujan[247557]: 167 167
Jan 29 12:17:24 np0005601226 systemd[1]: libpod-5999b15106f9696f170222c23abf341094be4d7839bc6a68e5363bd26154693b.scope: Deactivated successfully.
Jan 29 12:17:24 np0005601226 nova_compute[239456]: 2026-01-29 17:17:24.437 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 29 12:17:24 np0005601226 nova_compute[239456]: 2026-01-29 17:17:24.437 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 29 12:17:24 np0005601226 nova_compute[239456]: 2026-01-29 17:17:24.455 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:17:24 np0005601226 podman[247521]: 2026-01-29 17:17:24.751406706 +0000 UTC m=+1.530380037 container attach 5999b15106f9696f170222c23abf341094be4d7839bc6a68e5363bd26154693b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_ramanujan, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 29 12:17:24 np0005601226 podman[247521]: 2026-01-29 17:17:24.752610929 +0000 UTC m=+1.531584230 container died 5999b15106f9696f170222c23abf341094be4d7839bc6a68e5363bd26154693b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_ramanujan, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:17:24 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:17:24 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2242785380' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:17:24 np0005601226 nova_compute[239456]: 2026-01-29 17:17:24.982 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:17:24 np0005601226 nova_compute[239456]: 2026-01-29 17:17:24.986 239460 DEBUG nova.compute.provider_tree [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:17:25 np0005601226 nova_compute[239456]: 2026-01-29 17:17:25.092 239460 DEBUG nova.scheduler.client.report [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:17:25 np0005601226 nova_compute[239456]: 2026-01-29 17:17:25.094 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 29 12:17:25 np0005601226 nova_compute[239456]: 2026-01-29 17:17:25.094 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.799s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:17:25 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1002: 305 pgs: 305 active+clean; 413 MiB data, 600 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 22 MiB/s wr, 59 op/s
Jan 29 12:17:25 np0005601226 nova_compute[239456]: 2026-01-29 17:17:25.713 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:17:26 np0005601226 nova_compute[239456]: 2026-01-29 17:17:26.095 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:17:26 np0005601226 nova_compute[239456]: 2026-01-29 17:17:26.095 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 29 12:17:26 np0005601226 nova_compute[239456]: 2026-01-29 17:17:26.095 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 29 12:17:26 np0005601226 systemd[1]: var-lib-containers-storage-overlay-315ed14ce5f10fa0d9ef70e6aa7eb6484aa991c1588adf7a2f242975a6cddeda-merged.mount: Deactivated successfully.
Jan 29 12:17:26 np0005601226 nova_compute[239456]: 2026-01-29 17:17:26.187 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 29 12:17:26 np0005601226 nova_compute[239456]: 2026-01-29 17:17:26.187 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:17:26 np0005601226 nova_compute[239456]: 2026-01-29 17:17:26.188 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:17:26 np0005601226 nova_compute[239456]: 2026-01-29 17:17:26.691 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:17:26 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:17:26 np0005601226 podman[247521]: 2026-01-29 17:17:26.938014881 +0000 UTC m=+3.716988182 container remove 5999b15106f9696f170222c23abf341094be4d7839bc6a68e5363bd26154693b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_ramanujan, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 29 12:17:26 np0005601226 systemd[1]: libpod-conmon-5999b15106f9696f170222c23abf341094be4d7839bc6a68e5363bd26154693b.scope: Deactivated successfully.
Jan 29 12:17:27 np0005601226 podman[247603]: 2026-01-29 17:17:27.044519327 +0000 UTC m=+0.022549617 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:17:27 np0005601226 podman[247603]: 2026-01-29 17:17:27.347381587 +0000 UTC m=+0.325411857 container create 8b17ee1bb9dc90f9b9eb130bbdb5045fa013bbf84152517eca00d600ffca4d28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_banzai, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:17:27 np0005601226 systemd[1]: Started libpod-conmon-8b17ee1bb9dc90f9b9eb130bbdb5045fa013bbf84152517eca00d600ffca4d28.scope.
Jan 29 12:17:27 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:17:27 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff86ea15958c2b6aec2ad6e40e8dc8689b235013468a71ee14c279ba4655323d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:17:27 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff86ea15958c2b6aec2ad6e40e8dc8689b235013468a71ee14c279ba4655323d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:17:27 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff86ea15958c2b6aec2ad6e40e8dc8689b235013468a71ee14c279ba4655323d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:17:27 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff86ea15958c2b6aec2ad6e40e8dc8689b235013468a71ee14c279ba4655323d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:17:27 np0005601226 podman[247603]: 2026-01-29 17:17:27.577073043 +0000 UTC m=+0.555103333 container init 8b17ee1bb9dc90f9b9eb130bbdb5045fa013bbf84152517eca00d600ffca4d28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_banzai, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 12:17:27 np0005601226 podman[247603]: 2026-01-29 17:17:27.582570443 +0000 UTC m=+0.560600713 container start 8b17ee1bb9dc90f9b9eb130bbdb5045fa013bbf84152517eca00d600ffca4d28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_banzai, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 29 12:17:27 np0005601226 nova_compute[239456]: 2026-01-29 17:17:27.603 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:17:27 np0005601226 nova_compute[239456]: 2026-01-29 17:17:27.604 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 29 12:17:27 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1003: 305 pgs: 305 active+clean; 413 MiB data, 600 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 17 MiB/s wr, 49 op/s
Jan 29 12:17:27 np0005601226 podman[247603]: 2026-01-29 17:17:27.685663495 +0000 UTC m=+0.663693775 container attach 8b17ee1bb9dc90f9b9eb130bbdb5045fa013bbf84152517eca00d600ffca4d28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_banzai, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:17:27 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e178 do_prune osdmap full prune enabled
Jan 29 12:17:28 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e179 e179: 3 total, 3 up, 3 in
Jan 29 12:17:28 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e179: 3 total, 3 up, 3 in
Jan 29 12:17:28 np0005601226 nova_compute[239456]: 2026-01-29 17:17:28.076 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:17:28 np0005601226 lvm[247698]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 12:17:28 np0005601226 lvm[247698]: VG ceph_vg1 finished
Jan 29 12:17:28 np0005601226 lvm[247697]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 12:17:28 np0005601226 lvm[247697]: VG ceph_vg0 finished
Jan 29 12:17:28 np0005601226 lvm[247700]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 12:17:28 np0005601226 lvm[247700]: VG ceph_vg2 finished
Jan 29 12:17:28 np0005601226 hardcore_banzai[247619]: {}
Jan 29 12:17:28 np0005601226 systemd[1]: libpod-8b17ee1bb9dc90f9b9eb130bbdb5045fa013bbf84152517eca00d600ffca4d28.scope: Deactivated successfully.
Jan 29 12:17:28 np0005601226 systemd[1]: libpod-8b17ee1bb9dc90f9b9eb130bbdb5045fa013bbf84152517eca00d600ffca4d28.scope: Consumed 1.095s CPU time.
Jan 29 12:17:28 np0005601226 podman[247603]: 2026-01-29 17:17:28.502993089 +0000 UTC m=+1.481023359 container died 8b17ee1bb9dc90f9b9eb130bbdb5045fa013bbf84152517eca00d600ffca4d28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_banzai, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 29 12:17:28 np0005601226 systemd[1]: var-lib-containers-storage-overlay-ff86ea15958c2b6aec2ad6e40e8dc8689b235013468a71ee14c279ba4655323d-merged.mount: Deactivated successfully.
Jan 29 12:17:29 np0005601226 podman[247603]: 2026-01-29 17:17:29.094848423 +0000 UTC m=+2.072878683 container remove 8b17ee1bb9dc90f9b9eb130bbdb5045fa013bbf84152517eca00d600ffca4d28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_banzai, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 29 12:17:29 np0005601226 systemd[1]: libpod-conmon-8b17ee1bb9dc90f9b9eb130bbdb5045fa013bbf84152517eca00d600ffca4d28.scope: Deactivated successfully.
Jan 29 12:17:29 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 12:17:29 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:17:29 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 12:17:29 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:17:29 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1005: 305 pgs: 305 active+clean; 525 MiB data, 700 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 25 MiB/s wr, 45 op/s
Jan 29 12:17:30 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:17:30 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:17:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e179 do_prune osdmap full prune enabled
Jan 29 12:17:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e180 e180: 3 total, 3 up, 3 in
Jan 29 12:17:30 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e180: 3 total, 3 up, 3 in
Jan 29 12:17:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:17:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/319594663' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:17:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:17:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/319594663' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:17:30 np0005601226 nova_compute[239456]: 2026-01-29 17:17:30.716 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:17:31 np0005601226 nova_compute[239456]: 2026-01-29 17:17:31.605 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:17:31 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1007: 305 pgs: 305 active+clean; 621 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 66 KiB/s rd, 30 MiB/s wr, 110 op/s
Jan 29 12:17:31 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:17:33 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e180 do_prune osdmap full prune enabled
Jan 29 12:17:33 np0005601226 nova_compute[239456]: 2026-01-29 17:17:33.079 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:17:33 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e181 e181: 3 total, 3 up, 3 in
Jan 29 12:17:33 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e181: 3 total, 3 up, 3 in
Jan 29 12:17:33 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1009: 305 pgs: 305 active+clean; 697 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 68 KiB/s rd, 47 MiB/s wr, 118 op/s
Jan 29 12:17:35 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1010: 305 pgs: 305 active+clean; 961 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 128 KiB/s rd, 65 MiB/s wr, 213 op/s
Jan 29 12:17:35 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:17:35 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1520106331' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:17:35 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:17:35 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1520106331' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:17:35 np0005601226 nova_compute[239456]: 2026-01-29 17:17:35.717 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:17:35 np0005601226 podman[247742]: 2026-01-29 17:17:35.910146409 +0000 UTC m=+0.071709437 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:17:35 np0005601226 podman[247741]: 2026-01-29 17:17:35.916143213 +0000 UTC m=+0.077559318 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 29 12:17:36 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:17:36 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e181 do_prune osdmap full prune enabled
Jan 29 12:17:37 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e182 e182: 3 total, 3 up, 3 in
Jan 29 12:17:37 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e182: 3 total, 3 up, 3 in
Jan 29 12:17:37 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1012: 305 pgs: 305 active+clean; 961 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 83 KiB/s rd, 48 MiB/s wr, 139 op/s
Jan 29 12:17:38 np0005601226 nova_compute[239456]: 2026-01-29 17:17:38.080 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:17:39 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e182 do_prune osdmap full prune enabled
Jan 29 12:17:39 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e183 e183: 3 total, 3 up, 3 in
Jan 29 12:17:39 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e183: 3 total, 3 up, 3 in
Jan 29 12:17:39 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1014: 305 pgs: 305 active+clean; 717 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 135 KiB/s rd, 55 MiB/s wr, 220 op/s
Jan 29 12:17:40 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e183 do_prune osdmap full prune enabled
Jan 29 12:17:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:17:40.279 155625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:17:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:17:40.279 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:17:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:17:40.279 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:17:40 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e184 e184: 3 total, 3 up, 3 in
Jan 29 12:17:40 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e184: 3 total, 3 up, 3 in
Jan 29 12:17:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2026-01-29_17:17:40
Jan 29 12:17:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 12:17:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] do_upmap
Jan 29 12:17:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] pools ['backups', 'images', 'volumes', 'default.rgw.control', '.mgr', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.log', 'vms']
Jan 29 12:17:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 12:17:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:17:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:17:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:17:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:17:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:17:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:17:40 np0005601226 nova_compute[239456]: 2026-01-29 17:17:40.719 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:17:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 12:17:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:17:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 12:17:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:17:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:17:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:17:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:17:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:17:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:17:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:17:41 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1016: 305 pgs: 305 active+clean; 341 MiB data, 856 MiB used, 59 GiB / 60 GiB avail; 94 KiB/s rd, 17 MiB/s wr, 160 op/s
Jan 29 12:17:42 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:17:42 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e184 do_prune osdmap full prune enabled
Jan 29 12:17:42 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e185 e185: 3 total, 3 up, 3 in
Jan 29 12:17:42 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e185: 3 total, 3 up, 3 in
Jan 29 12:17:43 np0005601226 nova_compute[239456]: 2026-01-29 17:17:43.124 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:17:43 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1018: 305 pgs: 305 active+clean; 41 MiB data, 556 MiB used, 59 GiB / 60 GiB avail; 94 KiB/s rd, 17 MiB/s wr, 175 op/s
Jan 29 12:17:44 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e185 do_prune osdmap full prune enabled
Jan 29 12:17:44 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e186 e186: 3 total, 3 up, 3 in
Jan 29 12:17:44 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e186: 3 total, 3 up, 3 in
Jan 29 12:17:45 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1020: 305 pgs: 305 active+clean; 41 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 118 KiB/s rd, 4.0 MiB/s wr, 188 op/s
Jan 29 12:17:45 np0005601226 nova_compute[239456]: 2026-01-29 17:17:45.720 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:17:47 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:17:47 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e186 do_prune osdmap full prune enabled
Jan 29 12:17:47 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e187 e187: 3 total, 3 up, 3 in
Jan 29 12:17:47 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e187: 3 total, 3 up, 3 in
Jan 29 12:17:47 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:17:47 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1426553611' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:17:47 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:17:47 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1426553611' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:17:47 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1022: 305 pgs: 305 active+clean; 41 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 3.7 KiB/s wr, 102 op/s
Jan 29 12:17:48 np0005601226 nova_compute[239456]: 2026-01-29 17:17:48.127 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:17:48 np0005601226 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 29 12:17:49 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1023: 305 pgs: 305 active+clean; 41 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 3.4 KiB/s wr, 123 op/s
Jan 29 12:17:50 np0005601226 nova_compute[239456]: 2026-01-29 17:17:50.721 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:17:51 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:17:51 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3181501453' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:17:51 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:17:51 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3181501453' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:17:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 12:17:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:17:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 12:17:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:17:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.2111111125683983e-07 of space, bias 1.0, pg target 3.633333337705195e-05 quantized to 32 (current 32)
Jan 29 12:17:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:17:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 4.455836294156434e-06 of space, bias 1.0, pg target 0.00133675088824693 quantized to 32 (current 32)
Jan 29 12:17:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:17:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.2036590764956788e-07 of space, bias 1.0, pg target 3.6109772294870364e-05 quantized to 32 (current 32)
Jan 29 12:17:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:17:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659307881647343 of space, bias 1.0, pg target 0.1997792364494203 quantized to 32 (current 32)
Jan 29 12:17:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:17:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.2490854463889634e-06 of space, bias 4.0, pg target 0.001498902535666756 quantized to 16 (current 16)
Jan 29 12:17:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:17:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:17:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:17:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 29 12:17:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:17:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 29 12:17:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:17:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:17:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:17:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 29 12:17:51 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1024: 305 pgs: 305 active+clean; 41 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.9 KiB/s wr, 107 op/s
Jan 29 12:17:52 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:17:52 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e187 do_prune osdmap full prune enabled
Jan 29 12:17:52 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:17:52.559 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '7a:bb:ca', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ee:2a:91:86:08:da'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:17:52 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:17:52.559 155625 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 29 12:17:52 np0005601226 nova_compute[239456]: 2026-01-29 17:17:52.650 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:17:52 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e188 e188: 3 total, 3 up, 3 in
Jan 29 12:17:53 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e188: 3 total, 3 up, 3 in
Jan 29 12:17:53 np0005601226 nova_compute[239456]: 2026-01-29 17:17:53.128 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:17:53 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1026: 305 pgs: 305 active+clean; 41 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.2 KiB/s wr, 41 op/s
Jan 29 12:17:55 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1027: 305 pgs: 305 active+clean; 41 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.0 KiB/s wr, 70 op/s
Jan 29 12:17:55 np0005601226 nova_compute[239456]: 2026-01-29 17:17:55.723 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:17:57 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:17:57 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1028: 305 pgs: 305 active+clean; 41 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.7 KiB/s wr, 59 op/s
Jan 29 12:17:58 np0005601226 nova_compute[239456]: 2026-01-29 17:17:58.132 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:17:59 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1029: 305 pgs: 305 active+clean; 75 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 1.5 MiB/s wr, 50 op/s
Jan 29 12:18:00 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:18:00.562 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ea6bcc65-2563-4fe6-9039-bca7261f4cf7, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:18:00 np0005601226 nova_compute[239456]: 2026-01-29 17:18:00.724 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:18:01 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1030: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 2.1 MiB/s wr, 49 op/s
Jan 29 12:18:02 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:18:03 np0005601226 nova_compute[239456]: 2026-01-29 17:18:03.173 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:18:03 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1031: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 2.0 MiB/s wr, 48 op/s
Jan 29 12:18:04 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:18:04 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3576984412' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:18:04 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:18:04 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3576984412' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:18:04 np0005601226 systemd[1]: virtsecretd.service: Deactivated successfully.
Jan 29 12:18:05 np0005601226 nova_compute[239456]: 2026-01-29 17:18:05.395 239460 DEBUG oslo_concurrency.lockutils [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Acquiring lock "f0dce8a1-b2b9-49db-8805-fd9b75fed5b5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:18:05 np0005601226 nova_compute[239456]: 2026-01-29 17:18:05.396 239460 DEBUG oslo_concurrency.lockutils [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Lock "f0dce8a1-b2b9-49db-8805-fd9b75fed5b5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:18:05 np0005601226 nova_compute[239456]: 2026-01-29 17:18:05.411 239460 DEBUG nova.compute.manager [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 29 12:18:05 np0005601226 nova_compute[239456]: 2026-01-29 17:18:05.492 239460 DEBUG oslo_concurrency.lockutils [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:18:05 np0005601226 nova_compute[239456]: 2026-01-29 17:18:05.493 239460 DEBUG oslo_concurrency.lockutils [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:18:05 np0005601226 nova_compute[239456]: 2026-01-29 17:18:05.501 239460 DEBUG nova.virt.hardware [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 29 12:18:05 np0005601226 nova_compute[239456]: 2026-01-29 17:18:05.502 239460 INFO nova.compute.claims [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 29 12:18:05 np0005601226 nova_compute[239456]: 2026-01-29 17:18:05.611 239460 DEBUG oslo_concurrency.processutils [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:18:05 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1032: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Jan 29 12:18:05 np0005601226 nova_compute[239456]: 2026-01-29 17:18:05.726 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:18:06 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:18:06 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/908808634' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:18:06 np0005601226 nova_compute[239456]: 2026-01-29 17:18:06.102 239460 DEBUG oslo_concurrency.processutils [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:18:06 np0005601226 nova_compute[239456]: 2026-01-29 17:18:06.107 239460 DEBUG nova.compute.provider_tree [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:18:06 np0005601226 nova_compute[239456]: 2026-01-29 17:18:06.122 239460 DEBUG nova.scheduler.client.report [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:18:06 np0005601226 nova_compute[239456]: 2026-01-29 17:18:06.143 239460 DEBUG oslo_concurrency.lockutils [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.650s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:18:06 np0005601226 nova_compute[239456]: 2026-01-29 17:18:06.144 239460 DEBUG nova.compute.manager [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 29 12:18:06 np0005601226 nova_compute[239456]: 2026-01-29 17:18:06.189 239460 DEBUG nova.compute.manager [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 29 12:18:06 np0005601226 nova_compute[239456]: 2026-01-29 17:18:06.189 239460 DEBUG nova.network.neutron [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 29 12:18:06 np0005601226 nova_compute[239456]: 2026-01-29 17:18:06.213 239460 INFO nova.virt.libvirt.driver [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 29 12:18:06 np0005601226 nova_compute[239456]: 2026-01-29 17:18:06.230 239460 DEBUG nova.compute.manager [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 29 12:18:06 np0005601226 nova_compute[239456]: 2026-01-29 17:18:06.354 239460 DEBUG nova.compute.manager [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 29 12:18:06 np0005601226 nova_compute[239456]: 2026-01-29 17:18:06.356 239460 DEBUG nova.virt.libvirt.driver [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 29 12:18:06 np0005601226 nova_compute[239456]: 2026-01-29 17:18:06.357 239460 INFO nova.virt.libvirt.driver [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Creating image(s)#033[00m
Jan 29 12:18:06 np0005601226 nova_compute[239456]: 2026-01-29 17:18:06.508 239460 DEBUG nova.storage.rbd_utils [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] rbd image f0dce8a1-b2b9-49db-8805-fd9b75fed5b5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:18:06 np0005601226 nova_compute[239456]: 2026-01-29 17:18:06.531 239460 DEBUG nova.storage.rbd_utils [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] rbd image f0dce8a1-b2b9-49db-8805-fd9b75fed5b5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:18:06 np0005601226 nova_compute[239456]: 2026-01-29 17:18:06.551 239460 DEBUG nova.storage.rbd_utils [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] rbd image f0dce8a1-b2b9-49db-8805-fd9b75fed5b5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:18:06 np0005601226 nova_compute[239456]: 2026-01-29 17:18:06.554 239460 DEBUG oslo_concurrency.processutils [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/619359f3f53a439a222a6a2a89408201f4394e5d --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:18:06 np0005601226 nova_compute[239456]: 2026-01-29 17:18:06.569 239460 DEBUG nova.policy [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'aa90bbad088947a2a9866efeb934031e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7140162c4cd744d38e65ad1bcdadf016', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 29 12:18:06 np0005601226 nova_compute[239456]: 2026-01-29 17:18:06.627 239460 DEBUG oslo_concurrency.processutils [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/619359f3f53a439a222a6a2a89408201f4394e5d --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:18:06 np0005601226 nova_compute[239456]: 2026-01-29 17:18:06.628 239460 DEBUG oslo_concurrency.lockutils [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Acquiring lock "619359f3f53a439a222a6a2a89408201f4394e5d" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:18:06 np0005601226 nova_compute[239456]: 2026-01-29 17:18:06.629 239460 DEBUG oslo_concurrency.lockutils [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Lock "619359f3f53a439a222a6a2a89408201f4394e5d" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:18:06 np0005601226 nova_compute[239456]: 2026-01-29 17:18:06.629 239460 DEBUG oslo_concurrency.lockutils [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Lock "619359f3f53a439a222a6a2a89408201f4394e5d" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:18:06 np0005601226 nova_compute[239456]: 2026-01-29 17:18:06.648 239460 DEBUG nova.storage.rbd_utils [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] rbd image f0dce8a1-b2b9-49db-8805-fd9b75fed5b5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:18:06 np0005601226 nova_compute[239456]: 2026-01-29 17:18:06.652 239460 DEBUG oslo_concurrency.processutils [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/619359f3f53a439a222a6a2a89408201f4394e5d f0dce8a1-b2b9-49db-8805-fd9b75fed5b5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:18:06 np0005601226 podman[247903]: 2026-01-29 17:18:06.872865455 +0000 UTC m=+0.043996092 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Jan 29 12:18:06 np0005601226 podman[247904]: 2026-01-29 17:18:06.896657904 +0000 UTC m=+0.063436112 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, 
io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 29 12:18:07 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:18:07 np0005601226 nova_compute[239456]: 2026-01-29 17:18:07.546 239460 DEBUG nova.network.neutron [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Successfully created port: d7e6c36c-4b5a-4578-af9a-56118f94ffc5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 29 12:18:07 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1033: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Jan 29 12:18:08 np0005601226 nova_compute[239456]: 2026-01-29 17:18:08.175 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:18:08 np0005601226 nova_compute[239456]: 2026-01-29 17:18:08.246 239460 DEBUG nova.network.neutron [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Successfully updated port: d7e6c36c-4b5a-4578-af9a-56118f94ffc5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 29 12:18:08 np0005601226 nova_compute[239456]: 2026-01-29 17:18:08.263 239460 DEBUG oslo_concurrency.lockutils [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Acquiring lock "refresh_cache-f0dce8a1-b2b9-49db-8805-fd9b75fed5b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:18:08 np0005601226 nova_compute[239456]: 2026-01-29 17:18:08.264 239460 DEBUG oslo_concurrency.lockutils [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Acquired lock "refresh_cache-f0dce8a1-b2b9-49db-8805-fd9b75fed5b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:18:08 np0005601226 nova_compute[239456]: 2026-01-29 17:18:08.264 239460 DEBUG nova.network.neutron [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 29 12:18:08 np0005601226 nova_compute[239456]: 2026-01-29 17:18:08.334 239460 DEBUG nova.compute.manager [req-41c6fdf8-698c-464b-8aec-5b304786ec78 req-2ad6cb52-056a-4427-83dd-2ea280e26aa7 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Received event network-changed-d7e6c36c-4b5a-4578-af9a-56118f94ffc5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:18:08 np0005601226 nova_compute[239456]: 2026-01-29 17:18:08.335 239460 DEBUG nova.compute.manager [req-41c6fdf8-698c-464b-8aec-5b304786ec78 req-2ad6cb52-056a-4427-83dd-2ea280e26aa7 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Refreshing instance network info cache due to event network-changed-d7e6c36c-4b5a-4578-af9a-56118f94ffc5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 29 12:18:08 np0005601226 nova_compute[239456]: 2026-01-29 17:18:08.335 239460 DEBUG oslo_concurrency.lockutils [req-41c6fdf8-698c-464b-8aec-5b304786ec78 req-2ad6cb52-056a-4427-83dd-2ea280e26aa7 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "refresh_cache-f0dce8a1-b2b9-49db-8805-fd9b75fed5b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:18:09 np0005601226 nova_compute[239456]: 2026-01-29 17:18:09.409 239460 DEBUG nova.network.neutron [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 29 12:18:09 np0005601226 nova_compute[239456]: 2026-01-29 17:18:09.497 239460 DEBUG oslo_concurrency.processutils [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/619359f3f53a439a222a6a2a89408201f4394e5d f0dce8a1-b2b9-49db-8805-fd9b75fed5b5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.846s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:18:09 np0005601226 nova_compute[239456]: 2026-01-29 17:18:09.543 239460 DEBUG nova.storage.rbd_utils [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] resizing rbd image f0dce8a1-b2b9-49db-8805-fd9b75fed5b5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 29 12:18:09 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1034: 305 pgs: 305 active+clean; 105 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 2.3 MiB/s wr, 54 op/s
Jan 29 12:18:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:18:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:18:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:18:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:18:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:18:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:18:10 np0005601226 nova_compute[239456]: 2026-01-29 17:18:10.730 239460 DEBUG nova.network.neutron [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Updating instance_info_cache with network_info: [{"id": "d7e6c36c-4b5a-4578-af9a-56118f94ffc5", "address": "fa:16:3e:d5:ce:44", "network": {"id": "c65d65e6-04af-4892-ad96-3d83d148450f", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-295489753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7140162c4cd744d38e65ad1bcdadf016", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7e6c36c-4b", "ovs_interfaceid": "d7e6c36c-4b5a-4578-af9a-56118f94ffc5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:18:10 np0005601226 nova_compute[239456]: 2026-01-29 17:18:10.732 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:18:10 np0005601226 nova_compute[239456]: 2026-01-29 17:18:10.737 239460 DEBUG nova.objects.instance [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Lazy-loading 'migration_context' on Instance uuid f0dce8a1-b2b9-49db-8805-fd9b75fed5b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:18:10 np0005601226 nova_compute[239456]: 2026-01-29 17:18:10.752 239460 DEBUG nova.virt.libvirt.driver [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 29 12:18:10 np0005601226 nova_compute[239456]: 2026-01-29 17:18:10.753 239460 DEBUG nova.virt.libvirt.driver [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Ensure instance console log exists: /var/lib/nova/instances/f0dce8a1-b2b9-49db-8805-fd9b75fed5b5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 29 12:18:10 np0005601226 nova_compute[239456]: 2026-01-29 17:18:10.753 239460 DEBUG oslo_concurrency.lockutils [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:18:10 np0005601226 nova_compute[239456]: 2026-01-29 17:18:10.753 239460 DEBUG oslo_concurrency.lockutils [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:18:10 np0005601226 nova_compute[239456]: 2026-01-29 17:18:10.753 239460 DEBUG oslo_concurrency.lockutils [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:18:10 np0005601226 nova_compute[239456]: 2026-01-29 17:18:10.754 239460 DEBUG oslo_concurrency.lockutils [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Releasing lock "refresh_cache-f0dce8a1-b2b9-49db-8805-fd9b75fed5b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:18:10 np0005601226 nova_compute[239456]: 2026-01-29 17:18:10.754 239460 DEBUG nova.compute.manager [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Instance network_info: |[{"id": "d7e6c36c-4b5a-4578-af9a-56118f94ffc5", "address": "fa:16:3e:d5:ce:44", "network": {"id": "c65d65e6-04af-4892-ad96-3d83d148450f", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-295489753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7140162c4cd744d38e65ad1bcdadf016", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7e6c36c-4b", "ovs_interfaceid": "d7e6c36c-4b5a-4578-af9a-56118f94ffc5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 29 12:18:10 np0005601226 nova_compute[239456]: 2026-01-29 17:18:10.755 239460 DEBUG oslo_concurrency.lockutils [req-41c6fdf8-698c-464b-8aec-5b304786ec78 req-2ad6cb52-056a-4427-83dd-2ea280e26aa7 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquired lock "refresh_cache-f0dce8a1-b2b9-49db-8805-fd9b75fed5b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:18:10 np0005601226 nova_compute[239456]: 2026-01-29 17:18:10.755 239460 DEBUG nova.network.neutron [req-41c6fdf8-698c-464b-8aec-5b304786ec78 req-2ad6cb52-056a-4427-83dd-2ea280e26aa7 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Refreshing network info cache for port d7e6c36c-4b5a-4578-af9a-56118f94ffc5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 29 12:18:10 np0005601226 nova_compute[239456]: 2026-01-29 17:18:10.757 239460 DEBUG nova.virt.libvirt.driver [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Start _get_guest_xml network_info=[{"id": "d7e6c36c-4b5a-4578-af9a-56118f94ffc5", "address": "fa:16:3e:d5:ce:44", "network": {"id": "c65d65e6-04af-4892-ad96-3d83d148450f", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-295489753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7140162c4cd744d38e65ad1bcdadf016", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7e6c36c-4b", "ovs_interfaceid": "d7e6c36c-4b5a-4578-af9a-56118f94ffc5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-29T17:13:26Z,direct_url=<?>,disk_format='qcow2',id=71879218-5462-43bb-aba6-6319695b24fd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='218d87653c0f4776a3f1900d36945229',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-29T17:13:29Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'boot_index': 0, 'encrypted': False, 'image_id': '71879218-5462-43bb-aba6-6319695b24fd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 29 12:18:10 np0005601226 nova_compute[239456]: 2026-01-29 17:18:10.761 239460 WARNING nova.virt.libvirt.driver [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 29 12:18:10 np0005601226 nova_compute[239456]: 2026-01-29 17:18:10.767 239460 DEBUG nova.virt.libvirt.host [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 29 12:18:10 np0005601226 nova_compute[239456]: 2026-01-29 17:18:10.767 239460 DEBUG nova.virt.libvirt.host [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 29 12:18:10 np0005601226 nova_compute[239456]: 2026-01-29 17:18:10.770 239460 DEBUG nova.virt.libvirt.host [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 29 12:18:10 np0005601226 nova_compute[239456]: 2026-01-29 17:18:10.770 239460 DEBUG nova.virt.libvirt.host [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 29 12:18:10 np0005601226 nova_compute[239456]: 2026-01-29 17:18:10.771 239460 DEBUG nova.virt.libvirt.driver [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 29 12:18:10 np0005601226 nova_compute[239456]: 2026-01-29 17:18:10.771 239460 DEBUG nova.virt.hardware [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-29T17:13:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='e367d44e-e23e-4b8e-90d1-56d09c8403b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-29T17:13:26Z,direct_url=<?>,disk_format='qcow2',id=71879218-5462-43bb-aba6-6319695b24fd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='218d87653c0f4776a3f1900d36945229',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-29T17:13:29Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 29 12:18:10 np0005601226 nova_compute[239456]: 2026-01-29 17:18:10.771 239460 DEBUG nova.virt.hardware [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 29 12:18:10 np0005601226 nova_compute[239456]: 2026-01-29 17:18:10.772 239460 DEBUG nova.virt.hardware [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 29 12:18:10 np0005601226 nova_compute[239456]: 2026-01-29 17:18:10.772 239460 DEBUG nova.virt.hardware [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 29 12:18:10 np0005601226 nova_compute[239456]: 2026-01-29 17:18:10.772 239460 DEBUG nova.virt.hardware [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 29 12:18:10 np0005601226 nova_compute[239456]: 2026-01-29 17:18:10.772 239460 DEBUG nova.virt.hardware [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 29 12:18:10 np0005601226 nova_compute[239456]: 2026-01-29 17:18:10.772 239460 DEBUG nova.virt.hardware [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 29 12:18:10 np0005601226 nova_compute[239456]: 2026-01-29 17:18:10.773 239460 DEBUG nova.virt.hardware [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 29 12:18:10 np0005601226 nova_compute[239456]: 2026-01-29 17:18:10.773 239460 DEBUG nova.virt.hardware [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 29 12:18:10 np0005601226 nova_compute[239456]: 2026-01-29 17:18:10.773 239460 DEBUG nova.virt.hardware [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 29 12:18:10 np0005601226 nova_compute[239456]: 2026-01-29 17:18:10.773 239460 DEBUG nova.virt.hardware [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 29 12:18:10 np0005601226 nova_compute[239456]: 2026-01-29 17:18:10.776 239460 DEBUG oslo_concurrency.processutils [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:18:11 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:18:11 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1323341212' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:18:11 np0005601226 nova_compute[239456]: 2026-01-29 17:18:11.282 239460 DEBUG oslo_concurrency.processutils [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:18:11 np0005601226 nova_compute[239456]: 2026-01-29 17:18:11.421 239460 DEBUG nova.storage.rbd_utils [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] rbd image f0dce8a1-b2b9-49db-8805-fd9b75fed5b5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:18:11 np0005601226 nova_compute[239456]: 2026-01-29 17:18:11.424 239460 DEBUG oslo_concurrency.processutils [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:18:11 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1035: 305 pgs: 305 active+clean; 114 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.6 MiB/s wr, 37 op/s
Jan 29 12:18:11 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:18:11 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/858739161' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:18:11 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:18:11 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/858739161' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:18:11 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:18:11 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1103422500' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:18:11 np0005601226 nova_compute[239456]: 2026-01-29 17:18:11.976 239460 DEBUG oslo_concurrency.processutils [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:18:11 np0005601226 nova_compute[239456]: 2026-01-29 17:18:11.978 239460 DEBUG nova.virt.libvirt.vif [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-29T17:18:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-1490031336',display_name='tempest-VolumesSnapshotTestJSON-instance-1490031336',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-1490031336',id=2,image_ref='71879218-5462-43bb-aba6-6319695b24fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKHUGsj9+rSHi2zYIrYlM5voP+SmeT8NPhKY2BWeEM0EvzN2A8jyIT0940OO1F9cpE1qyu/IQNauLfUufkcWbrGzw7QiYx+LgXRK8QgzdytsLW01R2lsc5ReoRFmrt9CUA==',key_name='tempest-keypair-219750126',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7140162c4cd744d38e65ad1bcdadf016',ramdisk_id='',reservation_id='r-zoqdgb1s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='71879218-5462-43bb-aba6-6319695b24fd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesSnapshotTestJSON-783985999',owner_user_name='tempest-VolumesSnapshotTestJSON-783985999-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-29T17:18:06Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aa90bbad088947a2a9866efeb934031e',uuid=f0dce8a1-b2b9-49db-8805-fd9b75fed5b5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d7e6c36c-4b5a-4578-af9a-56118f94ffc5", "address": "fa:16:3e:d5:ce:44", "network": {"id": "c65d65e6-04af-4892-ad96-3d83d148450f", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-295489753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7140162c4cd744d38e65ad1bcdadf016", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7e6c36c-4b", "ovs_interfaceid": "d7e6c36c-4b5a-4578-af9a-56118f94ffc5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 29 12:18:11 np0005601226 nova_compute[239456]: 2026-01-29 17:18:11.978 239460 DEBUG nova.network.os_vif_util [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Converting VIF {"id": "d7e6c36c-4b5a-4578-af9a-56118f94ffc5", "address": "fa:16:3e:d5:ce:44", "network": {"id": "c65d65e6-04af-4892-ad96-3d83d148450f", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-295489753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7140162c4cd744d38e65ad1bcdadf016", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7e6c36c-4b", "ovs_interfaceid": "d7e6c36c-4b5a-4578-af9a-56118f94ffc5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 29 12:18:11 np0005601226 nova_compute[239456]: 2026-01-29 17:18:11.979 239460 DEBUG nova.network.os_vif_util [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d5:ce:44,bridge_name='br-int',has_traffic_filtering=True,id=d7e6c36c-4b5a-4578-af9a-56118f94ffc5,network=Network(c65d65e6-04af-4892-ad96-3d83d148450f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7e6c36c-4b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 29 12:18:11 np0005601226 nova_compute[239456]: 2026-01-29 17:18:11.980 239460 DEBUG nova.objects.instance [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Lazy-loading 'pci_devices' on Instance uuid f0dce8a1-b2b9-49db-8805-fd9b75fed5b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:18:11 np0005601226 nova_compute[239456]: 2026-01-29 17:18:11.992 239460 DEBUG nova.virt.libvirt.driver [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] End _get_guest_xml xml=<domain type="kvm">
Jan 29 12:18:11 np0005601226 nova_compute[239456]:  <uuid>f0dce8a1-b2b9-49db-8805-fd9b75fed5b5</uuid>
Jan 29 12:18:11 np0005601226 nova_compute[239456]:  <name>instance-00000002</name>
Jan 29 12:18:11 np0005601226 nova_compute[239456]:  <memory>131072</memory>
Jan 29 12:18:11 np0005601226 nova_compute[239456]:  <vcpu>1</vcpu>
Jan 29 12:18:11 np0005601226 nova_compute[239456]:  <metadata>
Jan 29 12:18:11 np0005601226 nova_compute[239456]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 29 12:18:11 np0005601226 nova_compute[239456]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 29 12:18:11 np0005601226 nova_compute[239456]:      <nova:name>tempest-VolumesSnapshotTestJSON-instance-1490031336</nova:name>
Jan 29 12:18:11 np0005601226 nova_compute[239456]:      <nova:creationTime>2026-01-29 17:18:10</nova:creationTime>
Jan 29 12:18:11 np0005601226 nova_compute[239456]:      <nova:flavor name="m1.nano">
Jan 29 12:18:11 np0005601226 nova_compute[239456]:        <nova:memory>128</nova:memory>
Jan 29 12:18:11 np0005601226 nova_compute[239456]:        <nova:disk>1</nova:disk>
Jan 29 12:18:11 np0005601226 nova_compute[239456]:        <nova:swap>0</nova:swap>
Jan 29 12:18:11 np0005601226 nova_compute[239456]:        <nova:ephemeral>0</nova:ephemeral>
Jan 29 12:18:11 np0005601226 nova_compute[239456]:        <nova:vcpus>1</nova:vcpus>
Jan 29 12:18:11 np0005601226 nova_compute[239456]:      </nova:flavor>
Jan 29 12:18:11 np0005601226 nova_compute[239456]:      <nova:owner>
Jan 29 12:18:11 np0005601226 nova_compute[239456]:        <nova:user uuid="aa90bbad088947a2a9866efeb934031e">tempest-VolumesSnapshotTestJSON-783985999-project-member</nova:user>
Jan 29 12:18:11 np0005601226 nova_compute[239456]:        <nova:project uuid="7140162c4cd744d38e65ad1bcdadf016">tempest-VolumesSnapshotTestJSON-783985999</nova:project>
Jan 29 12:18:11 np0005601226 nova_compute[239456]:      </nova:owner>
Jan 29 12:18:11 np0005601226 nova_compute[239456]:      <nova:root type="image" uuid="71879218-5462-43bb-aba6-6319695b24fd"/>
Jan 29 12:18:11 np0005601226 nova_compute[239456]:      <nova:ports>
Jan 29 12:18:11 np0005601226 nova_compute[239456]:        <nova:port uuid="d7e6c36c-4b5a-4578-af9a-56118f94ffc5">
Jan 29 12:18:11 np0005601226 nova_compute[239456]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 29 12:18:11 np0005601226 nova_compute[239456]:        </nova:port>
Jan 29 12:18:11 np0005601226 nova_compute[239456]:      </nova:ports>
Jan 29 12:18:11 np0005601226 nova_compute[239456]:    </nova:instance>
Jan 29 12:18:11 np0005601226 nova_compute[239456]:  </metadata>
Jan 29 12:18:11 np0005601226 nova_compute[239456]:  <sysinfo type="smbios">
Jan 29 12:18:11 np0005601226 nova_compute[239456]:    <system>
Jan 29 12:18:11 np0005601226 nova_compute[239456]:      <entry name="manufacturer">RDO</entry>
Jan 29 12:18:11 np0005601226 nova_compute[239456]:      <entry name="product">OpenStack Compute</entry>
Jan 29 12:18:11 np0005601226 nova_compute[239456]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 29 12:18:11 np0005601226 nova_compute[239456]:      <entry name="serial">f0dce8a1-b2b9-49db-8805-fd9b75fed5b5</entry>
Jan 29 12:18:11 np0005601226 nova_compute[239456]:      <entry name="uuid">f0dce8a1-b2b9-49db-8805-fd9b75fed5b5</entry>
Jan 29 12:18:11 np0005601226 nova_compute[239456]:      <entry name="family">Virtual Machine</entry>
Jan 29 12:18:11 np0005601226 nova_compute[239456]:    </system>
Jan 29 12:18:11 np0005601226 nova_compute[239456]:  </sysinfo>
Jan 29 12:18:11 np0005601226 nova_compute[239456]:  <os>
Jan 29 12:18:11 np0005601226 nova_compute[239456]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 29 12:18:11 np0005601226 nova_compute[239456]:    <boot dev="hd"/>
Jan 29 12:18:11 np0005601226 nova_compute[239456]:    <smbios mode="sysinfo"/>
Jan 29 12:18:11 np0005601226 nova_compute[239456]:  </os>
Jan 29 12:18:11 np0005601226 nova_compute[239456]:  <features>
Jan 29 12:18:11 np0005601226 nova_compute[239456]:    <acpi/>
Jan 29 12:18:11 np0005601226 nova_compute[239456]:    <apic/>
Jan 29 12:18:11 np0005601226 nova_compute[239456]:    <vmcoreinfo/>
Jan 29 12:18:11 np0005601226 nova_compute[239456]:  </features>
Jan 29 12:18:11 np0005601226 nova_compute[239456]:  <clock offset="utc">
Jan 29 12:18:11 np0005601226 nova_compute[239456]:    <timer name="pit" tickpolicy="delay"/>
Jan 29 12:18:11 np0005601226 nova_compute[239456]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 29 12:18:11 np0005601226 nova_compute[239456]:    <timer name="hpet" present="no"/>
Jan 29 12:18:11 np0005601226 nova_compute[239456]:  </clock>
Jan 29 12:18:11 np0005601226 nova_compute[239456]:  <cpu mode="host-model" match="exact">
Jan 29 12:18:11 np0005601226 nova_compute[239456]:    <topology sockets="1" cores="1" threads="1"/>
Jan 29 12:18:11 np0005601226 nova_compute[239456]:  </cpu>
Jan 29 12:18:11 np0005601226 nova_compute[239456]:  <devices>
Jan 29 12:18:11 np0005601226 nova_compute[239456]:    <disk type="network" device="disk">
Jan 29 12:18:11 np0005601226 nova_compute[239456]:      <driver type="raw" cache="none"/>
Jan 29 12:18:11 np0005601226 nova_compute[239456]:      <source protocol="rbd" name="vms/f0dce8a1-b2b9-49db-8805-fd9b75fed5b5_disk">
Jan 29 12:18:11 np0005601226 nova_compute[239456]:        <host name="192.168.122.100" port="6789"/>
Jan 29 12:18:11 np0005601226 nova_compute[239456]:      </source>
Jan 29 12:18:11 np0005601226 nova_compute[239456]:      <auth username="openstack">
Jan 29 12:18:11 np0005601226 nova_compute[239456]:        <secret type="ceph" uuid="cc5c72e3-31e0-58b9-8731-456117d38f4a"/>
Jan 29 12:18:11 np0005601226 nova_compute[239456]:      </auth>
Jan 29 12:18:11 np0005601226 nova_compute[239456]:      <target dev="vda" bus="virtio"/>
Jan 29 12:18:11 np0005601226 nova_compute[239456]:    </disk>
Jan 29 12:18:11 np0005601226 nova_compute[239456]:    <disk type="network" device="cdrom">
Jan 29 12:18:11 np0005601226 nova_compute[239456]:      <driver type="raw" cache="none"/>
Jan 29 12:18:11 np0005601226 nova_compute[239456]:      <source protocol="rbd" name="vms/f0dce8a1-b2b9-49db-8805-fd9b75fed5b5_disk.config">
Jan 29 12:18:11 np0005601226 nova_compute[239456]:        <host name="192.168.122.100" port="6789"/>
Jan 29 12:18:11 np0005601226 nova_compute[239456]:      </source>
Jan 29 12:18:11 np0005601226 nova_compute[239456]:      <auth username="openstack">
Jan 29 12:18:11 np0005601226 nova_compute[239456]:        <secret type="ceph" uuid="cc5c72e3-31e0-58b9-8731-456117d38f4a"/>
Jan 29 12:18:11 np0005601226 nova_compute[239456]:      </auth>
Jan 29 12:18:11 np0005601226 nova_compute[239456]:      <target dev="sda" bus="sata"/>
Jan 29 12:18:11 np0005601226 nova_compute[239456]:    </disk>
Jan 29 12:18:11 np0005601226 nova_compute[239456]:    <interface type="ethernet">
Jan 29 12:18:11 np0005601226 nova_compute[239456]:      <mac address="fa:16:3e:d5:ce:44"/>
Jan 29 12:18:11 np0005601226 nova_compute[239456]:      <model type="virtio"/>
Jan 29 12:18:12 np0005601226 nova_compute[239456]:      <driver name="vhost" rx_queue_size="512"/>
Jan 29 12:18:12 np0005601226 nova_compute[239456]:      <mtu size="1442"/>
Jan 29 12:18:12 np0005601226 nova_compute[239456]:      <target dev="tapd7e6c36c-4b"/>
Jan 29 12:18:12 np0005601226 nova_compute[239456]:    </interface>
Jan 29 12:18:12 np0005601226 nova_compute[239456]:    <serial type="pty">
Jan 29 12:18:12 np0005601226 nova_compute[239456]:      <log file="/var/lib/nova/instances/f0dce8a1-b2b9-49db-8805-fd9b75fed5b5/console.log" append="off"/>
Jan 29 12:18:12 np0005601226 nova_compute[239456]:    </serial>
Jan 29 12:18:12 np0005601226 nova_compute[239456]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 29 12:18:12 np0005601226 nova_compute[239456]:    <video>
Jan 29 12:18:12 np0005601226 nova_compute[239456]:      <model type="virtio"/>
Jan 29 12:18:12 np0005601226 nova_compute[239456]:    </video>
Jan 29 12:18:12 np0005601226 nova_compute[239456]:    <input type="tablet" bus="usb"/>
Jan 29 12:18:12 np0005601226 nova_compute[239456]:    <rng model="virtio">
Jan 29 12:18:12 np0005601226 nova_compute[239456]:      <backend model="random">/dev/urandom</backend>
Jan 29 12:18:12 np0005601226 nova_compute[239456]:    </rng>
Jan 29 12:18:12 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root"/>
Jan 29 12:18:12 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:18:12 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:18:12 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:18:12 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:18:12 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:18:12 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:18:12 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:18:12 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:18:12 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:18:12 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:18:12 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:18:12 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:18:12 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:18:12 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:18:12 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:18:12 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:18:12 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:18:12 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:18:12 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:18:12 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:18:12 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:18:12 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:18:12 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:18:12 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:18:12 np0005601226 nova_compute[239456]:    <controller type="usb" index="0"/>
Jan 29 12:18:12 np0005601226 nova_compute[239456]:    <memballoon model="virtio">
Jan 29 12:18:12 np0005601226 nova_compute[239456]:      <stats period="10"/>
Jan 29 12:18:12 np0005601226 nova_compute[239456]:    </memballoon>
Jan 29 12:18:12 np0005601226 nova_compute[239456]:  </devices>
Jan 29 12:18:12 np0005601226 nova_compute[239456]: </domain>
Jan 29 12:18:12 np0005601226 nova_compute[239456]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 29 12:18:12 np0005601226 nova_compute[239456]: 2026-01-29 17:18:11.993 239460 DEBUG nova.compute.manager [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Preparing to wait for external event network-vif-plugged-d7e6c36c-4b5a-4578-af9a-56118f94ffc5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 29 12:18:12 np0005601226 nova_compute[239456]: 2026-01-29 17:18:11.994 239460 DEBUG oslo_concurrency.lockutils [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Acquiring lock "f0dce8a1-b2b9-49db-8805-fd9b75fed5b5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:18:12 np0005601226 nova_compute[239456]: 2026-01-29 17:18:11.994 239460 DEBUG oslo_concurrency.lockutils [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Lock "f0dce8a1-b2b9-49db-8805-fd9b75fed5b5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:18:12 np0005601226 nova_compute[239456]: 2026-01-29 17:18:11.994 239460 DEBUG oslo_concurrency.lockutils [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Lock "f0dce8a1-b2b9-49db-8805-fd9b75fed5b5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:18:12 np0005601226 nova_compute[239456]: 2026-01-29 17:18:11.995 239460 DEBUG nova.virt.libvirt.vif [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-29T17:18:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-1490031336',display_name='tempest-VolumesSnapshotTestJSON-instance-1490031336',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-1490031336',id=2,image_ref='71879218-5462-43bb-aba6-6319695b24fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKHUGsj9+rSHi2zYIrYlM5voP+SmeT8NPhKY2BWeEM0EvzN2A8jyIT0940OO1F9cpE1qyu/IQNauLfUufkcWbrGzw7QiYx+LgXRK8QgzdytsLW01R2lsc5ReoRFmrt9CUA==',key_name='tempest-keypair-219750126',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7140162c4cd744d38e65ad1bcdadf016',ramdisk_id='',reservation_id='r-zoqdgb1s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='71879218-5462-43bb-aba6-6319695b24fd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesSnapshotTestJSON-783985999',owner_user_name='tempest-VolumesSnapshotTestJSON-783985999-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-29T17:18:06Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aa90bbad088947a2a9866efeb934031e',uuid=f0dce8a1-b2b9-49db-8805-fd9b75fed5b5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d7e6c36c-4b5a-4578-af9a-56118f94ffc5", "address": "fa:16:3e:d5:ce:44", "network": {"id": "c65d65e6-04af-4892-ad96-3d83d148450f", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-295489753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7140162c4cd744d38e65ad1bcdadf016", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7e6c36c-4b", "ovs_interfaceid": "d7e6c36c-4b5a-4578-af9a-56118f94ffc5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 29 12:18:12 np0005601226 nova_compute[239456]: 2026-01-29 17:18:11.995 239460 DEBUG nova.network.os_vif_util [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Converting VIF {"id": "d7e6c36c-4b5a-4578-af9a-56118f94ffc5", "address": "fa:16:3e:d5:ce:44", "network": {"id": "c65d65e6-04af-4892-ad96-3d83d148450f", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-295489753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7140162c4cd744d38e65ad1bcdadf016", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7e6c36c-4b", "ovs_interfaceid": "d7e6c36c-4b5a-4578-af9a-56118f94ffc5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 29 12:18:12 np0005601226 nova_compute[239456]: 2026-01-29 17:18:11.995 239460 DEBUG nova.network.os_vif_util [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d5:ce:44,bridge_name='br-int',has_traffic_filtering=True,id=d7e6c36c-4b5a-4578-af9a-56118f94ffc5,network=Network(c65d65e6-04af-4892-ad96-3d83d148450f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7e6c36c-4b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 29 12:18:12 np0005601226 nova_compute[239456]: 2026-01-29 17:18:11.996 239460 DEBUG os_vif [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d5:ce:44,bridge_name='br-int',has_traffic_filtering=True,id=d7e6c36c-4b5a-4578-af9a-56118f94ffc5,network=Network(c65d65e6-04af-4892-ad96-3d83d148450f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7e6c36c-4b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 29 12:18:12 np0005601226 nova_compute[239456]: 2026-01-29 17:18:11.996 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:18:12 np0005601226 nova_compute[239456]: 2026-01-29 17:18:11.997 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:18:12 np0005601226 nova_compute[239456]: 2026-01-29 17:18:11.997 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 29 12:18:12 np0005601226 nova_compute[239456]: 2026-01-29 17:18:11.999 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:18:12 np0005601226 nova_compute[239456]: 2026-01-29 17:18:12.000 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd7e6c36c-4b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:18:12 np0005601226 nova_compute[239456]: 2026-01-29 17:18:12.000 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd7e6c36c-4b, col_values=(('external_ids', {'iface-id': 'd7e6c36c-4b5a-4578-af9a-56118f94ffc5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d5:ce:44', 'vm-uuid': 'f0dce8a1-b2b9-49db-8805-fd9b75fed5b5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:18:12 np0005601226 nova_compute[239456]: 2026-01-29 17:18:12.002 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:18:12 np0005601226 NetworkManager[49020]: <info>  [1769707092.0026] manager: (tapd7e6c36c-4b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/25)
Jan 29 12:18:12 np0005601226 nova_compute[239456]: 2026-01-29 17:18:12.004 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 29 12:18:12 np0005601226 nova_compute[239456]: 2026-01-29 17:18:12.007 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:18:12 np0005601226 nova_compute[239456]: 2026-01-29 17:18:12.008 239460 INFO os_vif [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d5:ce:44,bridge_name='br-int',has_traffic_filtering=True,id=d7e6c36c-4b5a-4578-af9a-56118f94ffc5,network=Network(c65d65e6-04af-4892-ad96-3d83d148450f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7e6c36c-4b')#033[00m
Jan 29 12:18:12 np0005601226 nova_compute[239456]: 2026-01-29 17:18:12.047 239460 DEBUG nova.network.neutron [req-41c6fdf8-698c-464b-8aec-5b304786ec78 req-2ad6cb52-056a-4427-83dd-2ea280e26aa7 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Updated VIF entry in instance network info cache for port d7e6c36c-4b5a-4578-af9a-56118f94ffc5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 29 12:18:12 np0005601226 nova_compute[239456]: 2026-01-29 17:18:12.047 239460 DEBUG nova.network.neutron [req-41c6fdf8-698c-464b-8aec-5b304786ec78 req-2ad6cb52-056a-4427-83dd-2ea280e26aa7 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Updating instance_info_cache with network_info: [{"id": "d7e6c36c-4b5a-4578-af9a-56118f94ffc5", "address": "fa:16:3e:d5:ce:44", "network": {"id": "c65d65e6-04af-4892-ad96-3d83d148450f", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-295489753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7140162c4cd744d38e65ad1bcdadf016", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7e6c36c-4b", "ovs_interfaceid": "d7e6c36c-4b5a-4578-af9a-56118f94ffc5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:18:12 np0005601226 nova_compute[239456]: 2026-01-29 17:18:12.063 239460 DEBUG oslo_concurrency.lockutils [req-41c6fdf8-698c-464b-8aec-5b304786ec78 req-2ad6cb52-056a-4427-83dd-2ea280e26aa7 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Releasing lock "refresh_cache-f0dce8a1-b2b9-49db-8805-fd9b75fed5b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:18:12 np0005601226 nova_compute[239456]: 2026-01-29 17:18:12.210 239460 DEBUG nova.virt.libvirt.driver [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:18:12 np0005601226 nova_compute[239456]: 2026-01-29 17:18:12.211 239460 DEBUG nova.virt.libvirt.driver [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:18:12 np0005601226 nova_compute[239456]: 2026-01-29 17:18:12.211 239460 DEBUG nova.virt.libvirt.driver [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] No VIF found with MAC fa:16:3e:d5:ce:44, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 29 12:18:12 np0005601226 nova_compute[239456]: 2026-01-29 17:18:12.211 239460 INFO nova.virt.libvirt.driver [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Using config drive#033[00m
Jan 29 12:18:12 np0005601226 nova_compute[239456]: 2026-01-29 17:18:12.232 239460 DEBUG nova.storage.rbd_utils [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] rbd image f0dce8a1-b2b9-49db-8805-fd9b75fed5b5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:18:12 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:18:12 np0005601226 nova_compute[239456]: 2026-01-29 17:18:12.481 239460 INFO nova.virt.libvirt.driver [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Creating config drive at /var/lib/nova/instances/f0dce8a1-b2b9-49db-8805-fd9b75fed5b5/disk.config#033[00m
Jan 29 12:18:12 np0005601226 nova_compute[239456]: 2026-01-29 17:18:12.485 239460 DEBUG oslo_concurrency.processutils [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f0dce8a1-b2b9-49db-8805-fd9b75fed5b5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpiw1u8__v execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:18:12 np0005601226 nova_compute[239456]: 2026-01-29 17:18:12.601 239460 DEBUG oslo_concurrency.processutils [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f0dce8a1-b2b9-49db-8805-fd9b75fed5b5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpiw1u8__v" returned: 0 in 0.117s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:18:12 np0005601226 nova_compute[239456]: 2026-01-29 17:18:12.639 239460 DEBUG nova.storage.rbd_utils [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] rbd image f0dce8a1-b2b9-49db-8805-fd9b75fed5b5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:18:12 np0005601226 nova_compute[239456]: 2026-01-29 17:18:12.644 239460 DEBUG oslo_concurrency.processutils [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f0dce8a1-b2b9-49db-8805-fd9b75fed5b5/disk.config f0dce8a1-b2b9-49db-8805-fd9b75fed5b5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:18:13 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1036: 305 pgs: 305 active+clean; 130 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.5 MiB/s wr, 38 op/s
Jan 29 12:18:14 np0005601226 nova_compute[239456]: 2026-01-29 17:18:14.827 239460 DEBUG oslo_concurrency.processutils [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f0dce8a1-b2b9-49db-8805-fd9b75fed5b5/disk.config f0dce8a1-b2b9-49db-8805-fd9b75fed5b5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.183s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:18:14 np0005601226 nova_compute[239456]: 2026-01-29 17:18:14.828 239460 INFO nova.virt.libvirt.driver [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Deleting local config drive /var/lib/nova/instances/f0dce8a1-b2b9-49db-8805-fd9b75fed5b5/disk.config because it was imported into RBD.#033[00m
Jan 29 12:18:14 np0005601226 systemd[1]: Starting libvirt secret daemon...
Jan 29 12:18:14 np0005601226 systemd[1]: Started libvirt secret daemon.
Jan 29 12:18:14 np0005601226 kernel: tapd7e6c36c-4b: entered promiscuous mode
Jan 29 12:18:14 np0005601226 NetworkManager[49020]: <info>  [1769707094.8897] manager: (tapd7e6c36c-4b): new Tun device (/org/freedesktop/NetworkManager/Devices/26)
Jan 29 12:18:14 np0005601226 systemd-udevd[248175]: Network interface NamePolicy= disabled on kernel command line.
Jan 29 12:18:14 np0005601226 ovn_controller[145556]: 2026-01-29T17:18:14Z|00036|binding|INFO|Claiming lport d7e6c36c-4b5a-4578-af9a-56118f94ffc5 for this chassis.
Jan 29 12:18:14 np0005601226 ovn_controller[145556]: 2026-01-29T17:18:14Z|00037|binding|INFO|d7e6c36c-4b5a-4578-af9a-56118f94ffc5: Claiming fa:16:3e:d5:ce:44 10.100.0.13
Jan 29 12:18:14 np0005601226 nova_compute[239456]: 2026-01-29 17:18:14.929 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:18:14 np0005601226 nova_compute[239456]: 2026-01-29 17:18:14.932 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:18:14 np0005601226 NetworkManager[49020]: <info>  [1769707094.9413] device (tapd7e6c36c-4b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 29 12:18:14 np0005601226 NetworkManager[49020]: <info>  [1769707094.9419] device (tapd7e6c36c-4b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 29 12:18:14 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:18:14.940 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d5:ce:44 10.100.0.13'], port_security=['fa:16:3e:d5:ce:44 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'f0dce8a1-b2b9-49db-8805-fd9b75fed5b5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c65d65e6-04af-4892-ad96-3d83d148450f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7140162c4cd744d38e65ad1bcdadf016', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9f027cde-583c-43d4-9cd2-5ffabc54095e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a5702b8d-5b0f-4c7d-bc4d-4e202a7e2b31, chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], logical_port=d7e6c36c-4b5a-4578-af9a-56118f94ffc5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:18:14 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:18:14.941 155625 INFO neutron.agent.ovn.metadata.agent [-] Port d7e6c36c-4b5a-4578-af9a-56118f94ffc5 in datapath c65d65e6-04af-4892-ad96-3d83d148450f bound to our chassis#033[00m
Jan 29 12:18:14 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:18:14.943 155625 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c65d65e6-04af-4892-ad96-3d83d148450f#033[00m
Jan 29 12:18:14 np0005601226 systemd-machined[207561]: New machine qemu-2-instance-00000002.
Jan 29 12:18:14 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:18:14.951 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[3aa45825-849e-4abe-b060-fd54779ebd1b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:18:14 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:18:14.952 155625 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc65d65e6-01 in ovnmeta-c65d65e6-04af-4892-ad96-3d83d148450f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 29 12:18:14 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:18:14.953 246354 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc65d65e6-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 29 12:18:14 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:18:14.954 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[4e946e2f-86ed-46d2-bf50-e6548acf4d7c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:18:14 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:18:14.954 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[450f7c4a-1b5d-42c0-8235-f7db884135e9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:18:14 np0005601226 ovn_controller[145556]: 2026-01-29T17:18:14Z|00038|binding|INFO|Setting lport d7e6c36c-4b5a-4578-af9a-56118f94ffc5 ovn-installed in OVS
Jan 29 12:18:14 np0005601226 ovn_controller[145556]: 2026-01-29T17:18:14Z|00039|binding|INFO|Setting lport d7e6c36c-4b5a-4578-af9a-56118f94ffc5 up in Southbound
Jan 29 12:18:14 np0005601226 systemd[1]: Started Virtual Machine qemu-2-instance-00000002.
Jan 29 12:18:14 np0005601226 nova_compute[239456]: 2026-01-29 17:18:14.963 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:18:14 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:18:14.965 156164 DEBUG oslo.privsep.daemon [-] privsep: reply[57c4e205-9856-4622-86c8-c5a5e36d99ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:18:14 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:18:14.990 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[9c8bd634-a2a8-46a7-92bc-bf3b6bf2a4b1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:18:15 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:18:15.010 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[2849a657-3dfa-4d0b-95b0-912eb38f5784]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:18:15 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:18:15.014 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[8daba1ef-ca94-47d8-b963-33f9fb2fdb2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:18:15 np0005601226 systemd-udevd[248179]: Network interface NamePolicy= disabled on kernel command line.
Jan 29 12:18:15 np0005601226 NetworkManager[49020]: <info>  [1769707095.0153] manager: (tapc65d65e6-00): new Veth device (/org/freedesktop/NetworkManager/Devices/27)
Jan 29 12:18:15 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:18:15.035 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[03c66ecf-bbf4-4c7d-9306-b1562cc68442]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:18:15 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:18:15.037 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[4bd23419-56c7-4b0b-9e6b-a286945d20f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:18:15 np0005601226 NetworkManager[49020]: <info>  [1769707095.0541] device (tapc65d65e6-00): carrier: link connected
Jan 29 12:18:15 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:18:15.055 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[0b05655a-7e2f-4151-8bec-a04ea47ab58f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:18:15 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:18:15.070 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[d6413b3e-caa7-4f92-89ca-cfb60673c435]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc65d65e6-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a3:66:d5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 444855, 'reachable_time': 21081, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 248210, 'error': None, 'target': 'ovnmeta-c65d65e6-04af-4892-ad96-3d83d148450f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:18:15 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:18:15.084 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[78b9979e-1870-4c07-b9c6-ac774324b49d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea3:66d5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 444855, 'tstamp': 444855}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 248211, 'error': None, 'target': 'ovnmeta-c65d65e6-04af-4892-ad96-3d83d148450f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:18:15 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:18:15.097 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[77e7ad20-9f46-400e-b768-91a0b10f5e73]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc65d65e6-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a3:66:d5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 444855, 'reachable_time': 21081, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 248212, 'error': None, 'target': 'ovnmeta-c65d65e6-04af-4892-ad96-3d83d148450f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:18:15 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:18:15.120 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[44cf1cc4-d0e3-4a35-b631-2dd2a5d201e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:18:15 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:18:15.158 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[8b49fb75-653f-4c90-8fcb-8897d6624d30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:18:15 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:18:15.159 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc65d65e6-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:18:15 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:18:15.159 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 29 12:18:15 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:18:15.160 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc65d65e6-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:18:15 np0005601226 kernel: tapc65d65e6-00: entered promiscuous mode
Jan 29 12:18:15 np0005601226 nova_compute[239456]: 2026-01-29 17:18:15.161 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:18:15 np0005601226 NetworkManager[49020]: <info>  [1769707095.1625] manager: (tapc65d65e6-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/28)
Jan 29 12:18:15 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:18:15.165 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc65d65e6-00, col_values=(('external_ids', {'iface-id': '56fcfe53-391b-4f05-a182-2812cd40a46e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:18:15 np0005601226 ovn_controller[145556]: 2026-01-29T17:18:15Z|00040|binding|INFO|Releasing lport 56fcfe53-391b-4f05-a182-2812cd40a46e from this chassis (sb_readonly=0)
Jan 29 12:18:15 np0005601226 nova_compute[239456]: 2026-01-29 17:18:15.165 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:18:15 np0005601226 nova_compute[239456]: 2026-01-29 17:18:15.166 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:18:15 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:18:15.168 155625 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c65d65e6-04af-4892-ad96-3d83d148450f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c65d65e6-04af-4892-ad96-3d83d148450f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 29 12:18:15 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:18:15.169 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[09f332ee-2261-4511-ba31-58d680eee21b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:18:15 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:18:15.170 155625 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 29 12:18:15 np0005601226 ovn_metadata_agent[155620]: global
Jan 29 12:18:15 np0005601226 ovn_metadata_agent[155620]:    log         /dev/log local0 debug
Jan 29 12:18:15 np0005601226 ovn_metadata_agent[155620]:    log-tag     haproxy-metadata-proxy-c65d65e6-04af-4892-ad96-3d83d148450f
Jan 29 12:18:15 np0005601226 ovn_metadata_agent[155620]:    user        root
Jan 29 12:18:15 np0005601226 ovn_metadata_agent[155620]:    group       root
Jan 29 12:18:15 np0005601226 ovn_metadata_agent[155620]:    maxconn     1024
Jan 29 12:18:15 np0005601226 ovn_metadata_agent[155620]:    pidfile     /var/lib/neutron/external/pids/c65d65e6-04af-4892-ad96-3d83d148450f.pid.haproxy
Jan 29 12:18:15 np0005601226 ovn_metadata_agent[155620]:    daemon
Jan 29 12:18:15 np0005601226 ovn_metadata_agent[155620]: 
Jan 29 12:18:15 np0005601226 ovn_metadata_agent[155620]: defaults
Jan 29 12:18:15 np0005601226 ovn_metadata_agent[155620]:    log global
Jan 29 12:18:15 np0005601226 ovn_metadata_agent[155620]:    mode http
Jan 29 12:18:15 np0005601226 ovn_metadata_agent[155620]:    option httplog
Jan 29 12:18:15 np0005601226 ovn_metadata_agent[155620]:    option dontlognull
Jan 29 12:18:15 np0005601226 ovn_metadata_agent[155620]:    option http-server-close
Jan 29 12:18:15 np0005601226 ovn_metadata_agent[155620]:    option forwardfor
Jan 29 12:18:15 np0005601226 ovn_metadata_agent[155620]:    retries                 3
Jan 29 12:18:15 np0005601226 ovn_metadata_agent[155620]:    timeout http-request    30s
Jan 29 12:18:15 np0005601226 ovn_metadata_agent[155620]:    timeout connect         30s
Jan 29 12:18:15 np0005601226 ovn_metadata_agent[155620]:    timeout client          32s
Jan 29 12:18:15 np0005601226 ovn_metadata_agent[155620]:    timeout server          32s
Jan 29 12:18:15 np0005601226 ovn_metadata_agent[155620]:    timeout http-keep-alive 30s
Jan 29 12:18:15 np0005601226 ovn_metadata_agent[155620]: 
Jan 29 12:18:15 np0005601226 ovn_metadata_agent[155620]: 
Jan 29 12:18:15 np0005601226 ovn_metadata_agent[155620]: listen listener
Jan 29 12:18:15 np0005601226 ovn_metadata_agent[155620]:    bind 169.254.169.254:80
Jan 29 12:18:15 np0005601226 ovn_metadata_agent[155620]:    server metadata /var/lib/neutron/metadata_proxy
Jan 29 12:18:15 np0005601226 ovn_metadata_agent[155620]:    http-request add-header X-OVN-Network-ID c65d65e6-04af-4892-ad96-3d83d148450f
Jan 29 12:18:15 np0005601226 ovn_metadata_agent[155620]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 29 12:18:15 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:18:15.170 155625 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c65d65e6-04af-4892-ad96-3d83d148450f', 'env', 'PROCESS_TAG=haproxy-c65d65e6-04af-4892-ad96-3d83d148450f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c65d65e6-04af-4892-ad96-3d83d148450f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 29 12:18:15 np0005601226 nova_compute[239456]: 2026-01-29 17:18:15.170 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:18:15 np0005601226 podman[248245]: 2026-01-29 17:18:15.470458078 +0000 UTC m=+0.019637207 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 29 12:18:15 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1037: 305 pgs: 305 active+clean; 134 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Jan 29 12:18:15 np0005601226 nova_compute[239456]: 2026-01-29 17:18:15.728 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:18:15 np0005601226 nova_compute[239456]: 2026-01-29 17:18:15.845 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:18:15 np0005601226 nova_compute[239456]: 2026-01-29 17:18:15.868 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Triggering sync for uuid f0dce8a1-b2b9-49db-8805-fd9b75fed5b5 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Jan 29 12:18:15 np0005601226 nova_compute[239456]: 2026-01-29 17:18:15.868 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquiring lock "f0dce8a1-b2b9-49db-8805-fd9b75fed5b5" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:18:15 np0005601226 nova_compute[239456]: 2026-01-29 17:18:15.916 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769707095.9157672, f0dce8a1-b2b9-49db-8805-fd9b75fed5b5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:18:15 np0005601226 nova_compute[239456]: 2026-01-29 17:18:15.916 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] VM Started (Lifecycle Event)#033[00m
Jan 29 12:18:15 np0005601226 nova_compute[239456]: 2026-01-29 17:18:15.941 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:18:15 np0005601226 nova_compute[239456]: 2026-01-29 17:18:15.945 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769707095.9159849, f0dce8a1-b2b9-49db-8805-fd9b75fed5b5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:18:15 np0005601226 nova_compute[239456]: 2026-01-29 17:18:15.945 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] VM Paused (Lifecycle Event)#033[00m
Jan 29 12:18:15 np0005601226 nova_compute[239456]: 2026-01-29 17:18:15.974 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:18:15 np0005601226 nova_compute[239456]: 2026-01-29 17:18:15.977 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 29 12:18:16 np0005601226 nova_compute[239456]: 2026-01-29 17:18:16.010 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 29 12:18:16 np0005601226 podman[248245]: 2026-01-29 17:18:16.48312401 +0000 UTC m=+1.032303119 container create 3f4f354ae6c56650d36127fbd8c4cd80d0cf74802bed489095f4ae95a58d1d96 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c65d65e6-04af-4892-ad96-3d83d148450f, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 29 12:18:16 np0005601226 systemd[1]: Started libpod-conmon-3f4f354ae6c56650d36127fbd8c4cd80d0cf74802bed489095f4ae95a58d1d96.scope.
Jan 29 12:18:16 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:18:16 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b50dc4c2bcaa1adb3ccbf825d818226c164e63732f13af2b574f54a96483bac2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 29 12:18:17 np0005601226 nova_compute[239456]: 2026-01-29 17:18:17.003 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:18:17 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:18:17 np0005601226 nova_compute[239456]: 2026-01-29 17:18:17.484 239460 DEBUG nova.compute.manager [req-f26254dc-0168-494d-a231-1c41bbd4a941 req-29846b1b-f398-4baf-b24e-87673db321af 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Received event network-vif-plugged-d7e6c36c-4b5a-4578-af9a-56118f94ffc5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:18:17 np0005601226 nova_compute[239456]: 2026-01-29 17:18:17.485 239460 DEBUG oslo_concurrency.lockutils [req-f26254dc-0168-494d-a231-1c41bbd4a941 req-29846b1b-f398-4baf-b24e-87673db321af 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "f0dce8a1-b2b9-49db-8805-fd9b75fed5b5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:18:17 np0005601226 nova_compute[239456]: 2026-01-29 17:18:17.485 239460 DEBUG oslo_concurrency.lockutils [req-f26254dc-0168-494d-a231-1c41bbd4a941 req-29846b1b-f398-4baf-b24e-87673db321af 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "f0dce8a1-b2b9-49db-8805-fd9b75fed5b5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:18:17 np0005601226 nova_compute[239456]: 2026-01-29 17:18:17.485 239460 DEBUG oslo_concurrency.lockutils [req-f26254dc-0168-494d-a231-1c41bbd4a941 req-29846b1b-f398-4baf-b24e-87673db321af 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "f0dce8a1-b2b9-49db-8805-fd9b75fed5b5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:18:17 np0005601226 nova_compute[239456]: 2026-01-29 17:18:17.486 239460 DEBUG nova.compute.manager [req-f26254dc-0168-494d-a231-1c41bbd4a941 req-29846b1b-f398-4baf-b24e-87673db321af 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Processing event network-vif-plugged-d7e6c36c-4b5a-4578-af9a-56118f94ffc5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 29 12:18:17 np0005601226 nova_compute[239456]: 2026-01-29 17:18:17.487 239460 DEBUG nova.compute.manager [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 29 12:18:17 np0005601226 nova_compute[239456]: 2026-01-29 17:18:17.490 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769707097.490578, f0dce8a1-b2b9-49db-8805-fd9b75fed5b5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:18:17 np0005601226 nova_compute[239456]: 2026-01-29 17:18:17.491 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] VM Resumed (Lifecycle Event)#033[00m
Jan 29 12:18:17 np0005601226 nova_compute[239456]: 2026-01-29 17:18:17.493 239460 DEBUG nova.virt.libvirt.driver [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 29 12:18:17 np0005601226 nova_compute[239456]: 2026-01-29 17:18:17.496 239460 INFO nova.virt.libvirt.driver [-] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Instance spawned successfully.#033[00m
Jan 29 12:18:17 np0005601226 nova_compute[239456]: 2026-01-29 17:18:17.497 239460 DEBUG nova.virt.libvirt.driver [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 29 12:18:17 np0005601226 nova_compute[239456]: 2026-01-29 17:18:17.530 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:18:17 np0005601226 nova_compute[239456]: 2026-01-29 17:18:17.534 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 29 12:18:17 np0005601226 nova_compute[239456]: 2026-01-29 17:18:17.581 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 29 12:18:17 np0005601226 nova_compute[239456]: 2026-01-29 17:18:17.589 239460 DEBUG nova.virt.libvirt.driver [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:18:17 np0005601226 nova_compute[239456]: 2026-01-29 17:18:17.589 239460 DEBUG nova.virt.libvirt.driver [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:18:17 np0005601226 nova_compute[239456]: 2026-01-29 17:18:17.590 239460 DEBUG nova.virt.libvirt.driver [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:18:17 np0005601226 nova_compute[239456]: 2026-01-29 17:18:17.591 239460 DEBUG nova.virt.libvirt.driver [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:18:17 np0005601226 nova_compute[239456]: 2026-01-29 17:18:17.591 239460 DEBUG nova.virt.libvirt.driver [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:18:17 np0005601226 nova_compute[239456]: 2026-01-29 17:18:17.592 239460 DEBUG nova.virt.libvirt.driver [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:18:17 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1038: 305 pgs: 305 active+clean; 134 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.8 MiB/s wr, 42 op/s
Jan 29 12:18:17 np0005601226 nova_compute[239456]: 2026-01-29 17:18:17.642 239460 INFO nova.compute.manager [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Took 11.29 seconds to spawn the instance on the hypervisor.#033[00m
Jan 29 12:18:17 np0005601226 nova_compute[239456]: 2026-01-29 17:18:17.642 239460 DEBUG nova.compute.manager [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:18:17 np0005601226 nova_compute[239456]: 2026-01-29 17:18:17.698 239460 INFO nova.compute.manager [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Took 12.23 seconds to build instance.#033[00m
Jan 29 12:18:17 np0005601226 nova_compute[239456]: 2026-01-29 17:18:17.823 239460 DEBUG oslo_concurrency.lockutils [None req-a8f62766-bc7f-4fab-98c0-55b149cc33a7 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Lock "f0dce8a1-b2b9-49db-8805-fd9b75fed5b5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.428s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:18:17 np0005601226 nova_compute[239456]: 2026-01-29 17:18:17.824 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "f0dce8a1-b2b9-49db-8805-fd9b75fed5b5" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 1.955s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:18:17 np0005601226 nova_compute[239456]: 2026-01-29 17:18:17.824 239460 INFO nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 29 12:18:17 np0005601226 nova_compute[239456]: 2026-01-29 17:18:17.824 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "f0dce8a1-b2b9-49db-8805-fd9b75fed5b5" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:18:17 np0005601226 podman[248245]: 2026-01-29 17:18:17.938311635 +0000 UTC m=+2.487490744 container init 3f4f354ae6c56650d36127fbd8c4cd80d0cf74802bed489095f4ae95a58d1d96 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c65d65e6-04af-4892-ad96-3d83d148450f, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 29 12:18:17 np0005601226 podman[248245]: 2026-01-29 17:18:17.949068218 +0000 UTC m=+2.498247317 container start 3f4f354ae6c56650d36127fbd8c4cd80d0cf74802bed489095f4ae95a58d1d96 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c65d65e6-04af-4892-ad96-3d83d148450f, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 29 12:18:17 np0005601226 neutron-haproxy-ovnmeta-c65d65e6-04af-4892-ad96-3d83d148450f[248302]: [NOTICE]   (248306) : New worker (248308) forked
Jan 29 12:18:17 np0005601226 neutron-haproxy-ovnmeta-c65d65e6-04af-4892-ad96-3d83d148450f[248302]: [NOTICE]   (248306) : Loading success.
Jan 29 12:18:19 np0005601226 nova_compute[239456]: 2026-01-29 17:18:19.578 239460 DEBUG nova.compute.manager [req-eb3fee3a-817a-4e18-9a81-135b581ece8e req-18fd3705-255a-456b-8b75-8aec3d30ef9a 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Received event network-vif-plugged-d7e6c36c-4b5a-4578-af9a-56118f94ffc5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:18:19 np0005601226 nova_compute[239456]: 2026-01-29 17:18:19.578 239460 DEBUG oslo_concurrency.lockutils [req-eb3fee3a-817a-4e18-9a81-135b581ece8e req-18fd3705-255a-456b-8b75-8aec3d30ef9a 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "f0dce8a1-b2b9-49db-8805-fd9b75fed5b5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:18:19 np0005601226 nova_compute[239456]: 2026-01-29 17:18:19.579 239460 DEBUG oslo_concurrency.lockutils [req-eb3fee3a-817a-4e18-9a81-135b581ece8e req-18fd3705-255a-456b-8b75-8aec3d30ef9a 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "f0dce8a1-b2b9-49db-8805-fd9b75fed5b5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:18:19 np0005601226 nova_compute[239456]: 2026-01-29 17:18:19.579 239460 DEBUG oslo_concurrency.lockutils [req-eb3fee3a-817a-4e18-9a81-135b581ece8e req-18fd3705-255a-456b-8b75-8aec3d30ef9a 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "f0dce8a1-b2b9-49db-8805-fd9b75fed5b5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:18:19 np0005601226 nova_compute[239456]: 2026-01-29 17:18:19.580 239460 DEBUG nova.compute.manager [req-eb3fee3a-817a-4e18-9a81-135b581ece8e req-18fd3705-255a-456b-8b75-8aec3d30ef9a 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] No waiting events found dispatching network-vif-plugged-d7e6c36c-4b5a-4578-af9a-56118f94ffc5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 29 12:18:19 np0005601226 nova_compute[239456]: 2026-01-29 17:18:19.580 239460 WARNING nova.compute.manager [req-eb3fee3a-817a-4e18-9a81-135b581ece8e req-18fd3705-255a-456b-8b75-8aec3d30ef9a 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Received unexpected event network-vif-plugged-d7e6c36c-4b5a-4578-af9a-56118f94ffc5 for instance with vm_state active and task_state None.#033[00m
Jan 29 12:18:19 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1039: 305 pgs: 305 active+clean; 134 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 1.8 MiB/s wr, 84 op/s
Jan 29 12:18:20 np0005601226 nova_compute[239456]: 2026-01-29 17:18:20.757 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:18:20 np0005601226 ovn_controller[145556]: 2026-01-29T17:18:20Z|00041|binding|INFO|Releasing lport 56fcfe53-391b-4f05-a182-2812cd40a46e from this chassis (sb_readonly=0)
Jan 29 12:18:20 np0005601226 NetworkManager[49020]: <info>  [1769707100.9696] manager: (patch-provnet-87abb107-3ab5-4304-ac4c-e4e3c79e221f-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/29)
Jan 29 12:18:20 np0005601226 nova_compute[239456]: 2026-01-29 17:18:20.968 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:18:20 np0005601226 NetworkManager[49020]: <info>  [1769707100.9710] device (patch-provnet-87abb107-3ab5-4304-ac4c-e4e3c79e221f-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 29 12:18:20 np0005601226 NetworkManager[49020]: <warn>  [1769707100.9711] device (patch-provnet-87abb107-3ab5-4304-ac4c-e4e3c79e221f-to-br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 29 12:18:20 np0005601226 NetworkManager[49020]: <info>  [1769707100.9725] manager: (patch-br-int-to-provnet-87abb107-3ab5-4304-ac4c-e4e3c79e221f): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/30)
Jan 29 12:18:20 np0005601226 NetworkManager[49020]: <info>  [1769707100.9731] device (patch-br-int-to-provnet-87abb107-3ab5-4304-ac4c-e4e3c79e221f)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 29 12:18:20 np0005601226 NetworkManager[49020]: <warn>  [1769707100.9732] device (patch-br-int-to-provnet-87abb107-3ab5-4304-ac4c-e4e3c79e221f)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 29 12:18:20 np0005601226 NetworkManager[49020]: <info>  [1769707100.9744] manager: (patch-br-int-to-provnet-87abb107-3ab5-4304-ac4c-e4e3c79e221f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/31)
Jan 29 12:18:20 np0005601226 NetworkManager[49020]: <info>  [1769707100.9754] manager: (patch-provnet-87abb107-3ab5-4304-ac4c-e4e3c79e221f-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/32)
Jan 29 12:18:20 np0005601226 NetworkManager[49020]: <info>  [1769707100.9761] device (patch-provnet-87abb107-3ab5-4304-ac4c-e4e3c79e221f-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 29 12:18:20 np0005601226 NetworkManager[49020]: <info>  [1769707100.9767] device (patch-br-int-to-provnet-87abb107-3ab5-4304-ac4c-e4e3c79e221f)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 29 12:18:20 np0005601226 ovn_controller[145556]: 2026-01-29T17:18:20Z|00042|binding|INFO|Releasing lport 56fcfe53-391b-4f05-a182-2812cd40a46e from this chassis (sb_readonly=0)
Jan 29 12:18:20 np0005601226 nova_compute[239456]: 2026-01-29 17:18:20.984 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:18:20 np0005601226 nova_compute[239456]: 2026-01-29 17:18:20.988 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:18:21 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1040: 305 pgs: 305 active+clean; 134 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.3 MiB/s wr, 84 op/s
Jan 29 12:18:21 np0005601226 nova_compute[239456]: 2026-01-29 17:18:21.808 239460 DEBUG nova.compute.manager [req-52a4a89d-3f36-4a6e-998c-179a68d70137 req-b70b1f61-01b7-41bb-9d22-7ce452c7f6c7 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Received event network-changed-d7e6c36c-4b5a-4578-af9a-56118f94ffc5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:18:21 np0005601226 nova_compute[239456]: 2026-01-29 17:18:21.809 239460 DEBUG nova.compute.manager [req-52a4a89d-3f36-4a6e-998c-179a68d70137 req-b70b1f61-01b7-41bb-9d22-7ce452c7f6c7 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Refreshing instance network info cache due to event network-changed-d7e6c36c-4b5a-4578-af9a-56118f94ffc5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 29 12:18:21 np0005601226 nova_compute[239456]: 2026-01-29 17:18:21.809 239460 DEBUG oslo_concurrency.lockutils [req-52a4a89d-3f36-4a6e-998c-179a68d70137 req-b70b1f61-01b7-41bb-9d22-7ce452c7f6c7 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "refresh_cache-f0dce8a1-b2b9-49db-8805-fd9b75fed5b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:18:21 np0005601226 nova_compute[239456]: 2026-01-29 17:18:21.810 239460 DEBUG oslo_concurrency.lockutils [req-52a4a89d-3f36-4a6e-998c-179a68d70137 req-b70b1f61-01b7-41bb-9d22-7ce452c7f6c7 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquired lock "refresh_cache-f0dce8a1-b2b9-49db-8805-fd9b75fed5b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:18:21 np0005601226 nova_compute[239456]: 2026-01-29 17:18:21.811 239460 DEBUG nova.network.neutron [req-52a4a89d-3f36-4a6e-998c-179a68d70137 req-b70b1f61-01b7-41bb-9d22-7ce452c7f6c7 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Refreshing network info cache for port d7e6c36c-4b5a-4578-af9a-56118f94ffc5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 29 12:18:22 np0005601226 nova_compute[239456]: 2026-01-29 17:18:22.006 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:18:22 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:18:23 np0005601226 nova_compute[239456]: 2026-01-29 17:18:23.442 239460 DEBUG nova.network.neutron [req-52a4a89d-3f36-4a6e-998c-179a68d70137 req-b70b1f61-01b7-41bb-9d22-7ce452c7f6c7 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Updated VIF entry in instance network info cache for port d7e6c36c-4b5a-4578-af9a-56118f94ffc5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 29 12:18:23 np0005601226 nova_compute[239456]: 2026-01-29 17:18:23.442 239460 DEBUG nova.network.neutron [req-52a4a89d-3f36-4a6e-998c-179a68d70137 req-b70b1f61-01b7-41bb-9d22-7ce452c7f6c7 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Updating instance_info_cache with network_info: [{"id": "d7e6c36c-4b5a-4578-af9a-56118f94ffc5", "address": "fa:16:3e:d5:ce:44", "network": {"id": "c65d65e6-04af-4892-ad96-3d83d148450f", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-295489753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7140162c4cd744d38e65ad1bcdadf016", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7e6c36c-4b", "ovs_interfaceid": "d7e6c36c-4b5a-4578-af9a-56118f94ffc5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:18:23 np0005601226 nova_compute[239456]: 2026-01-29 17:18:23.460 239460 DEBUG oslo_concurrency.lockutils [req-52a4a89d-3f36-4a6e-998c-179a68d70137 req-b70b1f61-01b7-41bb-9d22-7ce452c7f6c7 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Releasing lock "refresh_cache-f0dce8a1-b2b9-49db-8805-fd9b75fed5b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:18:23 np0005601226 nova_compute[239456]: 2026-01-29 17:18:23.603 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:18:23 np0005601226 nova_compute[239456]: 2026-01-29 17:18:23.604 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:18:23 np0005601226 nova_compute[239456]: 2026-01-29 17:18:23.627 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:18:23 np0005601226 nova_compute[239456]: 2026-01-29 17:18:23.628 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:18:23 np0005601226 nova_compute[239456]: 2026-01-29 17:18:23.628 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:18:23 np0005601226 nova_compute[239456]: 2026-01-29 17:18:23.628 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 29 12:18:23 np0005601226 nova_compute[239456]: 2026-01-29 17:18:23.628 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:18:23 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1041: 305 pgs: 305 active+clean; 134 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 724 KiB/s wr, 94 op/s
Jan 29 12:18:24 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:18:24 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2518309236' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:18:24 np0005601226 nova_compute[239456]: 2026-01-29 17:18:24.401 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.773s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:18:24 np0005601226 nova_compute[239456]: 2026-01-29 17:18:24.482 239460 DEBUG nova.virt.libvirt.driver [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 29 12:18:24 np0005601226 nova_compute[239456]: 2026-01-29 17:18:24.482 239460 DEBUG nova.virt.libvirt.driver [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 29 12:18:24 np0005601226 nova_compute[239456]: 2026-01-29 17:18:24.615 239460 WARNING nova.virt.libvirt.driver [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 29 12:18:24 np0005601226 nova_compute[239456]: 2026-01-29 17:18:24.616 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4633MB free_disk=59.96734970621765GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 29 12:18:24 np0005601226 nova_compute[239456]: 2026-01-29 17:18:24.616 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:18:24 np0005601226 nova_compute[239456]: 2026-01-29 17:18:24.616 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:18:24 np0005601226 nova_compute[239456]: 2026-01-29 17:18:24.831 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Instance f0dce8a1-b2b9-49db-8805-fd9b75fed5b5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 29 12:18:24 np0005601226 nova_compute[239456]: 2026-01-29 17:18:24.831 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 29 12:18:24 np0005601226 nova_compute[239456]: 2026-01-29 17:18:24.831 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 29 12:18:24 np0005601226 nova_compute[239456]: 2026-01-29 17:18:24.958 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:18:25 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:18:25 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/921497573' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:18:25 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1042: 305 pgs: 305 active+clean; 134 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 271 KiB/s wr, 91 op/s
Jan 29 12:18:25 np0005601226 nova_compute[239456]: 2026-01-29 17:18:25.647 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.689s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:18:25 np0005601226 nova_compute[239456]: 2026-01-29 17:18:25.653 239460 DEBUG nova.compute.provider_tree [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:18:25 np0005601226 nova_compute[239456]: 2026-01-29 17:18:25.674 239460 DEBUG nova.scheduler.client.report [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:18:25 np0005601226 nova_compute[239456]: 2026-01-29 17:18:25.698 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 29 12:18:25 np0005601226 nova_compute[239456]: 2026-01-29 17:18:25.699 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.082s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:18:25 np0005601226 nova_compute[239456]: 2026-01-29 17:18:25.758 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:18:26 np0005601226 nova_compute[239456]: 2026-01-29 17:18:26.699 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:18:26 np0005601226 nova_compute[239456]: 2026-01-29 17:18:26.700 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:18:26 np0005601226 nova_compute[239456]: 2026-01-29 17:18:26.700 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 29 12:18:26 np0005601226 nova_compute[239456]: 2026-01-29 17:18:26.700 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 29 12:18:26 np0005601226 nova_compute[239456]: 2026-01-29 17:18:26.875 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquiring lock "refresh_cache-f0dce8a1-b2b9-49db-8805-fd9b75fed5b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:18:26 np0005601226 nova_compute[239456]: 2026-01-29 17:18:26.876 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquired lock "refresh_cache-f0dce8a1-b2b9-49db-8805-fd9b75fed5b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:18:26 np0005601226 nova_compute[239456]: 2026-01-29 17:18:26.876 239460 DEBUG nova.network.neutron [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 29 12:18:26 np0005601226 nova_compute[239456]: 2026-01-29 17:18:26.876 239460 DEBUG nova.objects.instance [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lazy-loading 'info_cache' on Instance uuid f0dce8a1-b2b9-49db-8805-fd9b75fed5b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:18:26 np0005601226 nova_compute[239456]: 2026-01-29 17:18:26.915 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:18:27 np0005601226 nova_compute[239456]: 2026-01-29 17:18:27.008 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:18:27 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:18:27 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1043: 305 pgs: 305 active+clean; 134 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 255 B/s wr, 72 op/s
Jan 29 12:18:28 np0005601226 nova_compute[239456]: 2026-01-29 17:18:28.202 239460 DEBUG nova.network.neutron [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Updating instance_info_cache with network_info: [{"id": "d7e6c36c-4b5a-4578-af9a-56118f94ffc5", "address": "fa:16:3e:d5:ce:44", "network": {"id": "c65d65e6-04af-4892-ad96-3d83d148450f", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-295489753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7140162c4cd744d38e65ad1bcdadf016", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7e6c36c-4b", "ovs_interfaceid": "d7e6c36c-4b5a-4578-af9a-56118f94ffc5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:18:28 np0005601226 nova_compute[239456]: 2026-01-29 17:18:28.239 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Releasing lock "refresh_cache-f0dce8a1-b2b9-49db-8805-fd9b75fed5b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:18:28 np0005601226 nova_compute[239456]: 2026-01-29 17:18:28.239 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 29 12:18:28 np0005601226 nova_compute[239456]: 2026-01-29 17:18:28.240 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:18:28 np0005601226 nova_compute[239456]: 2026-01-29 17:18:28.240 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:18:28 np0005601226 nova_compute[239456]: 2026-01-29 17:18:28.241 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:18:28 np0005601226 nova_compute[239456]: 2026-01-29 17:18:28.241 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:18:28 np0005601226 nova_compute[239456]: 2026-01-29 17:18:28.241 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 29 12:18:29 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1044: 305 pgs: 305 active+clean; 136 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 381 KiB/s wr, 79 op/s
Jan 29 12:18:29 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 12:18:29 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 12:18:29 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 12:18:29 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 12:18:29 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 12:18:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:18:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 12:18:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 12:18:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 12:18:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 12:18:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 12:18:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 12:18:30 np0005601226 ceph-mon[75233]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Jan 29 12:18:30 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:18:30.405065) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 29 12:18:30 np0005601226 ceph-mon[75233]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Jan 29 12:18:30 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769707110405116, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1111, "num_deletes": 253, "total_data_size": 1504119, "memory_usage": 1532448, "flush_reason": "Manual Compaction"}
Jan 29 12:18:30 np0005601226 ceph-mon[75233]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Jan 29 12:18:30 np0005601226 nova_compute[239456]: 2026-01-29 17:18:30.435 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:18:30 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769707110460084, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 1480874, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20337, "largest_seqno": 21447, "table_properties": {"data_size": 1475390, "index_size": 2879, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12069, "raw_average_key_size": 20, "raw_value_size": 1464267, "raw_average_value_size": 2456, "num_data_blocks": 129, "num_entries": 596, "num_filter_entries": 596, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769707027, "oldest_key_time": 1769707027, "file_creation_time": 1769707110, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "affa2982-d59d-4189-b5dd-817a80fada55", "db_session_id": "3LVBT2JQJ5HZ0LRVKGW6", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Jan 29 12:18:30 np0005601226 ceph-mon[75233]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 55069 microseconds, and 2768 cpu microseconds.
Jan 29 12:18:30 np0005601226 ceph-mon[75233]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 29 12:18:30 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:18:30.460132) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 1480874 bytes OK
Jan 29 12:18:30 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:18:30.460149) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Jan 29 12:18:30 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:18:30.469559) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Jan 29 12:18:30 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:18:30.469609) EVENT_LOG_v1 {"time_micros": 1769707110469600, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 29 12:18:30 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:18:30.469634) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 29 12:18:30 np0005601226 ceph-mon[75233]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 1498873, prev total WAL file size 1533274, number of live WAL files 2.
Jan 29 12:18:30 np0005601226 ceph-mon[75233]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 12:18:30 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:18:30.470099) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Jan 29 12:18:30 np0005601226 ceph-mon[75233]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 29 12:18:30 np0005601226 ceph-mon[75233]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(1446KB)], [47(8273KB)]
Jan 29 12:18:30 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769707110470124, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 9953391, "oldest_snapshot_seqno": -1}
Jan 29 12:18:30 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 12:18:30 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:18:30 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 12:18:30 np0005601226 podman[248509]: 2026-01-29 17:18:30.614637419 +0000 UTC m=+0.030059731 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:18:30 np0005601226 ceph-mon[75233]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 4603 keys, 8166084 bytes, temperature: kUnknown
Jan 29 12:18:30 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769707110742729, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 8166084, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8133322, "index_size": 20172, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11525, "raw_key_size": 114336, "raw_average_key_size": 24, "raw_value_size": 8048096, "raw_average_value_size": 1748, "num_data_blocks": 835, "num_entries": 4603, "num_filter_entries": 4603, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769705351, "oldest_key_time": 0, "file_creation_time": 1769707110, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "affa2982-d59d-4189-b5dd-817a80fada55", "db_session_id": "3LVBT2JQJ5HZ0LRVKGW6", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Jan 29 12:18:30 np0005601226 ceph-mon[75233]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 29 12:18:30 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:18:30.742945) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 8166084 bytes
Jan 29 12:18:30 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:18:30.797341) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 36.5 rd, 29.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 8.1 +0.0 blob) out(7.8 +0.0 blob), read-write-amplify(12.2) write-amplify(5.5) OK, records in: 5124, records dropped: 521 output_compression: NoCompression
Jan 29 12:18:30 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:18:30.797390) EVENT_LOG_v1 {"time_micros": 1769707110797370, "job": 24, "event": "compaction_finished", "compaction_time_micros": 272675, "compaction_time_cpu_micros": 14211, "output_level": 6, "num_output_files": 1, "total_output_size": 8166084, "num_input_records": 5124, "num_output_records": 4603, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 29 12:18:30 np0005601226 ceph-mon[75233]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 12:18:30 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769707110797839, "job": 24, "event": "table_file_deletion", "file_number": 49}
Jan 29 12:18:30 np0005601226 ceph-mon[75233]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 12:18:30 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769707110799054, "job": 24, "event": "table_file_deletion", "file_number": 47}
Jan 29 12:18:30 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:18:30.470050) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:18:30 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:18:30.799136) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:18:30 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:18:30.799142) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:18:30 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:18:30.799144) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:18:30 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:18:30.799146) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:18:30 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:18:30.799148) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:18:30 np0005601226 nova_compute[239456]: 2026-01-29 17:18:30.805 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:18:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:18:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/85647597' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:18:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:18:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/85647597' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:18:30 np0005601226 podman[248509]: 2026-01-29 17:18:30.991278423 +0000 UTC m=+0.406700715 container create fd33264f8159431811beb3d31917278f188ef60b5d9984661216f2b9aa63cf70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:18:31 np0005601226 systemd[1]: Started libpod-conmon-fd33264f8159431811beb3d31917278f188ef60b5d9984661216f2b9aa63cf70.scope.
Jan 29 12:18:31 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:18:31 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1045: 305 pgs: 305 active+clean; 146 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 962 KiB/s rd, 1.1 MiB/s wr, 46 op/s
Jan 29 12:18:31 np0005601226 podman[248509]: 2026-01-29 17:18:31.713403672 +0000 UTC m=+1.128825994 container init fd33264f8159431811beb3d31917278f188ef60b5d9984661216f2b9aa63cf70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_cohen, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:18:31 np0005601226 podman[248509]: 2026-01-29 17:18:31.727153587 +0000 UTC m=+1.142575889 container start fd33264f8159431811beb3d31917278f188ef60b5d9984661216f2b9aa63cf70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_cohen, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:18:31 np0005601226 wonderful_cohen[248526]: 167 167
Jan 29 12:18:31 np0005601226 systemd[1]: libpod-fd33264f8159431811beb3d31917278f188ef60b5d9984661216f2b9aa63cf70.scope: Deactivated successfully.
Jan 29 12:18:31 np0005601226 podman[248509]: 2026-01-29 17:18:31.974759471 +0000 UTC m=+1.390181853 container attach fd33264f8159431811beb3d31917278f188ef60b5d9984661216f2b9aa63cf70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_cohen, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 29 12:18:31 np0005601226 podman[248509]: 2026-01-29 17:18:31.975878581 +0000 UTC m=+1.391300873 container died fd33264f8159431811beb3d31917278f188ef60b5d9984661216f2b9aa63cf70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_cohen, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:18:32 np0005601226 nova_compute[239456]: 2026-01-29 17:18:32.057 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:18:32 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:18:32 np0005601226 systemd[1]: var-lib-containers-storage-overlay-04a8cc83199d24d3b3f6f84bc124cb56d3be8ad25d65f774f2606639f9495931-merged.mount: Deactivated successfully.
Jan 29 12:18:33 np0005601226 podman[248509]: 2026-01-29 17:18:33.349335917 +0000 UTC m=+2.764758209 container remove fd33264f8159431811beb3d31917278f188ef60b5d9984661216f2b9aa63cf70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_cohen, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 29 12:18:33 np0005601226 systemd[1]: libpod-conmon-fd33264f8159431811beb3d31917278f188ef60b5d9984661216f2b9aa63cf70.scope: Deactivated successfully.
Jan 29 12:18:33 np0005601226 podman[248552]: 2026-01-29 17:18:33.470183294 +0000 UTC m=+0.025486677 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:18:33 np0005601226 nova_compute[239456]: 2026-01-29 17:18:33.605 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:18:33 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1046: 305 pgs: 305 active+clean; 148 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 390 KiB/s rd, 1.5 MiB/s wr, 31 op/s
Jan 29 12:18:33 np0005601226 podman[248552]: 2026-01-29 17:18:33.755490235 +0000 UTC m=+0.310793648 container create 796a8bfa3ff952ad6fdf4b84f664cc0f6c4c61b4fbccef3a101dc546d7925f9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_curran, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:18:33 np0005601226 systemd[1]: Started libpod-conmon-796a8bfa3ff952ad6fdf4b84f664cc0f6c4c61b4fbccef3a101dc546d7925f9a.scope.
Jan 29 12:18:33 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:18:33 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d92b9ce29a0dee3d310e19a1c186b1fb49dce1e2e688e533ee86d51f6b60aaaa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:18:33 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d92b9ce29a0dee3d310e19a1c186b1fb49dce1e2e688e533ee86d51f6b60aaaa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:18:33 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d92b9ce29a0dee3d310e19a1c186b1fb49dce1e2e688e533ee86d51f6b60aaaa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:18:33 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d92b9ce29a0dee3d310e19a1c186b1fb49dce1e2e688e533ee86d51f6b60aaaa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:18:33 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d92b9ce29a0dee3d310e19a1c186b1fb49dce1e2e688e533ee86d51f6b60aaaa/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 12:18:34 np0005601226 podman[248552]: 2026-01-29 17:18:34.139892981 +0000 UTC m=+0.695196464 container init 796a8bfa3ff952ad6fdf4b84f664cc0f6c4c61b4fbccef3a101dc546d7925f9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_curran, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:18:34 np0005601226 podman[248552]: 2026-01-29 17:18:34.147153079 +0000 UTC m=+0.702456462 container start 796a8bfa3ff952ad6fdf4b84f664cc0f6c4c61b4fbccef3a101dc546d7925f9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_curran, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 29 12:18:34 np0005601226 podman[248552]: 2026-01-29 17:18:34.233853634 +0000 UTC m=+0.789157047 container attach 796a8bfa3ff952ad6fdf4b84f664cc0f6c4c61b4fbccef3a101dc546d7925f9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:18:34 np0005601226 ovn_controller[145556]: 2026-01-29T17:18:34Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d5:ce:44 10.100.0.13
Jan 29 12:18:34 np0005601226 ovn_controller[145556]: 2026-01-29T17:18:34Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d5:ce:44 10.100.0.13
Jan 29 12:18:34 np0005601226 peaceful_curran[248569]: --> passed data devices: 0 physical, 3 LVM
Jan 29 12:18:34 np0005601226 peaceful_curran[248569]: --> All data devices are unavailable
Jan 29 12:18:34 np0005601226 systemd[1]: libpod-796a8bfa3ff952ad6fdf4b84f664cc0f6c4c61b4fbccef3a101dc546d7925f9a.scope: Deactivated successfully.
Jan 29 12:18:34 np0005601226 podman[248589]: 2026-01-29 17:18:34.745255344 +0000 UTC m=+0.039144169 container died 796a8bfa3ff952ad6fdf4b84f664cc0f6c4c61b4fbccef3a101dc546d7925f9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_curran, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:18:35 np0005601226 nova_compute[239456]: 2026-01-29 17:18:35.420 239460 DEBUG oslo_concurrency.lockutils [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Acquiring lock "e70499b1-fe73-43d6-b879-f6e0ab20b701" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:18:35 np0005601226 nova_compute[239456]: 2026-01-29 17:18:35.421 239460 DEBUG oslo_concurrency.lockutils [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Lock "e70499b1-fe73-43d6-b879-f6e0ab20b701" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:18:35 np0005601226 nova_compute[239456]: 2026-01-29 17:18:35.440 239460 DEBUG nova.compute.manager [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: e70499b1-fe73-43d6-b879-f6e0ab20b701] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 29 12:18:35 np0005601226 nova_compute[239456]: 2026-01-29 17:18:35.516 239460 DEBUG oslo_concurrency.lockutils [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:18:35 np0005601226 nova_compute[239456]: 2026-01-29 17:18:35.517 239460 DEBUG oslo_concurrency.lockutils [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:18:35 np0005601226 nova_compute[239456]: 2026-01-29 17:18:35.527 239460 DEBUG nova.virt.hardware [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 29 12:18:35 np0005601226 nova_compute[239456]: 2026-01-29 17:18:35.528 239460 INFO nova.compute.claims [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: e70499b1-fe73-43d6-b879-f6e0ab20b701] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 29 12:18:35 np0005601226 nova_compute[239456]: 2026-01-29 17:18:35.637 239460 DEBUG oslo_concurrency.processutils [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:18:35 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1047: 305 pgs: 305 active+clean; 159 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 286 KiB/s rd, 2.0 MiB/s wr, 51 op/s
Jan 29 12:18:35 np0005601226 nova_compute[239456]: 2026-01-29 17:18:35.859 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:18:36 np0005601226 systemd[1]: var-lib-containers-storage-overlay-d92b9ce29a0dee3d310e19a1c186b1fb49dce1e2e688e533ee86d51f6b60aaaa-merged.mount: Deactivated successfully.
Jan 29 12:18:36 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:18:36 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3017317191' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:18:36 np0005601226 nova_compute[239456]: 2026-01-29 17:18:36.414 239460 DEBUG oslo_concurrency.processutils [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.777s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:18:36 np0005601226 nova_compute[239456]: 2026-01-29 17:18:36.419 239460 DEBUG nova.compute.provider_tree [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:18:36 np0005601226 nova_compute[239456]: 2026-01-29 17:18:36.432 239460 DEBUG nova.scheduler.client.report [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:18:36 np0005601226 nova_compute[239456]: 2026-01-29 17:18:36.453 239460 DEBUG oslo_concurrency.lockutils [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.936s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:18:36 np0005601226 nova_compute[239456]: 2026-01-29 17:18:36.454 239460 DEBUG nova.compute.manager [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: e70499b1-fe73-43d6-b879-f6e0ab20b701] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 29 12:18:36 np0005601226 nova_compute[239456]: 2026-01-29 17:18:36.495 239460 DEBUG nova.compute.manager [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: e70499b1-fe73-43d6-b879-f6e0ab20b701] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 29 12:18:36 np0005601226 nova_compute[239456]: 2026-01-29 17:18:36.496 239460 DEBUG nova.network.neutron [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: e70499b1-fe73-43d6-b879-f6e0ab20b701] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 29 12:18:36 np0005601226 nova_compute[239456]: 2026-01-29 17:18:36.515 239460 INFO nova.virt.libvirt.driver [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: e70499b1-fe73-43d6-b879-f6e0ab20b701] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 29 12:18:36 np0005601226 nova_compute[239456]: 2026-01-29 17:18:36.529 239460 DEBUG nova.compute.manager [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: e70499b1-fe73-43d6-b879-f6e0ab20b701] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 29 12:18:36 np0005601226 nova_compute[239456]: 2026-01-29 17:18:36.613 239460 DEBUG nova.compute.manager [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: e70499b1-fe73-43d6-b879-f6e0ab20b701] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 29 12:18:36 np0005601226 nova_compute[239456]: 2026-01-29 17:18:36.615 239460 DEBUG nova.virt.libvirt.driver [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: e70499b1-fe73-43d6-b879-f6e0ab20b701] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 29 12:18:36 np0005601226 nova_compute[239456]: 2026-01-29 17:18:36.615 239460 INFO nova.virt.libvirt.driver [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: e70499b1-fe73-43d6-b879-f6e0ab20b701] Creating image(s)#033[00m
Jan 29 12:18:36 np0005601226 nova_compute[239456]: 2026-01-29 17:18:36.636 239460 DEBUG nova.storage.rbd_utils [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] rbd image e70499b1-fe73-43d6-b879-f6e0ab20b701_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:18:36 np0005601226 nova_compute[239456]: 2026-01-29 17:18:36.658 239460 DEBUG nova.storage.rbd_utils [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] rbd image e70499b1-fe73-43d6-b879-f6e0ab20b701_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:18:36 np0005601226 nova_compute[239456]: 2026-01-29 17:18:36.681 239460 DEBUG nova.storage.rbd_utils [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] rbd image e70499b1-fe73-43d6-b879-f6e0ab20b701_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:18:36 np0005601226 nova_compute[239456]: 2026-01-29 17:18:36.684 239460 DEBUG oslo_concurrency.processutils [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/619359f3f53a439a222a6a2a89408201f4394e5d --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:18:36 np0005601226 nova_compute[239456]: 2026-01-29 17:18:36.697 239460 DEBUG nova.policy [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '814a809cf2434fc5bdc86a907c6f923d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '36b7f0db63d84c34b521603b194a3d9b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 29 12:18:36 np0005601226 nova_compute[239456]: 2026-01-29 17:18:36.728 239460 DEBUG oslo_concurrency.processutils [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/619359f3f53a439a222a6a2a89408201f4394e5d --force-share --output=json" returned: 0 in 0.044s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:18:36 np0005601226 nova_compute[239456]: 2026-01-29 17:18:36.729 239460 DEBUG oslo_concurrency.lockutils [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Acquiring lock "619359f3f53a439a222a6a2a89408201f4394e5d" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:18:36 np0005601226 nova_compute[239456]: 2026-01-29 17:18:36.730 239460 DEBUG oslo_concurrency.lockutils [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Lock "619359f3f53a439a222a6a2a89408201f4394e5d" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:18:36 np0005601226 nova_compute[239456]: 2026-01-29 17:18:36.730 239460 DEBUG oslo_concurrency.lockutils [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Lock "619359f3f53a439a222a6a2a89408201f4394e5d" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:18:36 np0005601226 nova_compute[239456]: 2026-01-29 17:18:36.750 239460 DEBUG nova.storage.rbd_utils [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] rbd image e70499b1-fe73-43d6-b879-f6e0ab20b701_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:18:36 np0005601226 nova_compute[239456]: 2026-01-29 17:18:36.753 239460 DEBUG oslo_concurrency.processutils [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/619359f3f53a439a222a6a2a89408201f4394e5d e70499b1-fe73-43d6-b879-f6e0ab20b701_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:18:37 np0005601226 nova_compute[239456]: 2026-01-29 17:18:37.099 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:18:37 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:18:37 np0005601226 nova_compute[239456]: 2026-01-29 17:18:37.335 239460 DEBUG nova.network.neutron [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: e70499b1-fe73-43d6-b879-f6e0ab20b701] Successfully created port: 093f0958-8f1b-4067-8692-210a7328406c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 29 12:18:37 np0005601226 podman[248589]: 2026-01-29 17:18:37.484776142 +0000 UTC m=+2.778664967 container remove 796a8bfa3ff952ad6fdf4b84f664cc0f6c4c61b4fbccef3a101dc546d7925f9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_curran, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3)
Jan 29 12:18:37 np0005601226 systemd[1]: libpod-conmon-796a8bfa3ff952ad6fdf4b84f664cc0f6c4c61b4fbccef3a101dc546d7925f9a.scope: Deactivated successfully.
Jan 29 12:18:37 np0005601226 podman[248717]: 2026-01-29 17:18:37.59503252 +0000 UTC m=+0.071057180 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 29 12:18:37 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1048: 305 pgs: 305 active+clean; 159 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 286 KiB/s rd, 2.0 MiB/s wr, 51 op/s
Jan 29 12:18:37 np0005601226 podman[248719]: 2026-01-29 17:18:37.642472534 +0000 UTC m=+0.118519334 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, 
managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 29 12:18:37 np0005601226 podman[248824]: 2026-01-29 17:18:37.885817811 +0000 UTC m=+0.018832904 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:18:38 np0005601226 podman[248824]: 2026-01-29 17:18:38.201569725 +0000 UTC m=+0.334584798 container create b1275e9a0b87011b844684f962fd46dd83e76c837bcb94624b7b4643fec5c28c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 29 12:18:38 np0005601226 nova_compute[239456]: 2026-01-29 17:18:38.366 239460 DEBUG nova.network.neutron [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: e70499b1-fe73-43d6-b879-f6e0ab20b701] Successfully updated port: 093f0958-8f1b-4067-8692-210a7328406c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 29 12:18:38 np0005601226 nova_compute[239456]: 2026-01-29 17:18:38.383 239460 DEBUG oslo_concurrency.lockutils [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Acquiring lock "refresh_cache-e70499b1-fe73-43d6-b879-f6e0ab20b701" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:18:38 np0005601226 nova_compute[239456]: 2026-01-29 17:18:38.383 239460 DEBUG oslo_concurrency.lockutils [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Acquired lock "refresh_cache-e70499b1-fe73-43d6-b879-f6e0ab20b701" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:18:38 np0005601226 nova_compute[239456]: 2026-01-29 17:18:38.383 239460 DEBUG nova.network.neutron [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: e70499b1-fe73-43d6-b879-f6e0ab20b701] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 29 12:18:38 np0005601226 nova_compute[239456]: 2026-01-29 17:18:38.463 239460 DEBUG nova.compute.manager [req-00180c17-acd0-4c96-a4b2-66da1457f0bb req-90551ca1-eb00-4a9e-a71c-ee0008920a37 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: e70499b1-fe73-43d6-b879-f6e0ab20b701] Received event network-changed-093f0958-8f1b-4067-8692-210a7328406c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:18:38 np0005601226 nova_compute[239456]: 2026-01-29 17:18:38.463 239460 DEBUG nova.compute.manager [req-00180c17-acd0-4c96-a4b2-66da1457f0bb req-90551ca1-eb00-4a9e-a71c-ee0008920a37 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: e70499b1-fe73-43d6-b879-f6e0ab20b701] Refreshing instance network info cache due to event network-changed-093f0958-8f1b-4067-8692-210a7328406c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 29 12:18:38 np0005601226 nova_compute[239456]: 2026-01-29 17:18:38.464 239460 DEBUG oslo_concurrency.lockutils [req-00180c17-acd0-4c96-a4b2-66da1457f0bb req-90551ca1-eb00-4a9e-a71c-ee0008920a37 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "refresh_cache-e70499b1-fe73-43d6-b879-f6e0ab20b701" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:18:38 np0005601226 nova_compute[239456]: 2026-01-29 17:18:38.509 239460 DEBUG nova.network.neutron [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: e70499b1-fe73-43d6-b879-f6e0ab20b701] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 29 12:18:38 np0005601226 systemd[1]: Started libpod-conmon-b1275e9a0b87011b844684f962fd46dd83e76c837bcb94624b7b4643fec5c28c.scope.
Jan 29 12:18:38 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:18:39 np0005601226 nova_compute[239456]: 2026-01-29 17:18:39.087 239460 DEBUG nova.network.neutron [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: e70499b1-fe73-43d6-b879-f6e0ab20b701] Updating instance_info_cache with network_info: [{"id": "093f0958-8f1b-4067-8692-210a7328406c", "address": "fa:16:3e:a6:1e:79", "network": {"id": "894c211c-3e65-4d00-831b-021ae0267115", "bridge": "br-int", "label": "tempest-VolumesActionsTest-174971779-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36b7f0db63d84c34b521603b194a3d9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap093f0958-8f", "ovs_interfaceid": "093f0958-8f1b-4067-8692-210a7328406c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:18:39 np0005601226 nova_compute[239456]: 2026-01-29 17:18:39.108 239460 DEBUG oslo_concurrency.lockutils [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Releasing lock "refresh_cache-e70499b1-fe73-43d6-b879-f6e0ab20b701" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:18:39 np0005601226 nova_compute[239456]: 2026-01-29 17:18:39.108 239460 DEBUG nova.compute.manager [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: e70499b1-fe73-43d6-b879-f6e0ab20b701] Instance network_info: |[{"id": "093f0958-8f1b-4067-8692-210a7328406c", "address": "fa:16:3e:a6:1e:79", "network": {"id": "894c211c-3e65-4d00-831b-021ae0267115", "bridge": "br-int", "label": "tempest-VolumesActionsTest-174971779-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36b7f0db63d84c34b521603b194a3d9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap093f0958-8f", "ovs_interfaceid": "093f0958-8f1b-4067-8692-210a7328406c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 29 12:18:39 np0005601226 nova_compute[239456]: 2026-01-29 17:18:39.108 239460 DEBUG oslo_concurrency.lockutils [req-00180c17-acd0-4c96-a4b2-66da1457f0bb req-90551ca1-eb00-4a9e-a71c-ee0008920a37 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquired lock "refresh_cache-e70499b1-fe73-43d6-b879-f6e0ab20b701" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:18:39 np0005601226 nova_compute[239456]: 2026-01-29 17:18:39.108 239460 DEBUG nova.network.neutron [req-00180c17-acd0-4c96-a4b2-66da1457f0bb req-90551ca1-eb00-4a9e-a71c-ee0008920a37 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: e70499b1-fe73-43d6-b879-f6e0ab20b701] Refreshing network info cache for port 093f0958-8f1b-4067-8692-210a7328406c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 29 12:18:39 np0005601226 podman[248824]: 2026-01-29 17:18:39.627252044 +0000 UTC m=+1.760267157 container init b1275e9a0b87011b844684f962fd46dd83e76c837bcb94624b7b4643fec5c28c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_kepler, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:18:39 np0005601226 podman[248824]: 2026-01-29 17:18:39.636424334 +0000 UTC m=+1.769439447 container start b1275e9a0b87011b844684f962fd46dd83e76c837bcb94624b7b4643fec5c28c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_kepler, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:18:39 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1049: 305 pgs: 305 active+clean; 160 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 291 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 29 12:18:39 np0005601226 great_kepler[248840]: 167 167
Jan 29 12:18:39 np0005601226 systemd[1]: libpod-b1275e9a0b87011b844684f962fd46dd83e76c837bcb94624b7b4643fec5c28c.scope: Deactivated successfully.
Jan 29 12:18:39 np0005601226 podman[248824]: 2026-01-29 17:18:39.910230202 +0000 UTC m=+2.043245315 container attach b1275e9a0b87011b844684f962fd46dd83e76c837bcb94624b7b4643fec5c28c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_kepler, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:18:39 np0005601226 podman[248824]: 2026-01-29 17:18:39.910811149 +0000 UTC m=+2.043826242 container died b1275e9a0b87011b844684f962fd46dd83e76c837bcb94624b7b4643fec5c28c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_kepler, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:18:40 np0005601226 nova_compute[239456]: 2026-01-29 17:18:40.160 239460 DEBUG nova.network.neutron [req-00180c17-acd0-4c96-a4b2-66da1457f0bb req-90551ca1-eb00-4a9e-a71c-ee0008920a37 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: e70499b1-fe73-43d6-b879-f6e0ab20b701] Updated VIF entry in instance network info cache for port 093f0958-8f1b-4067-8692-210a7328406c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 29 12:18:40 np0005601226 nova_compute[239456]: 2026-01-29 17:18:40.160 239460 DEBUG nova.network.neutron [req-00180c17-acd0-4c96-a4b2-66da1457f0bb req-90551ca1-eb00-4a9e-a71c-ee0008920a37 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: e70499b1-fe73-43d6-b879-f6e0ab20b701] Updating instance_info_cache with network_info: [{"id": "093f0958-8f1b-4067-8692-210a7328406c", "address": "fa:16:3e:a6:1e:79", "network": {"id": "894c211c-3e65-4d00-831b-021ae0267115", "bridge": "br-int", "label": "tempest-VolumesActionsTest-174971779-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36b7f0db63d84c34b521603b194a3d9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap093f0958-8f", "ovs_interfaceid": "093f0958-8f1b-4067-8692-210a7328406c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:18:40 np0005601226 nova_compute[239456]: 2026-01-29 17:18:40.177 239460 DEBUG oslo_concurrency.lockutils [req-00180c17-acd0-4c96-a4b2-66da1457f0bb req-90551ca1-eb00-4a9e-a71c-ee0008920a37 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Releasing lock "refresh_cache-e70499b1-fe73-43d6-b879-f6e0ab20b701" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:18:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:18:40.280 155625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:18:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:18:40.281 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:18:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:18:40.282 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:18:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2026-01-29_17:18:40
Jan 29 12:18:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 12:18:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] do_upmap
Jan 29 12:18:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] pools ['vms', '.rgw.root', 'volumes', 'default.rgw.meta', '.mgr', 'images', 'default.rgw.control', 'backups', 'cephfs.cephfs.meta', 'default.rgw.log', 'cephfs.cephfs.data']
Jan 29 12:18:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 12:18:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:18:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:18:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:18:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:18:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:18:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:18:40 np0005601226 systemd[1]: var-lib-containers-storage-overlay-4abd263e6a1b46e90d8b3d84590d09873504729700df9e86d5f7d19765aa4332-merged.mount: Deactivated successfully.
Jan 29 12:18:40 np0005601226 nova_compute[239456]: 2026-01-29 17:18:40.654 239460 DEBUG oslo_concurrency.processutils [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/619359f3f53a439a222a6a2a89408201f4394e5d e70499b1-fe73-43d6-b879-f6e0ab20b701_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.901s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:18:40 np0005601226 nova_compute[239456]: 2026-01-29 17:18:40.707 239460 DEBUG nova.storage.rbd_utils [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] resizing rbd image e70499b1-fe73-43d6-b879-f6e0ab20b701_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 29 12:18:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 12:18:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:18:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 12:18:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:18:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:18:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:18:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:18:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:18:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:18:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:18:40 np0005601226 nova_compute[239456]: 2026-01-29 17:18:40.889 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:18:41 np0005601226 podman[248824]: 2026-01-29 17:18:41.28964102 +0000 UTC m=+3.422656093 container remove b1275e9a0b87011b844684f962fd46dd83e76c837bcb94624b7b4643fec5c28c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_kepler, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 29 12:18:41 np0005601226 systemd[1]: libpod-conmon-b1275e9a0b87011b844684f962fd46dd83e76c837bcb94624b7b4643fec5c28c.scope: Deactivated successfully.
Jan 29 12:18:41 np0005601226 podman[248918]: 2026-01-29 17:18:41.41352617 +0000 UTC m=+0.023286557 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:18:41 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1050: 305 pgs: 305 active+clean; 181 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 291 KiB/s rd, 2.2 MiB/s wr, 67 op/s
Jan 29 12:18:41 np0005601226 podman[248918]: 2026-01-29 17:18:41.835415818 +0000 UTC m=+0.445176185 container create 635ec7864ab0499703244bc7a090ee6a88d7dc7859e7cc38ddafb3f5737c515c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_joliot, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:18:41 np0005601226 nova_compute[239456]: 2026-01-29 17:18:41.925 239460 DEBUG nova.objects.instance [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Lazy-loading 'migration_context' on Instance uuid e70499b1-fe73-43d6-b879-f6e0ab20b701 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:18:41 np0005601226 nova_compute[239456]: 2026-01-29 17:18:41.940 239460 DEBUG nova.virt.libvirt.driver [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: e70499b1-fe73-43d6-b879-f6e0ab20b701] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 29 12:18:41 np0005601226 nova_compute[239456]: 2026-01-29 17:18:41.941 239460 DEBUG nova.virt.libvirt.driver [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: e70499b1-fe73-43d6-b879-f6e0ab20b701] Ensure instance console log exists: /var/lib/nova/instances/e70499b1-fe73-43d6-b879-f6e0ab20b701/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 29 12:18:41 np0005601226 nova_compute[239456]: 2026-01-29 17:18:41.942 239460 DEBUG oslo_concurrency.lockutils [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:18:41 np0005601226 nova_compute[239456]: 2026-01-29 17:18:41.942 239460 DEBUG oslo_concurrency.lockutils [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:18:41 np0005601226 nova_compute[239456]: 2026-01-29 17:18:41.942 239460 DEBUG oslo_concurrency.lockutils [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:18:41 np0005601226 nova_compute[239456]: 2026-01-29 17:18:41.944 239460 DEBUG nova.virt.libvirt.driver [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: e70499b1-fe73-43d6-b879-f6e0ab20b701] Start _get_guest_xml network_info=[{"id": "093f0958-8f1b-4067-8692-210a7328406c", "address": "fa:16:3e:a6:1e:79", "network": {"id": "894c211c-3e65-4d00-831b-021ae0267115", "bridge": "br-int", "label": "tempest-VolumesActionsTest-174971779-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36b7f0db63d84c34b521603b194a3d9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap093f0958-8f", "ovs_interfaceid": "093f0958-8f1b-4067-8692-210a7328406c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-29T17:13:26Z,direct_url=<?>,disk_format='qcow2',id=71879218-5462-43bb-aba6-6319695b24fd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='218d87653c0f4776a3f1900d36945229',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-29T17:13:29Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'boot_index': 0, 'encrypted': False, 'image_id': '71879218-5462-43bb-aba6-6319695b24fd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 29 12:18:41 np0005601226 nova_compute[239456]: 2026-01-29 17:18:41.947 239460 WARNING nova.virt.libvirt.driver [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 29 12:18:41 np0005601226 nova_compute[239456]: 2026-01-29 17:18:41.952 239460 DEBUG nova.virt.libvirt.host [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 29 12:18:41 np0005601226 nova_compute[239456]: 2026-01-29 17:18:41.954 239460 DEBUG nova.virt.libvirt.host [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 29 12:18:41 np0005601226 nova_compute[239456]: 2026-01-29 17:18:41.958 239460 DEBUG nova.virt.libvirt.host [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 29 12:18:41 np0005601226 nova_compute[239456]: 2026-01-29 17:18:41.958 239460 DEBUG nova.virt.libvirt.host [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 29 12:18:41 np0005601226 nova_compute[239456]: 2026-01-29 17:18:41.959 239460 DEBUG nova.virt.libvirt.driver [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 29 12:18:41 np0005601226 nova_compute[239456]: 2026-01-29 17:18:41.959 239460 DEBUG nova.virt.hardware [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-29T17:13:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='e367d44e-e23e-4b8e-90d1-56d09c8403b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-29T17:13:26Z,direct_url=<?>,disk_format='qcow2',id=71879218-5462-43bb-aba6-6319695b24fd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='218d87653c0f4776a3f1900d36945229',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-29T17:13:29Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 29 12:18:41 np0005601226 nova_compute[239456]: 2026-01-29 17:18:41.959 239460 DEBUG nova.virt.hardware [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 29 12:18:41 np0005601226 nova_compute[239456]: 2026-01-29 17:18:41.960 239460 DEBUG nova.virt.hardware [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 29 12:18:41 np0005601226 nova_compute[239456]: 2026-01-29 17:18:41.961 239460 DEBUG nova.virt.hardware [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 29 12:18:41 np0005601226 nova_compute[239456]: 2026-01-29 17:18:41.962 239460 DEBUG nova.virt.hardware [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 29 12:18:41 np0005601226 nova_compute[239456]: 2026-01-29 17:18:41.962 239460 DEBUG nova.virt.hardware [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 29 12:18:41 np0005601226 nova_compute[239456]: 2026-01-29 17:18:41.962 239460 DEBUG nova.virt.hardware [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 29 12:18:41 np0005601226 nova_compute[239456]: 2026-01-29 17:18:41.962 239460 DEBUG nova.virt.hardware [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 29 12:18:41 np0005601226 nova_compute[239456]: 2026-01-29 17:18:41.963 239460 DEBUG nova.virt.hardware [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 29 12:18:41 np0005601226 nova_compute[239456]: 2026-01-29 17:18:41.963 239460 DEBUG nova.virt.hardware [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 29 12:18:41 np0005601226 nova_compute[239456]: 2026-01-29 17:18:41.963 239460 DEBUG nova.virt.hardware [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 29 12:18:41 np0005601226 nova_compute[239456]: 2026-01-29 17:18:41.966 239460 DEBUG oslo_concurrency.processutils [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:18:41 np0005601226 systemd[1]: Started libpod-conmon-635ec7864ab0499703244bc7a090ee6a88d7dc7859e7cc38ddafb3f5737c515c.scope.
Jan 29 12:18:42 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:18:42 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb2677e5afba56707626164fe56a3cb316ec20714d87ba6a2d95b20cb2660b9e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:18:42 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb2677e5afba56707626164fe56a3cb316ec20714d87ba6a2d95b20cb2660b9e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:18:42 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb2677e5afba56707626164fe56a3cb316ec20714d87ba6a2d95b20cb2660b9e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:18:42 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb2677e5afba56707626164fe56a3cb316ec20714d87ba6a2d95b20cb2660b9e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:18:42 np0005601226 nova_compute[239456]: 2026-01-29 17:18:42.153 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:18:42 np0005601226 podman[248918]: 2026-01-29 17:18:42.308121272 +0000 UTC m=+0.917881679 container init 635ec7864ab0499703244bc7a090ee6a88d7dc7859e7cc38ddafb3f5737c515c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:18:42 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:18:42 np0005601226 podman[248918]: 2026-01-29 17:18:42.315829832 +0000 UTC m=+0.925590189 container start 635ec7864ab0499703244bc7a090ee6a88d7dc7859e7cc38ddafb3f5737c515c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_joliot, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 29 12:18:42 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:18:42 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/574214735' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:18:42 np0005601226 nova_compute[239456]: 2026-01-29 17:18:42.465 239460 DEBUG oslo_concurrency.processutils [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:18:42 np0005601226 nova_compute[239456]: 2026-01-29 17:18:42.484 239460 DEBUG nova.storage.rbd_utils [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] rbd image e70499b1-fe73-43d6-b879-f6e0ab20b701_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:18:42 np0005601226 nova_compute[239456]: 2026-01-29 17:18:42.488 239460 DEBUG oslo_concurrency.processutils [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]: {
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:    "0": [
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:        {
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:            "devices": [
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:                "/dev/loop3"
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:            ],
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:            "lv_name": "ceph_lv0",
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:            "lv_size": "21470642176",
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:            "lv_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:            "name": "ceph_lv0",
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:            "tags": {
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:                "ceph.block_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:                "ceph.cluster_name": "ceph",
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:                "ceph.crush_device_class": "",
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:                "ceph.encrypted": "0",
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:                "ceph.objectstore": "bluestore",
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:                "ceph.osd_fsid": "0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1",
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:                "ceph.osd_id": "0",
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:                "ceph.type": "block",
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:                "ceph.vdo": "0",
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:                "ceph.with_tpm": "0"
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:            },
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:            "type": "block",
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:            "vg_name": "ceph_vg0"
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:        }
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:    ],
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:    "1": [
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:        {
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:            "devices": [
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:                "/dev/loop4"
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:            ],
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:            "lv_name": "ceph_lv1",
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:            "lv_size": "21470642176",
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b59b9ee3-7bef-4274-a8bf-0f9cce011ae7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:            "lv_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:            "name": "ceph_lv1",
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:            "tags": {
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:                "ceph.block_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:                "ceph.cluster_name": "ceph",
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:                "ceph.crush_device_class": "",
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:                "ceph.encrypted": "0",
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:                "ceph.objectstore": "bluestore",
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:                "ceph.osd_fsid": "b59b9ee3-7bef-4274-a8bf-0f9cce011ae7",
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:                "ceph.osd_id": "1",
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:                "ceph.type": "block",
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:                "ceph.vdo": "0",
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:                "ceph.with_tpm": "0"
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:            },
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:            "type": "block",
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:            "vg_name": "ceph_vg1"
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:        }
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:    ],
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:    "2": [
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:        {
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:            "devices": [
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:                "/dev/loop5"
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:            ],
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:            "lv_name": "ceph_lv2",
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:            "lv_size": "21470642176",
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=791d3808-828d-4f85-a3de-28df49f6a6ef,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:            "lv_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:            "name": "ceph_lv2",
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:            "tags": {
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:                "ceph.block_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:                "ceph.cluster_name": "ceph",
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:                "ceph.crush_device_class": "",
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:                "ceph.encrypted": "0",
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:                "ceph.objectstore": "bluestore",
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:                "ceph.osd_fsid": "791d3808-828d-4f85-a3de-28df49f6a6ef",
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:                "ceph.osd_id": "2",
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:                "ceph.type": "block",
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:                "ceph.vdo": "0",
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:                "ceph.with_tpm": "0"
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:            },
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:            "type": "block",
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:            "vg_name": "ceph_vg2"
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:        }
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]:    ]
Jan 29 12:18:42 np0005601226 ecstatic_joliot[248954]: }
Jan 29 12:18:42 np0005601226 systemd[1]: libpod-635ec7864ab0499703244bc7a090ee6a88d7dc7859e7cc38ddafb3f5737c515c.scope: Deactivated successfully.
Jan 29 12:18:42 np0005601226 podman[248918]: 2026-01-29 17:18:42.667968278 +0000 UTC m=+1.277728665 container attach 635ec7864ab0499703244bc7a090ee6a88d7dc7859e7cc38ddafb3f5737c515c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_joliot, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:18:42 np0005601226 podman[248918]: 2026-01-29 17:18:42.669240343 +0000 UTC m=+1.279000730 container died 635ec7864ab0499703244bc7a090ee6a88d7dc7859e7cc38ddafb3f5737c515c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_joliot, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 12:18:43 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:18:43 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3097514100' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:18:43 np0005601226 nova_compute[239456]: 2026-01-29 17:18:43.213 239460 DEBUG oslo_concurrency.processutils [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.725s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:18:43 np0005601226 nova_compute[239456]: 2026-01-29 17:18:43.215 239460 DEBUG nova.virt.libvirt.vif [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-29T17:18:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-1398955748',display_name='tempest-VolumesActionsTest-instance-1398955748',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-1398955748',id=3,image_ref='71879218-5462-43bb-aba6-6319695b24fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='36b7f0db63d84c34b521603b194a3d9b',ramdisk_id='',reservation_id='r-ac7fkuyx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='71879218-5462-43bb-aba6-6319695b24fd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesActionsTest-927778289',owner_user_name='tempest-VolumesActionsTest-927778289-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-29T17:18:36Z,user_data=None,user_id='814a809cf2434fc5bdc86a907c6f923d',uuid=e70499b1-fe73-43d6-b879-f6e0ab20b701,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "093f0958-8f1b-4067-8692-210a7328406c", "address": "fa:16:3e:a6:1e:79", "network": {"id": "894c211c-3e65-4d00-831b-021ae0267115", "bridge": "br-int", "label": "tempest-VolumesActionsTest-174971779-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36b7f0db63d84c34b521603b194a3d9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap093f0958-8f", "ovs_interfaceid": "093f0958-8f1b-4067-8692-210a7328406c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 29 12:18:43 np0005601226 nova_compute[239456]: 2026-01-29 17:18:43.215 239460 DEBUG nova.network.os_vif_util [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Converting VIF {"id": "093f0958-8f1b-4067-8692-210a7328406c", "address": "fa:16:3e:a6:1e:79", "network": {"id": "894c211c-3e65-4d00-831b-021ae0267115", "bridge": "br-int", "label": "tempest-VolumesActionsTest-174971779-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36b7f0db63d84c34b521603b194a3d9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap093f0958-8f", "ovs_interfaceid": "093f0958-8f1b-4067-8692-210a7328406c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 29 12:18:43 np0005601226 nova_compute[239456]: 2026-01-29 17:18:43.215 239460 DEBUG nova.network.os_vif_util [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a6:1e:79,bridge_name='br-int',has_traffic_filtering=True,id=093f0958-8f1b-4067-8692-210a7328406c,network=Network(894c211c-3e65-4d00-831b-021ae0267115),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap093f0958-8f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 29 12:18:43 np0005601226 nova_compute[239456]: 2026-01-29 17:18:43.216 239460 DEBUG nova.objects.instance [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Lazy-loading 'pci_devices' on Instance uuid e70499b1-fe73-43d6-b879-f6e0ab20b701 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:18:43 np0005601226 nova_compute[239456]: 2026-01-29 17:18:43.233 239460 DEBUG nova.virt.libvirt.driver [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: e70499b1-fe73-43d6-b879-f6e0ab20b701] End _get_guest_xml xml=<domain type="kvm">
Jan 29 12:18:43 np0005601226 nova_compute[239456]:  <uuid>e70499b1-fe73-43d6-b879-f6e0ab20b701</uuid>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:  <name>instance-00000003</name>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:  <memory>131072</memory>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:  <vcpu>1</vcpu>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:  <metadata>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 29 12:18:43 np0005601226 nova_compute[239456]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:      <nova:name>tempest-VolumesActionsTest-instance-1398955748</nova:name>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:      <nova:creationTime>2026-01-29 17:18:41</nova:creationTime>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:      <nova:flavor name="m1.nano">
Jan 29 12:18:43 np0005601226 nova_compute[239456]:        <nova:memory>128</nova:memory>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:        <nova:disk>1</nova:disk>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:        <nova:swap>0</nova:swap>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:        <nova:ephemeral>0</nova:ephemeral>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:        <nova:vcpus>1</nova:vcpus>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:      </nova:flavor>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:      <nova:owner>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:        <nova:user uuid="814a809cf2434fc5bdc86a907c6f923d">tempest-VolumesActionsTest-927778289-project-member</nova:user>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:        <nova:project uuid="36b7f0db63d84c34b521603b194a3d9b">tempest-VolumesActionsTest-927778289</nova:project>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:      </nova:owner>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:      <nova:root type="image" uuid="71879218-5462-43bb-aba6-6319695b24fd"/>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:      <nova:ports>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:        <nova:port uuid="093f0958-8f1b-4067-8692-210a7328406c">
Jan 29 12:18:43 np0005601226 nova_compute[239456]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:        </nova:port>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:      </nova:ports>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:    </nova:instance>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:  </metadata>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:  <sysinfo type="smbios">
Jan 29 12:18:43 np0005601226 nova_compute[239456]:    <system>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:      <entry name="manufacturer">RDO</entry>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:      <entry name="product">OpenStack Compute</entry>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:      <entry name="serial">e70499b1-fe73-43d6-b879-f6e0ab20b701</entry>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:      <entry name="uuid">e70499b1-fe73-43d6-b879-f6e0ab20b701</entry>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:      <entry name="family">Virtual Machine</entry>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:    </system>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:  </sysinfo>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:  <os>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:    <boot dev="hd"/>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:    <smbios mode="sysinfo"/>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:  </os>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:  <features>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:    <acpi/>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:    <apic/>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:    <vmcoreinfo/>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:  </features>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:  <clock offset="utc">
Jan 29 12:18:43 np0005601226 nova_compute[239456]:    <timer name="pit" tickpolicy="delay"/>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:    <timer name="hpet" present="no"/>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:  </clock>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:  <cpu mode="host-model" match="exact">
Jan 29 12:18:43 np0005601226 nova_compute[239456]:    <topology sockets="1" cores="1" threads="1"/>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:  </cpu>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:  <devices>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:    <disk type="network" device="disk">
Jan 29 12:18:43 np0005601226 nova_compute[239456]:      <driver type="raw" cache="none"/>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:      <source protocol="rbd" name="vms/e70499b1-fe73-43d6-b879-f6e0ab20b701_disk">
Jan 29 12:18:43 np0005601226 nova_compute[239456]:        <host name="192.168.122.100" port="6789"/>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:      </source>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:      <auth username="openstack">
Jan 29 12:18:43 np0005601226 nova_compute[239456]:        <secret type="ceph" uuid="cc5c72e3-31e0-58b9-8731-456117d38f4a"/>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:      </auth>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:      <target dev="vda" bus="virtio"/>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:    </disk>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:    <disk type="network" device="cdrom">
Jan 29 12:18:43 np0005601226 nova_compute[239456]:      <driver type="raw" cache="none"/>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:      <source protocol="rbd" name="vms/e70499b1-fe73-43d6-b879-f6e0ab20b701_disk.config">
Jan 29 12:18:43 np0005601226 nova_compute[239456]:        <host name="192.168.122.100" port="6789"/>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:      </source>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:      <auth username="openstack">
Jan 29 12:18:43 np0005601226 nova_compute[239456]:        <secret type="ceph" uuid="cc5c72e3-31e0-58b9-8731-456117d38f4a"/>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:      </auth>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:      <target dev="sda" bus="sata"/>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:    </disk>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:    <interface type="ethernet">
Jan 29 12:18:43 np0005601226 nova_compute[239456]:      <mac address="fa:16:3e:a6:1e:79"/>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:      <model type="virtio"/>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:      <driver name="vhost" rx_queue_size="512"/>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:      <mtu size="1442"/>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:      <target dev="tap093f0958-8f"/>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:    </interface>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:    <serial type="pty">
Jan 29 12:18:43 np0005601226 nova_compute[239456]:      <log file="/var/lib/nova/instances/e70499b1-fe73-43d6-b879-f6e0ab20b701/console.log" append="off"/>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:    </serial>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:    <video>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:      <model type="virtio"/>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:    </video>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:    <input type="tablet" bus="usb"/>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:    <rng model="virtio">
Jan 29 12:18:43 np0005601226 nova_compute[239456]:      <backend model="random">/dev/urandom</backend>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:    </rng>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root"/>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:    <controller type="usb" index="0"/>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:    <memballoon model="virtio">
Jan 29 12:18:43 np0005601226 nova_compute[239456]:      <stats period="10"/>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:    </memballoon>
Jan 29 12:18:43 np0005601226 nova_compute[239456]:  </devices>
Jan 29 12:18:43 np0005601226 nova_compute[239456]: </domain>
Jan 29 12:18:43 np0005601226 nova_compute[239456]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 29 12:18:43 np0005601226 nova_compute[239456]: 2026-01-29 17:18:43.234 239460 DEBUG nova.compute.manager [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: e70499b1-fe73-43d6-b879-f6e0ab20b701] Preparing to wait for external event network-vif-plugged-093f0958-8f1b-4067-8692-210a7328406c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 29 12:18:43 np0005601226 nova_compute[239456]: 2026-01-29 17:18:43.235 239460 DEBUG oslo_concurrency.lockutils [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Acquiring lock "e70499b1-fe73-43d6-b879-f6e0ab20b701-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:18:43 np0005601226 nova_compute[239456]: 2026-01-29 17:18:43.235 239460 DEBUG oslo_concurrency.lockutils [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Lock "e70499b1-fe73-43d6-b879-f6e0ab20b701-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:18:43 np0005601226 nova_compute[239456]: 2026-01-29 17:18:43.235 239460 DEBUG oslo_concurrency.lockutils [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Lock "e70499b1-fe73-43d6-b879-f6e0ab20b701-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:18:43 np0005601226 nova_compute[239456]: 2026-01-29 17:18:43.236 239460 DEBUG nova.virt.libvirt.vif [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-29T17:18:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-1398955748',display_name='tempest-VolumesActionsTest-instance-1398955748',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-1398955748',id=3,image_ref='71879218-5462-43bb-aba6-6319695b24fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='36b7f0db63d84c34b521603b194a3d9b',ramdisk_id='',reservation_id='r-ac7fkuyx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='71879218-5462-43bb-aba6-6319695b24fd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesActionsTest-927778289',owner_user_name='tempest-VolumesActionsTest-927778289-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-29T17:18:36Z,user_data=None,user_id='814a809cf2434fc5bdc86a907c6f923d',uuid=e70499b1-fe73-43d6-b879-f6e0ab20b701,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "093f0958-8f1b-4067-8692-210a7328406c", "address": "fa:16:3e:a6:1e:79", "network": {"id": "894c211c-3e65-4d00-831b-021ae0267115", "bridge": "br-int", "label": "tempest-VolumesActionsTest-174971779-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36b7f0db63d84c34b521603b194a3d9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap093f0958-8f", "ovs_interfaceid": "093f0958-8f1b-4067-8692-210a7328406c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 29 12:18:43 np0005601226 nova_compute[239456]: 2026-01-29 17:18:43.236 239460 DEBUG nova.network.os_vif_util [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Converting VIF {"id": "093f0958-8f1b-4067-8692-210a7328406c", "address": "fa:16:3e:a6:1e:79", "network": {"id": "894c211c-3e65-4d00-831b-021ae0267115", "bridge": "br-int", "label": "tempest-VolumesActionsTest-174971779-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36b7f0db63d84c34b521603b194a3d9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap093f0958-8f", "ovs_interfaceid": "093f0958-8f1b-4067-8692-210a7328406c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 29 12:18:43 np0005601226 nova_compute[239456]: 2026-01-29 17:18:43.237 239460 DEBUG nova.network.os_vif_util [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a6:1e:79,bridge_name='br-int',has_traffic_filtering=True,id=093f0958-8f1b-4067-8692-210a7328406c,network=Network(894c211c-3e65-4d00-831b-021ae0267115),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap093f0958-8f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 29 12:18:43 np0005601226 nova_compute[239456]: 2026-01-29 17:18:43.238 239460 DEBUG os_vif [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:1e:79,bridge_name='br-int',has_traffic_filtering=True,id=093f0958-8f1b-4067-8692-210a7328406c,network=Network(894c211c-3e65-4d00-831b-021ae0267115),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap093f0958-8f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 29 12:18:43 np0005601226 nova_compute[239456]: 2026-01-29 17:18:43.239 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:18:43 np0005601226 systemd[1]: var-lib-containers-storage-overlay-cb2677e5afba56707626164fe56a3cb316ec20714d87ba6a2d95b20cb2660b9e-merged.mount: Deactivated successfully.
Jan 29 12:18:43 np0005601226 nova_compute[239456]: 2026-01-29 17:18:43.240 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:18:43 np0005601226 nova_compute[239456]: 2026-01-29 17:18:43.241 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 29 12:18:43 np0005601226 nova_compute[239456]: 2026-01-29 17:18:43.246 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:18:43 np0005601226 nova_compute[239456]: 2026-01-29 17:18:43.246 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap093f0958-8f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:18:43 np0005601226 nova_compute[239456]: 2026-01-29 17:18:43.246 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap093f0958-8f, col_values=(('external_ids', {'iface-id': '093f0958-8f1b-4067-8692-210a7328406c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a6:1e:79', 'vm-uuid': 'e70499b1-fe73-43d6-b879-f6e0ab20b701'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:18:43 np0005601226 nova_compute[239456]: 2026-01-29 17:18:43.319 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:18:43 np0005601226 NetworkManager[49020]: <info>  [1769707123.3205] manager: (tap093f0958-8f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/33)
Jan 29 12:18:43 np0005601226 nova_compute[239456]: 2026-01-29 17:18:43.322 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 29 12:18:43 np0005601226 nova_compute[239456]: 2026-01-29 17:18:43.324 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:18:43 np0005601226 nova_compute[239456]: 2026-01-29 17:18:43.325 239460 INFO os_vif [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:1e:79,bridge_name='br-int',has_traffic_filtering=True,id=093f0958-8f1b-4067-8692-210a7328406c,network=Network(894c211c-3e65-4d00-831b-021ae0267115),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap093f0958-8f')#033[00m
Jan 29 12:18:43 np0005601226 nova_compute[239456]: 2026-01-29 17:18:43.535 239460 DEBUG nova.virt.libvirt.driver [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:18:43 np0005601226 nova_compute[239456]: 2026-01-29 17:18:43.536 239460 DEBUG nova.virt.libvirt.driver [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:18:43 np0005601226 nova_compute[239456]: 2026-01-29 17:18:43.536 239460 DEBUG nova.virt.libvirt.driver [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] No VIF found with MAC fa:16:3e:a6:1e:79, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 29 12:18:43 np0005601226 nova_compute[239456]: 2026-01-29 17:18:43.536 239460 INFO nova.virt.libvirt.driver [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: e70499b1-fe73-43d6-b879-f6e0ab20b701] Using config drive#033[00m
Jan 29 12:18:43 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1051: 305 pgs: 305 active+clean; 199 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 276 KiB/s rd, 2.4 MiB/s wr, 67 op/s
Jan 29 12:18:43 np0005601226 nova_compute[239456]: 2026-01-29 17:18:43.738 239460 DEBUG nova.storage.rbd_utils [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] rbd image e70499b1-fe73-43d6-b879-f6e0ab20b701_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:18:43 np0005601226 podman[248918]: 2026-01-29 17:18:43.918396797 +0000 UTC m=+2.528157164 container remove 635ec7864ab0499703244bc7a090ee6a88d7dc7859e7cc38ddafb3f5737c515c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_joliot, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 29 12:18:43 np0005601226 systemd[1]: libpod-conmon-635ec7864ab0499703244bc7a090ee6a88d7dc7859e7cc38ddafb3f5737c515c.scope: Deactivated successfully.
Jan 29 12:18:44 np0005601226 nova_compute[239456]: 2026-01-29 17:18:44.274 239460 INFO nova.virt.libvirt.driver [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: e70499b1-fe73-43d6-b879-f6e0ab20b701] Creating config drive at /var/lib/nova/instances/e70499b1-fe73-43d6-b879-f6e0ab20b701/disk.config#033[00m
Jan 29 12:18:44 np0005601226 nova_compute[239456]: 2026-01-29 17:18:44.278 239460 DEBUG oslo_concurrency.processutils [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e70499b1-fe73-43d6-b879-f6e0ab20b701/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqaub8qcu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:18:44 np0005601226 podman[249120]: 2026-01-29 17:18:44.259022789 +0000 UTC m=+0.019364310 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:18:44 np0005601226 nova_compute[239456]: 2026-01-29 17:18:44.395 239460 DEBUG oslo_concurrency.processutils [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e70499b1-fe73-43d6-b879-f6e0ab20b701/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqaub8qcu" returned: 0 in 0.118s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:18:44 np0005601226 nova_compute[239456]: 2026-01-29 17:18:44.412 239460 DEBUG nova.storage.rbd_utils [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] rbd image e70499b1-fe73-43d6-b879-f6e0ab20b701_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:18:44 np0005601226 nova_compute[239456]: 2026-01-29 17:18:44.415 239460 DEBUG oslo_concurrency.processutils [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e70499b1-fe73-43d6-b879-f6e0ab20b701/disk.config e70499b1-fe73-43d6-b879-f6e0ab20b701_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:18:44 np0005601226 podman[249120]: 2026-01-29 17:18:44.945134794 +0000 UTC m=+0.705476315 container create d6295de1bb7916bf59ce93b11539c19b78256a7d8b82f9cb218fa2cbf8006e81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:18:45 np0005601226 systemd[1]: Started libpod-conmon-d6295de1bb7916bf59ce93b11539c19b78256a7d8b82f9cb218fa2cbf8006e81.scope.
Jan 29 12:18:45 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:18:45 np0005601226 podman[249120]: 2026-01-29 17:18:45.577680708 +0000 UTC m=+1.338022239 container init d6295de1bb7916bf59ce93b11539c19b78256a7d8b82f9cb218fa2cbf8006e81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_goldstine, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:18:45 np0005601226 podman[249120]: 2026-01-29 17:18:45.583052465 +0000 UTC m=+1.343393966 container start d6295de1bb7916bf59ce93b11539c19b78256a7d8b82f9cb218fa2cbf8006e81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:18:45 np0005601226 festive_goldstine[249172]: 167 167
Jan 29 12:18:45 np0005601226 systemd[1]: libpod-d6295de1bb7916bf59ce93b11539c19b78256a7d8b82f9cb218fa2cbf8006e81.scope: Deactivated successfully.
Jan 29 12:18:45 np0005601226 conmon[249172]: conmon d6295de1bb7916bf59ce <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d6295de1bb7916bf59ce93b11539c19b78256a7d8b82f9cb218fa2cbf8006e81.scope/container/memory.events
Jan 29 12:18:45 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1052: 305 pgs: 305 active+clean; 213 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 293 KiB/s rd, 2.4 MiB/s wr, 71 op/s
Jan 29 12:18:45 np0005601226 podman[249120]: 2026-01-29 17:18:45.664041204 +0000 UTC m=+1.424382715 container attach d6295de1bb7916bf59ce93b11539c19b78256a7d8b82f9cb218fa2cbf8006e81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_goldstine, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3)
Jan 29 12:18:45 np0005601226 podman[249120]: 2026-01-29 17:18:45.664839566 +0000 UTC m=+1.425181067 container died d6295de1bb7916bf59ce93b11539c19b78256a7d8b82f9cb218fa2cbf8006e81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 12:18:45 np0005601226 nova_compute[239456]: 2026-01-29 17:18:45.890 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:18:46 np0005601226 systemd[1]: var-lib-containers-storage-overlay-083d81912a84b134e0d55e2705d931c9dfe56034c91c84bfad7b265370f41006-merged.mount: Deactivated successfully.
Jan 29 12:18:46 np0005601226 nova_compute[239456]: 2026-01-29 17:18:46.590 239460 DEBUG oslo_concurrency.processutils [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e70499b1-fe73-43d6-b879-f6e0ab20b701/disk.config e70499b1-fe73-43d6-b879-f6e0ab20b701_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:18:46 np0005601226 nova_compute[239456]: 2026-01-29 17:18:46.590 239460 INFO nova.virt.libvirt.driver [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: e70499b1-fe73-43d6-b879-f6e0ab20b701] Deleting local config drive /var/lib/nova/instances/e70499b1-fe73-43d6-b879-f6e0ab20b701/disk.config because it was imported into RBD.#033[00m
Jan 29 12:18:46 np0005601226 kernel: tap093f0958-8f: entered promiscuous mode
Jan 29 12:18:46 np0005601226 NetworkManager[49020]: <info>  [1769707126.6231] manager: (tap093f0958-8f): new Tun device (/org/freedesktop/NetworkManager/Devices/34)
Jan 29 12:18:46 np0005601226 ovn_controller[145556]: 2026-01-29T17:18:46Z|00043|binding|INFO|Claiming lport 093f0958-8f1b-4067-8692-210a7328406c for this chassis.
Jan 29 12:18:46 np0005601226 nova_compute[239456]: 2026-01-29 17:18:46.625 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:18:46 np0005601226 ovn_controller[145556]: 2026-01-29T17:18:46Z|00044|binding|INFO|093f0958-8f1b-4067-8692-210a7328406c: Claiming fa:16:3e:a6:1e:79 10.100.0.9
Jan 29 12:18:46 np0005601226 ovn_controller[145556]: 2026-01-29T17:18:46Z|00045|binding|INFO|Setting lport 093f0958-8f1b-4067-8692-210a7328406c ovn-installed in OVS
Jan 29 12:18:46 np0005601226 nova_compute[239456]: 2026-01-29 17:18:46.631 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:18:46 np0005601226 nova_compute[239456]: 2026-01-29 17:18:46.633 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:18:46 np0005601226 ovn_controller[145556]: 2026-01-29T17:18:46Z|00046|binding|INFO|Setting lport 093f0958-8f1b-4067-8692-210a7328406c up in Southbound
Jan 29 12:18:46 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:18:46.636 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a6:1e:79 10.100.0.9'], port_security=['fa:16:3e:a6:1e:79 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'e70499b1-fe73-43d6-b879-f6e0ab20b701', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-894c211c-3e65-4d00-831b-021ae0267115', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '36b7f0db63d84c34b521603b194a3d9b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '66c5d1d7-dfe8-4ff3-b9ba-ea8b4f693602', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c0a118e9-f2d1-494c-89ac-62b6957c48ed, chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], logical_port=093f0958-8f1b-4067-8692-210a7328406c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:18:46 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:18:46.637 155625 INFO neutron.agent.ovn.metadata.agent [-] Port 093f0958-8f1b-4067-8692-210a7328406c in datapath 894c211c-3e65-4d00-831b-021ae0267115 bound to our chassis#033[00m
Jan 29 12:18:46 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:18:46.640 155625 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 894c211c-3e65-4d00-831b-021ae0267115#033[00m
Jan 29 12:18:46 np0005601226 systemd-machined[207561]: New machine qemu-3-instance-00000003.
Jan 29 12:18:46 np0005601226 systemd-udevd[249206]: Network interface NamePolicy= disabled on kernel command line.
Jan 29 12:18:46 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:18:46.653 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[de55d9f0-41e0-4930-bb9c-7c679437c225]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:18:46 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:18:46.654 155625 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap894c211c-31 in ovnmeta-894c211c-3e65-4d00-831b-021ae0267115 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 29 12:18:46 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:18:46.655 246354 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap894c211c-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 29 12:18:46 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:18:46.655 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[a64e1aa0-15b6-4833-b4ff-6eb6dbee09a4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:18:46 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:18:46.656 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[6f6560b1-1cc1-4f14-80bf-462dc1516109]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:18:46 np0005601226 systemd[1]: Started Virtual Machine qemu-3-instance-00000003.
Jan 29 12:18:46 np0005601226 NetworkManager[49020]: <info>  [1769707126.6655] device (tap093f0958-8f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 29 12:18:46 np0005601226 NetworkManager[49020]: <info>  [1769707126.6661] device (tap093f0958-8f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 29 12:18:46 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:18:46.682 156164 DEBUG oslo.privsep.daemon [-] privsep: reply[710553ab-d0af-41de-bb74-48bf5dfa5d5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:18:46 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:18:46.703 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[5093474b-9860-4521-901a-caf1e321396a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:18:46 np0005601226 podman[249120]: 2026-01-29 17:18:46.710862559 +0000 UTC m=+2.471204060 container remove d6295de1bb7916bf59ce93b11539c19b78256a7d8b82f9cb218fa2cbf8006e81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_goldstine, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 29 12:18:46 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:18:46.725 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[63c6bd3c-5924-4219-aaf8-6699f28ccd07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:18:46 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:18:46.731 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[ad36c38b-05e9-47a5-8c14-1e734fc84589]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:18:46 np0005601226 NetworkManager[49020]: <info>  [1769707126.7325] manager: (tap894c211c-30): new Veth device (/org/freedesktop/NetworkManager/Devices/35)
Jan 29 12:18:46 np0005601226 systemd-udevd[249209]: Network interface NamePolicy= disabled on kernel command line.
Jan 29 12:18:46 np0005601226 systemd[1]: libpod-conmon-d6295de1bb7916bf59ce93b11539c19b78256a7d8b82f9cb218fa2cbf8006e81.scope: Deactivated successfully.
Jan 29 12:18:46 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:18:46.760 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[1d1d251d-e8a0-4935-8471-4c3c06642173]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:18:46 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:18:46.763 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[188e78f1-64ad-441d-bf93-b2be1faf9ecf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:18:46 np0005601226 NetworkManager[49020]: <info>  [1769707126.7776] device (tap894c211c-30): carrier: link connected
Jan 29 12:18:46 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:18:46.780 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[39d15c51-98cb-4b38-8146-89d23bb41b1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:18:46 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:18:46.793 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[6adb2494-2081-4a4b-bf30-f66576df445a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap894c211c-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:db:8f:16'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 448028, 'reachable_time': 36382, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249240, 'error': None, 'target': 'ovnmeta-894c211c-3e65-4d00-831b-021ae0267115', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:18:46 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:18:46.802 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[a5c72c9f-bf04-4259-9df2-62b11ba8544d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedb:8f16'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 448028, 'tstamp': 448028}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 249242, 'error': None, 'target': 'ovnmeta-894c211c-3e65-4d00-831b-021ae0267115', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:18:46 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:18:46.816 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[2442a94f-6f02-4e17-beaf-fc67be0f0d89]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap894c211c-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:db:8f:16'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 448028, 'reachable_time': 36382, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 249243, 'error': None, 'target': 'ovnmeta-894c211c-3e65-4d00-831b-021ae0267115', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:18:46 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:18:46.836 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[37459135-b686-4890-9036-8b8bdd156e91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:18:46 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:18:46.893 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[f1998fcc-16f7-417a-824a-c3424df8943b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:18:46 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:18:46.894 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap894c211c-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:18:46 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:18:46.895 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 29 12:18:46 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:18:46.895 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap894c211c-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:18:46 np0005601226 NetworkManager[49020]: <info>  [1769707126.8973] manager: (tap894c211c-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Jan 29 12:18:46 np0005601226 nova_compute[239456]: 2026-01-29 17:18:46.897 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:18:46 np0005601226 kernel: tap894c211c-30: entered promiscuous mode
Jan 29 12:18:46 np0005601226 podman[249251]: 2026-01-29 17:18:46.901672674 +0000 UTC m=+0.063324429 container create 0dbd724f19b17bf863abac00437484a2ca514ca765649f2fde30e4d9947438fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_hugle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:18:46 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:18:46.902 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap894c211c-30, col_values=(('external_ids', {'iface-id': '1883e985-6845-4407-9b73-3530d9391c43'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:18:46 np0005601226 nova_compute[239456]: 2026-01-29 17:18:46.903 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:18:46 np0005601226 ovn_controller[145556]: 2026-01-29T17:18:46Z|00047|binding|INFO|Releasing lport 1883e985-6845-4407-9b73-3530d9391c43 from this chassis (sb_readonly=0)
Jan 29 12:18:46 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:18:46.904 155625 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/894c211c-3e65-4d00-831b-021ae0267115.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/894c211c-3e65-4d00-831b-021ae0267115.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 29 12:18:46 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:18:46.905 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[cf36e88c-970e-49c0-a610-ba7a7ee68307]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:18:46 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:18:46.907 155625 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 29 12:18:46 np0005601226 ovn_metadata_agent[155620]: global
Jan 29 12:18:46 np0005601226 ovn_metadata_agent[155620]:    log         /dev/log local0 debug
Jan 29 12:18:46 np0005601226 ovn_metadata_agent[155620]:    log-tag     haproxy-metadata-proxy-894c211c-3e65-4d00-831b-021ae0267115
Jan 29 12:18:46 np0005601226 ovn_metadata_agent[155620]:    user        root
Jan 29 12:18:46 np0005601226 ovn_metadata_agent[155620]:    group       root
Jan 29 12:18:46 np0005601226 ovn_metadata_agent[155620]:    maxconn     1024
Jan 29 12:18:46 np0005601226 ovn_metadata_agent[155620]:    pidfile     /var/lib/neutron/external/pids/894c211c-3e65-4d00-831b-021ae0267115.pid.haproxy
Jan 29 12:18:46 np0005601226 ovn_metadata_agent[155620]:    daemon
Jan 29 12:18:46 np0005601226 ovn_metadata_agent[155620]: 
Jan 29 12:18:46 np0005601226 ovn_metadata_agent[155620]: defaults
Jan 29 12:18:46 np0005601226 ovn_metadata_agent[155620]:    log global
Jan 29 12:18:46 np0005601226 ovn_metadata_agent[155620]:    mode http
Jan 29 12:18:46 np0005601226 ovn_metadata_agent[155620]:    option httplog
Jan 29 12:18:46 np0005601226 ovn_metadata_agent[155620]:    option dontlognull
Jan 29 12:18:46 np0005601226 ovn_metadata_agent[155620]:    option http-server-close
Jan 29 12:18:46 np0005601226 ovn_metadata_agent[155620]:    option forwardfor
Jan 29 12:18:46 np0005601226 ovn_metadata_agent[155620]:    retries                 3
Jan 29 12:18:46 np0005601226 ovn_metadata_agent[155620]:    timeout http-request    30s
Jan 29 12:18:46 np0005601226 ovn_metadata_agent[155620]:    timeout connect         30s
Jan 29 12:18:46 np0005601226 ovn_metadata_agent[155620]:    timeout client          32s
Jan 29 12:18:46 np0005601226 ovn_metadata_agent[155620]:    timeout server          32s
Jan 29 12:18:46 np0005601226 ovn_metadata_agent[155620]:    timeout http-keep-alive 30s
Jan 29 12:18:46 np0005601226 ovn_metadata_agent[155620]: 
Jan 29 12:18:46 np0005601226 ovn_metadata_agent[155620]: 
Jan 29 12:18:46 np0005601226 ovn_metadata_agent[155620]: listen listener
Jan 29 12:18:46 np0005601226 ovn_metadata_agent[155620]:    bind 169.254.169.254:80
Jan 29 12:18:46 np0005601226 ovn_metadata_agent[155620]:    server metadata /var/lib/neutron/metadata_proxy
Jan 29 12:18:46 np0005601226 ovn_metadata_agent[155620]:    http-request add-header X-OVN-Network-ID 894c211c-3e65-4d00-831b-021ae0267115
Jan 29 12:18:46 np0005601226 ovn_metadata_agent[155620]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 29 12:18:46 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:18:46.908 155625 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-894c211c-3e65-4d00-831b-021ae0267115', 'env', 'PROCESS_TAG=haproxy-894c211c-3e65-4d00-831b-021ae0267115', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/894c211c-3e65-4d00-831b-021ae0267115.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 29 12:18:46 np0005601226 nova_compute[239456]: 2026-01-29 17:18:46.908 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:18:46 np0005601226 podman[249251]: 2026-01-29 17:18:46.856523272 +0000 UTC m=+0.018175057 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:18:46 np0005601226 systemd[1]: Started libpod-conmon-0dbd724f19b17bf863abac00437484a2ca514ca765649f2fde30e4d9947438fc.scope.
Jan 29 12:18:46 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:18:47 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a25d74c99ebe0fda6cd900ef5e9c06a4c9da92ad41c9b058ca318217a6e15a03/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:18:47 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a25d74c99ebe0fda6cd900ef5e9c06a4c9da92ad41c9b058ca318217a6e15a03/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:18:47 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a25d74c99ebe0fda6cd900ef5e9c06a4c9da92ad41c9b058ca318217a6e15a03/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:18:47 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a25d74c99ebe0fda6cd900ef5e9c06a4c9da92ad41c9b058ca318217a6e15a03/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:18:47 np0005601226 podman[249251]: 2026-01-29 17:18:47.267299448 +0000 UTC m=+0.428951253 container init 0dbd724f19b17bf863abac00437484a2ca514ca765649f2fde30e4d9947438fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:18:47 np0005601226 podman[249251]: 2026-01-29 17:18:47.275919983 +0000 UTC m=+0.437571748 container start 0dbd724f19b17bf863abac00437484a2ca514ca765649f2fde30e4d9947438fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_hugle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:18:47 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:18:47 np0005601226 podman[249251]: 2026-01-29 17:18:47.386613302 +0000 UTC m=+0.548265097 container attach 0dbd724f19b17bf863abac00437484a2ca514ca765649f2fde30e4d9947438fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:18:47 np0005601226 nova_compute[239456]: 2026-01-29 17:18:47.420 239460 DEBUG nova.compute.manager [req-ad895666-e68a-4f42-b518-776cabf46e29 req-ff1f6831-24b8-4cc5-9415-49652e0fbdb1 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: e70499b1-fe73-43d6-b879-f6e0ab20b701] Received event network-vif-plugged-093f0958-8f1b-4067-8692-210a7328406c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 29 12:18:47 np0005601226 nova_compute[239456]: 2026-01-29 17:18:47.421 239460 DEBUG oslo_concurrency.lockutils [req-ad895666-e68a-4f42-b518-776cabf46e29 req-ff1f6831-24b8-4cc5-9415-49652e0fbdb1 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "e70499b1-fe73-43d6-b879-f6e0ab20b701-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 12:18:47 np0005601226 nova_compute[239456]: 2026-01-29 17:18:47.421 239460 DEBUG oslo_concurrency.lockutils [req-ad895666-e68a-4f42-b518-776cabf46e29 req-ff1f6831-24b8-4cc5-9415-49652e0fbdb1 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "e70499b1-fe73-43d6-b879-f6e0ab20b701-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 12:18:47 np0005601226 nova_compute[239456]: 2026-01-29 17:18:47.422 239460 DEBUG oslo_concurrency.lockutils [req-ad895666-e68a-4f42-b518-776cabf46e29 req-ff1f6831-24b8-4cc5-9415-49652e0fbdb1 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "e70499b1-fe73-43d6-b879-f6e0ab20b701-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 12:18:47 np0005601226 nova_compute[239456]: 2026-01-29 17:18:47.422 239460 DEBUG nova.compute.manager [req-ad895666-e68a-4f42-b518-776cabf46e29 req-ff1f6831-24b8-4cc5-9415-49652e0fbdb1 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: e70499b1-fe73-43d6-b879-f6e0ab20b701] Processing event network-vif-plugged-093f0958-8f1b-4067-8692-210a7328406c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 29 12:18:47 np0005601226 nova_compute[239456]: 2026-01-29 17:18:47.498 239460 DEBUG nova.compute.manager [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: e70499b1-fe73-43d6-b879-f6e0ab20b701] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 29 12:18:47 np0005601226 nova_compute[239456]: 2026-01-29 17:18:47.499 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769707127.4982603, e70499b1-fe73-43d6-b879-f6e0ab20b701 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 29 12:18:47 np0005601226 nova_compute[239456]: 2026-01-29 17:18:47.499 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: e70499b1-fe73-43d6-b879-f6e0ab20b701] VM Started (Lifecycle Event)
Jan 29 12:18:47 np0005601226 nova_compute[239456]: 2026-01-29 17:18:47.502 239460 DEBUG nova.virt.libvirt.driver [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: e70499b1-fe73-43d6-b879-f6e0ab20b701] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 29 12:18:47 np0005601226 nova_compute[239456]: 2026-01-29 17:18:47.504 239460 INFO nova.virt.libvirt.driver [-] [instance: e70499b1-fe73-43d6-b879-f6e0ab20b701] Instance spawned successfully.
Jan 29 12:18:47 np0005601226 nova_compute[239456]: 2026-01-29 17:18:47.505 239460 DEBUG nova.virt.libvirt.driver [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: e70499b1-fe73-43d6-b879-f6e0ab20b701] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 29 12:18:47 np0005601226 nova_compute[239456]: 2026-01-29 17:18:47.526 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: e70499b1-fe73-43d6-b879-f6e0ab20b701] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 29 12:18:47 np0005601226 nova_compute[239456]: 2026-01-29 17:18:47.531 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: e70499b1-fe73-43d6-b879-f6e0ab20b701] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 29 12:18:47 np0005601226 nova_compute[239456]: 2026-01-29 17:18:47.535 239460 DEBUG nova.virt.libvirt.driver [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: e70499b1-fe73-43d6-b879-f6e0ab20b701] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 29 12:18:47 np0005601226 nova_compute[239456]: 2026-01-29 17:18:47.536 239460 DEBUG nova.virt.libvirt.driver [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: e70499b1-fe73-43d6-b879-f6e0ab20b701] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 29 12:18:47 np0005601226 nova_compute[239456]: 2026-01-29 17:18:47.536 239460 DEBUG nova.virt.libvirt.driver [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: e70499b1-fe73-43d6-b879-f6e0ab20b701] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 29 12:18:47 np0005601226 nova_compute[239456]: 2026-01-29 17:18:47.537 239460 DEBUG nova.virt.libvirt.driver [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: e70499b1-fe73-43d6-b879-f6e0ab20b701] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 29 12:18:47 np0005601226 nova_compute[239456]: 2026-01-29 17:18:47.537 239460 DEBUG nova.virt.libvirt.driver [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: e70499b1-fe73-43d6-b879-f6e0ab20b701] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 29 12:18:47 np0005601226 nova_compute[239456]: 2026-01-29 17:18:47.538 239460 DEBUG nova.virt.libvirt.driver [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: e70499b1-fe73-43d6-b879-f6e0ab20b701] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 29 12:18:47 np0005601226 nova_compute[239456]: 2026-01-29 17:18:47.559 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: e70499b1-fe73-43d6-b879-f6e0ab20b701] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 29 12:18:47 np0005601226 nova_compute[239456]: 2026-01-29 17:18:47.559 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769707127.4984868, e70499b1-fe73-43d6-b879-f6e0ab20b701 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 29 12:18:47 np0005601226 nova_compute[239456]: 2026-01-29 17:18:47.560 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: e70499b1-fe73-43d6-b879-f6e0ab20b701] VM Paused (Lifecycle Event)
Jan 29 12:18:47 np0005601226 podman[249341]: 2026-01-29 17:18:47.467263772 +0000 UTC m=+0.022785433 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 29 12:18:47 np0005601226 nova_compute[239456]: 2026-01-29 17:18:47.586 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: e70499b1-fe73-43d6-b879-f6e0ab20b701] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 29 12:18:47 np0005601226 nova_compute[239456]: 2026-01-29 17:18:47.589 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769707127.5012739, e70499b1-fe73-43d6-b879-f6e0ab20b701 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 29 12:18:47 np0005601226 nova_compute[239456]: 2026-01-29 17:18:47.589 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: e70499b1-fe73-43d6-b879-f6e0ab20b701] VM Resumed (Lifecycle Event)
Jan 29 12:18:47 np0005601226 nova_compute[239456]: 2026-01-29 17:18:47.596 239460 INFO nova.compute.manager [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: e70499b1-fe73-43d6-b879-f6e0ab20b701] Took 10.98 seconds to spawn the instance on the hypervisor.
Jan 29 12:18:47 np0005601226 nova_compute[239456]: 2026-01-29 17:18:47.596 239460 DEBUG nova.compute.manager [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: e70499b1-fe73-43d6-b879-f6e0ab20b701] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 29 12:18:47 np0005601226 nova_compute[239456]: 2026-01-29 17:18:47.604 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: e70499b1-fe73-43d6-b879-f6e0ab20b701] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 29 12:18:47 np0005601226 nova_compute[239456]: 2026-01-29 17:18:47.607 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: e70499b1-fe73-43d6-b879-f6e0ab20b701] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 29 12:18:47 np0005601226 nova_compute[239456]: 2026-01-29 17:18:47.627 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: e70499b1-fe73-43d6-b879-f6e0ab20b701] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 29 12:18:47 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1053: 305 pgs: 305 active+clean; 213 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 1.9 MiB/s wr, 40 op/s
Jan 29 12:18:47 np0005601226 nova_compute[239456]: 2026-01-29 17:18:47.653 239460 INFO nova.compute.manager [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: e70499b1-fe73-43d6-b879-f6e0ab20b701] Took 12.16 seconds to build instance.
Jan 29 12:18:47 np0005601226 nova_compute[239456]: 2026-01-29 17:18:47.669 239460 DEBUG oslo_concurrency.lockutils [None req-2b3dc2fc-8672-4824-b982-b167bbf70b4e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Lock "e70499b1-fe73-43d6-b879-f6e0ab20b701" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.247s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 12:18:47 np0005601226 podman[249341]: 2026-01-29 17:18:47.694521001 +0000 UTC m=+0.250042632 container create 7e48c9045ffa499e7ca57be97b937a3812e1f42a674dd28489d19414a3e6a1ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-894c211c-3e65-4d00-831b-021ae0267115, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:18:47 np0005601226 systemd[1]: Started libpod-conmon-7e48c9045ffa499e7ca57be97b937a3812e1f42a674dd28489d19414a3e6a1ed.scope.
Jan 29 12:18:47 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:18:47 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b1a7ff5dfb932affcb128a164f63d1ad8399526d2ca60d56858c264fe114b7c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 29 12:18:47 np0005601226 podman[249341]: 2026-01-29 17:18:47.860311273 +0000 UTC m=+0.415832904 container init 7e48c9045ffa499e7ca57be97b937a3812e1f42a674dd28489d19414a3e6a1ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-894c211c-3e65-4d00-831b-021ae0267115, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 29 12:18:47 np0005601226 podman[249341]: 2026-01-29 17:18:47.865457564 +0000 UTC m=+0.420979195 container start 7e48c9045ffa499e7ca57be97b937a3812e1f42a674dd28489d19414a3e6a1ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-894c211c-3e65-4d00-831b-021ae0267115, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 29 12:18:47 np0005601226 neutron-haproxy-ovnmeta-894c211c-3e65-4d00-831b-021ae0267115[249418]: [NOTICE]   (249427) : New worker (249429) forked
Jan 29 12:18:47 np0005601226 neutron-haproxy-ovnmeta-894c211c-3e65-4d00-831b-021ae0267115[249418]: [NOTICE]   (249427) : Loading success.
Jan 29 12:18:47 np0005601226 lvm[249442]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 12:18:47 np0005601226 lvm[249442]: VG ceph_vg0 finished
Jan 29 12:18:47 np0005601226 lvm[249444]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 12:18:47 np0005601226 lvm[249444]: VG ceph_vg1 finished
Jan 29 12:18:47 np0005601226 lvm[249445]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 12:18:47 np0005601226 lvm[249445]: VG ceph_vg2 finished
Jan 29 12:18:48 np0005601226 vigorous_hugle[249274]: {}
Jan 29 12:18:48 np0005601226 systemd[1]: libpod-0dbd724f19b17bf863abac00437484a2ca514ca765649f2fde30e4d9947438fc.scope: Deactivated successfully.
Jan 29 12:18:48 np0005601226 podman[249251]: 2026-01-29 17:18:48.135262974 +0000 UTC m=+1.296914739 container died 0dbd724f19b17bf863abac00437484a2ca514ca765649f2fde30e4d9947438fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:18:48 np0005601226 nova_compute[239456]: 2026-01-29 17:18:48.319 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:18:48 np0005601226 systemd[1]: var-lib-containers-storage-overlay-a25d74c99ebe0fda6cd900ef5e9c06a4c9da92ad41c9b058ca318217a6e15a03-merged.mount: Deactivated successfully.
Jan 29 12:18:48 np0005601226 podman[249251]: 2026-01-29 17:18:48.53874728 +0000 UTC m=+1.700399035 container remove 0dbd724f19b17bf863abac00437484a2ca514ca765649f2fde30e4d9947438fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_hugle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 12:18:48 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 12:18:48 np0005601226 systemd[1]: libpod-conmon-0dbd724f19b17bf863abac00437484a2ca514ca765649f2fde30e4d9947438fc.scope: Deactivated successfully.
Jan 29 12:18:48 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:18:48 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 12:18:48 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:18:49 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:18:49 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:18:49 np0005601226 nova_compute[239456]: 2026-01-29 17:18:49.482 239460 DEBUG nova.compute.manager [req-4062e9bc-8d32-4581-828b-fe8eef9bb3f4 req-4034f533-a9f7-4767-b41e-dbb715cab3a1 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: e70499b1-fe73-43d6-b879-f6e0ab20b701] Received event network-vif-plugged-093f0958-8f1b-4067-8692-210a7328406c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 29 12:18:49 np0005601226 nova_compute[239456]: 2026-01-29 17:18:49.483 239460 DEBUG oslo_concurrency.lockutils [req-4062e9bc-8d32-4581-828b-fe8eef9bb3f4 req-4034f533-a9f7-4767-b41e-dbb715cab3a1 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "e70499b1-fe73-43d6-b879-f6e0ab20b701-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 12:18:49 np0005601226 nova_compute[239456]: 2026-01-29 17:18:49.483 239460 DEBUG oslo_concurrency.lockutils [req-4062e9bc-8d32-4581-828b-fe8eef9bb3f4 req-4034f533-a9f7-4767-b41e-dbb715cab3a1 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "e70499b1-fe73-43d6-b879-f6e0ab20b701-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 12:18:49 np0005601226 nova_compute[239456]: 2026-01-29 17:18:49.483 239460 DEBUG oslo_concurrency.lockutils [req-4062e9bc-8d32-4581-828b-fe8eef9bb3f4 req-4034f533-a9f7-4767-b41e-dbb715cab3a1 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "e70499b1-fe73-43d6-b879-f6e0ab20b701-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 12:18:49 np0005601226 nova_compute[239456]: 2026-01-29 17:18:49.483 239460 DEBUG nova.compute.manager [req-4062e9bc-8d32-4581-828b-fe8eef9bb3f4 req-4034f533-a9f7-4767-b41e-dbb715cab3a1 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: e70499b1-fe73-43d6-b879-f6e0ab20b701] No waiting events found dispatching network-vif-plugged-093f0958-8f1b-4067-8692-210a7328406c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 29 12:18:49 np0005601226 nova_compute[239456]: 2026-01-29 17:18:49.483 239460 WARNING nova.compute.manager [req-4062e9bc-8d32-4581-828b-fe8eef9bb3f4 req-4034f533-a9f7-4767-b41e-dbb715cab3a1 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: e70499b1-fe73-43d6-b879-f6e0ab20b701] Received unexpected event network-vif-plugged-093f0958-8f1b-4067-8692-210a7328406c for instance with vm_state active and task_state None.
Jan 29 12:18:49 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1054: 305 pgs: 305 active+clean; 213 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 848 KiB/s rd, 1.9 MiB/s wr, 77 op/s
Jan 29 12:18:50 np0005601226 nova_compute[239456]: 2026-01-29 17:18:50.662 239460 DEBUG oslo_concurrency.lockutils [None req-d8c7d057-d42c-4aa8-b669-22fb14eb61f1 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Acquiring lock "e70499b1-fe73-43d6-b879-f6e0ab20b701" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 12:18:50 np0005601226 nova_compute[239456]: 2026-01-29 17:18:50.662 239460 DEBUG oslo_concurrency.lockutils [None req-d8c7d057-d42c-4aa8-b669-22fb14eb61f1 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Lock "e70499b1-fe73-43d6-b879-f6e0ab20b701" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 12:18:50 np0005601226 nova_compute[239456]: 2026-01-29 17:18:50.662 239460 DEBUG oslo_concurrency.lockutils [None req-d8c7d057-d42c-4aa8-b669-22fb14eb61f1 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Acquiring lock "e70499b1-fe73-43d6-b879-f6e0ab20b701-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 12:18:50 np0005601226 nova_compute[239456]: 2026-01-29 17:18:50.663 239460 DEBUG oslo_concurrency.lockutils [None req-d8c7d057-d42c-4aa8-b669-22fb14eb61f1 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Lock "e70499b1-fe73-43d6-b879-f6e0ab20b701-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 12:18:50 np0005601226 nova_compute[239456]: 2026-01-29 17:18:50.663 239460 DEBUG oslo_concurrency.lockutils [None req-d8c7d057-d42c-4aa8-b669-22fb14eb61f1 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Lock "e70499b1-fe73-43d6-b879-f6e0ab20b701-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 12:18:50 np0005601226 nova_compute[239456]: 2026-01-29 17:18:50.664 239460 INFO nova.compute.manager [None req-d8c7d057-d42c-4aa8-b669-22fb14eb61f1 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: e70499b1-fe73-43d6-b879-f6e0ab20b701] Terminating instance
Jan 29 12:18:50 np0005601226 nova_compute[239456]: 2026-01-29 17:18:50.665 239460 DEBUG nova.compute.manager [None req-d8c7d057-d42c-4aa8-b669-22fb14eb61f1 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: e70499b1-fe73-43d6-b879-f6e0ab20b701] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 29 12:18:50 np0005601226 nova_compute[239456]: 2026-01-29 17:18:50.893 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:18:50 np0005601226 kernel: tap093f0958-8f (unregistering): left promiscuous mode
Jan 29 12:18:50 np0005601226 NetworkManager[49020]: <info>  [1769707130.9229] device (tap093f0958-8f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 29 12:18:50 np0005601226 nova_compute[239456]: 2026-01-29 17:18:50.934 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:18:50 np0005601226 ovn_controller[145556]: 2026-01-29T17:18:50Z|00048|binding|INFO|Releasing lport 093f0958-8f1b-4067-8692-210a7328406c from this chassis (sb_readonly=0)
Jan 29 12:18:50 np0005601226 ovn_controller[145556]: 2026-01-29T17:18:50Z|00049|binding|INFO|Setting lport 093f0958-8f1b-4067-8692-210a7328406c down in Southbound
Jan 29 12:18:50 np0005601226 ovn_controller[145556]: 2026-01-29T17:18:50Z|00050|binding|INFO|Removing iface tap093f0958-8f ovn-installed in OVS
Jan 29 12:18:50 np0005601226 nova_compute[239456]: 2026-01-29 17:18:50.937 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:18:50 np0005601226 nova_compute[239456]: 2026-01-29 17:18:50.944 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:18:50 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:18:50.944 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a6:1e:79 10.100.0.9'], port_security=['fa:16:3e:a6:1e:79 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'e70499b1-fe73-43d6-b879-f6e0ab20b701', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-894c211c-3e65-4d00-831b-021ae0267115', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '36b7f0db63d84c34b521603b194a3d9b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '66c5d1d7-dfe8-4ff3-b9ba-ea8b4f693602', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c0a118e9-f2d1-494c-89ac-62b6957c48ed, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], logical_port=093f0958-8f1b-4067-8692-210a7328406c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 29 12:18:50 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:18:50.949 155625 INFO neutron.agent.ovn.metadata.agent [-] Port 093f0958-8f1b-4067-8692-210a7328406c in datapath 894c211c-3e65-4d00-831b-021ae0267115 unbound from our chassis
Jan 29 12:18:50 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:18:50.953 155625 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 894c211c-3e65-4d00-831b-021ae0267115, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 29 12:18:50 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:18:50.954 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[791b437a-4a5b-418c-be23-da412ad3c04b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 29 12:18:50 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:18:50.956 155625 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-894c211c-3e65-4d00-831b-021ae0267115 namespace which is not needed anymore
Jan 29 12:18:50 np0005601226 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Deactivated successfully.
Jan 29 12:18:50 np0005601226 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Consumed 3.739s CPU time.
Jan 29 12:18:50 np0005601226 systemd-machined[207561]: Machine qemu-3-instance-00000003 terminated.
Jan 29 12:18:51 np0005601226 neutron-haproxy-ovnmeta-894c211c-3e65-4d00-831b-021ae0267115[249418]: [NOTICE]   (249427) : haproxy version is 2.8.14-c23fe91
Jan 29 12:18:51 np0005601226 neutron-haproxy-ovnmeta-894c211c-3e65-4d00-831b-021ae0267115[249418]: [NOTICE]   (249427) : path to executable is /usr/sbin/haproxy
Jan 29 12:18:51 np0005601226 neutron-haproxy-ovnmeta-894c211c-3e65-4d00-831b-021ae0267115[249418]: [WARNING]  (249427) : Exiting Master process...
Jan 29 12:18:51 np0005601226 neutron-haproxy-ovnmeta-894c211c-3e65-4d00-831b-021ae0267115[249418]: [WARNING]  (249427) : Exiting Master process...
Jan 29 12:18:51 np0005601226 neutron-haproxy-ovnmeta-894c211c-3e65-4d00-831b-021ae0267115[249418]: [ALERT]    (249427) : Current worker (249429) exited with code 143 (Terminated)
Jan 29 12:18:51 np0005601226 neutron-haproxy-ovnmeta-894c211c-3e65-4d00-831b-021ae0267115[249418]: [WARNING]  (249427) : All workers exited. Exiting... (0)
Jan 29 12:18:51 np0005601226 systemd[1]: libpod-7e48c9045ffa499e7ca57be97b937a3812e1f42a674dd28489d19414a3e6a1ed.scope: Deactivated successfully.
Jan 29 12:18:51 np0005601226 podman[249510]: 2026-01-29 17:18:51.085283034 +0000 UTC m=+0.057468398 container died 7e48c9045ffa499e7ca57be97b937a3812e1f42a674dd28489d19414a3e6a1ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-894c211c-3e65-4d00-831b-021ae0267115, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 29 12:18:51 np0005601226 nova_compute[239456]: 2026-01-29 17:18:51.086 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:18:51 np0005601226 nova_compute[239456]: 2026-01-29 17:18:51.090 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:18:51 np0005601226 nova_compute[239456]: 2026-01-29 17:18:51.097 239460 INFO nova.virt.libvirt.driver [-] [instance: e70499b1-fe73-43d6-b879-f6e0ab20b701] Instance destroyed successfully.#033[00m
Jan 29 12:18:51 np0005601226 nova_compute[239456]: 2026-01-29 17:18:51.098 239460 DEBUG nova.objects.instance [None req-d8c7d057-d42c-4aa8-b669-22fb14eb61f1 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Lazy-loading 'resources' on Instance uuid e70499b1-fe73-43d6-b879-f6e0ab20b701 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:18:51 np0005601226 nova_compute[239456]: 2026-01-29 17:18:51.113 239460 DEBUG nova.virt.libvirt.vif [None req-d8c7d057-d42c-4aa8-b669-22fb14eb61f1 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-29T17:18:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-1398955748',display_name='tempest-VolumesActionsTest-instance-1398955748',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-1398955748',id=3,image_ref='71879218-5462-43bb-aba6-6319695b24fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-29T17:18:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='36b7f0db63d84c34b521603b194a3d9b',ramdisk_id='',reservation_id='r-ac7fkuyx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='71879218-5462-43bb-aba6-6319695b24fd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_
disk='1',image_min_ram='0',owner_project_name='tempest-VolumesActionsTest-927778289',owner_user_name='tempest-VolumesActionsTest-927778289-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-29T17:18:47Z,user_data=None,user_id='814a809cf2434fc5bdc86a907c6f923d',uuid=e70499b1-fe73-43d6-b879-f6e0ab20b701,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "093f0958-8f1b-4067-8692-210a7328406c", "address": "fa:16:3e:a6:1e:79", "network": {"id": "894c211c-3e65-4d00-831b-021ae0267115", "bridge": "br-int", "label": "tempest-VolumesActionsTest-174971779-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36b7f0db63d84c34b521603b194a3d9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap093f0958-8f", "ovs_interfaceid": "093f0958-8f1b-4067-8692-210a7328406c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 29 12:18:51 np0005601226 nova_compute[239456]: 2026-01-29 17:18:51.114 239460 DEBUG nova.network.os_vif_util [None req-d8c7d057-d42c-4aa8-b669-22fb14eb61f1 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Converting VIF {"id": "093f0958-8f1b-4067-8692-210a7328406c", "address": "fa:16:3e:a6:1e:79", "network": {"id": "894c211c-3e65-4d00-831b-021ae0267115", "bridge": "br-int", "label": "tempest-VolumesActionsTest-174971779-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36b7f0db63d84c34b521603b194a3d9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap093f0958-8f", "ovs_interfaceid": "093f0958-8f1b-4067-8692-210a7328406c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 29 12:18:51 np0005601226 nova_compute[239456]: 2026-01-29 17:18:51.114 239460 DEBUG nova.network.os_vif_util [None req-d8c7d057-d42c-4aa8-b669-22fb14eb61f1 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a6:1e:79,bridge_name='br-int',has_traffic_filtering=True,id=093f0958-8f1b-4067-8692-210a7328406c,network=Network(894c211c-3e65-4d00-831b-021ae0267115),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap093f0958-8f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 29 12:18:51 np0005601226 nova_compute[239456]: 2026-01-29 17:18:51.115 239460 DEBUG os_vif [None req-d8c7d057-d42c-4aa8-b669-22fb14eb61f1 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:1e:79,bridge_name='br-int',has_traffic_filtering=True,id=093f0958-8f1b-4067-8692-210a7328406c,network=Network(894c211c-3e65-4d00-831b-021ae0267115),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap093f0958-8f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 29 12:18:51 np0005601226 nova_compute[239456]: 2026-01-29 17:18:51.116 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:18:51 np0005601226 nova_compute[239456]: 2026-01-29 17:18:51.116 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap093f0958-8f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:18:51 np0005601226 nova_compute[239456]: 2026-01-29 17:18:51.118 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:18:51 np0005601226 nova_compute[239456]: 2026-01-29 17:18:51.119 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:18:51 np0005601226 nova_compute[239456]: 2026-01-29 17:18:51.121 239460 INFO os_vif [None req-d8c7d057-d42c-4aa8-b669-22fb14eb61f1 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:1e:79,bridge_name='br-int',has_traffic_filtering=True,id=093f0958-8f1b-4067-8692-210a7328406c,network=Network(894c211c-3e65-4d00-831b-021ae0267115),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap093f0958-8f')#033[00m
Jan 29 12:18:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 12:18:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:18:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 12:18:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:18:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00110573957090135 of space, bias 1.0, pg target 0.331721871270405 quantized to 32 (current 32)
Jan 29 12:18:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:18:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00035150471691229216 of space, bias 1.0, pg target 0.10545141507368765 quantized to 32 (current 32)
Jan 29 12:18:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:18:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.2094033543017333e-07 of space, bias 1.0, pg target 3.6282100629051996e-05 quantized to 32 (current 32)
Jan 29 12:18:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:18:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665932697748978 of space, bias 1.0, pg target 0.1997798093246934 quantized to 32 (current 32)
Jan 29 12:18:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:18:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.1948408338096277e-06 of space, bias 4.0, pg target 0.0014338090005715533 quantized to 16 (current 16)
Jan 29 12:18:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:18:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:18:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:18:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 29 12:18:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:18:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 29 12:18:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:18:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:18:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:18:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 29 12:18:51 np0005601226 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7e48c9045ffa499e7ca57be97b937a3812e1f42a674dd28489d19414a3e6a1ed-userdata-shm.mount: Deactivated successfully.
Jan 29 12:18:51 np0005601226 systemd[1]: var-lib-containers-storage-overlay-1b1a7ff5dfb932affcb128a164f63d1ad8399526d2ca60d56858c264fe114b7c-merged.mount: Deactivated successfully.
Jan 29 12:18:51 np0005601226 nova_compute[239456]: 2026-01-29 17:18:51.589 239460 DEBUG nova.compute.manager [req-e2217734-1abf-427e-b04e-dcff45e9c865 req-35e84d31-e46f-4dfe-aa90-697853853f36 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: e70499b1-fe73-43d6-b879-f6e0ab20b701] Received event network-vif-unplugged-093f0958-8f1b-4067-8692-210a7328406c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:18:51 np0005601226 nova_compute[239456]: 2026-01-29 17:18:51.589 239460 DEBUG oslo_concurrency.lockutils [req-e2217734-1abf-427e-b04e-dcff45e9c865 req-35e84d31-e46f-4dfe-aa90-697853853f36 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "e70499b1-fe73-43d6-b879-f6e0ab20b701-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:18:51 np0005601226 nova_compute[239456]: 2026-01-29 17:18:51.589 239460 DEBUG oslo_concurrency.lockutils [req-e2217734-1abf-427e-b04e-dcff45e9c865 req-35e84d31-e46f-4dfe-aa90-697853853f36 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "e70499b1-fe73-43d6-b879-f6e0ab20b701-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:18:51 np0005601226 nova_compute[239456]: 2026-01-29 17:18:51.589 239460 DEBUG oslo_concurrency.lockutils [req-e2217734-1abf-427e-b04e-dcff45e9c865 req-35e84d31-e46f-4dfe-aa90-697853853f36 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "e70499b1-fe73-43d6-b879-f6e0ab20b701-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:18:51 np0005601226 nova_compute[239456]: 2026-01-29 17:18:51.589 239460 DEBUG nova.compute.manager [req-e2217734-1abf-427e-b04e-dcff45e9c865 req-35e84d31-e46f-4dfe-aa90-697853853f36 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: e70499b1-fe73-43d6-b879-f6e0ab20b701] No waiting events found dispatching network-vif-unplugged-093f0958-8f1b-4067-8692-210a7328406c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 29 12:18:51 np0005601226 nova_compute[239456]: 2026-01-29 17:18:51.590 239460 DEBUG nova.compute.manager [req-e2217734-1abf-427e-b04e-dcff45e9c865 req-35e84d31-e46f-4dfe-aa90-697853853f36 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: e70499b1-fe73-43d6-b879-f6e0ab20b701] Received event network-vif-unplugged-093f0958-8f1b-4067-8692-210a7328406c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 29 12:18:51 np0005601226 nova_compute[239456]: 2026-01-29 17:18:51.590 239460 DEBUG nova.compute.manager [req-e2217734-1abf-427e-b04e-dcff45e9c865 req-35e84d31-e46f-4dfe-aa90-697853853f36 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: e70499b1-fe73-43d6-b879-f6e0ab20b701] Received event network-vif-plugged-093f0958-8f1b-4067-8692-210a7328406c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:18:51 np0005601226 nova_compute[239456]: 2026-01-29 17:18:51.590 239460 DEBUG oslo_concurrency.lockutils [req-e2217734-1abf-427e-b04e-dcff45e9c865 req-35e84d31-e46f-4dfe-aa90-697853853f36 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "e70499b1-fe73-43d6-b879-f6e0ab20b701-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:18:51 np0005601226 nova_compute[239456]: 2026-01-29 17:18:51.590 239460 DEBUG oslo_concurrency.lockutils [req-e2217734-1abf-427e-b04e-dcff45e9c865 req-35e84d31-e46f-4dfe-aa90-697853853f36 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "e70499b1-fe73-43d6-b879-f6e0ab20b701-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:18:51 np0005601226 nova_compute[239456]: 2026-01-29 17:18:51.590 239460 DEBUG oslo_concurrency.lockutils [req-e2217734-1abf-427e-b04e-dcff45e9c865 req-35e84d31-e46f-4dfe-aa90-697853853f36 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "e70499b1-fe73-43d6-b879-f6e0ab20b701-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:18:51 np0005601226 nova_compute[239456]: 2026-01-29 17:18:51.590 239460 DEBUG nova.compute.manager [req-e2217734-1abf-427e-b04e-dcff45e9c865 req-35e84d31-e46f-4dfe-aa90-697853853f36 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: e70499b1-fe73-43d6-b879-f6e0ab20b701] No waiting events found dispatching network-vif-plugged-093f0958-8f1b-4067-8692-210a7328406c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 29 12:18:51 np0005601226 nova_compute[239456]: 2026-01-29 17:18:51.590 239460 WARNING nova.compute.manager [req-e2217734-1abf-427e-b04e-dcff45e9c865 req-35e84d31-e46f-4dfe-aa90-697853853f36 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: e70499b1-fe73-43d6-b879-f6e0ab20b701] Received unexpected event network-vif-plugged-093f0958-8f1b-4067-8692-210a7328406c for instance with vm_state active and task_state deleting.#033[00m
Jan 29 12:18:51 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1055: 305 pgs: 305 active+clean; 213 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 82 op/s
Jan 29 12:18:51 np0005601226 podman[249510]: 2026-01-29 17:18:51.678708481 +0000 UTC m=+0.650893825 container cleanup 7e48c9045ffa499e7ca57be97b937a3812e1f42a674dd28489d19414a3e6a1ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-894c211c-3e65-4d00-831b-021ae0267115, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 29 12:18:51 np0005601226 systemd[1]: libpod-conmon-7e48c9045ffa499e7ca57be97b937a3812e1f42a674dd28489d19414a3e6a1ed.scope: Deactivated successfully.
Jan 29 12:18:51 np0005601226 podman[249569]: 2026-01-29 17:18:51.907687248 +0000 UTC m=+0.215595513 container remove 7e48c9045ffa499e7ca57be97b937a3812e1f42a674dd28489d19414a3e6a1ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-894c211c-3e65-4d00-831b-021ae0267115, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:18:51 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:18:51.912 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[391e63bf-7c8e-4358-8947-aa7e9061e647]: (4, ('Thu Jan 29 05:18:51 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-894c211c-3e65-4d00-831b-021ae0267115 (7e48c9045ffa499e7ca57be97b937a3812e1f42a674dd28489d19414a3e6a1ed)\n7e48c9045ffa499e7ca57be97b937a3812e1f42a674dd28489d19414a3e6a1ed\nThu Jan 29 05:18:51 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-894c211c-3e65-4d00-831b-021ae0267115 (7e48c9045ffa499e7ca57be97b937a3812e1f42a674dd28489d19414a3e6a1ed)\n7e48c9045ffa499e7ca57be97b937a3812e1f42a674dd28489d19414a3e6a1ed\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:18:51 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:18:51.914 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[519d9685-6f35-411a-ac93-5be616274915]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:18:51 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:18:51.915 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap894c211c-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:18:51 np0005601226 nova_compute[239456]: 2026-01-29 17:18:51.917 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:18:51 np0005601226 kernel: tap894c211c-30: left promiscuous mode
Jan 29 12:18:51 np0005601226 nova_compute[239456]: 2026-01-29 17:18:51.927 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:18:51 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:18:51.931 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[bdd405d1-7879-4b1b-8be6-4090963dcd02]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:18:51 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:18:51.946 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[7bad1c3c-3587-420f-8e57-b053347a3585]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:18:51 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:18:51.948 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[2c0ae504-b956-4c4d-8996-d8c984785070]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:18:51 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:18:51.960 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[ff2dba7e-ec33-42f8-9e9c-0f4267df49c6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 448022, 'reachable_time': 22903, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249584, 'error': None, 'target': 'ovnmeta-894c211c-3e65-4d00-831b-021ae0267115', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:18:51 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:18:51.962 156164 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-894c211c-3e65-4d00-831b-021ae0267115 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 29 12:18:51 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:18:51.962 156164 DEBUG oslo.privsep.daemon [-] privsep: reply[2cd1ebcc-69e8-44a4-aeac-822be1d20370]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:18:51 np0005601226 systemd[1]: run-netns-ovnmeta\x2d894c211c\x2d3e65\x2d4d00\x2d831b\x2d021ae0267115.mount: Deactivated successfully.
Jan 29 12:18:52 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:18:52 np0005601226 nova_compute[239456]: 2026-01-29 17:18:52.456 239460 INFO nova.virt.libvirt.driver [None req-d8c7d057-d42c-4aa8-b669-22fb14eb61f1 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: e70499b1-fe73-43d6-b879-f6e0ab20b701] Deleting instance files /var/lib/nova/instances/e70499b1-fe73-43d6-b879-f6e0ab20b701_del#033[00m
Jan 29 12:18:52 np0005601226 nova_compute[239456]: 2026-01-29 17:18:52.457 239460 INFO nova.virt.libvirt.driver [None req-d8c7d057-d42c-4aa8-b669-22fb14eb61f1 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: e70499b1-fe73-43d6-b879-f6e0ab20b701] Deletion of /var/lib/nova/instances/e70499b1-fe73-43d6-b879-f6e0ab20b701_del complete#033[00m
Jan 29 12:18:52 np0005601226 nova_compute[239456]: 2026-01-29 17:18:52.515 239460 INFO nova.compute.manager [None req-d8c7d057-d42c-4aa8-b669-22fb14eb61f1 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: e70499b1-fe73-43d6-b879-f6e0ab20b701] Took 1.85 seconds to destroy the instance on the hypervisor.#033[00m
Jan 29 12:18:52 np0005601226 nova_compute[239456]: 2026-01-29 17:18:52.516 239460 DEBUG oslo.service.loopingcall [None req-d8c7d057-d42c-4aa8-b669-22fb14eb61f1 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 29 12:18:52 np0005601226 nova_compute[239456]: 2026-01-29 17:18:52.516 239460 DEBUG nova.compute.manager [-] [instance: e70499b1-fe73-43d6-b879-f6e0ab20b701] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 29 12:18:52 np0005601226 nova_compute[239456]: 2026-01-29 17:18:52.517 239460 DEBUG nova.network.neutron [-] [instance: e70499b1-fe73-43d6-b879-f6e0ab20b701] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 29 12:18:53 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:18:53.092 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '7a:bb:ca', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ee:2a:91:86:08:da'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:18:53 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:18:53.093 155625 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 29 12:18:53 np0005601226 nova_compute[239456]: 2026-01-29 17:18:53.097 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:18:53 np0005601226 nova_compute[239456]: 2026-01-29 17:18:53.108 239460 DEBUG nova.network.neutron [-] [instance: e70499b1-fe73-43d6-b879-f6e0ab20b701] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:18:53 np0005601226 nova_compute[239456]: 2026-01-29 17:18:53.123 239460 INFO nova.compute.manager [-] [instance: e70499b1-fe73-43d6-b879-f6e0ab20b701] Took 0.61 seconds to deallocate network for instance.#033[00m
Jan 29 12:18:53 np0005601226 nova_compute[239456]: 2026-01-29 17:18:53.160 239460 DEBUG oslo_concurrency.lockutils [None req-d8c7d057-d42c-4aa8-b669-22fb14eb61f1 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:18:53 np0005601226 nova_compute[239456]: 2026-01-29 17:18:53.160 239460 DEBUG oslo_concurrency.lockutils [None req-d8c7d057-d42c-4aa8-b669-22fb14eb61f1 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:18:53 np0005601226 nova_compute[239456]: 2026-01-29 17:18:53.235 239460 DEBUG oslo_concurrency.processutils [None req-d8c7d057-d42c-4aa8-b669-22fb14eb61f1 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:18:53 np0005601226 nova_compute[239456]: 2026-01-29 17:18:53.522 239460 DEBUG oslo_concurrency.lockutils [None req-b8c89af1-8d94-40bb-8f7a-ae73110fb2c2 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Acquiring lock "f0dce8a1-b2b9-49db-8805-fd9b75fed5b5" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:18:53 np0005601226 nova_compute[239456]: 2026-01-29 17:18:53.523 239460 DEBUG oslo_concurrency.lockutils [None req-b8c89af1-8d94-40bb-8f7a-ae73110fb2c2 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Lock "f0dce8a1-b2b9-49db-8805-fd9b75fed5b5" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:18:53 np0005601226 nova_compute[239456]: 2026-01-29 17:18:53.540 239460 DEBUG nova.objects.instance [None req-b8c89af1-8d94-40bb-8f7a-ae73110fb2c2 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Lazy-loading 'flavor' on Instance uuid f0dce8a1-b2b9-49db-8805-fd9b75fed5b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:18:53 np0005601226 nova_compute[239456]: 2026-01-29 17:18:53.581 239460 INFO nova.virt.libvirt.driver [None req-b8c89af1-8d94-40bb-8f7a-ae73110fb2c2 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Ignoring supplied device name: /dev/vdb#033[00m
Jan 29 12:18:53 np0005601226 nova_compute[239456]: 2026-01-29 17:18:53.601 239460 DEBUG oslo_concurrency.lockutils [None req-b8c89af1-8d94-40bb-8f7a-ae73110fb2c2 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Lock "f0dce8a1-b2b9-49db-8805-fd9b75fed5b5" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.079s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:18:53 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1056: 305 pgs: 305 active+clean; 197 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.4 MiB/s wr, 104 op/s
Jan 29 12:18:53 np0005601226 nova_compute[239456]: 2026-01-29 17:18:53.654 239460 DEBUG nova.compute.manager [req-9d2554fd-2ea0-452c-a50f-104146379d1b req-be1fe808-15db-4072-bff3-6624f9e9638a 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: e70499b1-fe73-43d6-b879-f6e0ab20b701] Received event network-vif-deleted-093f0958-8f1b-4067-8692-210a7328406c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:18:53 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:18:53 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2890820094' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:18:53 np0005601226 nova_compute[239456]: 2026-01-29 17:18:53.804 239460 DEBUG oslo_concurrency.processutils [None req-d8c7d057-d42c-4aa8-b669-22fb14eb61f1 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:18:53 np0005601226 nova_compute[239456]: 2026-01-29 17:18:53.806 239460 DEBUG oslo_concurrency.lockutils [None req-b8c89af1-8d94-40bb-8f7a-ae73110fb2c2 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Acquiring lock "f0dce8a1-b2b9-49db-8805-fd9b75fed5b5" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:18:53 np0005601226 nova_compute[239456]: 2026-01-29 17:18:53.807 239460 DEBUG oslo_concurrency.lockutils [None req-b8c89af1-8d94-40bb-8f7a-ae73110fb2c2 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Lock "f0dce8a1-b2b9-49db-8805-fd9b75fed5b5" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:18:53 np0005601226 nova_compute[239456]: 2026-01-29 17:18:53.807 239460 INFO nova.compute.manager [None req-b8c89af1-8d94-40bb-8f7a-ae73110fb2c2 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Attaching volume e6227d89-43b2-41c2-989b-8b285344bda8 to /dev/vdb#033[00m
Jan 29 12:18:53 np0005601226 nova_compute[239456]: 2026-01-29 17:18:53.815 239460 DEBUG nova.compute.provider_tree [None req-d8c7d057-d42c-4aa8-b669-22fb14eb61f1 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:18:53 np0005601226 nova_compute[239456]: 2026-01-29 17:18:53.831 239460 DEBUG nova.scheduler.client.report [None req-d8c7d057-d42c-4aa8-b669-22fb14eb61f1 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:18:53 np0005601226 nova_compute[239456]: 2026-01-29 17:18:53.898 239460 DEBUG oslo_concurrency.lockutils [None req-d8c7d057-d42c-4aa8-b669-22fb14eb61f1 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.738s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:18:53 np0005601226 nova_compute[239456]: 2026-01-29 17:18:53.926 239460 INFO nova.scheduler.client.report [None req-d8c7d057-d42c-4aa8-b669-22fb14eb61f1 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Deleted allocations for instance e70499b1-fe73-43d6-b879-f6e0ab20b701#033[00m
Jan 29 12:18:53 np0005601226 nova_compute[239456]: 2026-01-29 17:18:53.984 239460 DEBUG oslo_concurrency.lockutils [None req-d8c7d057-d42c-4aa8-b669-22fb14eb61f1 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Lock "e70499b1-fe73-43d6-b879-f6e0ab20b701" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.322s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:18:54 np0005601226 nova_compute[239456]: 2026-01-29 17:18:54.011 239460 DEBUG os_brick.utils [None req-b8c89af1-8d94-40bb-8f7a-ae73110fb2c2 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 29 12:18:54 np0005601226 nova_compute[239456]: 2026-01-29 17:18:54.012 239460 INFO oslo.privsep.daemon [None req-b8c89af1-8d94-40bb-8f7a-ae73110fb2c2 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'os_brick.privileged.default', '--privsep_sock_path', '/tmp/tmpwuro1hn7/privsep.sock']#033[00m
Jan 29 12:18:54 np0005601226 nova_compute[239456]: 2026-01-29 17:18:54.671 239460 INFO oslo.privsep.daemon [None req-b8c89af1-8d94-40bb-8f7a-ae73110fb2c2 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Jan 29 12:18:54 np0005601226 nova_compute[239456]: 2026-01-29 17:18:54.550 249612 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Jan 29 12:18:54 np0005601226 nova_compute[239456]: 2026-01-29 17:18:54.554 249612 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Jan 29 12:18:54 np0005601226 nova_compute[239456]: 2026-01-29 17:18:54.556 249612 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Jan 29 12:18:54 np0005601226 nova_compute[239456]: 2026-01-29 17:18:54.556 249612 INFO oslo.privsep.daemon [-] privsep daemon running as pid 249612#033[00m
Jan 29 12:18:54 np0005601226 nova_compute[239456]: 2026-01-29 17:18:54.675 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[75908386-a86a-48be-8bfa-608d19aebb03]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:18:54 np0005601226 nova_compute[239456]: 2026-01-29 17:18:54.771 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:18:54 np0005601226 nova_compute[239456]: 2026-01-29 17:18:54.786 249612 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:18:54 np0005601226 nova_compute[239456]: 2026-01-29 17:18:54.786 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[9aed1cee-a23f-4475-9cbe-84e59ab5744a]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:18:54 np0005601226 nova_compute[239456]: 2026-01-29 17:18:54.788 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:18:54 np0005601226 nova_compute[239456]: 2026-01-29 17:18:54.793 249612 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:18:54 np0005601226 nova_compute[239456]: 2026-01-29 17:18:54.793 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[fbb16b9a-5a0c-4468-852a-9d44e39634f6]: (4, ('InitiatorName=iqn.1994-05.com.redhat:29fa340538f', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:18:54 np0005601226 nova_compute[239456]: 2026-01-29 17:18:54.795 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:18:54 np0005601226 nova_compute[239456]: 2026-01-29 17:18:54.803 249612 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:18:54 np0005601226 nova_compute[239456]: 2026-01-29 17:18:54.803 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[9b6ae90e-4e37-4c7d-80c2-b4c58663fb01]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:18:54 np0005601226 nova_compute[239456]: 2026-01-29 17:18:54.805 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[5f818c2f-056b-4104-819e-bfd91ddebadc]: (4, '3d58286e-1b14-486e-8cad-0bdb2d2969c4') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:18:54 np0005601226 nova_compute[239456]: 2026-01-29 17:18:54.805 239460 DEBUG oslo_concurrency.processutils [None req-b8c89af1-8d94-40bb-8f7a-ae73110fb2c2 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:18:54 np0005601226 nova_compute[239456]: 2026-01-29 17:18:54.823 239460 DEBUG oslo_concurrency.processutils [None req-b8c89af1-8d94-40bb-8f7a-ae73110fb2c2 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] CMD "nvme version" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:18:54 np0005601226 nova_compute[239456]: 2026-01-29 17:18:54.826 239460 DEBUG os_brick.initiator.connectors.lightos [None req-b8c89af1-8d94-40bb-8f7a-ae73110fb2c2 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 29 12:18:54 np0005601226 nova_compute[239456]: 2026-01-29 17:18:54.827 239460 DEBUG os_brick.initiator.connectors.lightos [None req-b8c89af1-8d94-40bb-8f7a-ae73110fb2c2 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 29 12:18:54 np0005601226 nova_compute[239456]: 2026-01-29 17:18:54.827 239460 DEBUG os_brick.initiator.connectors.lightos [None req-b8c89af1-8d94-40bb-8f7a-ae73110fb2c2 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 29 12:18:54 np0005601226 nova_compute[239456]: 2026-01-29 17:18:54.827 239460 DEBUG os_brick.utils [None req-b8c89af1-8d94-40bb-8f7a-ae73110fb2c2 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] <== get_connector_properties: return (816ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:29fa340538f', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '3d58286e-1b14-486e-8cad-0bdb2d2969c4', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 29 12:18:54 np0005601226 nova_compute[239456]: 2026-01-29 17:18:54.828 239460 DEBUG nova.virt.block_device [None req-b8c89af1-8d94-40bb-8f7a-ae73110fb2c2 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Updating existing volume attachment record: 028aa045-280a-410e-a716-667d37da6476 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 29 12:18:55 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:18:55 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4186051673' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:18:55 np0005601226 nova_compute[239456]: 2026-01-29 17:18:55.576 239460 DEBUG oslo_concurrency.lockutils [None req-b8c89af1-8d94-40bb-8f7a-ae73110fb2c2 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Acquiring lock "cache_volume_driver" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:18:55 np0005601226 nova_compute[239456]: 2026-01-29 17:18:55.577 239460 DEBUG oslo_concurrency.lockutils [None req-b8c89af1-8d94-40bb-8f7a-ae73110fb2c2 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Lock "cache_volume_driver" acquired by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:18:55 np0005601226 nova_compute[239456]: 2026-01-29 17:18:55.578 239460 DEBUG oslo_concurrency.lockutils [None req-b8c89af1-8d94-40bb-8f7a-ae73110fb2c2 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Lock "cache_volume_driver" "released" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:18:55 np0005601226 nova_compute[239456]: 2026-01-29 17:18:55.585 239460 DEBUG nova.objects.instance [None req-b8c89af1-8d94-40bb-8f7a-ae73110fb2c2 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Lazy-loading 'flavor' on Instance uuid f0dce8a1-b2b9-49db-8805-fd9b75fed5b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:18:55 np0005601226 nova_compute[239456]: 2026-01-29 17:18:55.611 239460 DEBUG nova.virt.libvirt.driver [None req-b8c89af1-8d94-40bb-8f7a-ae73110fb2c2 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Attempting to attach volume e6227d89-43b2-41c2-989b-8b285344bda8 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Jan 29 12:18:55 np0005601226 nova_compute[239456]: 2026-01-29 17:18:55.614 239460 DEBUG nova.virt.libvirt.guest [None req-b8c89af1-8d94-40bb-8f7a-ae73110fb2c2 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] attach device xml: <disk type="network" device="disk">
Jan 29 12:18:55 np0005601226 nova_compute[239456]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 29 12:18:55 np0005601226 nova_compute[239456]:  <source protocol="rbd" name="volumes/volume-e6227d89-43b2-41c2-989b-8b285344bda8">
Jan 29 12:18:55 np0005601226 nova_compute[239456]:    <host name="192.168.122.100" port="6789"/>
Jan 29 12:18:55 np0005601226 nova_compute[239456]:  </source>
Jan 29 12:18:55 np0005601226 nova_compute[239456]:  <auth username="openstack">
Jan 29 12:18:55 np0005601226 nova_compute[239456]:    <secret type="ceph" uuid="cc5c72e3-31e0-58b9-8731-456117d38f4a"/>
Jan 29 12:18:55 np0005601226 nova_compute[239456]:  </auth>
Jan 29 12:18:55 np0005601226 nova_compute[239456]:  <target dev="vdb" bus="virtio"/>
Jan 29 12:18:55 np0005601226 nova_compute[239456]:  <serial>e6227d89-43b2-41c2-989b-8b285344bda8</serial>
Jan 29 12:18:55 np0005601226 nova_compute[239456]: </disk>
Jan 29 12:18:55 np0005601226 nova_compute[239456]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Jan 29 12:18:55 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1057: 305 pgs: 305 active+clean; 167 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 491 KiB/s wr, 108 op/s
Jan 29 12:18:55 np0005601226 nova_compute[239456]: 2026-01-29 17:18:55.711 239460 DEBUG nova.virt.libvirt.driver [None req-b8c89af1-8d94-40bb-8f7a-ae73110fb2c2 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:18:55 np0005601226 nova_compute[239456]: 2026-01-29 17:18:55.711 239460 DEBUG nova.virt.libvirt.driver [None req-b8c89af1-8d94-40bb-8f7a-ae73110fb2c2 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:18:55 np0005601226 nova_compute[239456]: 2026-01-29 17:18:55.712 239460 DEBUG nova.virt.libvirt.driver [None req-b8c89af1-8d94-40bb-8f7a-ae73110fb2c2 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:18:55 np0005601226 nova_compute[239456]: 2026-01-29 17:18:55.712 239460 DEBUG nova.virt.libvirt.driver [None req-b8c89af1-8d94-40bb-8f7a-ae73110fb2c2 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] No VIF found with MAC fa:16:3e:d5:ce:44, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 29 12:18:55 np0005601226 nova_compute[239456]: 2026-01-29 17:18:55.894 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:18:55 np0005601226 nova_compute[239456]: 2026-01-29 17:18:55.900 239460 DEBUG oslo_concurrency.lockutils [None req-b8c89af1-8d94-40bb-8f7a-ae73110fb2c2 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Lock "f0dce8a1-b2b9-49db-8805-fd9b75fed5b5" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.093s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:18:56 np0005601226 nova_compute[239456]: 2026-01-29 17:18:56.149 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:18:56 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e188 do_prune osdmap full prune enabled
Jan 29 12:18:56 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e189 e189: 3 total, 3 up, 3 in
Jan 29 12:18:57 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e189: 3 total, 3 up, 3 in
Jan 29 12:18:57 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:18:57 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1059: 305 pgs: 305 active+clean; 167 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 33 KiB/s wr, 121 op/s
Jan 29 12:18:57 np0005601226 nova_compute[239456]: 2026-01-29 17:18:57.849 239460 DEBUG oslo_concurrency.lockutils [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Acquiring lock "d4d33359-8cfc-4425-9ec5-362129170044" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:18:57 np0005601226 nova_compute[239456]: 2026-01-29 17:18:57.850 239460 DEBUG oslo_concurrency.lockutils [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Lock "d4d33359-8cfc-4425-9ec5-362129170044" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:18:57 np0005601226 nova_compute[239456]: 2026-01-29 17:18:57.866 239460 DEBUG nova.compute.manager [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: d4d33359-8cfc-4425-9ec5-362129170044] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 29 12:18:57 np0005601226 nova_compute[239456]: 2026-01-29 17:18:57.929 239460 DEBUG oslo_concurrency.lockutils [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:18:57 np0005601226 nova_compute[239456]: 2026-01-29 17:18:57.930 239460 DEBUG oslo_concurrency.lockutils [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:18:57 np0005601226 nova_compute[239456]: 2026-01-29 17:18:57.936 239460 DEBUG nova.virt.hardware [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 29 12:18:57 np0005601226 nova_compute[239456]: 2026-01-29 17:18:57.936 239460 INFO nova.compute.claims [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: d4d33359-8cfc-4425-9ec5-362129170044] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 29 12:18:57 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e189 do_prune osdmap full prune enabled
Jan 29 12:18:58 np0005601226 nova_compute[239456]: 2026-01-29 17:18:58.033 239460 DEBUG oslo_concurrency.processutils [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:18:58 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e190 e190: 3 total, 3 up, 3 in
Jan 29 12:18:58 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e190: 3 total, 3 up, 3 in
Jan 29 12:18:58 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:18:58 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2616432457' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:18:58 np0005601226 nova_compute[239456]: 2026-01-29 17:18:58.772 239460 DEBUG oslo_concurrency.processutils [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.738s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:18:58 np0005601226 nova_compute[239456]: 2026-01-29 17:18:58.776 239460 DEBUG nova.compute.provider_tree [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:18:58 np0005601226 nova_compute[239456]: 2026-01-29 17:18:58.792 239460 DEBUG nova.scheduler.client.report [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:18:58 np0005601226 nova_compute[239456]: 2026-01-29 17:18:58.811 239460 DEBUG oslo_concurrency.lockutils [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.882s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:18:58 np0005601226 nova_compute[239456]: 2026-01-29 17:18:58.813 239460 DEBUG nova.compute.manager [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: d4d33359-8cfc-4425-9ec5-362129170044] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 29 12:18:58 np0005601226 nova_compute[239456]: 2026-01-29 17:18:58.873 239460 DEBUG nova.compute.manager [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: d4d33359-8cfc-4425-9ec5-362129170044] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 29 12:18:58 np0005601226 nova_compute[239456]: 2026-01-29 17:18:58.874 239460 DEBUG nova.network.neutron [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: d4d33359-8cfc-4425-9ec5-362129170044] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 29 12:18:58 np0005601226 nova_compute[239456]: 2026-01-29 17:18:58.894 239460 INFO nova.virt.libvirt.driver [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: d4d33359-8cfc-4425-9ec5-362129170044] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 29 12:18:58 np0005601226 nova_compute[239456]: 2026-01-29 17:18:58.917 239460 DEBUG nova.compute.manager [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: d4d33359-8cfc-4425-9ec5-362129170044] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 29 12:18:58 np0005601226 nova_compute[239456]: 2026-01-29 17:18:58.994 239460 DEBUG nova.compute.manager [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: d4d33359-8cfc-4425-9ec5-362129170044] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 29 12:18:58 np0005601226 nova_compute[239456]: 2026-01-29 17:18:58.995 239460 DEBUG nova.virt.libvirt.driver [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: d4d33359-8cfc-4425-9ec5-362129170044] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 29 12:18:58 np0005601226 nova_compute[239456]: 2026-01-29 17:18:58.996 239460 INFO nova.virt.libvirt.driver [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: d4d33359-8cfc-4425-9ec5-362129170044] Creating image(s)#033[00m
Jan 29 12:18:59 np0005601226 nova_compute[239456]: 2026-01-29 17:18:59.060 239460 DEBUG nova.storage.rbd_utils [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] rbd image d4d33359-8cfc-4425-9ec5-362129170044_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:18:59 np0005601226 nova_compute[239456]: 2026-01-29 17:18:59.081 239460 DEBUG nova.storage.rbd_utils [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] rbd image d4d33359-8cfc-4425-9ec5-362129170044_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:18:59 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:18:59.095 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ea6bcc65-2563-4fe6-9039-bca7261f4cf7, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:18:59 np0005601226 nova_compute[239456]: 2026-01-29 17:18:59.101 239460 DEBUG nova.storage.rbd_utils [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] rbd image d4d33359-8cfc-4425-9ec5-362129170044_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:18:59 np0005601226 nova_compute[239456]: 2026-01-29 17:18:59.104 239460 DEBUG oslo_concurrency.processutils [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/619359f3f53a439a222a6a2a89408201f4394e5d --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:18:59 np0005601226 nova_compute[239456]: 2026-01-29 17:18:59.120 239460 DEBUG nova.policy [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '814a809cf2434fc5bdc86a907c6f923d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '36b7f0db63d84c34b521603b194a3d9b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 29 12:18:59 np0005601226 nova_compute[239456]: 2026-01-29 17:18:59.154 239460 DEBUG oslo_concurrency.processutils [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/619359f3f53a439a222a6a2a89408201f4394e5d --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:18:59 np0005601226 nova_compute[239456]: 2026-01-29 17:18:59.154 239460 DEBUG oslo_concurrency.lockutils [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Acquiring lock "619359f3f53a439a222a6a2a89408201f4394e5d" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:18:59 np0005601226 nova_compute[239456]: 2026-01-29 17:18:59.155 239460 DEBUG oslo_concurrency.lockutils [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Lock "619359f3f53a439a222a6a2a89408201f4394e5d" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:18:59 np0005601226 nova_compute[239456]: 2026-01-29 17:18:59.155 239460 DEBUG oslo_concurrency.lockutils [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Lock "619359f3f53a439a222a6a2a89408201f4394e5d" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:18:59 np0005601226 nova_compute[239456]: 2026-01-29 17:18:59.172 239460 DEBUG nova.storage.rbd_utils [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] rbd image d4d33359-8cfc-4425-9ec5-362129170044_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:18:59 np0005601226 nova_compute[239456]: 2026-01-29 17:18:59.175 239460 DEBUG oslo_concurrency.processutils [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/619359f3f53a439a222a6a2a89408201f4394e5d d4d33359-8cfc-4425-9ec5-362129170044_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:18:59 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e190 do_prune osdmap full prune enabled
Jan 29 12:18:59 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e191 e191: 3 total, 3 up, 3 in
Jan 29 12:18:59 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e191: 3 total, 3 up, 3 in
Jan 29 12:18:59 np0005601226 nova_compute[239456]: 2026-01-29 17:18:59.640 239460 DEBUG nova.network.neutron [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: d4d33359-8cfc-4425-9ec5-362129170044] Successfully created port: 25bcb4c9-3633-4bb0-96e1-5749df2814c3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 29 12:18:59 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1062: 305 pgs: 305 active+clean; 167 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 8.7 KiB/s wr, 62 op/s
Jan 29 12:18:59 np0005601226 nova_compute[239456]: 2026-01-29 17:18:59.737 239460 DEBUG oslo_concurrency.processutils [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/619359f3f53a439a222a6a2a89408201f4394e5d d4d33359-8cfc-4425-9ec5-362129170044_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:18:59 np0005601226 nova_compute[239456]: 2026-01-29 17:18:59.796 239460 DEBUG nova.storage.rbd_utils [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] resizing rbd image d4d33359-8cfc-4425-9ec5-362129170044_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 29 12:18:59 np0005601226 nova_compute[239456]: 2026-01-29 17:18:59.926 239460 DEBUG nova.objects.instance [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Lazy-loading 'migration_context' on Instance uuid d4d33359-8cfc-4425-9ec5-362129170044 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:18:59 np0005601226 nova_compute[239456]: 2026-01-29 17:18:59.949 239460 DEBUG nova.virt.libvirt.driver [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: d4d33359-8cfc-4425-9ec5-362129170044] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 29 12:18:59 np0005601226 nova_compute[239456]: 2026-01-29 17:18:59.949 239460 DEBUG nova.virt.libvirt.driver [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: d4d33359-8cfc-4425-9ec5-362129170044] Ensure instance console log exists: /var/lib/nova/instances/d4d33359-8cfc-4425-9ec5-362129170044/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 29 12:18:59 np0005601226 nova_compute[239456]: 2026-01-29 17:18:59.949 239460 DEBUG oslo_concurrency.lockutils [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:18:59 np0005601226 nova_compute[239456]: 2026-01-29 17:18:59.950 239460 DEBUG oslo_concurrency.lockutils [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:18:59 np0005601226 nova_compute[239456]: 2026-01-29 17:18:59.950 239460 DEBUG oslo_concurrency.lockutils [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:19:00 np0005601226 nova_compute[239456]: 2026-01-29 17:19:00.618 239460 DEBUG nova.network.neutron [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: d4d33359-8cfc-4425-9ec5-362129170044] Successfully updated port: 25bcb4c9-3633-4bb0-96e1-5749df2814c3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 29 12:19:00 np0005601226 nova_compute[239456]: 2026-01-29 17:19:00.631 239460 DEBUG oslo_concurrency.lockutils [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Acquiring lock "refresh_cache-d4d33359-8cfc-4425-9ec5-362129170044" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:19:00 np0005601226 nova_compute[239456]: 2026-01-29 17:19:00.631 239460 DEBUG oslo_concurrency.lockutils [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Acquired lock "refresh_cache-d4d33359-8cfc-4425-9ec5-362129170044" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:19:00 np0005601226 nova_compute[239456]: 2026-01-29 17:19:00.632 239460 DEBUG nova.network.neutron [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: d4d33359-8cfc-4425-9ec5-362129170044] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 29 12:19:00 np0005601226 nova_compute[239456]: 2026-01-29 17:19:00.712 239460 DEBUG nova.compute.manager [req-52a251ab-b080-4d65-930a-3cbcd63fe456 req-8ec3a316-535c-44d6-8095-b137dbcc8881 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: d4d33359-8cfc-4425-9ec5-362129170044] Received event network-changed-25bcb4c9-3633-4bb0-96e1-5749df2814c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:19:00 np0005601226 nova_compute[239456]: 2026-01-29 17:19:00.712 239460 DEBUG nova.compute.manager [req-52a251ab-b080-4d65-930a-3cbcd63fe456 req-8ec3a316-535c-44d6-8095-b137dbcc8881 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: d4d33359-8cfc-4425-9ec5-362129170044] Refreshing instance network info cache due to event network-changed-25bcb4c9-3633-4bb0-96e1-5749df2814c3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 29 12:19:00 np0005601226 nova_compute[239456]: 2026-01-29 17:19:00.713 239460 DEBUG oslo_concurrency.lockutils [req-52a251ab-b080-4d65-930a-3cbcd63fe456 req-8ec3a316-535c-44d6-8095-b137dbcc8881 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "refresh_cache-d4d33359-8cfc-4425-9ec5-362129170044" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:19:00 np0005601226 nova_compute[239456]: 2026-01-29 17:19:00.806 239460 DEBUG nova.network.neutron [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: d4d33359-8cfc-4425-9ec5-362129170044] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 29 12:19:00 np0005601226 nova_compute[239456]: 2026-01-29 17:19:00.896 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:19:01 np0005601226 nova_compute[239456]: 2026-01-29 17:19:01.151 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:19:01 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e191 do_prune osdmap full prune enabled
Jan 29 12:19:01 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e192 e192: 3 total, 3 up, 3 in
Jan 29 12:19:01 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e192: 3 total, 3 up, 3 in
Jan 29 12:19:01 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1064: 305 pgs: 305 active+clean; 183 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 958 KiB/s wr, 78 op/s
Jan 29 12:19:01 np0005601226 nova_compute[239456]: 2026-01-29 17:19:01.756 239460 DEBUG nova.network.neutron [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: d4d33359-8cfc-4425-9ec5-362129170044] Updating instance_info_cache with network_info: [{"id": "25bcb4c9-3633-4bb0-96e1-5749df2814c3", "address": "fa:16:3e:b3:62:2a", "network": {"id": "894c211c-3e65-4d00-831b-021ae0267115", "bridge": "br-int", "label": "tempest-VolumesActionsTest-174971779-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36b7f0db63d84c34b521603b194a3d9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25bcb4c9-36", "ovs_interfaceid": "25bcb4c9-3633-4bb0-96e1-5749df2814c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:19:01 np0005601226 nova_compute[239456]: 2026-01-29 17:19:01.776 239460 DEBUG oslo_concurrency.lockutils [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Releasing lock "refresh_cache-d4d33359-8cfc-4425-9ec5-362129170044" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:19:01 np0005601226 nova_compute[239456]: 2026-01-29 17:19:01.776 239460 DEBUG nova.compute.manager [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: d4d33359-8cfc-4425-9ec5-362129170044] Instance network_info: |[{"id": "25bcb4c9-3633-4bb0-96e1-5749df2814c3", "address": "fa:16:3e:b3:62:2a", "network": {"id": "894c211c-3e65-4d00-831b-021ae0267115", "bridge": "br-int", "label": "tempest-VolumesActionsTest-174971779-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36b7f0db63d84c34b521603b194a3d9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25bcb4c9-36", "ovs_interfaceid": "25bcb4c9-3633-4bb0-96e1-5749df2814c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 29 12:19:01 np0005601226 nova_compute[239456]: 2026-01-29 17:19:01.777 239460 DEBUG oslo_concurrency.lockutils [req-52a251ab-b080-4d65-930a-3cbcd63fe456 req-8ec3a316-535c-44d6-8095-b137dbcc8881 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquired lock "refresh_cache-d4d33359-8cfc-4425-9ec5-362129170044" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:19:01 np0005601226 nova_compute[239456]: 2026-01-29 17:19:01.777 239460 DEBUG nova.network.neutron [req-52a251ab-b080-4d65-930a-3cbcd63fe456 req-8ec3a316-535c-44d6-8095-b137dbcc8881 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: d4d33359-8cfc-4425-9ec5-362129170044] Refreshing network info cache for port 25bcb4c9-3633-4bb0-96e1-5749df2814c3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 29 12:19:01 np0005601226 nova_compute[239456]: 2026-01-29 17:19:01.780 239460 DEBUG nova.virt.libvirt.driver [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: d4d33359-8cfc-4425-9ec5-362129170044] Start _get_guest_xml network_info=[{"id": "25bcb4c9-3633-4bb0-96e1-5749df2814c3", "address": "fa:16:3e:b3:62:2a", "network": {"id": "894c211c-3e65-4d00-831b-021ae0267115", "bridge": "br-int", "label": "tempest-VolumesActionsTest-174971779-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36b7f0db63d84c34b521603b194a3d9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25bcb4c9-36", "ovs_interfaceid": "25bcb4c9-3633-4bb0-96e1-5749df2814c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-29T17:13:26Z,direct_url=<?>,disk_format='qcow2',id=71879218-5462-43bb-aba6-6319695b24fd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='218d87653c0f4776a3f1900d36945229',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-29T17:13:29Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'boot_index': 0, 'encrypted': False, 'image_id': '71879218-5462-43bb-aba6-6319695b24fd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 29 12:19:01 np0005601226 nova_compute[239456]: 2026-01-29 17:19:01.784 239460 WARNING nova.virt.libvirt.driver [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 29 12:19:01 np0005601226 nova_compute[239456]: 2026-01-29 17:19:01.788 239460 DEBUG nova.virt.libvirt.host [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 29 12:19:01 np0005601226 nova_compute[239456]: 2026-01-29 17:19:01.789 239460 DEBUG nova.virt.libvirt.host [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 29 12:19:01 np0005601226 nova_compute[239456]: 2026-01-29 17:19:01.791 239460 DEBUG nova.virt.libvirt.host [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 29 12:19:01 np0005601226 nova_compute[239456]: 2026-01-29 17:19:01.791 239460 DEBUG nova.virt.libvirt.host [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 29 12:19:01 np0005601226 nova_compute[239456]: 2026-01-29 17:19:01.792 239460 DEBUG nova.virt.libvirt.driver [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 29 12:19:01 np0005601226 nova_compute[239456]: 2026-01-29 17:19:01.792 239460 DEBUG nova.virt.hardware [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-29T17:13:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='e367d44e-e23e-4b8e-90d1-56d09c8403b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-29T17:13:26Z,direct_url=<?>,disk_format='qcow2',id=71879218-5462-43bb-aba6-6319695b24fd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='218d87653c0f4776a3f1900d36945229',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-29T17:13:29Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 29 12:19:01 np0005601226 nova_compute[239456]: 2026-01-29 17:19:01.793 239460 DEBUG nova.virt.hardware [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 29 12:19:01 np0005601226 nova_compute[239456]: 2026-01-29 17:19:01.793 239460 DEBUG nova.virt.hardware [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 29 12:19:01 np0005601226 nova_compute[239456]: 2026-01-29 17:19:01.793 239460 DEBUG nova.virt.hardware [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 29 12:19:01 np0005601226 nova_compute[239456]: 2026-01-29 17:19:01.793 239460 DEBUG nova.virt.hardware [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 29 12:19:01 np0005601226 nova_compute[239456]: 2026-01-29 17:19:01.794 239460 DEBUG nova.virt.hardware [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 29 12:19:01 np0005601226 nova_compute[239456]: 2026-01-29 17:19:01.794 239460 DEBUG nova.virt.hardware [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 29 12:19:01 np0005601226 nova_compute[239456]: 2026-01-29 17:19:01.794 239460 DEBUG nova.virt.hardware [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 29 12:19:01 np0005601226 nova_compute[239456]: 2026-01-29 17:19:01.794 239460 DEBUG nova.virt.hardware [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 29 12:19:01 np0005601226 nova_compute[239456]: 2026-01-29 17:19:01.795 239460 DEBUG nova.virt.hardware [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 29 12:19:01 np0005601226 nova_compute[239456]: 2026-01-29 17:19:01.795 239460 DEBUG nova.virt.hardware [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 29 12:19:01 np0005601226 nova_compute[239456]: 2026-01-29 17:19:01.798 239460 DEBUG oslo_concurrency.processutils [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:19:02 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:19:02 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/368362606' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:19:02 np0005601226 nova_compute[239456]: 2026-01-29 17:19:02.327 239460 DEBUG oslo_concurrency.processutils [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:19:02 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:19:02 np0005601226 nova_compute[239456]: 2026-01-29 17:19:02.359 239460 DEBUG nova.storage.rbd_utils [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] rbd image d4d33359-8cfc-4425-9ec5-362129170044_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:19:02 np0005601226 nova_compute[239456]: 2026-01-29 17:19:02.362 239460 DEBUG oslo_concurrency.processutils [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:19:02 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:19:02 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/48430046' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:19:02 np0005601226 nova_compute[239456]: 2026-01-29 17:19:02.887 239460 DEBUG oslo_concurrency.processutils [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:19:02 np0005601226 nova_compute[239456]: 2026-01-29 17:19:02.888 239460 DEBUG nova.virt.libvirt.vif [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-29T17:18:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-2114964595',display_name='tempest-VolumesActionsTest-instance-2114964595',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-2114964595',id=4,image_ref='71879218-5462-43bb-aba6-6319695b24fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='36b7f0db63d84c34b521603b194a3d9b',ramdisk_id='',reservation_id='r-0v09xzr3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='71879218-5462-43bb-aba6-6319695b24fd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesActionsTest-927778289',owner_user_name='tempest-VolumesActionsTest-927778289-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-29T17:18:58Z,user_data=None,user_id='814a809cf2434fc5bdc86a907c6f923d',uuid=d4d33359-8cfc-4425-9ec5-362129170044,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "25bcb4c9-3633-4bb0-96e1-5749df2814c3", "address": "fa:16:3e:b3:62:2a", "network": {"id": "894c211c-3e65-4d00-831b-021ae0267115", "bridge": "br-int", "label": "tempest-VolumesActionsTest-174971779-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36b7f0db63d84c34b521603b194a3d9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25bcb4c9-36", "ovs_interfaceid": "25bcb4c9-3633-4bb0-96e1-5749df2814c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 29 12:19:02 np0005601226 nova_compute[239456]: 2026-01-29 17:19:02.889 239460 DEBUG nova.network.os_vif_util [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Converting VIF {"id": "25bcb4c9-3633-4bb0-96e1-5749df2814c3", "address": "fa:16:3e:b3:62:2a", "network": {"id": "894c211c-3e65-4d00-831b-021ae0267115", "bridge": "br-int", "label": "tempest-VolumesActionsTest-174971779-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36b7f0db63d84c34b521603b194a3d9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25bcb4c9-36", "ovs_interfaceid": "25bcb4c9-3633-4bb0-96e1-5749df2814c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 29 12:19:02 np0005601226 nova_compute[239456]: 2026-01-29 17:19:02.889 239460 DEBUG nova.network.os_vif_util [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:62:2a,bridge_name='br-int',has_traffic_filtering=True,id=25bcb4c9-3633-4bb0-96e1-5749df2814c3,network=Network(894c211c-3e65-4d00-831b-021ae0267115),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25bcb4c9-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 29 12:19:02 np0005601226 nova_compute[239456]: 2026-01-29 17:19:02.890 239460 DEBUG nova.objects.instance [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Lazy-loading 'pci_devices' on Instance uuid d4d33359-8cfc-4425-9ec5-362129170044 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:19:02 np0005601226 nova_compute[239456]: 2026-01-29 17:19:02.907 239460 DEBUG nova.network.neutron [req-52a251ab-b080-4d65-930a-3cbcd63fe456 req-8ec3a316-535c-44d6-8095-b137dbcc8881 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: d4d33359-8cfc-4425-9ec5-362129170044] Updated VIF entry in instance network info cache for port 25bcb4c9-3633-4bb0-96e1-5749df2814c3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 29 12:19:02 np0005601226 nova_compute[239456]: 2026-01-29 17:19:02.908 239460 DEBUG nova.network.neutron [req-52a251ab-b080-4d65-930a-3cbcd63fe456 req-8ec3a316-535c-44d6-8095-b137dbcc8881 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: d4d33359-8cfc-4425-9ec5-362129170044] Updating instance_info_cache with network_info: [{"id": "25bcb4c9-3633-4bb0-96e1-5749df2814c3", "address": "fa:16:3e:b3:62:2a", "network": {"id": "894c211c-3e65-4d00-831b-021ae0267115", "bridge": "br-int", "label": "tempest-VolumesActionsTest-174971779-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36b7f0db63d84c34b521603b194a3d9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25bcb4c9-36", "ovs_interfaceid": "25bcb4c9-3633-4bb0-96e1-5749df2814c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:19:02 np0005601226 nova_compute[239456]: 2026-01-29 17:19:02.913 239460 DEBUG nova.virt.libvirt.driver [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: d4d33359-8cfc-4425-9ec5-362129170044] End _get_guest_xml xml=<domain type="kvm">
Jan 29 12:19:02 np0005601226 nova_compute[239456]:  <uuid>d4d33359-8cfc-4425-9ec5-362129170044</uuid>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:  <name>instance-00000004</name>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:  <memory>131072</memory>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:  <vcpu>1</vcpu>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:  <metadata>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 29 12:19:02 np0005601226 nova_compute[239456]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:      <nova:name>tempest-VolumesActionsTest-instance-2114964595</nova:name>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:      <nova:creationTime>2026-01-29 17:19:01</nova:creationTime>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:      <nova:flavor name="m1.nano">
Jan 29 12:19:02 np0005601226 nova_compute[239456]:        <nova:memory>128</nova:memory>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:        <nova:disk>1</nova:disk>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:        <nova:swap>0</nova:swap>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:        <nova:ephemeral>0</nova:ephemeral>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:        <nova:vcpus>1</nova:vcpus>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:      </nova:flavor>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:      <nova:owner>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:        <nova:user uuid="814a809cf2434fc5bdc86a907c6f923d">tempest-VolumesActionsTest-927778289-project-member</nova:user>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:        <nova:project uuid="36b7f0db63d84c34b521603b194a3d9b">tempest-VolumesActionsTest-927778289</nova:project>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:      </nova:owner>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:      <nova:root type="image" uuid="71879218-5462-43bb-aba6-6319695b24fd"/>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:      <nova:ports>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:        <nova:port uuid="25bcb4c9-3633-4bb0-96e1-5749df2814c3">
Jan 29 12:19:02 np0005601226 nova_compute[239456]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:        </nova:port>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:      </nova:ports>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:    </nova:instance>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:  </metadata>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:  <sysinfo type="smbios">
Jan 29 12:19:02 np0005601226 nova_compute[239456]:    <system>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:      <entry name="manufacturer">RDO</entry>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:      <entry name="product">OpenStack Compute</entry>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:      <entry name="serial">d4d33359-8cfc-4425-9ec5-362129170044</entry>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:      <entry name="uuid">d4d33359-8cfc-4425-9ec5-362129170044</entry>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:      <entry name="family">Virtual Machine</entry>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:    </system>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:  </sysinfo>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:  <os>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:    <boot dev="hd"/>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:    <smbios mode="sysinfo"/>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:  </os>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:  <features>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:    <acpi/>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:    <apic/>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:    <vmcoreinfo/>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:  </features>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:  <clock offset="utc">
Jan 29 12:19:02 np0005601226 nova_compute[239456]:    <timer name="pit" tickpolicy="delay"/>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:    <timer name="hpet" present="no"/>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:  </clock>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:  <cpu mode="host-model" match="exact">
Jan 29 12:19:02 np0005601226 nova_compute[239456]:    <topology sockets="1" cores="1" threads="1"/>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:  </cpu>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:  <devices>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:    <disk type="network" device="disk">
Jan 29 12:19:02 np0005601226 nova_compute[239456]:      <driver type="raw" cache="none"/>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:      <source protocol="rbd" name="vms/d4d33359-8cfc-4425-9ec5-362129170044_disk">
Jan 29 12:19:02 np0005601226 nova_compute[239456]:        <host name="192.168.122.100" port="6789"/>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:      </source>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:      <auth username="openstack">
Jan 29 12:19:02 np0005601226 nova_compute[239456]:        <secret type="ceph" uuid="cc5c72e3-31e0-58b9-8731-456117d38f4a"/>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:      </auth>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:      <target dev="vda" bus="virtio"/>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:    </disk>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:    <disk type="network" device="cdrom">
Jan 29 12:19:02 np0005601226 nova_compute[239456]:      <driver type="raw" cache="none"/>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:      <source protocol="rbd" name="vms/d4d33359-8cfc-4425-9ec5-362129170044_disk.config">
Jan 29 12:19:02 np0005601226 nova_compute[239456]:        <host name="192.168.122.100" port="6789"/>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:      </source>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:      <auth username="openstack">
Jan 29 12:19:02 np0005601226 nova_compute[239456]:        <secret type="ceph" uuid="cc5c72e3-31e0-58b9-8731-456117d38f4a"/>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:      </auth>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:      <target dev="sda" bus="sata"/>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:    </disk>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:    <interface type="ethernet">
Jan 29 12:19:02 np0005601226 nova_compute[239456]:      <mac address="fa:16:3e:b3:62:2a"/>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:      <model type="virtio"/>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:      <driver name="vhost" rx_queue_size="512"/>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:      <mtu size="1442"/>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:      <target dev="tap25bcb4c9-36"/>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:    </interface>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:    <serial type="pty">
Jan 29 12:19:02 np0005601226 nova_compute[239456]:      <log file="/var/lib/nova/instances/d4d33359-8cfc-4425-9ec5-362129170044/console.log" append="off"/>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:    </serial>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:    <video>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:      <model type="virtio"/>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:    </video>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:    <input type="tablet" bus="usb"/>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:    <rng model="virtio">
Jan 29 12:19:02 np0005601226 nova_compute[239456]:      <backend model="random">/dev/urandom</backend>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:    </rng>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root"/>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:    <controller type="usb" index="0"/>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:    <memballoon model="virtio">
Jan 29 12:19:02 np0005601226 nova_compute[239456]:      <stats period="10"/>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:    </memballoon>
Jan 29 12:19:02 np0005601226 nova_compute[239456]:  </devices>
Jan 29 12:19:02 np0005601226 nova_compute[239456]: </domain>
Jan 29 12:19:02 np0005601226 nova_compute[239456]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 29 12:19:02 np0005601226 nova_compute[239456]: 2026-01-29 17:19:02.914 239460 DEBUG nova.compute.manager [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: d4d33359-8cfc-4425-9ec5-362129170044] Preparing to wait for external event network-vif-plugged-25bcb4c9-3633-4bb0-96e1-5749df2814c3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 29 12:19:02 np0005601226 nova_compute[239456]: 2026-01-29 17:19:02.914 239460 DEBUG oslo_concurrency.lockutils [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Acquiring lock "d4d33359-8cfc-4425-9ec5-362129170044-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:19:02 np0005601226 nova_compute[239456]: 2026-01-29 17:19:02.914 239460 DEBUG oslo_concurrency.lockutils [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Lock "d4d33359-8cfc-4425-9ec5-362129170044-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:19:02 np0005601226 nova_compute[239456]: 2026-01-29 17:19:02.915 239460 DEBUG oslo_concurrency.lockutils [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Lock "d4d33359-8cfc-4425-9ec5-362129170044-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:19:02 np0005601226 nova_compute[239456]: 2026-01-29 17:19:02.915 239460 DEBUG nova.virt.libvirt.vif [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-29T17:18:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-2114964595',display_name='tempest-VolumesActionsTest-instance-2114964595',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-2114964595',id=4,image_ref='71879218-5462-43bb-aba6-6319695b24fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='36b7f0db63d84c34b521603b194a3d9b',ramdisk_id='',reservation_id='r-0v09xzr3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='71879218-5462-43bb-aba6-6319695b24fd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesActionsTest-927778289',owner_user_name='tempest-VolumesActionsTest-927778289-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-29T17:18:58Z,user_data=None,user_id='814a809cf2434fc5bdc86a907c6f923d',uuid=d4d33359-8cfc-4425-9ec5-362129170044,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "25bcb4c9-3633-4bb0-96e1-5749df2814c3", "address": "fa:16:3e:b3:62:2a", "network": {"id": "894c211c-3e65-4d00-831b-021ae0267115", "bridge": "br-int", "label": "tempest-VolumesActionsTest-174971779-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36b7f0db63d84c34b521603b194a3d9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25bcb4c9-36", "ovs_interfaceid": "25bcb4c9-3633-4bb0-96e1-5749df2814c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 29 12:19:02 np0005601226 nova_compute[239456]: 2026-01-29 17:19:02.916 239460 DEBUG nova.network.os_vif_util [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Converting VIF {"id": "25bcb4c9-3633-4bb0-96e1-5749df2814c3", "address": "fa:16:3e:b3:62:2a", "network": {"id": "894c211c-3e65-4d00-831b-021ae0267115", "bridge": "br-int", "label": "tempest-VolumesActionsTest-174971779-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36b7f0db63d84c34b521603b194a3d9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25bcb4c9-36", "ovs_interfaceid": "25bcb4c9-3633-4bb0-96e1-5749df2814c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 29 12:19:02 np0005601226 nova_compute[239456]: 2026-01-29 17:19:02.916 239460 DEBUG nova.network.os_vif_util [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:62:2a,bridge_name='br-int',has_traffic_filtering=True,id=25bcb4c9-3633-4bb0-96e1-5749df2814c3,network=Network(894c211c-3e65-4d00-831b-021ae0267115),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25bcb4c9-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 29 12:19:02 np0005601226 nova_compute[239456]: 2026-01-29 17:19:02.917 239460 DEBUG os_vif [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:62:2a,bridge_name='br-int',has_traffic_filtering=True,id=25bcb4c9-3633-4bb0-96e1-5749df2814c3,network=Network(894c211c-3e65-4d00-831b-021ae0267115),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25bcb4c9-36') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 29 12:19:02 np0005601226 nova_compute[239456]: 2026-01-29 17:19:02.917 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:19:02 np0005601226 nova_compute[239456]: 2026-01-29 17:19:02.918 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:19:02 np0005601226 nova_compute[239456]: 2026-01-29 17:19:02.918 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 29 12:19:02 np0005601226 nova_compute[239456]: 2026-01-29 17:19:02.921 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:19:02 np0005601226 nova_compute[239456]: 2026-01-29 17:19:02.921 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap25bcb4c9-36, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:19:02 np0005601226 nova_compute[239456]: 2026-01-29 17:19:02.921 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap25bcb4c9-36, col_values=(('external_ids', {'iface-id': '25bcb4c9-3633-4bb0-96e1-5749df2814c3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b3:62:2a', 'vm-uuid': 'd4d33359-8cfc-4425-9ec5-362129170044'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:19:02 np0005601226 NetworkManager[49020]: <info>  [1769707142.9240] manager: (tap25bcb4c9-36): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Jan 29 12:19:02 np0005601226 nova_compute[239456]: 2026-01-29 17:19:02.926 239460 DEBUG oslo_concurrency.lockutils [req-52a251ab-b080-4d65-930a-3cbcd63fe456 req-8ec3a316-535c-44d6-8095-b137dbcc8881 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Releasing lock "refresh_cache-d4d33359-8cfc-4425-9ec5-362129170044" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:19:02 np0005601226 nova_compute[239456]: 2026-01-29 17:19:02.928 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:19:02 np0005601226 nova_compute[239456]: 2026-01-29 17:19:02.929 239460 INFO os_vif [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:62:2a,bridge_name='br-int',has_traffic_filtering=True,id=25bcb4c9-3633-4bb0-96e1-5749df2814c3,network=Network(894c211c-3e65-4d00-831b-021ae0267115),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25bcb4c9-36')#033[00m
Jan 29 12:19:03 np0005601226 nova_compute[239456]: 2026-01-29 17:19:03.127 239460 DEBUG nova.virt.libvirt.driver [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:19:03 np0005601226 nova_compute[239456]: 2026-01-29 17:19:03.128 239460 DEBUG nova.virt.libvirt.driver [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:19:03 np0005601226 nova_compute[239456]: 2026-01-29 17:19:03.128 239460 DEBUG nova.virt.libvirt.driver [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] No VIF found with MAC fa:16:3e:b3:62:2a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 29 12:19:03 np0005601226 nova_compute[239456]: 2026-01-29 17:19:03.128 239460 INFO nova.virt.libvirt.driver [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: d4d33359-8cfc-4425-9ec5-362129170044] Using config drive#033[00m
Jan 29 12:19:03 np0005601226 nova_compute[239456]: 2026-01-29 17:19:03.148 239460 DEBUG nova.storage.rbd_utils [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] rbd image d4d33359-8cfc-4425-9ec5-362129170044_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:19:03 np0005601226 nova_compute[239456]: 2026-01-29 17:19:03.587 239460 INFO nova.virt.libvirt.driver [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: d4d33359-8cfc-4425-9ec5-362129170044] Creating config drive at /var/lib/nova/instances/d4d33359-8cfc-4425-9ec5-362129170044/disk.config#033[00m
Jan 29 12:19:03 np0005601226 nova_compute[239456]: 2026-01-29 17:19:03.590 239460 DEBUG oslo_concurrency.processutils [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d4d33359-8cfc-4425-9ec5-362129170044/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_o7z4tx5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:19:03 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e192 do_prune osdmap full prune enabled
Jan 29 12:19:03 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1065: 305 pgs: 305 active+clean; 203 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 2.6 MiB/s wr, 68 op/s
Jan 29 12:19:03 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e193 e193: 3 total, 3 up, 3 in
Jan 29 12:19:03 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e193: 3 total, 3 up, 3 in
Jan 29 12:19:03 np0005601226 nova_compute[239456]: 2026-01-29 17:19:03.710 239460 DEBUG oslo_concurrency.processutils [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d4d33359-8cfc-4425-9ec5-362129170044/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_o7z4tx5" returned: 0 in 0.120s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:19:03 np0005601226 nova_compute[239456]: 2026-01-29 17:19:03.764 239460 DEBUG nova.storage.rbd_utils [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] rbd image d4d33359-8cfc-4425-9ec5-362129170044_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:19:03 np0005601226 nova_compute[239456]: 2026-01-29 17:19:03.767 239460 DEBUG oslo_concurrency.processutils [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d4d33359-8cfc-4425-9ec5-362129170044/disk.config d4d33359-8cfc-4425-9ec5-362129170044_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:19:03 np0005601226 nova_compute[239456]: 2026-01-29 17:19:03.938 239460 DEBUG oslo_concurrency.processutils [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d4d33359-8cfc-4425-9ec5-362129170044/disk.config d4d33359-8cfc-4425-9ec5-362129170044_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:19:03 np0005601226 nova_compute[239456]: 2026-01-29 17:19:03.939 239460 INFO nova.virt.libvirt.driver [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: d4d33359-8cfc-4425-9ec5-362129170044] Deleting local config drive /var/lib/nova/instances/d4d33359-8cfc-4425-9ec5-362129170044/disk.config because it was imported into RBD.#033[00m
Jan 29 12:19:03 np0005601226 kernel: tap25bcb4c9-36: entered promiscuous mode
Jan 29 12:19:03 np0005601226 NetworkManager[49020]: <info>  [1769707143.9670] manager: (tap25bcb4c9-36): new Tun device (/org/freedesktop/NetworkManager/Devices/38)
Jan 29 12:19:03 np0005601226 ovn_controller[145556]: 2026-01-29T17:19:03Z|00051|binding|INFO|Claiming lport 25bcb4c9-3633-4bb0-96e1-5749df2814c3 for this chassis.
Jan 29 12:19:03 np0005601226 ovn_controller[145556]: 2026-01-29T17:19:03Z|00052|binding|INFO|25bcb4c9-3633-4bb0-96e1-5749df2814c3: Claiming fa:16:3e:b3:62:2a 10.100.0.7
Jan 29 12:19:03 np0005601226 nova_compute[239456]: 2026-01-29 17:19:03.968 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:19:03 np0005601226 ovn_controller[145556]: 2026-01-29T17:19:03Z|00053|binding|INFO|Setting lport 25bcb4c9-3633-4bb0-96e1-5749df2814c3 ovn-installed in OVS
Jan 29 12:19:03 np0005601226 nova_compute[239456]: 2026-01-29 17:19:03.977 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:19:03 np0005601226 systemd-udevd[249962]: Network interface NamePolicy= disabled on kernel command line.
Jan 29 12:19:03 np0005601226 NetworkManager[49020]: <info>  [1769707143.9960] device (tap25bcb4c9-36): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 29 12:19:03 np0005601226 NetworkManager[49020]: <info>  [1769707143.9966] device (tap25bcb4c9-36): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 29 12:19:04 np0005601226 ovn_controller[145556]: 2026-01-29T17:19:04Z|00054|binding|INFO|Setting lport 25bcb4c9-3633-4bb0-96e1-5749df2814c3 up in Southbound
Jan 29 12:19:04 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:04.077 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b3:62:2a 10.100.0.7'], port_security=['fa:16:3e:b3:62:2a 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'd4d33359-8cfc-4425-9ec5-362129170044', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-894c211c-3e65-4d00-831b-021ae0267115', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '36b7f0db63d84c34b521603b194a3d9b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '66c5d1d7-dfe8-4ff3-b9ba-ea8b4f693602', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c0a118e9-f2d1-494c-89ac-62b6957c48ed, chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], logical_port=25bcb4c9-3633-4bb0-96e1-5749df2814c3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:19:04 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:04.078 155625 INFO neutron.agent.ovn.metadata.agent [-] Port 25bcb4c9-3633-4bb0-96e1-5749df2814c3 in datapath 894c211c-3e65-4d00-831b-021ae0267115 bound to our chassis#033[00m
Jan 29 12:19:04 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:04.080 155625 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 894c211c-3e65-4d00-831b-021ae0267115#033[00m
Jan 29 12:19:04 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:04.089 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[3ac82c93-ec41-447f-8dcf-7aaa68e9b3ea]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:19:04 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:04.090 155625 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap894c211c-31 in ovnmeta-894c211c-3e65-4d00-831b-021ae0267115 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 29 12:19:04 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:04.091 246354 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap894c211c-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 29 12:19:04 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:04.091 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[ace5fcd1-7f97-4e46-b3a5-e2e1f95cfee1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:19:04 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:04.092 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[7fe5dbb0-ec59-4505-a201-1240768afdd0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:19:04 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:04.098 156164 DEBUG oslo.privsep.daemon [-] privsep: reply[eb01e9aa-4952-4e55-8611-06e7bf315e59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:19:04 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:04.108 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[fa43e45f-6247-4221-991b-7eb198a9dd9d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:19:04 np0005601226 systemd-machined[207561]: New machine qemu-4-instance-00000004.
Jan 29 12:19:04 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:04.124 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[5dbd7430-914b-434a-8524-0b7130b43266]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:19:04 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:04.128 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[6d568870-3521-4081-bbd7-2a1fbbc61846]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:19:04 np0005601226 systemd-udevd[249964]: Network interface NamePolicy= disabled on kernel command line.
Jan 29 12:19:04 np0005601226 NetworkManager[49020]: <info>  [1769707144.1291] manager: (tap894c211c-30): new Veth device (/org/freedesktop/NetworkManager/Devices/39)
Jan 29 12:19:04 np0005601226 systemd[1]: Started Virtual Machine qemu-4-instance-00000004.
Jan 29 12:19:04 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:04.149 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[7f40a582-571c-423a-a3a1-488010eb925a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:19:04 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:04.151 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[13b76147-e6d2-4899-b0ee-393bcb9c9d8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:19:04 np0005601226 NetworkManager[49020]: <info>  [1769707144.1655] device (tap894c211c-30): carrier: link connected
Jan 29 12:19:04 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:04.167 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[9e06fd96-b638-4e1c-acdd-d65c37024134]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:19:04 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:04.180 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[e5898786-3fc0-436d-9c9c-c896103d38e4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap894c211c-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:db:8f:16'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 449766, 'reachable_time': 26859, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249997, 'error': None, 'target': 'ovnmeta-894c211c-3e65-4d00-831b-021ae0267115', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:19:04 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:04.192 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[eab48cb0-d95d-454c-807c-443b406e3ecb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedb:8f16'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 449766, 'tstamp': 449766}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 249999, 'error': None, 'target': 'ovnmeta-894c211c-3e65-4d00-831b-021ae0267115', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:19:04 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:04.207 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[69282a05-8185-4221-a5ef-5da7d5f2e79a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap894c211c-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:db:8f:16'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 449766, 'reachable_time': 26859, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 250000, 'error': None, 'target': 'ovnmeta-894c211c-3e65-4d00-831b-021ae0267115', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:19:04 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:04.231 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[83378519-912f-4024-93c2-4eab1da477b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:19:04 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:04.278 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[79331f74-4495-4cd1-8820-cd24393b3bfe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:19:04 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:04.279 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap894c211c-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:19:04 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:04.279 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 29 12:19:04 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:04.280 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap894c211c-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:19:04 np0005601226 nova_compute[239456]: 2026-01-29 17:19:04.281 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:19:04 np0005601226 kernel: tap894c211c-30: entered promiscuous mode
Jan 29 12:19:04 np0005601226 NetworkManager[49020]: <info>  [1769707144.2822] manager: (tap894c211c-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/40)
Jan 29 12:19:04 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:04.284 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap894c211c-30, col_values=(('external_ids', {'iface-id': '1883e985-6845-4407-9b73-3530d9391c43'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:19:04 np0005601226 nova_compute[239456]: 2026-01-29 17:19:04.285 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:19:04 np0005601226 ovn_controller[145556]: 2026-01-29T17:19:04Z|00055|binding|INFO|Releasing lport 1883e985-6845-4407-9b73-3530d9391c43 from this chassis (sb_readonly=0)
Jan 29 12:19:04 np0005601226 nova_compute[239456]: 2026-01-29 17:19:04.287 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:19:04 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:04.288 155625 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/894c211c-3e65-4d00-831b-021ae0267115.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/894c211c-3e65-4d00-831b-021ae0267115.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 29 12:19:04 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:04.289 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[a3e2a7ec-859e-46fc-abe6-2bdd815f8b54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 29 12:19:04 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:04.290 155625 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 29 12:19:04 np0005601226 ovn_metadata_agent[155620]: global
Jan 29 12:19:04 np0005601226 ovn_metadata_agent[155620]:    log         /dev/log local0 debug
Jan 29 12:19:04 np0005601226 ovn_metadata_agent[155620]:    log-tag     haproxy-metadata-proxy-894c211c-3e65-4d00-831b-021ae0267115
Jan 29 12:19:04 np0005601226 ovn_metadata_agent[155620]:    user        root
Jan 29 12:19:04 np0005601226 ovn_metadata_agent[155620]:    group       root
Jan 29 12:19:04 np0005601226 ovn_metadata_agent[155620]:    maxconn     1024
Jan 29 12:19:04 np0005601226 ovn_metadata_agent[155620]:    pidfile     /var/lib/neutron/external/pids/894c211c-3e65-4d00-831b-021ae0267115.pid.haproxy
Jan 29 12:19:04 np0005601226 ovn_metadata_agent[155620]:    daemon
Jan 29 12:19:04 np0005601226 ovn_metadata_agent[155620]: 
Jan 29 12:19:04 np0005601226 ovn_metadata_agent[155620]: defaults
Jan 29 12:19:04 np0005601226 ovn_metadata_agent[155620]:    log global
Jan 29 12:19:04 np0005601226 ovn_metadata_agent[155620]:    mode http
Jan 29 12:19:04 np0005601226 ovn_metadata_agent[155620]:    option httplog
Jan 29 12:19:04 np0005601226 ovn_metadata_agent[155620]:    option dontlognull
Jan 29 12:19:04 np0005601226 ovn_metadata_agent[155620]:    option http-server-close
Jan 29 12:19:04 np0005601226 ovn_metadata_agent[155620]:    option forwardfor
Jan 29 12:19:04 np0005601226 ovn_metadata_agent[155620]:    retries                 3
Jan 29 12:19:04 np0005601226 ovn_metadata_agent[155620]:    timeout http-request    30s
Jan 29 12:19:04 np0005601226 ovn_metadata_agent[155620]:    timeout connect         30s
Jan 29 12:19:04 np0005601226 ovn_metadata_agent[155620]:    timeout client          32s
Jan 29 12:19:04 np0005601226 ovn_metadata_agent[155620]:    timeout server          32s
Jan 29 12:19:04 np0005601226 ovn_metadata_agent[155620]:    timeout http-keep-alive 30s
Jan 29 12:19:04 np0005601226 ovn_metadata_agent[155620]: 
Jan 29 12:19:04 np0005601226 ovn_metadata_agent[155620]: 
Jan 29 12:19:04 np0005601226 ovn_metadata_agent[155620]: listen listener
Jan 29 12:19:04 np0005601226 ovn_metadata_agent[155620]:    bind 169.254.169.254:80
Jan 29 12:19:04 np0005601226 ovn_metadata_agent[155620]:    server metadata /var/lib/neutron/metadata_proxy
Jan 29 12:19:04 np0005601226 ovn_metadata_agent[155620]:    http-request add-header X-OVN-Network-ID 894c211c-3e65-4d00-831b-021ae0267115
Jan 29 12:19:04 np0005601226 ovn_metadata_agent[155620]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 29 12:19:04 np0005601226 nova_compute[239456]: 2026-01-29 17:19:04.291 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:19:04 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:04.291 155625 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-894c211c-3e65-4d00-831b-021ae0267115', 'env', 'PROCESS_TAG=haproxy-894c211c-3e65-4d00-831b-021ae0267115', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/894c211c-3e65-4d00-831b-021ae0267115.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 29 12:19:04 np0005601226 nova_compute[239456]: 2026-01-29 17:19:04.294 239460 DEBUG nova.compute.manager [req-966cf31e-efa8-4274-99bf-5c7393415c7d req-933ba976-8c29-483e-8a39-fefa9647f026 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: d4d33359-8cfc-4425-9ec5-362129170044] Received event network-vif-plugged-25bcb4c9-3633-4bb0-96e1-5749df2814c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 29 12:19:04 np0005601226 nova_compute[239456]: 2026-01-29 17:19:04.294 239460 DEBUG oslo_concurrency.lockutils [req-966cf31e-efa8-4274-99bf-5c7393415c7d req-933ba976-8c29-483e-8a39-fefa9647f026 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "d4d33359-8cfc-4425-9ec5-362129170044-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 12:19:04 np0005601226 nova_compute[239456]: 2026-01-29 17:19:04.295 239460 DEBUG oslo_concurrency.lockutils [req-966cf31e-efa8-4274-99bf-5c7393415c7d req-933ba976-8c29-483e-8a39-fefa9647f026 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "d4d33359-8cfc-4425-9ec5-362129170044-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 12:19:04 np0005601226 nova_compute[239456]: 2026-01-29 17:19:04.295 239460 DEBUG oslo_concurrency.lockutils [req-966cf31e-efa8-4274-99bf-5c7393415c7d req-933ba976-8c29-483e-8a39-fefa9647f026 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "d4d33359-8cfc-4425-9ec5-362129170044-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 12:19:04 np0005601226 nova_compute[239456]: 2026-01-29 17:19:04.295 239460 DEBUG nova.compute.manager [req-966cf31e-efa8-4274-99bf-5c7393415c7d req-933ba976-8c29-483e-8a39-fefa9647f026 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: d4d33359-8cfc-4425-9ec5-362129170044] Processing event network-vif-plugged-25bcb4c9-3633-4bb0-96e1-5749df2814c3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 29 12:19:04 np0005601226 nova_compute[239456]: 2026-01-29 17:19:04.617 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769707144.6166134, d4d33359-8cfc-4425-9ec5-362129170044 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 29 12:19:04 np0005601226 nova_compute[239456]: 2026-01-29 17:19:04.617 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: d4d33359-8cfc-4425-9ec5-362129170044] VM Started (Lifecycle Event)
Jan 29 12:19:04 np0005601226 nova_compute[239456]: 2026-01-29 17:19:04.619 239460 DEBUG nova.compute.manager [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: d4d33359-8cfc-4425-9ec5-362129170044] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 29 12:19:04 np0005601226 nova_compute[239456]: 2026-01-29 17:19:04.622 239460 DEBUG nova.virt.libvirt.driver [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: d4d33359-8cfc-4425-9ec5-362129170044] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 29 12:19:04 np0005601226 nova_compute[239456]: 2026-01-29 17:19:04.624 239460 INFO nova.virt.libvirt.driver [-] [instance: d4d33359-8cfc-4425-9ec5-362129170044] Instance spawned successfully.
Jan 29 12:19:04 np0005601226 nova_compute[239456]: 2026-01-29 17:19:04.624 239460 DEBUG nova.virt.libvirt.driver [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: d4d33359-8cfc-4425-9ec5-362129170044] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 29 12:19:04 np0005601226 podman[250073]: 2026-01-29 17:19:04.640035869 +0000 UTC m=+0.081115984 container create 92d6be90f1b295886c3b8cbd345916f801d3367d3d412c965c4c7c9f413ed550 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-894c211c-3e65-4d00-831b-021ae0267115, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 29 12:19:04 np0005601226 nova_compute[239456]: 2026-01-29 17:19:04.641 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: d4d33359-8cfc-4425-9ec5-362129170044] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 29 12:19:04 np0005601226 nova_compute[239456]: 2026-01-29 17:19:04.648 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: d4d33359-8cfc-4425-9ec5-362129170044] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 29 12:19:04 np0005601226 nova_compute[239456]: 2026-01-29 17:19:04.651 239460 DEBUG nova.virt.libvirt.driver [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: d4d33359-8cfc-4425-9ec5-362129170044] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 29 12:19:04 np0005601226 nova_compute[239456]: 2026-01-29 17:19:04.652 239460 DEBUG nova.virt.libvirt.driver [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: d4d33359-8cfc-4425-9ec5-362129170044] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 29 12:19:04 np0005601226 nova_compute[239456]: 2026-01-29 17:19:04.652 239460 DEBUG nova.virt.libvirt.driver [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: d4d33359-8cfc-4425-9ec5-362129170044] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 29 12:19:04 np0005601226 nova_compute[239456]: 2026-01-29 17:19:04.653 239460 DEBUG nova.virt.libvirt.driver [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: d4d33359-8cfc-4425-9ec5-362129170044] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 29 12:19:04 np0005601226 nova_compute[239456]: 2026-01-29 17:19:04.653 239460 DEBUG nova.virt.libvirt.driver [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: d4d33359-8cfc-4425-9ec5-362129170044] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 29 12:19:04 np0005601226 nova_compute[239456]: 2026-01-29 17:19:04.654 239460 DEBUG nova.virt.libvirt.driver [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: d4d33359-8cfc-4425-9ec5-362129170044] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 29 12:19:04 np0005601226 podman[250073]: 2026-01-29 17:19:04.583882097 +0000 UTC m=+0.024962262 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 29 12:19:04 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e193 do_prune osdmap full prune enabled
Jan 29 12:19:04 np0005601226 nova_compute[239456]: 2026-01-29 17:19:04.684 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: d4d33359-8cfc-4425-9ec5-362129170044] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 29 12:19:04 np0005601226 nova_compute[239456]: 2026-01-29 17:19:04.684 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769707144.6174903, d4d33359-8cfc-4425-9ec5-362129170044 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 29 12:19:04 np0005601226 nova_compute[239456]: 2026-01-29 17:19:04.684 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: d4d33359-8cfc-4425-9ec5-362129170044] VM Paused (Lifecycle Event)
Jan 29 12:19:04 np0005601226 systemd[1]: Started libpod-conmon-92d6be90f1b295886c3b8cbd345916f801d3367d3d412c965c4c7c9f413ed550.scope.
Jan 29 12:19:04 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e194 e194: 3 total, 3 up, 3 in
Jan 29 12:19:04 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e194: 3 total, 3 up, 3 in
Jan 29 12:19:04 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:19:04 np0005601226 nova_compute[239456]: 2026-01-29 17:19:04.714 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: d4d33359-8cfc-4425-9ec5-362129170044] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 29 12:19:04 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e53acba1dc6e69f093345fa5fd360748ececd19186c396adb3bfe95fbe23e9e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 29 12:19:04 np0005601226 nova_compute[239456]: 2026-01-29 17:19:04.716 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769707144.6215, d4d33359-8cfc-4425-9ec5-362129170044 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 29 12:19:04 np0005601226 nova_compute[239456]: 2026-01-29 17:19:04.717 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: d4d33359-8cfc-4425-9ec5-362129170044] VM Resumed (Lifecycle Event)
Jan 29 12:19:04 np0005601226 podman[250073]: 2026-01-29 17:19:04.731540435 +0000 UTC m=+0.172620590 container init 92d6be90f1b295886c3b8cbd345916f801d3367d3d412c965c4c7c9f413ed550 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-894c211c-3e65-4d00-831b-021ae0267115, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 29 12:19:04 np0005601226 nova_compute[239456]: 2026-01-29 17:19:04.732 239460 INFO nova.compute.manager [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: d4d33359-8cfc-4425-9ec5-362129170044] Took 5.74 seconds to spawn the instance on the hypervisor.
Jan 29 12:19:04 np0005601226 nova_compute[239456]: 2026-01-29 17:19:04.732 239460 DEBUG nova.compute.manager [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: d4d33359-8cfc-4425-9ec5-362129170044] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 29 12:19:04 np0005601226 podman[250073]: 2026-01-29 17:19:04.736112729 +0000 UTC m=+0.177192844 container start 92d6be90f1b295886c3b8cbd345916f801d3367d3d412c965c4c7c9f413ed550 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-894c211c-3e65-4d00-831b-021ae0267115, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 29 12:19:04 np0005601226 nova_compute[239456]: 2026-01-29 17:19:04.742 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: d4d33359-8cfc-4425-9ec5-362129170044] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 29 12:19:04 np0005601226 nova_compute[239456]: 2026-01-29 17:19:04.744 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: d4d33359-8cfc-4425-9ec5-362129170044] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 29 12:19:04 np0005601226 neutron-haproxy-ovnmeta-894c211c-3e65-4d00-831b-021ae0267115[250090]: [NOTICE]   (250094) : New worker (250096) forked
Jan 29 12:19:04 np0005601226 neutron-haproxy-ovnmeta-894c211c-3e65-4d00-831b-021ae0267115[250090]: [NOTICE]   (250094) : Loading success.
Jan 29 12:19:04 np0005601226 nova_compute[239456]: 2026-01-29 17:19:04.791 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: d4d33359-8cfc-4425-9ec5-362129170044] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 29 12:19:04 np0005601226 nova_compute[239456]: 2026-01-29 17:19:04.842 239460 INFO nova.compute.manager [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: d4d33359-8cfc-4425-9ec5-362129170044] Took 6.94 seconds to build instance.
Jan 29 12:19:04 np0005601226 nova_compute[239456]: 2026-01-29 17:19:04.861 239460 DEBUG oslo_concurrency.lockutils [None req-d65a189d-b2e6-485d-b3fc-7629afce7d9e 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Lock "d4d33359-8cfc-4425-9ec5-362129170044" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.011s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 12:19:05 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1068: 305 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 296 active+clean; 213 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 89 KiB/s rd, 3.5 MiB/s wr, 126 op/s
Jan 29 12:19:05 np0005601226 nova_compute[239456]: 2026-01-29 17:19:05.898 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:19:06 np0005601226 nova_compute[239456]: 2026-01-29 17:19:06.096 239460 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769707131.0955536, e70499b1-fe73-43d6-b879-f6e0ab20b701 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 29 12:19:06 np0005601226 nova_compute[239456]: 2026-01-29 17:19:06.096 239460 INFO nova.compute.manager [-] [instance: e70499b1-fe73-43d6-b879-f6e0ab20b701] VM Stopped (Lifecycle Event)
Jan 29 12:19:06 np0005601226 nova_compute[239456]: 2026-01-29 17:19:06.103 239460 DEBUG oslo_concurrency.lockutils [None req-3f0e3062-61fd-44c9-b22f-10fbc0b2bb3d aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Acquiring lock "f0dce8a1-b2b9-49db-8805-fd9b75fed5b5" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 12:19:06 np0005601226 nova_compute[239456]: 2026-01-29 17:19:06.104 239460 DEBUG oslo_concurrency.lockutils [None req-3f0e3062-61fd-44c9-b22f-10fbc0b2bb3d aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Lock "f0dce8a1-b2b9-49db-8805-fd9b75fed5b5" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 12:19:06 np0005601226 nova_compute[239456]: 2026-01-29 17:19:06.114 239460 DEBUG nova.compute.manager [None req-6fee5025-0eb3-441e-8daa-e30f9cbd7e1e - - - - - -] [instance: e70499b1-fe73-43d6-b879-f6e0ab20b701] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 29 12:19:06 np0005601226 nova_compute[239456]: 2026-01-29 17:19:06.117 239460 INFO nova.compute.manager [None req-3f0e3062-61fd-44c9-b22f-10fbc0b2bb3d aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Detaching volume e6227d89-43b2-41c2-989b-8b285344bda8
Jan 29 12:19:06 np0005601226 nova_compute[239456]: 2026-01-29 17:19:06.261 239460 DEBUG oslo_concurrency.lockutils [None req-f2564cd5-3732-4888-9982-732f44838136 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Acquiring lock "f0dce8a1-b2b9-49db-8805-fd9b75fed5b5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 12:19:06 np0005601226 nova_compute[239456]: 2026-01-29 17:19:06.291 239460 INFO nova.virt.block_device [None req-3f0e3062-61fd-44c9-b22f-10fbc0b2bb3d aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Attempting to driver detach volume e6227d89-43b2-41c2-989b-8b285344bda8 from mountpoint /dev/vdb
Jan 29 12:19:06 np0005601226 nova_compute[239456]: 2026-01-29 17:19:06.299 239460 DEBUG nova.virt.libvirt.driver [None req-3f0e3062-61fd-44c9-b22f-10fbc0b2bb3d aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Attempting to detach device vdb from instance f0dce8a1-b2b9-49db-8805-fd9b75fed5b5 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 29 12:19:06 np0005601226 nova_compute[239456]: 2026-01-29 17:19:06.299 239460 DEBUG nova.virt.libvirt.guest [None req-3f0e3062-61fd-44c9-b22f-10fbc0b2bb3d aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] detach device xml: <disk type="network" device="disk">
Jan 29 12:19:06 np0005601226 nova_compute[239456]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 29 12:19:06 np0005601226 nova_compute[239456]:  <source protocol="rbd" name="volumes/volume-e6227d89-43b2-41c2-989b-8b285344bda8">
Jan 29 12:19:06 np0005601226 nova_compute[239456]:    <host name="192.168.122.100" port="6789"/>
Jan 29 12:19:06 np0005601226 nova_compute[239456]:  </source>
Jan 29 12:19:06 np0005601226 nova_compute[239456]:  <target dev="vdb" bus="virtio"/>
Jan 29 12:19:06 np0005601226 nova_compute[239456]:  <serial>e6227d89-43b2-41c2-989b-8b285344bda8</serial>
Jan 29 12:19:06 np0005601226 nova_compute[239456]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 29 12:19:06 np0005601226 nova_compute[239456]: </disk>
Jan 29 12:19:06 np0005601226 nova_compute[239456]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 29 12:19:06 np0005601226 nova_compute[239456]: 2026-01-29 17:19:06.318 239460 INFO nova.virt.libvirt.driver [None req-3f0e3062-61fd-44c9-b22f-10fbc0b2bb3d aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Successfully detached device vdb from instance f0dce8a1-b2b9-49db-8805-fd9b75fed5b5 from the persistent domain config.
Jan 29 12:19:06 np0005601226 nova_compute[239456]: 2026-01-29 17:19:06.319 239460 DEBUG nova.virt.libvirt.driver [None req-3f0e3062-61fd-44c9-b22f-10fbc0b2bb3d aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance f0dce8a1-b2b9-49db-8805-fd9b75fed5b5 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 29 12:19:06 np0005601226 nova_compute[239456]: 2026-01-29 17:19:06.319 239460 DEBUG nova.virt.libvirt.guest [None req-3f0e3062-61fd-44c9-b22f-10fbc0b2bb3d aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] detach device xml: <disk type="network" device="disk">
Jan 29 12:19:06 np0005601226 nova_compute[239456]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 29 12:19:06 np0005601226 nova_compute[239456]:  <source protocol="rbd" name="volumes/volume-e6227d89-43b2-41c2-989b-8b285344bda8">
Jan 29 12:19:06 np0005601226 nova_compute[239456]:    <host name="192.168.122.100" port="6789"/>
Jan 29 12:19:06 np0005601226 nova_compute[239456]:  </source>
Jan 29 12:19:06 np0005601226 nova_compute[239456]:  <target dev="vdb" bus="virtio"/>
Jan 29 12:19:06 np0005601226 nova_compute[239456]:  <serial>e6227d89-43b2-41c2-989b-8b285344bda8</serial>
Jan 29 12:19:06 np0005601226 nova_compute[239456]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 29 12:19:06 np0005601226 nova_compute[239456]: </disk>
Jan 29 12:19:06 np0005601226 nova_compute[239456]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 29 12:19:06 np0005601226 nova_compute[239456]: 2026-01-29 17:19:06.393 239460 DEBUG nova.compute.manager [req-26439a53-28a1-4fc5-969b-478875e4143d req-a5231294-74c5-46a6-9c87-8f6232211e94 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: d4d33359-8cfc-4425-9ec5-362129170044] Received event network-vif-plugged-25bcb4c9-3633-4bb0-96e1-5749df2814c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 29 12:19:06 np0005601226 nova_compute[239456]: 2026-01-29 17:19:06.394 239460 DEBUG oslo_concurrency.lockutils [req-26439a53-28a1-4fc5-969b-478875e4143d req-a5231294-74c5-46a6-9c87-8f6232211e94 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "d4d33359-8cfc-4425-9ec5-362129170044-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 12:19:06 np0005601226 nova_compute[239456]: 2026-01-29 17:19:06.395 239460 DEBUG oslo_concurrency.lockutils [req-26439a53-28a1-4fc5-969b-478875e4143d req-a5231294-74c5-46a6-9c87-8f6232211e94 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "d4d33359-8cfc-4425-9ec5-362129170044-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 12:19:06 np0005601226 nova_compute[239456]: 2026-01-29 17:19:06.395 239460 DEBUG oslo_concurrency.lockutils [req-26439a53-28a1-4fc5-969b-478875e4143d req-a5231294-74c5-46a6-9c87-8f6232211e94 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "d4d33359-8cfc-4425-9ec5-362129170044-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 12:19:06 np0005601226 nova_compute[239456]: 2026-01-29 17:19:06.396 239460 DEBUG nova.compute.manager [req-26439a53-28a1-4fc5-969b-478875e4143d req-a5231294-74c5-46a6-9c87-8f6232211e94 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: d4d33359-8cfc-4425-9ec5-362129170044] No waiting events found dispatching network-vif-plugged-25bcb4c9-3633-4bb0-96e1-5749df2814c3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 29 12:19:06 np0005601226 nova_compute[239456]: 2026-01-29 17:19:06.396 239460 WARNING nova.compute.manager [req-26439a53-28a1-4fc5-969b-478875e4143d req-a5231294-74c5-46a6-9c87-8f6232211e94 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: d4d33359-8cfc-4425-9ec5-362129170044] Received unexpected event network-vif-plugged-25bcb4c9-3633-4bb0-96e1-5749df2814c3 for instance with vm_state active and task_state None.
Jan 29 12:19:06 np0005601226 nova_compute[239456]: 2026-01-29 17:19:06.453 239460 DEBUG nova.virt.libvirt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Received event <DeviceRemovedEvent: 1769707146.4528854, f0dce8a1-b2b9-49db-8805-fd9b75fed5b5 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Jan 29 12:19:06 np0005601226 nova_compute[239456]: 2026-01-29 17:19:06.454 239460 DEBUG nova.virt.libvirt.driver [None req-3f0e3062-61fd-44c9-b22f-10fbc0b2bb3d aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance f0dce8a1-b2b9-49db-8805-fd9b75fed5b5 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Jan 29 12:19:06 np0005601226 nova_compute[239456]: 2026-01-29 17:19:06.456 239460 INFO nova.virt.libvirt.driver [None req-3f0e3062-61fd-44c9-b22f-10fbc0b2bb3d aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Successfully detached device vdb from instance f0dce8a1-b2b9-49db-8805-fd9b75fed5b5 from the live domain config.
Jan 29 12:19:06 np0005601226 nova_compute[239456]: 2026-01-29 17:19:06.639 239460 DEBUG nova.objects.instance [None req-3f0e3062-61fd-44c9-b22f-10fbc0b2bb3d aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Lazy-loading 'flavor' on Instance uuid f0dce8a1-b2b9-49db-8805-fd9b75fed5b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 29 12:19:06 np0005601226 nova_compute[239456]: 2026-01-29 17:19:06.672 239460 DEBUG oslo_concurrency.lockutils [None req-3f0e3062-61fd-44c9-b22f-10fbc0b2bb3d aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Lock "f0dce8a1-b2b9-49db-8805-fd9b75fed5b5" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.568s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 12:19:06 np0005601226 nova_compute[239456]: 2026-01-29 17:19:06.673 239460 DEBUG oslo_concurrency.lockutils [None req-f2564cd5-3732-4888-9982-732f44838136 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Lock "f0dce8a1-b2b9-49db-8805-fd9b75fed5b5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.413s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 12:19:06 np0005601226 nova_compute[239456]: 2026-01-29 17:19:06.674 239460 DEBUG oslo_concurrency.lockutils [None req-f2564cd5-3732-4888-9982-732f44838136 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Acquiring lock "f0dce8a1-b2b9-49db-8805-fd9b75fed5b5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 12:19:06 np0005601226 nova_compute[239456]: 2026-01-29 17:19:06.674 239460 DEBUG oslo_concurrency.lockutils [None req-f2564cd5-3732-4888-9982-732f44838136 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Lock "f0dce8a1-b2b9-49db-8805-fd9b75fed5b5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 12:19:06 np0005601226 nova_compute[239456]: 2026-01-29 17:19:06.674 239460 DEBUG oslo_concurrency.lockutils [None req-f2564cd5-3732-4888-9982-732f44838136 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Lock "f0dce8a1-b2b9-49db-8805-fd9b75fed5b5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:19:06 np0005601226 nova_compute[239456]: 2026-01-29 17:19:06.675 239460 INFO nova.compute.manager [None req-f2564cd5-3732-4888-9982-732f44838136 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Terminating instance#033[00m
Jan 29 12:19:06 np0005601226 nova_compute[239456]: 2026-01-29 17:19:06.676 239460 DEBUG nova.compute.manager [None req-f2564cd5-3732-4888-9982-732f44838136 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 29 12:19:06 np0005601226 kernel: tapd7e6c36c-4b (unregistering): left promiscuous mode
Jan 29 12:19:06 np0005601226 NetworkManager[49020]: <info>  [1769707146.7326] device (tapd7e6c36c-4b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 29 12:19:06 np0005601226 ovn_controller[145556]: 2026-01-29T17:19:06Z|00056|binding|INFO|Releasing lport d7e6c36c-4b5a-4578-af9a-56118f94ffc5 from this chassis (sb_readonly=0)
Jan 29 12:19:06 np0005601226 nova_compute[239456]: 2026-01-29 17:19:06.737 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:19:06 np0005601226 ovn_controller[145556]: 2026-01-29T17:19:06Z|00057|binding|INFO|Setting lport d7e6c36c-4b5a-4578-af9a-56118f94ffc5 down in Southbound
Jan 29 12:19:06 np0005601226 ovn_controller[145556]: 2026-01-29T17:19:06Z|00058|binding|INFO|Removing iface tapd7e6c36c-4b ovn-installed in OVS
Jan 29 12:19:06 np0005601226 nova_compute[239456]: 2026-01-29 17:19:06.741 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:19:06 np0005601226 nova_compute[239456]: 2026-01-29 17:19:06.745 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:19:06 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:06.746 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d5:ce:44 10.100.0.13'], port_security=['fa:16:3e:d5:ce:44 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'f0dce8a1-b2b9-49db-8805-fd9b75fed5b5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c65d65e6-04af-4892-ad96-3d83d148450f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7140162c4cd744d38e65ad1bcdadf016', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9f027cde-583c-43d4-9cd2-5ffabc54095e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.217'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a5702b8d-5b0f-4c7d-bc4d-4e202a7e2b31, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], logical_port=d7e6c36c-4b5a-4578-af9a-56118f94ffc5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:19:06 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:06.748 155625 INFO neutron.agent.ovn.metadata.agent [-] Port d7e6c36c-4b5a-4578-af9a-56118f94ffc5 in datapath c65d65e6-04af-4892-ad96-3d83d148450f unbound from our chassis#033[00m
Jan 29 12:19:06 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:06.749 155625 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c65d65e6-04af-4892-ad96-3d83d148450f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 29 12:19:06 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:06.750 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[b3c19d73-9feb-44bc-a115-23ee3781b72d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:19:06 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:06.750 155625 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c65d65e6-04af-4892-ad96-3d83d148450f namespace which is not needed anymore#033[00m
Jan 29 12:19:06 np0005601226 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Deactivated successfully.
Jan 29 12:19:06 np0005601226 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Consumed 12.973s CPU time.
Jan 29 12:19:06 np0005601226 systemd-machined[207561]: Machine qemu-2-instance-00000002 terminated.
Jan 29 12:19:06 np0005601226 nova_compute[239456]: 2026-01-29 17:19:06.892 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:19:06 np0005601226 nova_compute[239456]: 2026-01-29 17:19:06.894 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:19:06 np0005601226 nova_compute[239456]: 2026-01-29 17:19:06.906 239460 INFO nova.virt.libvirt.driver [-] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Instance destroyed successfully.#033[00m
Jan 29 12:19:06 np0005601226 nova_compute[239456]: 2026-01-29 17:19:06.906 239460 DEBUG nova.objects.instance [None req-f2564cd5-3732-4888-9982-732f44838136 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Lazy-loading 'resources' on Instance uuid f0dce8a1-b2b9-49db-8805-fd9b75fed5b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:19:06 np0005601226 nova_compute[239456]: 2026-01-29 17:19:06.925 239460 DEBUG nova.virt.libvirt.vif [None req-f2564cd5-3732-4888-9982-732f44838136 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-29T17:18:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-1490031336',display_name='tempest-VolumesSnapshotTestJSON-instance-1490031336',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-1490031336',id=2,image_ref='71879218-5462-43bb-aba6-6319695b24fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKHUGsj9+rSHi2zYIrYlM5voP+SmeT8NPhKY2BWeEM0EvzN2A8jyIT0940OO1F9cpE1qyu/IQNauLfUufkcWbrGzw7QiYx+LgXRK8QgzdytsLW01R2lsc5ReoRFmrt9CUA==',key_name='tempest-keypair-219750126',keypairs=<?>,launch_index=0,launched_at=2026-01-29T17:18:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7140162c4cd744d38e65ad1bcdadf016',ramdisk_id='',reservation_id='r-zoqdgb1s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='71879218-5462-43bb-aba6-6319695b24fd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesSnapshotTestJSON-783985999',owner_user_name='tempest-VolumesSnapshotTestJSON-783985999-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-29T17:18:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aa90bbad088947a2a9866efeb934031e',uuid=f0dce8a1-b2b9-49db-8805-fd9b75fed5b5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d7e6c36c-4b5a-4578-af9a-56118f94ffc5", "address": "fa:16:3e:d5:ce:44", "network": {"id": "c65d65e6-04af-4892-ad96-3d83d148450f", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-295489753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7140162c4cd744d38e65ad1bcdadf016", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7e6c36c-4b", "ovs_interfaceid": "d7e6c36c-4b5a-4578-af9a-56118f94ffc5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 29 12:19:06 np0005601226 nova_compute[239456]: 2026-01-29 17:19:06.925 239460 DEBUG nova.network.os_vif_util [None req-f2564cd5-3732-4888-9982-732f44838136 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Converting VIF {"id": "d7e6c36c-4b5a-4578-af9a-56118f94ffc5", "address": "fa:16:3e:d5:ce:44", "network": {"id": "c65d65e6-04af-4892-ad96-3d83d148450f", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-295489753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7140162c4cd744d38e65ad1bcdadf016", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7e6c36c-4b", "ovs_interfaceid": "d7e6c36c-4b5a-4578-af9a-56118f94ffc5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 29 12:19:06 np0005601226 nova_compute[239456]: 2026-01-29 17:19:06.926 239460 DEBUG nova.network.os_vif_util [None req-f2564cd5-3732-4888-9982-732f44838136 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d5:ce:44,bridge_name='br-int',has_traffic_filtering=True,id=d7e6c36c-4b5a-4578-af9a-56118f94ffc5,network=Network(c65d65e6-04af-4892-ad96-3d83d148450f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7e6c36c-4b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 29 12:19:06 np0005601226 nova_compute[239456]: 2026-01-29 17:19:06.926 239460 DEBUG os_vif [None req-f2564cd5-3732-4888-9982-732f44838136 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d5:ce:44,bridge_name='br-int',has_traffic_filtering=True,id=d7e6c36c-4b5a-4578-af9a-56118f94ffc5,network=Network(c65d65e6-04af-4892-ad96-3d83d148450f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7e6c36c-4b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 29 12:19:06 np0005601226 nova_compute[239456]: 2026-01-29 17:19:06.929 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:19:06 np0005601226 nova_compute[239456]: 2026-01-29 17:19:06.929 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd7e6c36c-4b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:19:06 np0005601226 nova_compute[239456]: 2026-01-29 17:19:06.932 239460 DEBUG nova.compute.manager [req-8ecf4a82-89a0-4d92-b6b7-2595f82a1828 req-0daac225-5cc5-494d-8f42-ad329ac22c27 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Received event network-vif-unplugged-d7e6c36c-4b5a-4578-af9a-56118f94ffc5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:19:06 np0005601226 nova_compute[239456]: 2026-01-29 17:19:06.933 239460 DEBUG oslo_concurrency.lockutils [req-8ecf4a82-89a0-4d92-b6b7-2595f82a1828 req-0daac225-5cc5-494d-8f42-ad329ac22c27 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "f0dce8a1-b2b9-49db-8805-fd9b75fed5b5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:19:06 np0005601226 nova_compute[239456]: 2026-01-29 17:19:06.933 239460 DEBUG oslo_concurrency.lockutils [req-8ecf4a82-89a0-4d92-b6b7-2595f82a1828 req-0daac225-5cc5-494d-8f42-ad329ac22c27 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "f0dce8a1-b2b9-49db-8805-fd9b75fed5b5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:19:06 np0005601226 nova_compute[239456]: 2026-01-29 17:19:06.933 239460 DEBUG oslo_concurrency.lockutils [req-8ecf4a82-89a0-4d92-b6b7-2595f82a1828 req-0daac225-5cc5-494d-8f42-ad329ac22c27 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "f0dce8a1-b2b9-49db-8805-fd9b75fed5b5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:19:06 np0005601226 nova_compute[239456]: 2026-01-29 17:19:06.933 239460 DEBUG nova.compute.manager [req-8ecf4a82-89a0-4d92-b6b7-2595f82a1828 req-0daac225-5cc5-494d-8f42-ad329ac22c27 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] No waiting events found dispatching network-vif-unplugged-d7e6c36c-4b5a-4578-af9a-56118f94ffc5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 29 12:19:06 np0005601226 nova_compute[239456]: 2026-01-29 17:19:06.933 239460 DEBUG nova.compute.manager [req-8ecf4a82-89a0-4d92-b6b7-2595f82a1828 req-0daac225-5cc5-494d-8f42-ad329ac22c27 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Received event network-vif-unplugged-d7e6c36c-4b5a-4578-af9a-56118f94ffc5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 29 12:19:06 np0005601226 nova_compute[239456]: 2026-01-29 17:19:06.934 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:19:06 np0005601226 nova_compute[239456]: 2026-01-29 17:19:06.936 239460 INFO os_vif [None req-f2564cd5-3732-4888-9982-732f44838136 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d5:ce:44,bridge_name='br-int',has_traffic_filtering=True,id=d7e6c36c-4b5a-4578-af9a-56118f94ffc5,network=Network(c65d65e6-04af-4892-ad96-3d83d148450f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7e6c36c-4b')#033[00m
Jan 29 12:19:07 np0005601226 neutron-haproxy-ovnmeta-c65d65e6-04af-4892-ad96-3d83d148450f[248302]: [NOTICE]   (248306) : haproxy version is 2.8.14-c23fe91
Jan 29 12:19:07 np0005601226 neutron-haproxy-ovnmeta-c65d65e6-04af-4892-ad96-3d83d148450f[248302]: [NOTICE]   (248306) : path to executable is /usr/sbin/haproxy
Jan 29 12:19:07 np0005601226 neutron-haproxy-ovnmeta-c65d65e6-04af-4892-ad96-3d83d148450f[248302]: [WARNING]  (248306) : Exiting Master process...
Jan 29 12:19:07 np0005601226 neutron-haproxy-ovnmeta-c65d65e6-04af-4892-ad96-3d83d148450f[248302]: [WARNING]  (248306) : Exiting Master process...
Jan 29 12:19:07 np0005601226 neutron-haproxy-ovnmeta-c65d65e6-04af-4892-ad96-3d83d148450f[248302]: [ALERT]    (248306) : Current worker (248308) exited with code 143 (Terminated)
Jan 29 12:19:07 np0005601226 neutron-haproxy-ovnmeta-c65d65e6-04af-4892-ad96-3d83d148450f[248302]: [WARNING]  (248306) : All workers exited. Exiting... (0)
Jan 29 12:19:07 np0005601226 systemd[1]: libpod-3f4f354ae6c56650d36127fbd8c4cd80d0cf74802bed489095f4ae95a58d1d96.scope: Deactivated successfully.
Jan 29 12:19:07 np0005601226 podman[250130]: 2026-01-29 17:19:07.0156318 +0000 UTC m=+0.191733111 container died 3f4f354ae6c56650d36127fbd8c4cd80d0cf74802bed489095f4ae95a58d1d96 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c65d65e6-04af-4892-ad96-3d83d148450f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 29 12:19:07 np0005601226 nova_compute[239456]: 2026-01-29 17:19:07.050 239460 DEBUG oslo_concurrency.lockutils [None req-a6c3d070-e8d6-4d22-99d7-a56cd28ab406 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Acquiring lock "d4d33359-8cfc-4425-9ec5-362129170044" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:19:07 np0005601226 nova_compute[239456]: 2026-01-29 17:19:07.050 239460 DEBUG oslo_concurrency.lockutils [None req-a6c3d070-e8d6-4d22-99d7-a56cd28ab406 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Lock "d4d33359-8cfc-4425-9ec5-362129170044" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:19:07 np0005601226 nova_compute[239456]: 2026-01-29 17:19:07.050 239460 DEBUG oslo_concurrency.lockutils [None req-a6c3d070-e8d6-4d22-99d7-a56cd28ab406 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Acquiring lock "d4d33359-8cfc-4425-9ec5-362129170044-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:19:07 np0005601226 nova_compute[239456]: 2026-01-29 17:19:07.051 239460 DEBUG oslo_concurrency.lockutils [None req-a6c3d070-e8d6-4d22-99d7-a56cd28ab406 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Lock "d4d33359-8cfc-4425-9ec5-362129170044-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:19:07 np0005601226 nova_compute[239456]: 2026-01-29 17:19:07.051 239460 DEBUG oslo_concurrency.lockutils [None req-a6c3d070-e8d6-4d22-99d7-a56cd28ab406 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Lock "d4d33359-8cfc-4425-9ec5-362129170044-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:19:07 np0005601226 nova_compute[239456]: 2026-01-29 17:19:07.052 239460 INFO nova.compute.manager [None req-a6c3d070-e8d6-4d22-99d7-a56cd28ab406 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: d4d33359-8cfc-4425-9ec5-362129170044] Terminating instance#033[00m
Jan 29 12:19:07 np0005601226 nova_compute[239456]: 2026-01-29 17:19:07.053 239460 DEBUG nova.compute.manager [None req-a6c3d070-e8d6-4d22-99d7-a56cd28ab406 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: d4d33359-8cfc-4425-9ec5-362129170044] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 29 12:19:07 np0005601226 kernel: tap25bcb4c9-36 (unregistering): left promiscuous mode
Jan 29 12:19:07 np0005601226 nova_compute[239456]: 2026-01-29 17:19:07.158 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:19:07 np0005601226 NetworkManager[49020]: <info>  [1769707147.1614] device (tap25bcb4c9-36): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 29 12:19:07 np0005601226 ovn_controller[145556]: 2026-01-29T17:19:07Z|00059|binding|INFO|Releasing lport 25bcb4c9-3633-4bb0-96e1-5749df2814c3 from this chassis (sb_readonly=0)
Jan 29 12:19:07 np0005601226 ovn_controller[145556]: 2026-01-29T17:19:07Z|00060|binding|INFO|Setting lport 25bcb4c9-3633-4bb0-96e1-5749df2814c3 down in Southbound
Jan 29 12:19:07 np0005601226 nova_compute[239456]: 2026-01-29 17:19:07.165 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:19:07 np0005601226 ovn_controller[145556]: 2026-01-29T17:19:07Z|00061|binding|INFO|Removing iface tap25bcb4c9-36 ovn-installed in OVS
Jan 29 12:19:07 np0005601226 nova_compute[239456]: 2026-01-29 17:19:07.166 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:19:07 np0005601226 nova_compute[239456]: 2026-01-29 17:19:07.171 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:19:07 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:07.172 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b3:62:2a 10.100.0.7'], port_security=['fa:16:3e:b3:62:2a 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'd4d33359-8cfc-4425-9ec5-362129170044', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-894c211c-3e65-4d00-831b-021ae0267115', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '36b7f0db63d84c34b521603b194a3d9b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '66c5d1d7-dfe8-4ff3-b9ba-ea8b4f693602', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c0a118e9-f2d1-494c-89ac-62b6957c48ed, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], logical_port=25bcb4c9-3633-4bb0-96e1-5749df2814c3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:19:07 np0005601226 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3f4f354ae6c56650d36127fbd8c4cd80d0cf74802bed489095f4ae95a58d1d96-userdata-shm.mount: Deactivated successfully.
Jan 29 12:19:07 np0005601226 systemd[1]: var-lib-containers-storage-overlay-b50dc4c2bcaa1adb3ccbf825d818226c164e63732f13af2b574f54a96483bac2-merged.mount: Deactivated successfully.
Jan 29 12:19:07 np0005601226 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Deactivated successfully.
Jan 29 12:19:07 np0005601226 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Consumed 2.945s CPU time.
Jan 29 12:19:07 np0005601226 systemd-machined[207561]: Machine qemu-4-instance-00000004 terminated.
Jan 29 12:19:07 np0005601226 podman[250130]: 2026-01-29 17:19:07.26673187 +0000 UTC m=+0.442833181 container cleanup 3f4f354ae6c56650d36127fbd8c4cd80d0cf74802bed489095f4ae95a58d1d96 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c65d65e6-04af-4892-ad96-3d83d148450f, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 29 12:19:07 np0005601226 nova_compute[239456]: 2026-01-29 17:19:07.282 239460 INFO nova.virt.libvirt.driver [-] [instance: d4d33359-8cfc-4425-9ec5-362129170044] Instance destroyed successfully.#033[00m
Jan 29 12:19:07 np0005601226 nova_compute[239456]: 2026-01-29 17:19:07.283 239460 DEBUG nova.objects.instance [None req-a6c3d070-e8d6-4d22-99d7-a56cd28ab406 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Lazy-loading 'resources' on Instance uuid d4d33359-8cfc-4425-9ec5-362129170044 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:19:07 np0005601226 nova_compute[239456]: 2026-01-29 17:19:07.295 239460 DEBUG nova.virt.libvirt.vif [None req-a6c3d070-e8d6-4d22-99d7-a56cd28ab406 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-29T17:18:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-2114964595',display_name='tempest-VolumesActionsTest-instance-2114964595',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-2114964595',id=4,image_ref='71879218-5462-43bb-aba6-6319695b24fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-29T17:19:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='36b7f0db63d84c34b521603b194a3d9b',ramdisk_id='',reservation_id='r-0v09xzr3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='71879218-5462-43bb-aba6-6319695b24fd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesActionsTest-927778289',owner_user_name='tempest-VolumesActionsTest-927778289-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-29T17:19:04Z,user_data=None,user_id='814a809cf2434fc5bdc86a907c6f923d',uuid=d4d33359-8cfc-4425-9ec5-362129170044,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "25bcb4c9-3633-4bb0-96e1-5749df2814c3", "address": "fa:16:3e:b3:62:2a", "network": {"id": "894c211c-3e65-4d00-831b-021ae0267115", "bridge": "br-int", "label": "tempest-VolumesActionsTest-174971779-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36b7f0db63d84c34b521603b194a3d9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25bcb4c9-36", "ovs_interfaceid": "25bcb4c9-3633-4bb0-96e1-5749df2814c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 29 12:19:07 np0005601226 nova_compute[239456]: 2026-01-29 17:19:07.295 239460 DEBUG nova.network.os_vif_util [None req-a6c3d070-e8d6-4d22-99d7-a56cd28ab406 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Converting VIF {"id": "25bcb4c9-3633-4bb0-96e1-5749df2814c3", "address": "fa:16:3e:b3:62:2a", "network": {"id": "894c211c-3e65-4d00-831b-021ae0267115", "bridge": "br-int", "label": "tempest-VolumesActionsTest-174971779-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36b7f0db63d84c34b521603b194a3d9b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25bcb4c9-36", "ovs_interfaceid": "25bcb4c9-3633-4bb0-96e1-5749df2814c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 29 12:19:07 np0005601226 nova_compute[239456]: 2026-01-29 17:19:07.296 239460 DEBUG nova.network.os_vif_util [None req-a6c3d070-e8d6-4d22-99d7-a56cd28ab406 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:62:2a,bridge_name='br-int',has_traffic_filtering=True,id=25bcb4c9-3633-4bb0-96e1-5749df2814c3,network=Network(894c211c-3e65-4d00-831b-021ae0267115),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25bcb4c9-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 29 12:19:07 np0005601226 nova_compute[239456]: 2026-01-29 17:19:07.296 239460 DEBUG os_vif [None req-a6c3d070-e8d6-4d22-99d7-a56cd28ab406 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:62:2a,bridge_name='br-int',has_traffic_filtering=True,id=25bcb4c9-3633-4bb0-96e1-5749df2814c3,network=Network(894c211c-3e65-4d00-831b-021ae0267115),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25bcb4c9-36') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 29 12:19:07 np0005601226 nova_compute[239456]: 2026-01-29 17:19:07.297 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:19:07 np0005601226 nova_compute[239456]: 2026-01-29 17:19:07.298 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap25bcb4c9-36, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:19:07 np0005601226 nova_compute[239456]: 2026-01-29 17:19:07.299 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:19:07 np0005601226 nova_compute[239456]: 2026-01-29 17:19:07.301 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 29 12:19:07 np0005601226 nova_compute[239456]: 2026-01-29 17:19:07.303 239460 INFO os_vif [None req-a6c3d070-e8d6-4d22-99d7-a56cd28ab406 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:62:2a,bridge_name='br-int',has_traffic_filtering=True,id=25bcb4c9-3633-4bb0-96e1-5749df2814c3,network=Network(894c211c-3e65-4d00-831b-021ae0267115),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25bcb4c9-36')#033[00m
Jan 29 12:19:07 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e194 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:19:07 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e194 do_prune osdmap full prune enabled
Jan 29 12:19:07 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e195 e195: 3 total, 3 up, 3 in
Jan 29 12:19:07 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e195: 3 total, 3 up, 3 in
Jan 29 12:19:07 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1070: 305 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 296 active+clean; 213 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 65 KiB/s rd, 2.8 MiB/s wr, 98 op/s
Jan 29 12:19:07 np0005601226 podman[250203]: 2026-01-29 17:19:07.690798597 +0000 UTC m=+0.401051651 container remove 3f4f354ae6c56650d36127fbd8c4cd80d0cf74802bed489095f4ae95a58d1d96 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c65d65e6-04af-4892-ad96-3d83d148450f, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 29 12:19:07 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:07.694 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[0d09ceb2-46fd-419e-bc4b-05995be29376]: (4, ('Thu Jan 29 05:19:06 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-c65d65e6-04af-4892-ad96-3d83d148450f (3f4f354ae6c56650d36127fbd8c4cd80d0cf74802bed489095f4ae95a58d1d96)\n3f4f354ae6c56650d36127fbd8c4cd80d0cf74802bed489095f4ae95a58d1d96\nThu Jan 29 05:19:07 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-c65d65e6-04af-4892-ad96-3d83d148450f (3f4f354ae6c56650d36127fbd8c4cd80d0cf74802bed489095f4ae95a58d1d96)\n3f4f354ae6c56650d36127fbd8c4cd80d0cf74802bed489095f4ae95a58d1d96\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:19:07 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:07.697 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[7b42c0e7-d9d6-480f-b8ff-c25810fd4b80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:19:07 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:07.698 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc65d65e6-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:19:07 np0005601226 nova_compute[239456]: 2026-01-29 17:19:07.756 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:19:07 np0005601226 systemd[1]: libpod-conmon-3f4f354ae6c56650d36127fbd8c4cd80d0cf74802bed489095f4ae95a58d1d96.scope: Deactivated successfully.
Jan 29 12:19:07 np0005601226 kernel: tapc65d65e6-00: left promiscuous mode
Jan 29 12:19:07 np0005601226 nova_compute[239456]: 2026-01-29 17:19:07.761 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:19:07 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:07.764 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[c3b63565-b7be-4b42-8149-858cbb59966b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:19:07 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:07.782 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[6d27ee2b-b7e8-44c5-841a-33d060c7b533]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:19:07 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:07.783 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[54228d52-3579-4e38-8395-93c3ba37b4ab]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:19:07 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:07.795 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[2a70539a-d824-46b8-90d2-3d45f9f24900]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 444851, 'reachable_time': 28060, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250253, 'error': None, 'target': 'ovnmeta-c65d65e6-04af-4892-ad96-3d83d148450f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:19:07 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:07.797 156164 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c65d65e6-04af-4892-ad96-3d83d148450f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 29 12:19:07 np0005601226 systemd[1]: run-netns-ovnmeta\x2dc65d65e6\x2d04af\x2d4892\x2dad96\x2d3d83d148450f.mount: Deactivated successfully.
Jan 29 12:19:07 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:07.798 156164 DEBUG oslo.privsep.daemon [-] privsep: reply[906b3d7c-bd4e-45c4-aded-72a492191112]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:19:07 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:07.798 155625 INFO neutron.agent.ovn.metadata.agent [-] Port 25bcb4c9-3633-4bb0-96e1-5749df2814c3 in datapath 894c211c-3e65-4d00-831b-021ae0267115 unbound from our chassis#033[00m
Jan 29 12:19:07 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:07.799 155625 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 894c211c-3e65-4d00-831b-021ae0267115, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 29 12:19:07 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:07.800 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[65ff14fa-8a5e-43f3-86d4-e78150e8d19e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:19:07 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:07.800 155625 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-894c211c-3e65-4d00-831b-021ae0267115 namespace which is not needed anymore#033[00m
Jan 29 12:19:07 np0005601226 podman[250235]: 2026-01-29 17:19:07.844223972 +0000 UTC m=+0.067480821 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 29 12:19:07 np0005601226 podman[250237]: 2026-01-29 17:19:07.874858448 +0000 UTC m=+0.096267447 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.schema-version=1.0, 
org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 29 12:19:07 np0005601226 neutron-haproxy-ovnmeta-894c211c-3e65-4d00-831b-021ae0267115[250090]: [NOTICE]   (250094) : haproxy version is 2.8.14-c23fe91
Jan 29 12:19:07 np0005601226 neutron-haproxy-ovnmeta-894c211c-3e65-4d00-831b-021ae0267115[250090]: [NOTICE]   (250094) : path to executable is /usr/sbin/haproxy
Jan 29 12:19:07 np0005601226 neutron-haproxy-ovnmeta-894c211c-3e65-4d00-831b-021ae0267115[250090]: [ALERT]    (250094) : Current worker (250096) exited with code 143 (Terminated)
Jan 29 12:19:07 np0005601226 neutron-haproxy-ovnmeta-894c211c-3e65-4d00-831b-021ae0267115[250090]: [WARNING]  (250094) : All workers exited. Exiting... (0)
Jan 29 12:19:07 np0005601226 systemd[1]: libpod-92d6be90f1b295886c3b8cbd345916f801d3367d3d412c965c4c7c9f413ed550.scope: Deactivated successfully.
Jan 29 12:19:08 np0005601226 podman[250297]: 2026-01-29 17:19:08.003343383 +0000 UTC m=+0.141841111 container died 92d6be90f1b295886c3b8cbd345916f801d3367d3d412c965c4c7c9f413ed550 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-894c211c-3e65-4d00-831b-021ae0267115, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 29 12:19:08 np0005601226 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-92d6be90f1b295886c3b8cbd345916f801d3367d3d412c965c4c7c9f413ed550-userdata-shm.mount: Deactivated successfully.
Jan 29 12:19:08 np0005601226 systemd[1]: var-lib-containers-storage-overlay-1e53acba1dc6e69f093345fa5fd360748ececd19186c396adb3bfe95fbe23e9e-merged.mount: Deactivated successfully.
Jan 29 12:19:08 np0005601226 podman[250297]: 2026-01-29 17:19:08.3878098 +0000 UTC m=+0.526307568 container cleanup 92d6be90f1b295886c3b8cbd345916f801d3367d3d412c965c4c7c9f413ed550 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-894c211c-3e65-4d00-831b-021ae0267115, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 29 12:19:08 np0005601226 podman[250324]: 2026-01-29 17:19:08.468025039 +0000 UTC m=+0.063279748 container remove 92d6be90f1b295886c3b8cbd345916f801d3367d3d412c965c4c7c9f413ed550 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-894c211c-3e65-4d00-831b-021ae0267115, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 29 12:19:08 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:08.471 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[278ab05e-5439-431e-bcea-0930ce399690]: (4, ('Thu Jan 29 05:19:07 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-894c211c-3e65-4d00-831b-021ae0267115 (92d6be90f1b295886c3b8cbd345916f801d3367d3d412c965c4c7c9f413ed550)\n92d6be90f1b295886c3b8cbd345916f801d3367d3d412c965c4c7c9f413ed550\nThu Jan 29 05:19:08 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-894c211c-3e65-4d00-831b-021ae0267115 (92d6be90f1b295886c3b8cbd345916f801d3367d3d412c965c4c7c9f413ed550)\n92d6be90f1b295886c3b8cbd345916f801d3367d3d412c965c4c7c9f413ed550\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:19:08 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:08.473 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[22ce38e3-e93a-4cac-92d8-dc9070b3ee19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:19:08 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:08.474 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap894c211c-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:19:08 np0005601226 kernel: tap894c211c-30: left promiscuous mode
Jan 29 12:19:08 np0005601226 nova_compute[239456]: 2026-01-29 17:19:08.475 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:19:08 np0005601226 nova_compute[239456]: 2026-01-29 17:19:08.480 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:19:08 np0005601226 systemd[1]: libpod-conmon-92d6be90f1b295886c3b8cbd345916f801d3367d3d412c965c4c7c9f413ed550.scope: Deactivated successfully.
Jan 29 12:19:08 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:08.484 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[a685b352-f67a-4321-854c-2e8444203a4b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:19:08 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:08.496 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[9189494c-34b2-497f-901b-3322d7c8843f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:19:08 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:08.498 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[abc3b2e3-0c42-4410-8eb9-01393d7679b8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:19:08 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:08.516 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[e65749bf-e9f6-4df2-8ea6-5223d6624ef6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 449762, 'reachable_time': 16448, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250342, 'error': None, 'target': 'ovnmeta-894c211c-3e65-4d00-831b-021ae0267115', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:19:08 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:08.517 156164 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-894c211c-3e65-4d00-831b-021ae0267115 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 29 12:19:08 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:08.517 156164 DEBUG oslo.privsep.daemon [-] privsep: reply[ec15054a-124f-401c-8741-cb10c554518d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:19:08 np0005601226 systemd[1]: run-netns-ovnmeta\x2d894c211c\x2d3e65\x2d4d00\x2d831b\x2d021ae0267115.mount: Deactivated successfully.
Jan 29 12:19:08 np0005601226 nova_compute[239456]: 2026-01-29 17:19:08.601 239460 INFO nova.virt.libvirt.driver [None req-f2564cd5-3732-4888-9982-732f44838136 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Deleting instance files /var/lib/nova/instances/f0dce8a1-b2b9-49db-8805-fd9b75fed5b5_del#033[00m
Jan 29 12:19:08 np0005601226 nova_compute[239456]: 2026-01-29 17:19:08.604 239460 INFO nova.virt.libvirt.driver [None req-f2564cd5-3732-4888-9982-732f44838136 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Deletion of /var/lib/nova/instances/f0dce8a1-b2b9-49db-8805-fd9b75fed5b5_del complete#033[00m
Jan 29 12:19:08 np0005601226 nova_compute[239456]: 2026-01-29 17:19:08.614 239460 DEBUG nova.compute.manager [req-d6192708-8698-4008-bdb4-7af892b9bf97 req-3b654dfe-b72e-44aa-85b8-1126bc9cf801 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: d4d33359-8cfc-4425-9ec5-362129170044] Received event network-vif-unplugged-25bcb4c9-3633-4bb0-96e1-5749df2814c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:19:08 np0005601226 nova_compute[239456]: 2026-01-29 17:19:08.614 239460 DEBUG oslo_concurrency.lockutils [req-d6192708-8698-4008-bdb4-7af892b9bf97 req-3b654dfe-b72e-44aa-85b8-1126bc9cf801 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "d4d33359-8cfc-4425-9ec5-362129170044-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:19:08 np0005601226 nova_compute[239456]: 2026-01-29 17:19:08.614 239460 DEBUG oslo_concurrency.lockutils [req-d6192708-8698-4008-bdb4-7af892b9bf97 req-3b654dfe-b72e-44aa-85b8-1126bc9cf801 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "d4d33359-8cfc-4425-9ec5-362129170044-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:19:08 np0005601226 nova_compute[239456]: 2026-01-29 17:19:08.615 239460 DEBUG oslo_concurrency.lockutils [req-d6192708-8698-4008-bdb4-7af892b9bf97 req-3b654dfe-b72e-44aa-85b8-1126bc9cf801 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "d4d33359-8cfc-4425-9ec5-362129170044-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:19:08 np0005601226 nova_compute[239456]: 2026-01-29 17:19:08.615 239460 DEBUG nova.compute.manager [req-d6192708-8698-4008-bdb4-7af892b9bf97 req-3b654dfe-b72e-44aa-85b8-1126bc9cf801 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: d4d33359-8cfc-4425-9ec5-362129170044] No waiting events found dispatching network-vif-unplugged-25bcb4c9-3633-4bb0-96e1-5749df2814c3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 29 12:19:08 np0005601226 nova_compute[239456]: 2026-01-29 17:19:08.616 239460 DEBUG nova.compute.manager [req-d6192708-8698-4008-bdb4-7af892b9bf97 req-3b654dfe-b72e-44aa-85b8-1126bc9cf801 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: d4d33359-8cfc-4425-9ec5-362129170044] Received event network-vif-unplugged-25bcb4c9-3633-4bb0-96e1-5749df2814c3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 29 12:19:08 np0005601226 nova_compute[239456]: 2026-01-29 17:19:08.616 239460 DEBUG nova.compute.manager [req-d6192708-8698-4008-bdb4-7af892b9bf97 req-3b654dfe-b72e-44aa-85b8-1126bc9cf801 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: d4d33359-8cfc-4425-9ec5-362129170044] Received event network-vif-plugged-25bcb4c9-3633-4bb0-96e1-5749df2814c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:19:08 np0005601226 nova_compute[239456]: 2026-01-29 17:19:08.616 239460 DEBUG oslo_concurrency.lockutils [req-d6192708-8698-4008-bdb4-7af892b9bf97 req-3b654dfe-b72e-44aa-85b8-1126bc9cf801 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "d4d33359-8cfc-4425-9ec5-362129170044-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:19:08 np0005601226 nova_compute[239456]: 2026-01-29 17:19:08.617 239460 DEBUG oslo_concurrency.lockutils [req-d6192708-8698-4008-bdb4-7af892b9bf97 req-3b654dfe-b72e-44aa-85b8-1126bc9cf801 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "d4d33359-8cfc-4425-9ec5-362129170044-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:19:08 np0005601226 nova_compute[239456]: 2026-01-29 17:19:08.617 239460 DEBUG oslo_concurrency.lockutils [req-d6192708-8698-4008-bdb4-7af892b9bf97 req-3b654dfe-b72e-44aa-85b8-1126bc9cf801 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "d4d33359-8cfc-4425-9ec5-362129170044-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:19:08 np0005601226 nova_compute[239456]: 2026-01-29 17:19:08.618 239460 DEBUG nova.compute.manager [req-d6192708-8698-4008-bdb4-7af892b9bf97 req-3b654dfe-b72e-44aa-85b8-1126bc9cf801 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: d4d33359-8cfc-4425-9ec5-362129170044] No waiting events found dispatching network-vif-plugged-25bcb4c9-3633-4bb0-96e1-5749df2814c3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 29 12:19:08 np0005601226 nova_compute[239456]: 2026-01-29 17:19:08.618 239460 WARNING nova.compute.manager [req-d6192708-8698-4008-bdb4-7af892b9bf97 req-3b654dfe-b72e-44aa-85b8-1126bc9cf801 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: d4d33359-8cfc-4425-9ec5-362129170044] Received unexpected event network-vif-plugged-25bcb4c9-3633-4bb0-96e1-5749df2814c3 for instance with vm_state active and task_state deleting.#033[00m
Jan 29 12:19:08 np0005601226 nova_compute[239456]: 2026-01-29 17:19:08.656 239460 INFO nova.compute.manager [None req-f2564cd5-3732-4888-9982-732f44838136 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Took 1.98 seconds to destroy the instance on the hypervisor.#033[00m
Jan 29 12:19:08 np0005601226 nova_compute[239456]: 2026-01-29 17:19:08.657 239460 DEBUG oslo.service.loopingcall [None req-f2564cd5-3732-4888-9982-732f44838136 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 29 12:19:08 np0005601226 nova_compute[239456]: 2026-01-29 17:19:08.657 239460 DEBUG nova.compute.manager [-] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 29 12:19:08 np0005601226 nova_compute[239456]: 2026-01-29 17:19:08.657 239460 DEBUG nova.network.neutron [-] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 29 12:19:08 np0005601226 nova_compute[239456]: 2026-01-29 17:19:08.733 239460 INFO nova.virt.libvirt.driver [None req-a6c3d070-e8d6-4d22-99d7-a56cd28ab406 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: d4d33359-8cfc-4425-9ec5-362129170044] Deleting instance files /var/lib/nova/instances/d4d33359-8cfc-4425-9ec5-362129170044_del#033[00m
Jan 29 12:19:08 np0005601226 nova_compute[239456]: 2026-01-29 17:19:08.734 239460 INFO nova.virt.libvirt.driver [None req-a6c3d070-e8d6-4d22-99d7-a56cd28ab406 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: d4d33359-8cfc-4425-9ec5-362129170044] Deletion of /var/lib/nova/instances/d4d33359-8cfc-4425-9ec5-362129170044_del complete#033[00m
Jan 29 12:19:08 np0005601226 nova_compute[239456]: 2026-01-29 17:19:08.781 239460 INFO nova.compute.manager [None req-a6c3d070-e8d6-4d22-99d7-a56cd28ab406 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] [instance: d4d33359-8cfc-4425-9ec5-362129170044] Took 1.73 seconds to destroy the instance on the hypervisor.#033[00m
Jan 29 12:19:08 np0005601226 nova_compute[239456]: 2026-01-29 17:19:08.782 239460 DEBUG oslo.service.loopingcall [None req-a6c3d070-e8d6-4d22-99d7-a56cd28ab406 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 29 12:19:08 np0005601226 nova_compute[239456]: 2026-01-29 17:19:08.782 239460 DEBUG nova.compute.manager [-] [instance: d4d33359-8cfc-4425-9ec5-362129170044] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 29 12:19:08 np0005601226 nova_compute[239456]: 2026-01-29 17:19:08.782 239460 DEBUG nova.network.neutron [-] [instance: d4d33359-8cfc-4425-9ec5-362129170044] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 29 12:19:09 np0005601226 nova_compute[239456]: 2026-01-29 17:19:09.044 239460 DEBUG nova.compute.manager [req-775b0dd7-9ad8-4aea-8efc-da4f7b615c4f req-73e10067-0794-43c5-8b2b-cf544e52fe6d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Received event network-vif-plugged-d7e6c36c-4b5a-4578-af9a-56118f94ffc5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:19:09 np0005601226 nova_compute[239456]: 2026-01-29 17:19:09.044 239460 DEBUG oslo_concurrency.lockutils [req-775b0dd7-9ad8-4aea-8efc-da4f7b615c4f req-73e10067-0794-43c5-8b2b-cf544e52fe6d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "f0dce8a1-b2b9-49db-8805-fd9b75fed5b5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:19:09 np0005601226 nova_compute[239456]: 2026-01-29 17:19:09.044 239460 DEBUG oslo_concurrency.lockutils [req-775b0dd7-9ad8-4aea-8efc-da4f7b615c4f req-73e10067-0794-43c5-8b2b-cf544e52fe6d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "f0dce8a1-b2b9-49db-8805-fd9b75fed5b5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:19:09 np0005601226 nova_compute[239456]: 2026-01-29 17:19:09.044 239460 DEBUG oslo_concurrency.lockutils [req-775b0dd7-9ad8-4aea-8efc-da4f7b615c4f req-73e10067-0794-43c5-8b2b-cf544e52fe6d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "f0dce8a1-b2b9-49db-8805-fd9b75fed5b5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:19:09 np0005601226 nova_compute[239456]: 2026-01-29 17:19:09.044 239460 DEBUG nova.compute.manager [req-775b0dd7-9ad8-4aea-8efc-da4f7b615c4f req-73e10067-0794-43c5-8b2b-cf544e52fe6d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] No waiting events found dispatching network-vif-plugged-d7e6c36c-4b5a-4578-af9a-56118f94ffc5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 29 12:19:09 np0005601226 nova_compute[239456]: 2026-01-29 17:19:09.045 239460 WARNING nova.compute.manager [req-775b0dd7-9ad8-4aea-8efc-da4f7b615c4f req-73e10067-0794-43c5-8b2b-cf544e52fe6d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Received unexpected event network-vif-plugged-d7e6c36c-4b5a-4578-af9a-56118f94ffc5 for instance with vm_state active and task_state deleting.#033[00m
Jan 29 12:19:09 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1071: 305 pgs: 305 active+clean; 155 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 958 KiB/s wr, 274 op/s
Jan 29 12:19:09 np0005601226 nova_compute[239456]: 2026-01-29 17:19:09.692 239460 DEBUG nova.network.neutron [-] [instance: d4d33359-8cfc-4425-9ec5-362129170044] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:19:09 np0005601226 nova_compute[239456]: 2026-01-29 17:19:09.711 239460 INFO nova.compute.manager [-] [instance: d4d33359-8cfc-4425-9ec5-362129170044] Took 0.93 seconds to deallocate network for instance.#033[00m
Jan 29 12:19:09 np0005601226 nova_compute[239456]: 2026-01-29 17:19:09.747 239460 DEBUG oslo_concurrency.lockutils [None req-a6c3d070-e8d6-4d22-99d7-a56cd28ab406 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:19:09 np0005601226 nova_compute[239456]: 2026-01-29 17:19:09.748 239460 DEBUG oslo_concurrency.lockutils [None req-a6c3d070-e8d6-4d22-99d7-a56cd28ab406 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:19:09 np0005601226 nova_compute[239456]: 2026-01-29 17:19:09.803 239460 DEBUG oslo_concurrency.processutils [None req-a6c3d070-e8d6-4d22-99d7-a56cd28ab406 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:19:10 np0005601226 nova_compute[239456]: 2026-01-29 17:19:10.283 239460 DEBUG nova.network.neutron [-] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:19:10 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:19:10 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1196480703' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:19:10 np0005601226 nova_compute[239456]: 2026-01-29 17:19:10.301 239460 INFO nova.compute.manager [-] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Took 1.64 seconds to deallocate network for instance.#033[00m
Jan 29 12:19:10 np0005601226 nova_compute[239456]: 2026-01-29 17:19:10.302 239460 DEBUG oslo_concurrency.processutils [None req-a6c3d070-e8d6-4d22-99d7-a56cd28ab406 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:19:10 np0005601226 nova_compute[239456]: 2026-01-29 17:19:10.309 239460 DEBUG nova.compute.provider_tree [None req-a6c3d070-e8d6-4d22-99d7-a56cd28ab406 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:19:10 np0005601226 nova_compute[239456]: 2026-01-29 17:19:10.323 239460 DEBUG nova.scheduler.client.report [None req-a6c3d070-e8d6-4d22-99d7-a56cd28ab406 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:19:10 np0005601226 nova_compute[239456]: 2026-01-29 17:19:10.345 239460 DEBUG oslo_concurrency.lockutils [None req-a6c3d070-e8d6-4d22-99d7-a56cd28ab406 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.598s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:19:10 np0005601226 nova_compute[239456]: 2026-01-29 17:19:10.369 239460 INFO nova.scheduler.client.report [None req-a6c3d070-e8d6-4d22-99d7-a56cd28ab406 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Deleted allocations for instance d4d33359-8cfc-4425-9ec5-362129170044#033[00m
Jan 29 12:19:10 np0005601226 nova_compute[239456]: 2026-01-29 17:19:10.385 239460 WARNING nova.volume.cinder [None req-f2564cd5-3732-4888-9982-732f44838136 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Attachment 028aa045-280a-410e-a716-667d37da6476 does not exist. Ignoring.: cinderclient.exceptions.NotFound: Volume attachment could not be found with filter: attachment_id = 028aa045-280a-410e-a716-667d37da6476. (HTTP 404) (Request-ID: req-63f5e0f2-e9ea-4168-baf1-e7d580862318)#033[00m
Jan 29 12:19:10 np0005601226 nova_compute[239456]: 2026-01-29 17:19:10.386 239460 INFO nova.compute.manager [None req-f2564cd5-3732-4888-9982-732f44838136 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Took 0.08 seconds to detach 1 volumes for instance.#033[00m
Jan 29 12:19:10 np0005601226 nova_compute[239456]: 2026-01-29 17:19:10.438 239460 DEBUG oslo_concurrency.lockutils [None req-f2564cd5-3732-4888-9982-732f44838136 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:19:10 np0005601226 nova_compute[239456]: 2026-01-29 17:19:10.438 239460 DEBUG oslo_concurrency.lockutils [None req-f2564cd5-3732-4888-9982-732f44838136 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:19:10 np0005601226 nova_compute[239456]: 2026-01-29 17:19:10.446 239460 DEBUG oslo_concurrency.lockutils [None req-a6c3d070-e8d6-4d22-99d7-a56cd28ab406 814a809cf2434fc5bdc86a907c6f923d 36b7f0db63d84c34b521603b194a3d9b - - default default] Lock "d4d33359-8cfc-4425-9ec5-362129170044" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.396s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:19:10 np0005601226 nova_compute[239456]: 2026-01-29 17:19:10.475 239460 DEBUG oslo_concurrency.processutils [None req-f2564cd5-3732-4888-9982-732f44838136 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:19:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:19:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:19:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:19:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:19:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:19:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:19:10 np0005601226 nova_compute[239456]: 2026-01-29 17:19:10.683 239460 DEBUG nova.compute.manager [req-8b9e02de-ecf1-4ffe-9e9d-809b7863b2b4 req-b1fca93d-3355-4c33-976c-1aeb7954dcbe 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: d4d33359-8cfc-4425-9ec5-362129170044] Received event network-vif-deleted-25bcb4c9-3633-4bb0-96e1-5749df2814c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:19:10 np0005601226 nova_compute[239456]: 2026-01-29 17:19:10.900 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:19:10 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:19:10 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1697822672' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:19:11 np0005601226 nova_compute[239456]: 2026-01-29 17:19:11.016 239460 DEBUG oslo_concurrency.processutils [None req-f2564cd5-3732-4888-9982-732f44838136 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:19:11 np0005601226 nova_compute[239456]: 2026-01-29 17:19:11.021 239460 DEBUG nova.compute.provider_tree [None req-f2564cd5-3732-4888-9982-732f44838136 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:19:11 np0005601226 nova_compute[239456]: 2026-01-29 17:19:11.048 239460 DEBUG nova.scheduler.client.report [None req-f2564cd5-3732-4888-9982-732f44838136 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:19:11 np0005601226 nova_compute[239456]: 2026-01-29 17:19:11.079 239460 DEBUG oslo_concurrency.lockutils [None req-f2564cd5-3732-4888-9982-732f44838136 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:19:11 np0005601226 nova_compute[239456]: 2026-01-29 17:19:11.104 239460 INFO nova.scheduler.client.report [None req-f2564cd5-3732-4888-9982-732f44838136 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Deleted allocations for instance f0dce8a1-b2b9-49db-8805-fd9b75fed5b5#033[00m
Jan 29 12:19:11 np0005601226 nova_compute[239456]: 2026-01-29 17:19:11.120 239460 DEBUG nova.compute.manager [req-f847a4fa-1678-4f11-9c6f-d242e9069582 req-2a04f537-5d60-4eaa-8966-3d4ec30b82f4 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Received event network-vif-deleted-d7e6c36c-4b5a-4578-af9a-56118f94ffc5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:19:11 np0005601226 nova_compute[239456]: 2026-01-29 17:19:11.165 239460 DEBUG oslo_concurrency.lockutils [None req-f2564cd5-3732-4888-9982-732f44838136 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Lock "f0dce8a1-b2b9-49db-8805-fd9b75fed5b5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.491s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:19:11 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1072: 305 pgs: 305 active+clean; 110 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 724 KiB/s wr, 251 op/s
Jan 29 12:19:12 np0005601226 nova_compute[239456]: 2026-01-29 17:19:12.300 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:19:12 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e195 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:19:12 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e195 do_prune osdmap full prune enabled
Jan 29 12:19:12 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e196 e196: 3 total, 3 up, 3 in
Jan 29 12:19:12 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e196: 3 total, 3 up, 3 in
Jan 29 12:19:13 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1074: 305 pgs: 305 active+clean; 88 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 23 KiB/s wr, 201 op/s
Jan 29 12:19:14 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:19:14 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/145148606' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:19:14 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:19:14 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/145148606' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:19:14 np0005601226 ceph-mon[75233]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 29 12:19:14 np0005601226 ceph-mon[75233]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1800.0 total, 600.0 interval
Cumulative writes: 4825 writes, 21K keys, 4825 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
Cumulative WAL: 4825 writes, 4825 syncs, 1.00 writes per sync, written: 0.03 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1576 writes, 7300 keys, 1576 commit groups, 1.0 writes per commit group, ingest: 10.00 MB, 0.02 MB/s
Interval WAL: 1576 writes, 1576 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     19.1      1.37              0.05        12    0.114       0      0       0.0       0.0
  L6      1/0    7.79 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.3     47.6     39.3      2.20              0.19        11    0.200     49K   5827       0.0       0.0
 Sum      1/0    7.79 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.3     29.4     31.5      3.57              0.23        23    0.155     49K   5827       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   5.6     55.8     55.9      1.06              0.13        12    0.088     29K   3615       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     47.6     39.3      2.20              0.19        11    0.200     49K   5827       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     19.2      1.36              0.05        11    0.123       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      5.3      0.01              0.00         1    0.011       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1800.0 total, 600.0 interval
Flush(GB): cumulative 0.026, interval 0.010
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.11 GB write, 0.06 MB/s write, 0.10 GB read, 0.06 MB/s read, 3.6 seconds
Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 1.1 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55d2b32758d0#2 capacity: 304.00 MB usage: 9.52 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 8.7e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(569,9.11 MB,2.99735%) FilterBlock(24,145.30 KB,0.0466748%) IndexBlock(24,277.16 KB,0.089033%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Jan 29 12:19:15 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e196 do_prune osdmap full prune enabled
Jan 29 12:19:15 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e197 e197: 3 total, 3 up, 3 in
Jan 29 12:19:15 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e197: 3 total, 3 up, 3 in
Jan 29 12:19:15 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1076: 305 pgs: 305 active+clean; 99 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 3.0 MiB/s wr, 235 op/s
Jan 29 12:19:15 np0005601226 nova_compute[239456]: 2026-01-29 17:19:15.902 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:19:17 np0005601226 nova_compute[239456]: 2026-01-29 17:19:17.303 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:19:17 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:19:17 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1077: 305 pgs: 305 active+clean; 99 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 699 KiB/s rd, 3.0 MiB/s wr, 97 op/s
Jan 29 12:19:17 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e197 do_prune osdmap full prune enabled
Jan 29 12:19:17 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e198 e198: 3 total, 3 up, 3 in
Jan 29 12:19:17 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e198: 3 total, 3 up, 3 in
Jan 29 12:19:18 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e198 do_prune osdmap full prune enabled
Jan 29 12:19:18 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e199 e199: 3 total, 3 up, 3 in
Jan 29 12:19:18 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e199: 3 total, 3 up, 3 in
Jan 29 12:19:19 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1080: 305 pgs: 305 active+clean; 233 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 76 KiB/s rd, 32 MiB/s wr, 111 op/s
Jan 29 12:19:20 np0005601226 nova_compute[239456]: 2026-01-29 17:19:20.904 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:19:21 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1081: 305 pgs: 305 active+clean; 289 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 37 MiB/s wr, 104 op/s
Jan 29 12:19:21 np0005601226 nova_compute[239456]: 2026-01-29 17:19:21.904 239460 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769707146.9034119, f0dce8a1-b2b9-49db-8805-fd9b75fed5b5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:19:21 np0005601226 nova_compute[239456]: 2026-01-29 17:19:21.905 239460 INFO nova.compute.manager [-] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] VM Stopped (Lifecycle Event)#033[00m
Jan 29 12:19:21 np0005601226 nova_compute[239456]: 2026-01-29 17:19:21.949 239460 DEBUG nova.compute.manager [None req-2028f0d0-fe8a-4c44-b66e-5d10845845b1 - - - - - -] [instance: f0dce8a1-b2b9-49db-8805-fd9b75fed5b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:19:22 np0005601226 nova_compute[239456]: 2026-01-29 17:19:22.280 239460 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769707147.279806, d4d33359-8cfc-4425-9ec5-362129170044 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:19:22 np0005601226 nova_compute[239456]: 2026-01-29 17:19:22.280 239460 INFO nova.compute.manager [-] [instance: d4d33359-8cfc-4425-9ec5-362129170044] VM Stopped (Lifecycle Event)#033[00m
Jan 29 12:19:22 np0005601226 nova_compute[239456]: 2026-01-29 17:19:22.304 239460 DEBUG nova.compute.manager [None req-e3dec7b4-3779-4a7a-afee-f6ef362ed5f4 - - - - - -] [instance: d4d33359-8cfc-4425-9ec5-362129170044] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:19:22 np0005601226 nova_compute[239456]: 2026-01-29 17:19:22.305 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:19:22 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:19:22 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e199 do_prune osdmap full prune enabled
Jan 29 12:19:22 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e200 e200: 3 total, 3 up, 3 in
Jan 29 12:19:22 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e200: 3 total, 3 up, 3 in
Jan 29 12:19:23 np0005601226 nova_compute[239456]: 2026-01-29 17:19:23.599 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:19:23 np0005601226 nova_compute[239456]: 2026-01-29 17:19:23.624 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:19:23 np0005601226 nova_compute[239456]: 2026-01-29 17:19:23.624 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:19:23 np0005601226 nova_compute[239456]: 2026-01-29 17:19:23.655 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:19:23 np0005601226 nova_compute[239456]: 2026-01-29 17:19:23.655 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:19:23 np0005601226 nova_compute[239456]: 2026-01-29 17:19:23.655 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:19:23 np0005601226 nova_compute[239456]: 2026-01-29 17:19:23.655 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 29 12:19:23 np0005601226 nova_compute[239456]: 2026-01-29 17:19:23.656 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:19:23 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1083: 305 pgs: 305 active+clean; 345 MiB data, 505 MiB used, 59 GiB / 60 GiB avail; 69 KiB/s rd, 47 MiB/s wr, 107 op/s
Jan 29 12:19:24 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:19:24 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4065402996' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:19:24 np0005601226 nova_compute[239456]: 2026-01-29 17:19:24.251 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:19:24 np0005601226 nova_compute[239456]: 2026-01-29 17:19:24.425 239460 WARNING nova.virt.libvirt.driver [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 29 12:19:24 np0005601226 nova_compute[239456]: 2026-01-29 17:19:24.427 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4709MB free_disk=59.98825147841126GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 29 12:19:24 np0005601226 nova_compute[239456]: 2026-01-29 17:19:24.427 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:19:24 np0005601226 nova_compute[239456]: 2026-01-29 17:19:24.427 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:19:24 np0005601226 nova_compute[239456]: 2026-01-29 17:19:24.492 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 29 12:19:24 np0005601226 nova_compute[239456]: 2026-01-29 17:19:24.493 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 29 12:19:24 np0005601226 nova_compute[239456]: 2026-01-29 17:19:24.508 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:19:25 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:19:25 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1400863562' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:19:25 np0005601226 nova_compute[239456]: 2026-01-29 17:19:25.090 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:19:25 np0005601226 nova_compute[239456]: 2026-01-29 17:19:25.097 239460 DEBUG nova.compute.provider_tree [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:19:25 np0005601226 nova_compute[239456]: 2026-01-29 17:19:25.120 239460 DEBUG nova.scheduler.client.report [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:19:25 np0005601226 nova_compute[239456]: 2026-01-29 17:19:25.145 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 29 12:19:25 np0005601226 nova_compute[239456]: 2026-01-29 17:19:25.146 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.719s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:19:25 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1084: 305 pgs: 305 active+clean; 433 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 66 KiB/s rd, 46 MiB/s wr, 104 op/s
Jan 29 12:19:25 np0005601226 nova_compute[239456]: 2026-01-29 17:19:25.906 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:19:27 np0005601226 nova_compute[239456]: 2026-01-29 17:19:27.126 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:19:27 np0005601226 nova_compute[239456]: 2026-01-29 17:19:27.126 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:19:27 np0005601226 nova_compute[239456]: 2026-01-29 17:19:27.126 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 29 12:19:27 np0005601226 nova_compute[239456]: 2026-01-29 17:19:27.127 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 29 12:19:27 np0005601226 nova_compute[239456]: 2026-01-29 17:19:27.155 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 29 12:19:27 np0005601226 nova_compute[239456]: 2026-01-29 17:19:27.155 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:19:27 np0005601226 nova_compute[239456]: 2026-01-29 17:19:27.155 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:19:27 np0005601226 nova_compute[239456]: 2026-01-29 17:19:27.307 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:19:27 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e200 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:19:27 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1085: 305 pgs: 305 active+clean; 433 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 23 MiB/s wr, 48 op/s
Jan 29 12:19:28 np0005601226 nova_compute[239456]: 2026-01-29 17:19:28.555 239460 DEBUG oslo_concurrency.lockutils [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Acquiring lock "dca948a3-675d-4cc4-a21b-c2f72cbe307e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:19:28 np0005601226 nova_compute[239456]: 2026-01-29 17:19:28.555 239460 DEBUG oslo_concurrency.lockutils [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Lock "dca948a3-675d-4cc4-a21b-c2f72cbe307e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:19:28 np0005601226 nova_compute[239456]: 2026-01-29 17:19:28.570 239460 DEBUG nova.compute.manager [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 29 12:19:28 np0005601226 nova_compute[239456]: 2026-01-29 17:19:28.624 239460 DEBUG oslo_concurrency.lockutils [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:19:28 np0005601226 nova_compute[239456]: 2026-01-29 17:19:28.625 239460 DEBUG oslo_concurrency.lockutils [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:19:28 np0005601226 nova_compute[239456]: 2026-01-29 17:19:28.630 239460 DEBUG nova.virt.hardware [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 29 12:19:28 np0005601226 nova_compute[239456]: 2026-01-29 17:19:28.631 239460 INFO nova.compute.claims [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 29 12:19:28 np0005601226 nova_compute[239456]: 2026-01-29 17:19:28.731 239460 DEBUG oslo_concurrency.processutils [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:19:29 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:19:29 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/500267231' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:19:29 np0005601226 nova_compute[239456]: 2026-01-29 17:19:29.232 239460 DEBUG oslo_concurrency.processutils [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:19:29 np0005601226 nova_compute[239456]: 2026-01-29 17:19:29.236 239460 DEBUG nova.compute.provider_tree [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:19:29 np0005601226 nova_compute[239456]: 2026-01-29 17:19:29.252 239460 DEBUG nova.scheduler.client.report [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:19:29 np0005601226 nova_compute[239456]: 2026-01-29 17:19:29.274 239460 DEBUG oslo_concurrency.lockutils [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.650s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:19:29 np0005601226 nova_compute[239456]: 2026-01-29 17:19:29.275 239460 DEBUG nova.compute.manager [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 29 12:19:29 np0005601226 nova_compute[239456]: 2026-01-29 17:19:29.322 239460 DEBUG nova.compute.manager [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 29 12:19:29 np0005601226 nova_compute[239456]: 2026-01-29 17:19:29.322 239460 DEBUG nova.network.neutron [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 29 12:19:29 np0005601226 nova_compute[239456]: 2026-01-29 17:19:29.347 239460 INFO nova.virt.libvirt.driver [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 29 12:19:29 np0005601226 nova_compute[239456]: 2026-01-29 17:19:29.379 239460 DEBUG nova.compute.manager [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 29 12:19:29 np0005601226 nova_compute[239456]: 2026-01-29 17:19:29.470 239460 DEBUG nova.compute.manager [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 29 12:19:29 np0005601226 nova_compute[239456]: 2026-01-29 17:19:29.471 239460 DEBUG nova.virt.libvirt.driver [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 29 12:19:29 np0005601226 nova_compute[239456]: 2026-01-29 17:19:29.471 239460 INFO nova.virt.libvirt.driver [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] Creating image(s)#033[00m
Jan 29 12:19:29 np0005601226 nova_compute[239456]: 2026-01-29 17:19:29.490 239460 DEBUG nova.storage.rbd_utils [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] rbd image dca948a3-675d-4cc4-a21b-c2f72cbe307e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:19:29 np0005601226 nova_compute[239456]: 2026-01-29 17:19:29.512 239460 DEBUG nova.storage.rbd_utils [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] rbd image dca948a3-675d-4cc4-a21b-c2f72cbe307e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:19:29 np0005601226 nova_compute[239456]: 2026-01-29 17:19:29.533 239460 DEBUG nova.storage.rbd_utils [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] rbd image dca948a3-675d-4cc4-a21b-c2f72cbe307e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:19:29 np0005601226 nova_compute[239456]: 2026-01-29 17:19:29.536 239460 DEBUG oslo_concurrency.processutils [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/619359f3f53a439a222a6a2a89408201f4394e5d --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:19:29 np0005601226 nova_compute[239456]: 2026-01-29 17:19:29.584 239460 DEBUG oslo_concurrency.processutils [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/619359f3f53a439a222a6a2a89408201f4394e5d --force-share --output=json" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:19:29 np0005601226 nova_compute[239456]: 2026-01-29 17:19:29.585 239460 DEBUG oslo_concurrency.lockutils [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Acquiring lock "619359f3f53a439a222a6a2a89408201f4394e5d" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:19:29 np0005601226 nova_compute[239456]: 2026-01-29 17:19:29.585 239460 DEBUG oslo_concurrency.lockutils [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Lock "619359f3f53a439a222a6a2a89408201f4394e5d" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:19:29 np0005601226 nova_compute[239456]: 2026-01-29 17:19:29.586 239460 DEBUG oslo_concurrency.lockutils [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Lock "619359f3f53a439a222a6a2a89408201f4394e5d" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:19:29 np0005601226 nova_compute[239456]: 2026-01-29 17:19:29.602 239460 DEBUG nova.storage.rbd_utils [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] rbd image dca948a3-675d-4cc4-a21b-c2f72cbe307e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:19:29 np0005601226 nova_compute[239456]: 2026-01-29 17:19:29.605 239460 DEBUG oslo_concurrency.processutils [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/619359f3f53a439a222a6a2a89408201f4394e5d dca948a3-675d-4cc4-a21b-c2f72cbe307e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:19:29 np0005601226 nova_compute[239456]: 2026-01-29 17:19:29.618 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:19:29 np0005601226 nova_compute[239456]: 2026-01-29 17:19:29.618 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:19:29 np0005601226 nova_compute[239456]: 2026-01-29 17:19:29.618 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 29 12:19:29 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1086: 305 pgs: 305 active+clean; 577 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 34 MiB/s wr, 65 op/s
Jan 29 12:19:30 np0005601226 nova_compute[239456]: 2026-01-29 17:19:30.096 239460 DEBUG nova.policy [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'aa90bbad088947a2a9866efeb934031e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7140162c4cd744d38e65ad1bcdadf016', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 29 12:19:30 np0005601226 nova_compute[239456]: 2026-01-29 17:19:30.218 239460 DEBUG oslo_concurrency.processutils [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/619359f3f53a439a222a6a2a89408201f4394e5d dca948a3-675d-4cc4-a21b-c2f72cbe307e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.612s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:19:30 np0005601226 nova_compute[239456]: 2026-01-29 17:19:30.289 239460 DEBUG nova.storage.rbd_utils [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] resizing rbd image dca948a3-675d-4cc4-a21b-c2f72cbe307e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 29 12:19:30 np0005601226 nova_compute[239456]: 2026-01-29 17:19:30.427 239460 DEBUG nova.objects.instance [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Lazy-loading 'migration_context' on Instance uuid dca948a3-675d-4cc4-a21b-c2f72cbe307e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:19:30 np0005601226 nova_compute[239456]: 2026-01-29 17:19:30.447 239460 DEBUG nova.virt.libvirt.driver [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 29 12:19:30 np0005601226 nova_compute[239456]: 2026-01-29 17:19:30.448 239460 DEBUG nova.virt.libvirt.driver [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] Ensure instance console log exists: /var/lib/nova/instances/dca948a3-675d-4cc4-a21b-c2f72cbe307e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 29 12:19:30 np0005601226 nova_compute[239456]: 2026-01-29 17:19:30.448 239460 DEBUG oslo_concurrency.lockutils [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:19:30 np0005601226 nova_compute[239456]: 2026-01-29 17:19:30.448 239460 DEBUG oslo_concurrency.lockutils [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:19:30 np0005601226 nova_compute[239456]: 2026-01-29 17:19:30.448 239460 DEBUG oslo_concurrency.lockutils [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:19:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:19:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3992210983' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:19:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:19:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3992210983' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:19:30 np0005601226 nova_compute[239456]: 2026-01-29 17:19:30.908 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:19:30 np0005601226 nova_compute[239456]: 2026-01-29 17:19:30.971 239460 DEBUG nova.network.neutron [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] Successfully created port: 3af9ad9d-906f-4ed9-92cc-783df6775a8e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 29 12:19:31 np0005601226 nova_compute[239456]: 2026-01-29 17:19:31.585 239460 DEBUG nova.network.neutron [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] Successfully updated port: 3af9ad9d-906f-4ed9-92cc-783df6775a8e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 29 12:19:31 np0005601226 nova_compute[239456]: 2026-01-29 17:19:31.600 239460 DEBUG oslo_concurrency.lockutils [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Acquiring lock "refresh_cache-dca948a3-675d-4cc4-a21b-c2f72cbe307e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:19:31 np0005601226 nova_compute[239456]: 2026-01-29 17:19:31.600 239460 DEBUG oslo_concurrency.lockutils [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Acquired lock "refresh_cache-dca948a3-675d-4cc4-a21b-c2f72cbe307e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:19:31 np0005601226 nova_compute[239456]: 2026-01-29 17:19:31.600 239460 DEBUG nova.network.neutron [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 29 12:19:31 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1087: 305 pgs: 305 active+clean; 638 MiB data, 793 MiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 34 MiB/s wr, 56 op/s
Jan 29 12:19:31 np0005601226 nova_compute[239456]: 2026-01-29 17:19:31.694 239460 DEBUG nova.compute.manager [req-f948d8c3-2cae-4b62-aed7-5ff76f4ede5a req-dbd7a8de-374d-401d-a4b2-75101ae1da3c 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] Received event network-changed-3af9ad9d-906f-4ed9-92cc-783df6775a8e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:19:31 np0005601226 nova_compute[239456]: 2026-01-29 17:19:31.694 239460 DEBUG nova.compute.manager [req-f948d8c3-2cae-4b62-aed7-5ff76f4ede5a req-dbd7a8de-374d-401d-a4b2-75101ae1da3c 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] Refreshing instance network info cache due to event network-changed-3af9ad9d-906f-4ed9-92cc-783df6775a8e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 29 12:19:31 np0005601226 nova_compute[239456]: 2026-01-29 17:19:31.694 239460 DEBUG oslo_concurrency.lockutils [req-f948d8c3-2cae-4b62-aed7-5ff76f4ede5a req-dbd7a8de-374d-401d-a4b2-75101ae1da3c 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "refresh_cache-dca948a3-675d-4cc4-a21b-c2f72cbe307e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:19:32 np0005601226 nova_compute[239456]: 2026-01-29 17:19:32.310 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:19:32 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e200 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:19:32 np0005601226 nova_compute[239456]: 2026-01-29 17:19:32.449 239460 DEBUG nova.network.neutron [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 29 12:19:33 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1088: 305 pgs: 305 active+clean; 690 MiB data, 847 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 35 MiB/s wr, 52 op/s
Jan 29 12:19:33 np0005601226 nova_compute[239456]: 2026-01-29 17:19:33.751 239460 DEBUG nova.network.neutron [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] Updating instance_info_cache with network_info: [{"id": "3af9ad9d-906f-4ed9-92cc-783df6775a8e", "address": "fa:16:3e:a8:bd:99", "network": {"id": "c65d65e6-04af-4892-ad96-3d83d148450f", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-295489753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7140162c4cd744d38e65ad1bcdadf016", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3af9ad9d-90", "ovs_interfaceid": "3af9ad9d-906f-4ed9-92cc-783df6775a8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:19:33 np0005601226 nova_compute[239456]: 2026-01-29 17:19:33.770 239460 DEBUG oslo_concurrency.lockutils [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Releasing lock "refresh_cache-dca948a3-675d-4cc4-a21b-c2f72cbe307e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:19:33 np0005601226 nova_compute[239456]: 2026-01-29 17:19:33.770 239460 DEBUG nova.compute.manager [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] Instance network_info: |[{"id": "3af9ad9d-906f-4ed9-92cc-783df6775a8e", "address": "fa:16:3e:a8:bd:99", "network": {"id": "c65d65e6-04af-4892-ad96-3d83d148450f", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-295489753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7140162c4cd744d38e65ad1bcdadf016", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3af9ad9d-90", "ovs_interfaceid": "3af9ad9d-906f-4ed9-92cc-783df6775a8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 29 12:19:33 np0005601226 nova_compute[239456]: 2026-01-29 17:19:33.771 239460 DEBUG oslo_concurrency.lockutils [req-f948d8c3-2cae-4b62-aed7-5ff76f4ede5a req-dbd7a8de-374d-401d-a4b2-75101ae1da3c 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquired lock "refresh_cache-dca948a3-675d-4cc4-a21b-c2f72cbe307e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:19:33 np0005601226 nova_compute[239456]: 2026-01-29 17:19:33.771 239460 DEBUG nova.network.neutron [req-f948d8c3-2cae-4b62-aed7-5ff76f4ede5a req-dbd7a8de-374d-401d-a4b2-75101ae1da3c 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] Refreshing network info cache for port 3af9ad9d-906f-4ed9-92cc-783df6775a8e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 29 12:19:33 np0005601226 nova_compute[239456]: 2026-01-29 17:19:33.774 239460 DEBUG nova.virt.libvirt.driver [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] Start _get_guest_xml network_info=[{"id": "3af9ad9d-906f-4ed9-92cc-783df6775a8e", "address": "fa:16:3e:a8:bd:99", "network": {"id": "c65d65e6-04af-4892-ad96-3d83d148450f", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-295489753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7140162c4cd744d38e65ad1bcdadf016", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3af9ad9d-90", "ovs_interfaceid": "3af9ad9d-906f-4ed9-92cc-783df6775a8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-29T17:13:26Z,direct_url=<?>,disk_format='qcow2',id=71879218-5462-43bb-aba6-6319695b24fd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='218d87653c0f4776a3f1900d36945229',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-29T17:13:29Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'boot_index': 0, 'encrypted': False, 'image_id': '71879218-5462-43bb-aba6-6319695b24fd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 29 12:19:33 np0005601226 nova_compute[239456]: 2026-01-29 17:19:33.781 239460 WARNING nova.virt.libvirt.driver [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 29 12:19:33 np0005601226 nova_compute[239456]: 2026-01-29 17:19:33.801 239460 DEBUG nova.virt.libvirt.host [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 29 12:19:33 np0005601226 nova_compute[239456]: 2026-01-29 17:19:33.802 239460 DEBUG nova.virt.libvirt.host [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 29 12:19:33 np0005601226 nova_compute[239456]: 2026-01-29 17:19:33.806 239460 DEBUG nova.virt.libvirt.host [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 29 12:19:33 np0005601226 nova_compute[239456]: 2026-01-29 17:19:33.808 239460 DEBUG nova.virt.libvirt.host [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 29 12:19:33 np0005601226 nova_compute[239456]: 2026-01-29 17:19:33.808 239460 DEBUG nova.virt.libvirt.driver [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 29 12:19:33 np0005601226 nova_compute[239456]: 2026-01-29 17:19:33.809 239460 DEBUG nova.virt.hardware [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-29T17:13:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='e367d44e-e23e-4b8e-90d1-56d09c8403b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-29T17:13:26Z,direct_url=<?>,disk_format='qcow2',id=71879218-5462-43bb-aba6-6319695b24fd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='218d87653c0f4776a3f1900d36945229',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-29T17:13:29Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 29 12:19:33 np0005601226 nova_compute[239456]: 2026-01-29 17:19:33.809 239460 DEBUG nova.virt.hardware [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 29 12:19:33 np0005601226 nova_compute[239456]: 2026-01-29 17:19:33.811 239460 DEBUG nova.virt.hardware [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 29 12:19:33 np0005601226 nova_compute[239456]: 2026-01-29 17:19:33.812 239460 DEBUG nova.virt.hardware [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 29 12:19:33 np0005601226 nova_compute[239456]: 2026-01-29 17:19:33.812 239460 DEBUG nova.virt.hardware [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 29 12:19:33 np0005601226 nova_compute[239456]: 2026-01-29 17:19:33.812 239460 DEBUG nova.virt.hardware [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 29 12:19:33 np0005601226 nova_compute[239456]: 2026-01-29 17:19:33.812 239460 DEBUG nova.virt.hardware [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 29 12:19:33 np0005601226 nova_compute[239456]: 2026-01-29 17:19:33.812 239460 DEBUG nova.virt.hardware [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 29 12:19:33 np0005601226 nova_compute[239456]: 2026-01-29 17:19:33.813 239460 DEBUG nova.virt.hardware [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 29 12:19:33 np0005601226 nova_compute[239456]: 2026-01-29 17:19:33.813 239460 DEBUG nova.virt.hardware [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 29 12:19:33 np0005601226 nova_compute[239456]: 2026-01-29 17:19:33.813 239460 DEBUG nova.virt.hardware [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 29 12:19:33 np0005601226 nova_compute[239456]: 2026-01-29 17:19:33.816 239460 DEBUG oslo_concurrency.processutils [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:19:34 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:19:34 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/146655847' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:19:34 np0005601226 nova_compute[239456]: 2026-01-29 17:19:34.341 239460 DEBUG oslo_concurrency.processutils [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:19:34 np0005601226 nova_compute[239456]: 2026-01-29 17:19:34.361 239460 DEBUG nova.storage.rbd_utils [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] rbd image dca948a3-675d-4cc4-a21b-c2f72cbe307e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:19:34 np0005601226 nova_compute[239456]: 2026-01-29 17:19:34.364 239460 DEBUG oslo_concurrency.processutils [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:19:34 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:19:34 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1629724538' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:19:34 np0005601226 nova_compute[239456]: 2026-01-29 17:19:34.927 239460 DEBUG oslo_concurrency.processutils [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:19:34 np0005601226 nova_compute[239456]: 2026-01-29 17:19:34.929 239460 DEBUG nova.virt.libvirt.vif [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-29T17:19:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-573201396',display_name='tempest-VolumesSnapshotTestJSON-instance-573201396',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-573201396',id=5,image_ref='71879218-5462-43bb-aba6-6319695b24fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP57F9qb6HTMT/dg+aHSslBWlwXsZcaffFbF5qLZpeZLt1faAk6NJ/UzAHXHt1SsCajCxlFNwojj/ACHu7g92aCv6V5JBo78DqFMSFFa88vlG4NWfNJ2XKEp1kmwuxVTOg==',key_name='tempest-keypair-1508051679',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7140162c4cd744d38e65ad1bcdadf016',ramdisk_id='',reservation_id='r-wvxfy812',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='71879218-5462-43bb-aba6-6319695b24fd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesSnapshotTestJSON-783985999',owner_user_name='tempest-VolumesSnapshotTestJSON-783985999-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-29T17:19:29Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aa90bbad088947a2a9866efeb934031e',uuid=dca948a3-675d-4cc4-a21b-c2f72cbe307e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3af9ad9d-906f-4ed9-92cc-783df6775a8e", "address": "fa:16:3e:a8:bd:99", "network": {"id": "c65d65e6-04af-4892-ad96-3d83d148450f", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-295489753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7140162c4cd744d38e65ad1bcdadf016", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3af9ad9d-90", "ovs_interfaceid": "3af9ad9d-906f-4ed9-92cc-783df6775a8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 29 12:19:34 np0005601226 nova_compute[239456]: 2026-01-29 17:19:34.929 239460 DEBUG nova.network.os_vif_util [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Converting VIF {"id": "3af9ad9d-906f-4ed9-92cc-783df6775a8e", "address": "fa:16:3e:a8:bd:99", "network": {"id": "c65d65e6-04af-4892-ad96-3d83d148450f", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-295489753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7140162c4cd744d38e65ad1bcdadf016", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3af9ad9d-90", "ovs_interfaceid": "3af9ad9d-906f-4ed9-92cc-783df6775a8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 29 12:19:34 np0005601226 nova_compute[239456]: 2026-01-29 17:19:34.931 239460 DEBUG nova.network.os_vif_util [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:bd:99,bridge_name='br-int',has_traffic_filtering=True,id=3af9ad9d-906f-4ed9-92cc-783df6775a8e,network=Network(c65d65e6-04af-4892-ad96-3d83d148450f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3af9ad9d-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 29 12:19:34 np0005601226 nova_compute[239456]: 2026-01-29 17:19:34.933 239460 DEBUG nova.objects.instance [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Lazy-loading 'pci_devices' on Instance uuid dca948a3-675d-4cc4-a21b-c2f72cbe307e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:19:34 np0005601226 nova_compute[239456]: 2026-01-29 17:19:34.951 239460 DEBUG nova.virt.libvirt.driver [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] End _get_guest_xml xml=<domain type="kvm">
Jan 29 12:19:34 np0005601226 nova_compute[239456]:  <uuid>dca948a3-675d-4cc4-a21b-c2f72cbe307e</uuid>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:  <name>instance-00000005</name>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:  <memory>131072</memory>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:  <vcpu>1</vcpu>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:  <metadata>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 29 12:19:34 np0005601226 nova_compute[239456]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:      <nova:name>tempest-VolumesSnapshotTestJSON-instance-573201396</nova:name>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:      <nova:creationTime>2026-01-29 17:19:33</nova:creationTime>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:      <nova:flavor name="m1.nano">
Jan 29 12:19:34 np0005601226 nova_compute[239456]:        <nova:memory>128</nova:memory>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:        <nova:disk>1</nova:disk>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:        <nova:swap>0</nova:swap>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:        <nova:ephemeral>0</nova:ephemeral>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:        <nova:vcpus>1</nova:vcpus>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:      </nova:flavor>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:      <nova:owner>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:        <nova:user uuid="aa90bbad088947a2a9866efeb934031e">tempest-VolumesSnapshotTestJSON-783985999-project-member</nova:user>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:        <nova:project uuid="7140162c4cd744d38e65ad1bcdadf016">tempest-VolumesSnapshotTestJSON-783985999</nova:project>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:      </nova:owner>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:      <nova:root type="image" uuid="71879218-5462-43bb-aba6-6319695b24fd"/>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:      <nova:ports>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:        <nova:port uuid="3af9ad9d-906f-4ed9-92cc-783df6775a8e">
Jan 29 12:19:34 np0005601226 nova_compute[239456]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:        </nova:port>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:      </nova:ports>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:    </nova:instance>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:  </metadata>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:  <sysinfo type="smbios">
Jan 29 12:19:34 np0005601226 nova_compute[239456]:    <system>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:      <entry name="manufacturer">RDO</entry>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:      <entry name="product">OpenStack Compute</entry>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:      <entry name="serial">dca948a3-675d-4cc4-a21b-c2f72cbe307e</entry>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:      <entry name="uuid">dca948a3-675d-4cc4-a21b-c2f72cbe307e</entry>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:      <entry name="family">Virtual Machine</entry>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:    </system>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:  </sysinfo>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:  <os>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:    <boot dev="hd"/>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:    <smbios mode="sysinfo"/>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:  </os>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:  <features>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:    <acpi/>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:    <apic/>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:    <vmcoreinfo/>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:  </features>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:  <clock offset="utc">
Jan 29 12:19:34 np0005601226 nova_compute[239456]:    <timer name="pit" tickpolicy="delay"/>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:    <timer name="hpet" present="no"/>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:  </clock>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:  <cpu mode="host-model" match="exact">
Jan 29 12:19:34 np0005601226 nova_compute[239456]:    <topology sockets="1" cores="1" threads="1"/>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:  </cpu>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:  <devices>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:    <disk type="network" device="disk">
Jan 29 12:19:34 np0005601226 nova_compute[239456]:      <driver type="raw" cache="none"/>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:      <source protocol="rbd" name="vms/dca948a3-675d-4cc4-a21b-c2f72cbe307e_disk">
Jan 29 12:19:34 np0005601226 nova_compute[239456]:        <host name="192.168.122.100" port="6789"/>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:      </source>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:      <auth username="openstack">
Jan 29 12:19:34 np0005601226 nova_compute[239456]:        <secret type="ceph" uuid="cc5c72e3-31e0-58b9-8731-456117d38f4a"/>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:      </auth>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:      <target dev="vda" bus="virtio"/>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:    </disk>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:    <disk type="network" device="cdrom">
Jan 29 12:19:34 np0005601226 nova_compute[239456]:      <driver type="raw" cache="none"/>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:      <source protocol="rbd" name="vms/dca948a3-675d-4cc4-a21b-c2f72cbe307e_disk.config">
Jan 29 12:19:34 np0005601226 nova_compute[239456]:        <host name="192.168.122.100" port="6789"/>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:      </source>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:      <auth username="openstack">
Jan 29 12:19:34 np0005601226 nova_compute[239456]:        <secret type="ceph" uuid="cc5c72e3-31e0-58b9-8731-456117d38f4a"/>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:      </auth>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:      <target dev="sda" bus="sata"/>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:    </disk>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:    <interface type="ethernet">
Jan 29 12:19:34 np0005601226 nova_compute[239456]:      <mac address="fa:16:3e:a8:bd:99"/>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:      <model type="virtio"/>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:      <driver name="vhost" rx_queue_size="512"/>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:      <mtu size="1442"/>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:      <target dev="tap3af9ad9d-90"/>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:    </interface>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:    <serial type="pty">
Jan 29 12:19:34 np0005601226 nova_compute[239456]:      <log file="/var/lib/nova/instances/dca948a3-675d-4cc4-a21b-c2f72cbe307e/console.log" append="off"/>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:    </serial>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:    <video>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:      <model type="virtio"/>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:    </video>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:    <input type="tablet" bus="usb"/>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:    <rng model="virtio">
Jan 29 12:19:34 np0005601226 nova_compute[239456]:      <backend model="random">/dev/urandom</backend>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:    </rng>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root"/>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:    <controller type="usb" index="0"/>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:    <memballoon model="virtio">
Jan 29 12:19:34 np0005601226 nova_compute[239456]:      <stats period="10"/>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:    </memballoon>
Jan 29 12:19:34 np0005601226 nova_compute[239456]:  </devices>
Jan 29 12:19:34 np0005601226 nova_compute[239456]: </domain>
Jan 29 12:19:34 np0005601226 nova_compute[239456]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 29 12:19:34 np0005601226 nova_compute[239456]: 2026-01-29 17:19:34.952 239460 DEBUG nova.compute.manager [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] Preparing to wait for external event network-vif-plugged-3af9ad9d-906f-4ed9-92cc-783df6775a8e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 29 12:19:34 np0005601226 nova_compute[239456]: 2026-01-29 17:19:34.953 239460 DEBUG oslo_concurrency.lockutils [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Acquiring lock "dca948a3-675d-4cc4-a21b-c2f72cbe307e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:19:34 np0005601226 nova_compute[239456]: 2026-01-29 17:19:34.953 239460 DEBUG oslo_concurrency.lockutils [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Lock "dca948a3-675d-4cc4-a21b-c2f72cbe307e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:19:34 np0005601226 nova_compute[239456]: 2026-01-29 17:19:34.953 239460 DEBUG oslo_concurrency.lockutils [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Lock "dca948a3-675d-4cc4-a21b-c2f72cbe307e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:19:34 np0005601226 nova_compute[239456]: 2026-01-29 17:19:34.954 239460 DEBUG nova.virt.libvirt.vif [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-29T17:19:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-573201396',display_name='tempest-VolumesSnapshotTestJSON-instance-573201396',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-573201396',id=5,image_ref='71879218-5462-43bb-aba6-6319695b24fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP57F9qb6HTMT/dg+aHSslBWlwXsZcaffFbF5qLZpeZLt1faAk6NJ/UzAHXHt1SsCajCxlFNwojj/ACHu7g92aCv6V5JBo78DqFMSFFa88vlG4NWfNJ2XKEp1kmwuxVTOg==',key_name='tempest-keypair-1508051679',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7140162c4cd744d38e65ad1bcdadf016',ramdisk_id='',reservation_id='r-wvxfy812',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='71879218-5462-43bb-aba6-6319695b24fd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesSnapshotTestJSON-783985999',owner_user_name='tempest-VolumesSnapshotTestJSON-783985999-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-29T17:19:29Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aa90bbad088947a2a9866efeb934031e',uuid=dca948a3-675d-4cc4-a21b-c2f72cbe307e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3af9ad9d-906f-4ed9-92cc-783df6775a8e", "address": "fa:16:3e:a8:bd:99", "network": {"id": "c65d65e6-04af-4892-ad96-3d83d148450f", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-295489753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7140162c4cd744d38e65ad1bcdadf016", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3af9ad9d-90", "ovs_interfaceid": "3af9ad9d-906f-4ed9-92cc-783df6775a8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 29 12:19:34 np0005601226 nova_compute[239456]: 2026-01-29 17:19:34.954 239460 DEBUG nova.network.os_vif_util [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Converting VIF {"id": "3af9ad9d-906f-4ed9-92cc-783df6775a8e", "address": "fa:16:3e:a8:bd:99", "network": {"id": "c65d65e6-04af-4892-ad96-3d83d148450f", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-295489753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7140162c4cd744d38e65ad1bcdadf016", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3af9ad9d-90", "ovs_interfaceid": "3af9ad9d-906f-4ed9-92cc-783df6775a8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 29 12:19:34 np0005601226 nova_compute[239456]: 2026-01-29 17:19:34.956 239460 DEBUG nova.network.os_vif_util [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:bd:99,bridge_name='br-int',has_traffic_filtering=True,id=3af9ad9d-906f-4ed9-92cc-783df6775a8e,network=Network(c65d65e6-04af-4892-ad96-3d83d148450f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3af9ad9d-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 29 12:19:34 np0005601226 nova_compute[239456]: 2026-01-29 17:19:34.957 239460 DEBUG os_vif [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:bd:99,bridge_name='br-int',has_traffic_filtering=True,id=3af9ad9d-906f-4ed9-92cc-783df6775a8e,network=Network(c65d65e6-04af-4892-ad96-3d83d148450f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3af9ad9d-90') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 29 12:19:34 np0005601226 nova_compute[239456]: 2026-01-29 17:19:34.957 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:19:34 np0005601226 nova_compute[239456]: 2026-01-29 17:19:34.958 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:19:34 np0005601226 nova_compute[239456]: 2026-01-29 17:19:34.958 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 29 12:19:34 np0005601226 nova_compute[239456]: 2026-01-29 17:19:34.962 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:19:34 np0005601226 nova_compute[239456]: 2026-01-29 17:19:34.963 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3af9ad9d-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:19:34 np0005601226 nova_compute[239456]: 2026-01-29 17:19:34.963 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3af9ad9d-90, col_values=(('external_ids', {'iface-id': '3af9ad9d-906f-4ed9-92cc-783df6775a8e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a8:bd:99', 'vm-uuid': 'dca948a3-675d-4cc4-a21b-c2f72cbe307e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:19:34 np0005601226 nova_compute[239456]: 2026-01-29 17:19:34.965 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:19:34 np0005601226 NetworkManager[49020]: <info>  [1769707174.9662] manager: (tap3af9ad9d-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/41)
Jan 29 12:19:34 np0005601226 nova_compute[239456]: 2026-01-29 17:19:34.968 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 29 12:19:34 np0005601226 nova_compute[239456]: 2026-01-29 17:19:34.973 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:19:34 np0005601226 nova_compute[239456]: 2026-01-29 17:19:34.974 239460 INFO os_vif [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:bd:99,bridge_name='br-int',has_traffic_filtering=True,id=3af9ad9d-906f-4ed9-92cc-783df6775a8e,network=Network(c65d65e6-04af-4892-ad96-3d83d148450f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3af9ad9d-90')#033[00m
Jan 29 12:19:35 np0005601226 nova_compute[239456]: 2026-01-29 17:19:35.024 239460 DEBUG nova.virt.libvirt.driver [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:19:35 np0005601226 nova_compute[239456]: 2026-01-29 17:19:35.024 239460 DEBUG nova.virt.libvirt.driver [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:19:35 np0005601226 nova_compute[239456]: 2026-01-29 17:19:35.025 239460 DEBUG nova.virt.libvirt.driver [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] No VIF found with MAC fa:16:3e:a8:bd:99, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 29 12:19:35 np0005601226 nova_compute[239456]: 2026-01-29 17:19:35.025 239460 INFO nova.virt.libvirt.driver [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] Using config drive#033[00m
Jan 29 12:19:35 np0005601226 nova_compute[239456]: 2026-01-29 17:19:35.042 239460 DEBUG nova.storage.rbd_utils [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] rbd image dca948a3-675d-4cc4-a21b-c2f72cbe307e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:19:35 np0005601226 nova_compute[239456]: 2026-01-29 17:19:35.604 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:19:35 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1089: 305 pgs: 305 active+clean; 840 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 39 MiB/s wr, 78 op/s
Jan 29 12:19:35 np0005601226 nova_compute[239456]: 2026-01-29 17:19:35.676 239460 DEBUG nova.network.neutron [req-f948d8c3-2cae-4b62-aed7-5ff76f4ede5a req-dbd7a8de-374d-401d-a4b2-75101ae1da3c 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] Updated VIF entry in instance network info cache for port 3af9ad9d-906f-4ed9-92cc-783df6775a8e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 29 12:19:35 np0005601226 nova_compute[239456]: 2026-01-29 17:19:35.676 239460 DEBUG nova.network.neutron [req-f948d8c3-2cae-4b62-aed7-5ff76f4ede5a req-dbd7a8de-374d-401d-a4b2-75101ae1da3c 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] Updating instance_info_cache with network_info: [{"id": "3af9ad9d-906f-4ed9-92cc-783df6775a8e", "address": "fa:16:3e:a8:bd:99", "network": {"id": "c65d65e6-04af-4892-ad96-3d83d148450f", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-295489753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7140162c4cd744d38e65ad1bcdadf016", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3af9ad9d-90", "ovs_interfaceid": "3af9ad9d-906f-4ed9-92cc-783df6775a8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:19:35 np0005601226 nova_compute[239456]: 2026-01-29 17:19:35.691 239460 DEBUG oslo_concurrency.lockutils [req-f948d8c3-2cae-4b62-aed7-5ff76f4ede5a req-dbd7a8de-374d-401d-a4b2-75101ae1da3c 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Releasing lock "refresh_cache-dca948a3-675d-4cc4-a21b-c2f72cbe307e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:19:35 np0005601226 nova_compute[239456]: 2026-01-29 17:19:35.791 239460 INFO nova.virt.libvirt.driver [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] Creating config drive at /var/lib/nova/instances/dca948a3-675d-4cc4-a21b-c2f72cbe307e/disk.config#033[00m
Jan 29 12:19:35 np0005601226 nova_compute[239456]: 2026-01-29 17:19:35.795 239460 DEBUG oslo_concurrency.processutils [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dca948a3-675d-4cc4-a21b-c2f72cbe307e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn7boe8z_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:19:35 np0005601226 nova_compute[239456]: 2026-01-29 17:19:35.910 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:19:35 np0005601226 nova_compute[239456]: 2026-01-29 17:19:35.915 239460 DEBUG oslo_concurrency.processutils [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dca948a3-675d-4cc4-a21b-c2f72cbe307e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn7boe8z_" returned: 0 in 0.120s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:19:35 np0005601226 nova_compute[239456]: 2026-01-29 17:19:35.933 239460 DEBUG nova.storage.rbd_utils [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] rbd image dca948a3-675d-4cc4-a21b-c2f72cbe307e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:19:35 np0005601226 nova_compute[239456]: 2026-01-29 17:19:35.936 239460 DEBUG oslo_concurrency.processutils [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dca948a3-675d-4cc4-a21b-c2f72cbe307e/disk.config dca948a3-675d-4cc4-a21b-c2f72cbe307e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:19:36 np0005601226 nova_compute[239456]: 2026-01-29 17:19:36.509 239460 DEBUG oslo_concurrency.processutils [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dca948a3-675d-4cc4-a21b-c2f72cbe307e/disk.config dca948a3-675d-4cc4-a21b-c2f72cbe307e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:19:36 np0005601226 nova_compute[239456]: 2026-01-29 17:19:36.509 239460 INFO nova.virt.libvirt.driver [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] Deleting local config drive /var/lib/nova/instances/dca948a3-675d-4cc4-a21b-c2f72cbe307e/disk.config because it was imported into RBD.#033[00m
Jan 29 12:19:36 np0005601226 kernel: tap3af9ad9d-90: entered promiscuous mode
Jan 29 12:19:36 np0005601226 NetworkManager[49020]: <info>  [1769707176.5459] manager: (tap3af9ad9d-90): new Tun device (/org/freedesktop/NetworkManager/Devices/42)
Jan 29 12:19:36 np0005601226 nova_compute[239456]: 2026-01-29 17:19:36.578 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:19:36 np0005601226 ovn_controller[145556]: 2026-01-29T17:19:36Z|00062|binding|INFO|Claiming lport 3af9ad9d-906f-4ed9-92cc-783df6775a8e for this chassis.
Jan 29 12:19:36 np0005601226 ovn_controller[145556]: 2026-01-29T17:19:36Z|00063|binding|INFO|3af9ad9d-906f-4ed9-92cc-783df6775a8e: Claiming fa:16:3e:a8:bd:99 10.100.0.10
Jan 29 12:19:36 np0005601226 ovn_controller[145556]: 2026-01-29T17:19:36Z|00064|binding|INFO|Setting lport 3af9ad9d-906f-4ed9-92cc-783df6775a8e ovn-installed in OVS
Jan 29 12:19:36 np0005601226 nova_compute[239456]: 2026-01-29 17:19:36.587 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:19:36 np0005601226 nova_compute[239456]: 2026-01-29 17:19:36.589 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:19:36 np0005601226 ovn_controller[145556]: 2026-01-29T17:19:36Z|00065|binding|INFO|Setting lport 3af9ad9d-906f-4ed9-92cc-783df6775a8e up in Southbound
Jan 29 12:19:36 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:36.592 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:bd:99 10.100.0.10'], port_security=['fa:16:3e:a8:bd:99 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'dca948a3-675d-4cc4-a21b-c2f72cbe307e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c65d65e6-04af-4892-ad96-3d83d148450f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7140162c4cd744d38e65ad1bcdadf016', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a54deeaf-8fce-4574-9b38-5606fce0457a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a5702b8d-5b0f-4c7d-bc4d-4e202a7e2b31, chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], logical_port=3af9ad9d-906f-4ed9-92cc-783df6775a8e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:19:36 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:36.593 155625 INFO neutron.agent.ovn.metadata.agent [-] Port 3af9ad9d-906f-4ed9-92cc-783df6775a8e in datapath c65d65e6-04af-4892-ad96-3d83d148450f bound to our chassis#033[00m
Jan 29 12:19:36 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:36.594 155625 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c65d65e6-04af-4892-ad96-3d83d148450f#033[00m
Jan 29 12:19:36 np0005601226 systemd-machined[207561]: New machine qemu-5-instance-00000005.
Jan 29 12:19:36 np0005601226 systemd-udevd[250756]: Network interface NamePolicy= disabled on kernel command line.
Jan 29 12:19:36 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:36.601 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[68db827f-58d3-4d51-a967-b2d35812dead]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:19:36 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:36.602 155625 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc65d65e6-01 in ovnmeta-c65d65e6-04af-4892-ad96-3d83d148450f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 29 12:19:36 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:36.603 246354 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc65d65e6-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 29 12:19:36 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:36.603 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[3a6b7d84-5f85-455c-9c95-a68439647805]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:19:36 np0005601226 systemd[1]: Started Virtual Machine qemu-5-instance-00000005.
Jan 29 12:19:36 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:36.604 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[37b60b4b-68fb-4271-be09-2ceafc586940]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:19:36 np0005601226 NetworkManager[49020]: <info>  [1769707176.6067] device (tap3af9ad9d-90): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 29 12:19:36 np0005601226 NetworkManager[49020]: <info>  [1769707176.6083] device (tap3af9ad9d-90): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 29 12:19:36 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:36.614 156164 DEBUG oslo.privsep.daemon [-] privsep: reply[4ecfe965-adef-4c17-bc22-362ca3c24f08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:19:36 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:36.632 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[060e2359-01d9-4a92-8ab0-47c88e46ee8b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:19:36 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:36.655 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[a359130a-35d8-4a33-9696-818cca529076]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:19:36 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:36.660 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[7c4269c7-07bf-4cfc-9e22-3b8bab2afb20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:19:36 np0005601226 NetworkManager[49020]: <info>  [1769707176.6605] manager: (tapc65d65e6-00): new Veth device (/org/freedesktop/NetworkManager/Devices/43)
Jan 29 12:19:36 np0005601226 systemd-udevd[250760]: Network interface NamePolicy= disabled on kernel command line.
Jan 29 12:19:36 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:36.679 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[4619f33d-fd39-4fb0-8f50-718e2186261f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:19:36 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:36.682 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[467840e5-27f9-4c4b-aa27-39942458347b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:19:36 np0005601226 NetworkManager[49020]: <info>  [1769707176.6955] device (tapc65d65e6-00): carrier: link connected
Jan 29 12:19:36 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:36.698 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[bf0656a4-7a53-40cb-97be-4aa0d60f5a2a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:19:36 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:36.708 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[04297ecd-44bd-4ebf-80b5-5b430964e743]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc65d65e6-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a3:66:d5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 453019, 'reachable_time': 24598, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250789, 'error': None, 'target': 'ovnmeta-c65d65e6-04af-4892-ad96-3d83d148450f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:19:36 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:36.723 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[306bb2df-7be4-423a-9df4-7650721c59f0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea3:66d5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 453019, 'tstamp': 453019}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 250790, 'error': None, 'target': 'ovnmeta-c65d65e6-04af-4892-ad96-3d83d148450f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:19:36 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:36.743 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[2cd90891-e6a5-40ef-bf50-f47aaf5b5f6e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc65d65e6-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a3:66:d5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 453019, 'reachable_time': 24598, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 250791, 'error': None, 'target': 'ovnmeta-c65d65e6-04af-4892-ad96-3d83d148450f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:19:36 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:36.777 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[7502d7d1-6856-4e26-8bcb-6cc1936f0dd8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:19:36 np0005601226 nova_compute[239456]: 2026-01-29 17:19:36.789 239460 DEBUG nova.compute.manager [req-53aa3521-5bec-4d61-9cc2-bcc647034ac9 req-a8aacb02-737c-47f9-bfc9-79e46623408d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] Received event network-vif-plugged-3af9ad9d-906f-4ed9-92cc-783df6775a8e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:19:36 np0005601226 nova_compute[239456]: 2026-01-29 17:19:36.790 239460 DEBUG oslo_concurrency.lockutils [req-53aa3521-5bec-4d61-9cc2-bcc647034ac9 req-a8aacb02-737c-47f9-bfc9-79e46623408d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "dca948a3-675d-4cc4-a21b-c2f72cbe307e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:19:36 np0005601226 nova_compute[239456]: 2026-01-29 17:19:36.790 239460 DEBUG oslo_concurrency.lockutils [req-53aa3521-5bec-4d61-9cc2-bcc647034ac9 req-a8aacb02-737c-47f9-bfc9-79e46623408d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "dca948a3-675d-4cc4-a21b-c2f72cbe307e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:19:36 np0005601226 nova_compute[239456]: 2026-01-29 17:19:36.790 239460 DEBUG oslo_concurrency.lockutils [req-53aa3521-5bec-4d61-9cc2-bcc647034ac9 req-a8aacb02-737c-47f9-bfc9-79e46623408d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "dca948a3-675d-4cc4-a21b-c2f72cbe307e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:19:36 np0005601226 nova_compute[239456]: 2026-01-29 17:19:36.790 239460 DEBUG nova.compute.manager [req-53aa3521-5bec-4d61-9cc2-bcc647034ac9 req-a8aacb02-737c-47f9-bfc9-79e46623408d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] Processing event network-vif-plugged-3af9ad9d-906f-4ed9-92cc-783df6775a8e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 29 12:19:36 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:36.824 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[ded67b33-9946-4a22-94bc-aab04a99c519]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:19:36 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:36.825 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc65d65e6-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:19:36 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:36.826 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 29 12:19:36 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:36.826 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc65d65e6-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:19:36 np0005601226 nova_compute[239456]: 2026-01-29 17:19:36.827 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:19:36 np0005601226 NetworkManager[49020]: <info>  [1769707176.8282] manager: (tapc65d65e6-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/44)
Jan 29 12:19:36 np0005601226 kernel: tapc65d65e6-00: entered promiscuous mode
Jan 29 12:19:36 np0005601226 nova_compute[239456]: 2026-01-29 17:19:36.830 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:19:36 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:36.830 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc65d65e6-00, col_values=(('external_ids', {'iface-id': '56fcfe53-391b-4f05-a182-2812cd40a46e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:19:36 np0005601226 nova_compute[239456]: 2026-01-29 17:19:36.831 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:19:36 np0005601226 ovn_controller[145556]: 2026-01-29T17:19:36Z|00066|binding|INFO|Releasing lport 56fcfe53-391b-4f05-a182-2812cd40a46e from this chassis (sb_readonly=0)
Jan 29 12:19:36 np0005601226 nova_compute[239456]: 2026-01-29 17:19:36.832 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:19:36 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:36.832 155625 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c65d65e6-04af-4892-ad96-3d83d148450f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c65d65e6-04af-4892-ad96-3d83d148450f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 29 12:19:36 np0005601226 nova_compute[239456]: 2026-01-29 17:19:36.836 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:19:36 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:36.835 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[d93ad9d3-bfb3-4a5b-8447-4c1c08fc1e1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:19:36 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:36.836 155625 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 29 12:19:36 np0005601226 ovn_metadata_agent[155620]: global
Jan 29 12:19:36 np0005601226 ovn_metadata_agent[155620]:    log         /dev/log local0 debug
Jan 29 12:19:36 np0005601226 ovn_metadata_agent[155620]:    log-tag     haproxy-metadata-proxy-c65d65e6-04af-4892-ad96-3d83d148450f
Jan 29 12:19:36 np0005601226 ovn_metadata_agent[155620]:    user        root
Jan 29 12:19:36 np0005601226 ovn_metadata_agent[155620]:    group       root
Jan 29 12:19:36 np0005601226 ovn_metadata_agent[155620]:    maxconn     1024
Jan 29 12:19:36 np0005601226 ovn_metadata_agent[155620]:    pidfile     /var/lib/neutron/external/pids/c65d65e6-04af-4892-ad96-3d83d148450f.pid.haproxy
Jan 29 12:19:36 np0005601226 ovn_metadata_agent[155620]:    daemon
Jan 29 12:19:36 np0005601226 ovn_metadata_agent[155620]: 
Jan 29 12:19:36 np0005601226 ovn_metadata_agent[155620]: defaults
Jan 29 12:19:36 np0005601226 ovn_metadata_agent[155620]:    log global
Jan 29 12:19:36 np0005601226 ovn_metadata_agent[155620]:    mode http
Jan 29 12:19:36 np0005601226 ovn_metadata_agent[155620]:    option httplog
Jan 29 12:19:36 np0005601226 ovn_metadata_agent[155620]:    option dontlognull
Jan 29 12:19:36 np0005601226 ovn_metadata_agent[155620]:    option http-server-close
Jan 29 12:19:36 np0005601226 ovn_metadata_agent[155620]:    option forwardfor
Jan 29 12:19:36 np0005601226 ovn_metadata_agent[155620]:    retries                 3
Jan 29 12:19:36 np0005601226 ovn_metadata_agent[155620]:    timeout http-request    30s
Jan 29 12:19:36 np0005601226 ovn_metadata_agent[155620]:    timeout connect         30s
Jan 29 12:19:36 np0005601226 ovn_metadata_agent[155620]:    timeout client          32s
Jan 29 12:19:36 np0005601226 ovn_metadata_agent[155620]:    timeout server          32s
Jan 29 12:19:36 np0005601226 ovn_metadata_agent[155620]:    timeout http-keep-alive 30s
Jan 29 12:19:36 np0005601226 ovn_metadata_agent[155620]: 
Jan 29 12:19:36 np0005601226 ovn_metadata_agent[155620]: 
Jan 29 12:19:36 np0005601226 ovn_metadata_agent[155620]: listen listener
Jan 29 12:19:36 np0005601226 ovn_metadata_agent[155620]:    bind 169.254.169.254:80
Jan 29 12:19:36 np0005601226 ovn_metadata_agent[155620]:    server metadata /var/lib/neutron/metadata_proxy
Jan 29 12:19:36 np0005601226 ovn_metadata_agent[155620]:    http-request add-header X-OVN-Network-ID c65d65e6-04af-4892-ad96-3d83d148450f
Jan 29 12:19:36 np0005601226 ovn_metadata_agent[155620]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 29 12:19:36 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:36.837 155625 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c65d65e6-04af-4892-ad96-3d83d148450f', 'env', 'PROCESS_TAG=haproxy-c65d65e6-04af-4892-ad96-3d83d148450f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c65d65e6-04af-4892-ad96-3d83d148450f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 29 12:19:37 np0005601226 podman[250841]: 2026-01-29 17:19:37.15275174 +0000 UTC m=+0.045015409 container create 7bf778abf479b0e9a1c9fd1d0ce3f0aa0eaf4c16f666f669245eb64282d6442f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c65d65e6-04af-4892-ad96-3d83d148450f, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS)
Jan 29 12:19:37 np0005601226 systemd[1]: Started libpod-conmon-7bf778abf479b0e9a1c9fd1d0ce3f0aa0eaf4c16f666f669245eb64282d6442f.scope.
Jan 29 12:19:37 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:19:37 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59e2c7e54afddf2932a2b7de3280df5b2c20808fa973e96bbd1e587803498a5c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 29 12:19:37 np0005601226 podman[250841]: 2026-01-29 17:19:37.214601147 +0000 UTC m=+0.106864826 container init 7bf778abf479b0e9a1c9fd1d0ce3f0aa0eaf4c16f666f669245eb64282d6442f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c65d65e6-04af-4892-ad96-3d83d148450f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 29 12:19:37 np0005601226 podman[250841]: 2026-01-29 17:19:37.218775331 +0000 UTC m=+0.111039000 container start 7bf778abf479b0e9a1c9fd1d0ce3f0aa0eaf4c16f666f669245eb64282d6442f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c65d65e6-04af-4892-ad96-3d83d148450f, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 29 12:19:37 np0005601226 podman[250841]: 2026-01-29 17:19:37.123598565 +0000 UTC m=+0.015862254 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 29 12:19:37 np0005601226 neutron-haproxy-ovnmeta-c65d65e6-04af-4892-ad96-3d83d148450f[250878]: [NOTICE]   (250884) : New worker (250887) forked
Jan 29 12:19:37 np0005601226 neutron-haproxy-ovnmeta-c65d65e6-04af-4892-ad96-3d83d148450f[250878]: [NOTICE]   (250884) : Loading success.
Jan 29 12:19:37 np0005601226 nova_compute[239456]: 2026-01-29 17:19:37.264 239460 DEBUG nova.compute.manager [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 29 12:19:37 np0005601226 nova_compute[239456]: 2026-01-29 17:19:37.266 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769707177.2639785, dca948a3-675d-4cc4-a21b-c2f72cbe307e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 29 12:19:37 np0005601226 nova_compute[239456]: 2026-01-29 17:19:37.266 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] VM Started (Lifecycle Event)
Jan 29 12:19:37 np0005601226 nova_compute[239456]: 2026-01-29 17:19:37.271 239460 DEBUG nova.virt.libvirt.driver [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 29 12:19:37 np0005601226 nova_compute[239456]: 2026-01-29 17:19:37.275 239460 INFO nova.virt.libvirt.driver [-] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] Instance spawned successfully.
Jan 29 12:19:37 np0005601226 nova_compute[239456]: 2026-01-29 17:19:37.276 239460 DEBUG nova.virt.libvirt.driver [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 29 12:19:37 np0005601226 nova_compute[239456]: 2026-01-29 17:19:37.288 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 29 12:19:37 np0005601226 nova_compute[239456]: 2026-01-29 17:19:37.293 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 29 12:19:37 np0005601226 nova_compute[239456]: 2026-01-29 17:19:37.297 239460 DEBUG nova.virt.libvirt.driver [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 29 12:19:37 np0005601226 nova_compute[239456]: 2026-01-29 17:19:37.297 239460 DEBUG nova.virt.libvirt.driver [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 29 12:19:37 np0005601226 nova_compute[239456]: 2026-01-29 17:19:37.298 239460 DEBUG nova.virt.libvirt.driver [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 29 12:19:37 np0005601226 nova_compute[239456]: 2026-01-29 17:19:37.298 239460 DEBUG nova.virt.libvirt.driver [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 29 12:19:37 np0005601226 nova_compute[239456]: 2026-01-29 17:19:37.298 239460 DEBUG nova.virt.libvirt.driver [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 29 12:19:37 np0005601226 nova_compute[239456]: 2026-01-29 17:19:37.299 239460 DEBUG nova.virt.libvirt.driver [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 29 12:19:37 np0005601226 nova_compute[239456]: 2026-01-29 17:19:37.320 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 29 12:19:37 np0005601226 nova_compute[239456]: 2026-01-29 17:19:37.321 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769707177.2640839, dca948a3-675d-4cc4-a21b-c2f72cbe307e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 29 12:19:37 np0005601226 nova_compute[239456]: 2026-01-29 17:19:37.321 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] VM Paused (Lifecycle Event)
Jan 29 12:19:37 np0005601226 nova_compute[239456]: 2026-01-29 17:19:37.351 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 29 12:19:37 np0005601226 nova_compute[239456]: 2026-01-29 17:19:37.355 239460 INFO nova.compute.manager [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] Took 7.88 seconds to spawn the instance on the hypervisor.
Jan 29 12:19:37 np0005601226 nova_compute[239456]: 2026-01-29 17:19:37.355 239460 DEBUG nova.compute.manager [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 29 12:19:37 np0005601226 nova_compute[239456]: 2026-01-29 17:19:37.358 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769707177.2682886, dca948a3-675d-4cc4-a21b-c2f72cbe307e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 29 12:19:37 np0005601226 nova_compute[239456]: 2026-01-29 17:19:37.358 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] VM Resumed (Lifecycle Event)
Jan 29 12:19:37 np0005601226 nova_compute[239456]: 2026-01-29 17:19:37.381 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 29 12:19:37 np0005601226 nova_compute[239456]: 2026-01-29 17:19:37.384 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 29 12:19:37 np0005601226 nova_compute[239456]: 2026-01-29 17:19:37.404 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 29 12:19:37 np0005601226 nova_compute[239456]: 2026-01-29 17:19:37.416 239460 INFO nova.compute.manager [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] Took 8.81 seconds to build instance.
Jan 29 12:19:37 np0005601226 nova_compute[239456]: 2026-01-29 17:19:37.432 239460 DEBUG oslo_concurrency.lockutils [None req-735a9d4e-fd40-4617-915f-29874a86a4fb aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Lock "dca948a3-675d-4cc4-a21b-c2f72cbe307e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.876s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 12:19:37 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e200 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:19:37 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1090: 305 pgs: 305 active+clean; 840 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 32 MiB/s wr, 64 op/s
Jan 29 12:19:38 np0005601226 nova_compute[239456]: 2026-01-29 17:19:38.890 239460 DEBUG nova.compute.manager [req-0273f45e-3d37-407f-bee0-ba21cb281866 req-89c0168c-870a-4f45-a762-20f2f30710a4 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] Received event network-vif-plugged-3af9ad9d-906f-4ed9-92cc-783df6775a8e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 29 12:19:38 np0005601226 nova_compute[239456]: 2026-01-29 17:19:38.890 239460 DEBUG oslo_concurrency.lockutils [req-0273f45e-3d37-407f-bee0-ba21cb281866 req-89c0168c-870a-4f45-a762-20f2f30710a4 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "dca948a3-675d-4cc4-a21b-c2f72cbe307e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 12:19:38 np0005601226 nova_compute[239456]: 2026-01-29 17:19:38.891 239460 DEBUG oslo_concurrency.lockutils [req-0273f45e-3d37-407f-bee0-ba21cb281866 req-89c0168c-870a-4f45-a762-20f2f30710a4 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "dca948a3-675d-4cc4-a21b-c2f72cbe307e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 12:19:38 np0005601226 nova_compute[239456]: 2026-01-29 17:19:38.891 239460 DEBUG oslo_concurrency.lockutils [req-0273f45e-3d37-407f-bee0-ba21cb281866 req-89c0168c-870a-4f45-a762-20f2f30710a4 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "dca948a3-675d-4cc4-a21b-c2f72cbe307e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 12:19:38 np0005601226 nova_compute[239456]: 2026-01-29 17:19:38.891 239460 DEBUG nova.compute.manager [req-0273f45e-3d37-407f-bee0-ba21cb281866 req-89c0168c-870a-4f45-a762-20f2f30710a4 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] No waiting events found dispatching network-vif-plugged-3af9ad9d-906f-4ed9-92cc-783df6775a8e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 29 12:19:38 np0005601226 nova_compute[239456]: 2026-01-29 17:19:38.891 239460 WARNING nova.compute.manager [req-0273f45e-3d37-407f-bee0-ba21cb281866 req-89c0168c-870a-4f45-a762-20f2f30710a4 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] Received unexpected event network-vif-plugged-3af9ad9d-906f-4ed9-92cc-783df6775a8e for instance with vm_state active and task_state None.
Jan 29 12:19:38 np0005601226 podman[250897]: 2026-01-29 17:19:38.909086719 +0000 UTC m=+0.077455553 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 29 12:19:38 np0005601226 podman[250896]: 2026-01-29 17:19:38.912912364 +0000 UTC m=+0.081749622 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 29 12:19:39 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e200 do_prune osdmap full prune enabled
Jan 29 12:19:39 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e201 e201: 3 total, 3 up, 3 in
Jan 29 12:19:39 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e201: 3 total, 3 up, 3 in
Jan 29 12:19:39 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1092: 305 pgs: 305 active+clean; 1.0 GiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 841 KiB/s rd, 44 MiB/s wr, 115 op/s
Jan 29 12:19:39 np0005601226 nova_compute[239456]: 2026-01-29 17:19:39.816 239460 DEBUG nova.compute.manager [req-ba35c123-6571-4588-a029-7240a881fba9 req-a0b5a6a3-2782-47fd-a2b9-0e04327cc1f3 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] Received event network-changed-3af9ad9d-906f-4ed9-92cc-783df6775a8e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 29 12:19:39 np0005601226 nova_compute[239456]: 2026-01-29 17:19:39.816 239460 DEBUG nova.compute.manager [req-ba35c123-6571-4588-a029-7240a881fba9 req-a0b5a6a3-2782-47fd-a2b9-0e04327cc1f3 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] Refreshing instance network info cache due to event network-changed-3af9ad9d-906f-4ed9-92cc-783df6775a8e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 29 12:19:39 np0005601226 nova_compute[239456]: 2026-01-29 17:19:39.817 239460 DEBUG oslo_concurrency.lockutils [req-ba35c123-6571-4588-a029-7240a881fba9 req-a0b5a6a3-2782-47fd-a2b9-0e04327cc1f3 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "refresh_cache-dca948a3-675d-4cc4-a21b-c2f72cbe307e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 29 12:19:39 np0005601226 nova_compute[239456]: 2026-01-29 17:19:39.817 239460 DEBUG oslo_concurrency.lockutils [req-ba35c123-6571-4588-a029-7240a881fba9 req-a0b5a6a3-2782-47fd-a2b9-0e04327cc1f3 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquired lock "refresh_cache-dca948a3-675d-4cc4-a21b-c2f72cbe307e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 29 12:19:39 np0005601226 nova_compute[239456]: 2026-01-29 17:19:39.817 239460 DEBUG nova.network.neutron [req-ba35c123-6571-4588-a029-7240a881fba9 req-a0b5a6a3-2782-47fd-a2b9-0e04327cc1f3 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] Refreshing network info cache for port 3af9ad9d-906f-4ed9-92cc-783df6775a8e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 29 12:19:40 np0005601226 nova_compute[239456]: 2026-01-29 17:19:40.000 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:19:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:40.281 155625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 12:19:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:40.282 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 12:19:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:40.282 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 12:19:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2026-01-29_17:19:40
Jan 29 12:19:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 12:19:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] do_upmap
Jan 29 12:19:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'vms', 'backups', '.mgr', 'default.rgw.control', '.rgw.root', 'volumes', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.meta', 'images']
Jan 29 12:19:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:19:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:19:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:19:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:19:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 12:19:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:19:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:19:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 12:19:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:19:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 12:19:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:19:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:19:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:19:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:19:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:19:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:19:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:19:40 np0005601226 nova_compute[239456]: 2026-01-29 17:19:40.848 239460 DEBUG nova.network.neutron [req-ba35c123-6571-4588-a029-7240a881fba9 req-a0b5a6a3-2782-47fd-a2b9-0e04327cc1f3 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] Updated VIF entry in instance network info cache for port 3af9ad9d-906f-4ed9-92cc-783df6775a8e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 29 12:19:40 np0005601226 nova_compute[239456]: 2026-01-29 17:19:40.850 239460 DEBUG nova.network.neutron [req-ba35c123-6571-4588-a029-7240a881fba9 req-a0b5a6a3-2782-47fd-a2b9-0e04327cc1f3 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] Updating instance_info_cache with network_info: [{"id": "3af9ad9d-906f-4ed9-92cc-783df6775a8e", "address": "fa:16:3e:a8:bd:99", "network": {"id": "c65d65e6-04af-4892-ad96-3d83d148450f", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-295489753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7140162c4cd744d38e65ad1bcdadf016", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3af9ad9d-90", "ovs_interfaceid": "3af9ad9d-906f-4ed9-92cc-783df6775a8e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 29 12:19:40 np0005601226 nova_compute[239456]: 2026-01-29 17:19:40.875 239460 DEBUG oslo_concurrency.lockutils [req-ba35c123-6571-4588-a029-7240a881fba9 req-a0b5a6a3-2782-47fd-a2b9-0e04327cc1f3 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Releasing lock "refresh_cache-dca948a3-675d-4cc4-a21b-c2f72cbe307e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 29 12:19:40 np0005601226 nova_compute[239456]: 2026-01-29 17:19:40.912 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:19:41 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e201 do_prune osdmap full prune enabled
Jan 29 12:19:41 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e202 e202: 3 total, 3 up, 3 in
Jan 29 12:19:41 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e202: 3 total, 3 up, 3 in
Jan 29 12:19:41 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1094: 305 pgs: 305 active+clean; 1.1 GiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 48 MiB/s wr, 184 op/s
Jan 29 12:19:42 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e202 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:19:42 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:19:42 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1276054850' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:19:42 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:19:42 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1276054850' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:19:43 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1095: 305 pgs: 305 active+clean; 736 MiB data, 878 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 31 MiB/s wr, 154 op/s
Jan 29 12:19:45 np0005601226 nova_compute[239456]: 2026-01-29 17:19:45.004 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:19:45 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1096: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 34 MiB/s wr, 226 op/s
Jan 29 12:19:45 np0005601226 nova_compute[239456]: 2026-01-29 17:19:45.914 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:19:47 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:19:47 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1520171914' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:19:47 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:19:47 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1520171914' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:19:47 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e202 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:19:47 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e202 do_prune osdmap full prune enabled
Jan 29 12:19:47 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e203 e203: 3 total, 3 up, 3 in
Jan 29 12:19:47 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e203: 3 total, 3 up, 3 in
Jan 29 12:19:47 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1098: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 9.0 MiB/s wr, 149 op/s
Jan 29 12:19:47 np0005601226 ovn_controller[145556]: 2026-01-29T17:19:47Z|00067|binding|INFO|Releasing lport 56fcfe53-391b-4f05-a182-2812cd40a46e from this chassis (sb_readonly=0)
Jan 29 12:19:47 np0005601226 nova_compute[239456]: 2026-01-29 17:19:47.845 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:19:49 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 12:19:49 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 12:19:49 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 12:19:49 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 12:19:49 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 12:19:49 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:19:49 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 12:19:49 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 12:19:49 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 12:19:49 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 12:19:49 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 12:19:49 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 12:19:49 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 12:19:49 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:19:49 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 12:19:49 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1099: 305 pgs: 305 active+clean; 109 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 440 KiB/s rd, 5.8 MiB/s wr, 114 op/s
Jan 29 12:19:49 np0005601226 podman[251085]: 2026-01-29 17:19:49.712589226 +0000 UTC m=+0.038826220 container create 52ec1ee95eba6b12957062f24a81db0780877eb3e6df0d92e0cda7fc65ba75a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_poitras, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:19:49 np0005601226 systemd[1]: Started libpod-conmon-52ec1ee95eba6b12957062f24a81db0780877eb3e6df0d92e0cda7fc65ba75a5.scope.
Jan 29 12:19:49 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:19:49 np0005601226 podman[251085]: 2026-01-29 17:19:49.691688466 +0000 UTC m=+0.017925490 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:19:49 np0005601226 podman[251085]: 2026-01-29 17:19:49.80437257 +0000 UTC m=+0.130609574 container init 52ec1ee95eba6b12957062f24a81db0780877eb3e6df0d92e0cda7fc65ba75a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_poitras, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:19:49 np0005601226 podman[251085]: 2026-01-29 17:19:49.811652629 +0000 UTC m=+0.137889613 container start 52ec1ee95eba6b12957062f24a81db0780877eb3e6df0d92e0cda7fc65ba75a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_poitras, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:19:49 np0005601226 vigilant_poitras[251101]: 167 167
Jan 29 12:19:49 np0005601226 systemd[1]: libpod-52ec1ee95eba6b12957062f24a81db0780877eb3e6df0d92e0cda7fc65ba75a5.scope: Deactivated successfully.
Jan 29 12:19:49 np0005601226 conmon[251101]: conmon 52ec1ee95eba6b129570 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-52ec1ee95eba6b12957062f24a81db0780877eb3e6df0d92e0cda7fc65ba75a5.scope/container/memory.events
Jan 29 12:19:49 np0005601226 podman[251085]: 2026-01-29 17:19:49.824641023 +0000 UTC m=+0.150878017 container attach 52ec1ee95eba6b12957062f24a81db0780877eb3e6df0d92e0cda7fc65ba75a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_poitras, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Jan 29 12:19:49 np0005601226 podman[251085]: 2026-01-29 17:19:49.825555038 +0000 UTC m=+0.151792032 container died 52ec1ee95eba6b12957062f24a81db0780877eb3e6df0d92e0cda7fc65ba75a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_poitras, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3)
Jan 29 12:19:49 np0005601226 systemd[1]: var-lib-containers-storage-overlay-21b7bb8d9c15e0c60a3a4b9d14ae61821cc2bf5f27f4f85b2d78827cd6a335fe-merged.mount: Deactivated successfully.
Jan 29 12:19:49 np0005601226 podman[251085]: 2026-01-29 17:19:49.89200573 +0000 UTC m=+0.218242724 container remove 52ec1ee95eba6b12957062f24a81db0780877eb3e6df0d92e0cda7fc65ba75a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:19:49 np0005601226 systemd[1]: libpod-conmon-52ec1ee95eba6b12957062f24a81db0780877eb3e6df0d92e0cda7fc65ba75a5.scope: Deactivated successfully.
Jan 29 12:19:50 np0005601226 nova_compute[239456]: 2026-01-29 17:19:50.005 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:19:50 np0005601226 podman[251124]: 2026-01-29 17:19:50.041574991 +0000 UTC m=+0.067667027 container create 378fd289e062de6f5d20eaddd697f8212fd8620f8dfdf3e9506c043eb270725b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_hellman, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 29 12:19:50 np0005601226 podman[251124]: 2026-01-29 17:19:49.995104163 +0000 UTC m=+0.021196199 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:19:50 np0005601226 ovn_controller[145556]: 2026-01-29T17:19:50Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a8:bd:99 10.100.0.10
Jan 29 12:19:50 np0005601226 ovn_controller[145556]: 2026-01-29T17:19:50Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a8:bd:99 10.100.0.10
Jan 29 12:19:50 np0005601226 systemd[1]: Started libpod-conmon-378fd289e062de6f5d20eaddd697f8212fd8620f8dfdf3e9506c043eb270725b.scope.
Jan 29 12:19:50 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:19:50 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44cb441efc49a0a29857a6e65ec0f945a88a54997987a10502bae7c401bb6b7c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:19:50 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44cb441efc49a0a29857a6e65ec0f945a88a54997987a10502bae7c401bb6b7c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:19:50 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44cb441efc49a0a29857a6e65ec0f945a88a54997987a10502bae7c401bb6b7c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:19:50 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44cb441efc49a0a29857a6e65ec0f945a88a54997987a10502bae7c401bb6b7c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:19:50 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44cb441efc49a0a29857a6e65ec0f945a88a54997987a10502bae7c401bb6b7c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 12:19:50 np0005601226 podman[251124]: 2026-01-29 17:19:50.315157423 +0000 UTC m=+0.341249469 container init 378fd289e062de6f5d20eaddd697f8212fd8620f8dfdf3e9506c043eb270725b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_hellman, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True)
Jan 29 12:19:50 np0005601226 podman[251124]: 2026-01-29 17:19:50.320305034 +0000 UTC m=+0.346397110 container start 378fd289e062de6f5d20eaddd697f8212fd8620f8dfdf3e9506c043eb270725b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_hellman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:19:50 np0005601226 podman[251124]: 2026-01-29 17:19:50.332121525 +0000 UTC m=+0.358213661 container attach 378fd289e062de6f5d20eaddd697f8212fd8620f8dfdf3e9506c043eb270725b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:19:50 np0005601226 romantic_hellman[251141]: --> passed data devices: 0 physical, 3 LVM
Jan 29 12:19:50 np0005601226 romantic_hellman[251141]: --> All data devices are unavailable
Jan 29 12:19:50 np0005601226 systemd[1]: libpod-378fd289e062de6f5d20eaddd697f8212fd8620f8dfdf3e9506c043eb270725b.scope: Deactivated successfully.
Jan 29 12:19:50 np0005601226 podman[251124]: 2026-01-29 17:19:50.730965365 +0000 UTC m=+0.757057411 container died 378fd289e062de6f5d20eaddd697f8212fd8620f8dfdf3e9506c043eb270725b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_hellman, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 29 12:19:50 np0005601226 systemd[1]: var-lib-containers-storage-overlay-44cb441efc49a0a29857a6e65ec0f945a88a54997987a10502bae7c401bb6b7c-merged.mount: Deactivated successfully.
Jan 29 12:19:50 np0005601226 podman[251124]: 2026-01-29 17:19:50.831336454 +0000 UTC m=+0.857428490 container remove 378fd289e062de6f5d20eaddd697f8212fd8620f8dfdf3e9506c043eb270725b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_hellman, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:19:50 np0005601226 systemd[1]: libpod-conmon-378fd289e062de6f5d20eaddd697f8212fd8620f8dfdf3e9506c043eb270725b.scope: Deactivated successfully.
Jan 29 12:19:50 np0005601226 nova_compute[239456]: 2026-01-29 17:19:50.950 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:19:51 np0005601226 podman[251237]: 2026-01-29 17:19:51.275785517 +0000 UTC m=+0.040807794 container create cfdb362c7abc6d98fa2f659763191cfecc45f8b49a308cf26beace6def775d8f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:19:51 np0005601226 systemd[1]: Started libpod-conmon-cfdb362c7abc6d98fa2f659763191cfecc45f8b49a308cf26beace6def775d8f.scope.
Jan 29 12:19:51 np0005601226 podman[251237]: 2026-01-29 17:19:51.255463913 +0000 UTC m=+0.020486210 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:19:51 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:19:51 np0005601226 podman[251237]: 2026-01-29 17:19:51.387406802 +0000 UTC m=+0.152429119 container init cfdb362c7abc6d98fa2f659763191cfecc45f8b49a308cf26beace6def775d8f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_jones, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 12:19:51 np0005601226 podman[251237]: 2026-01-29 17:19:51.395323098 +0000 UTC m=+0.160345375 container start cfdb362c7abc6d98fa2f659763191cfecc45f8b49a308cf26beace6def775d8f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_jones, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 12:19:51 np0005601226 dazzling_jones[251254]: 167 167
Jan 29 12:19:51 np0005601226 systemd[1]: libpod-cfdb362c7abc6d98fa2f659763191cfecc45f8b49a308cf26beace6def775d8f.scope: Deactivated successfully.
Jan 29 12:19:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 12:19:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:19:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 12:19:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:19:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007128007416459059 of space, bias 1.0, pg target 0.21384022249377177 quantized to 32 (current 32)
Jan 29 12:19:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:19:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.895181536527011e-06 of space, bias 1.0, pg target 0.0017685544609581034 quantized to 32 (current 32)
Jan 29 12:19:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:19:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.2103348588108233e-07 of space, bias 1.0, pg target 3.63100457643247e-05 quantized to 32 (current 32)
Jan 29 12:19:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:19:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006660500828421984 of space, bias 1.0, pg target 0.19981502485265953 quantized to 32 (current 32)
Jan 29 12:19:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:19:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.1664920465829915e-06 of space, bias 4.0, pg target 0.00139979045589959 quantized to 16 (current 16)
Jan 29 12:19:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:19:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:19:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:19:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 29 12:19:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:19:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 29 12:19:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:19:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:19:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:19:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 29 12:19:51 np0005601226 podman[251237]: 2026-01-29 17:19:51.460636619 +0000 UTC m=+0.225658926 container attach cfdb362c7abc6d98fa2f659763191cfecc45f8b49a308cf26beace6def775d8f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_jones, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 29 12:19:51 np0005601226 podman[251237]: 2026-01-29 17:19:51.461342379 +0000 UTC m=+0.226364656 container died cfdb362c7abc6d98fa2f659763191cfecc45f8b49a308cf26beace6def775d8f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_jones, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 29 12:19:51 np0005601226 systemd[1]: var-lib-containers-storage-overlay-3a93d88320974434d63e7e50af53fb33e1c53ed174db322839510e606b7eed56-merged.mount: Deactivated successfully.
Jan 29 12:19:51 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1100: 305 pgs: 305 active+clean; 114 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 472 KiB/s rd, 4.9 MiB/s wr, 131 op/s
Jan 29 12:19:51 np0005601226 podman[251237]: 2026-01-29 17:19:51.679219871 +0000 UTC m=+0.444242158 container remove cfdb362c7abc6d98fa2f659763191cfecc45f8b49a308cf26beace6def775d8f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_jones, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:19:51 np0005601226 systemd[1]: libpod-conmon-cfdb362c7abc6d98fa2f659763191cfecc45f8b49a308cf26beace6def775d8f.scope: Deactivated successfully.
Jan 29 12:19:51 np0005601226 podman[251279]: 2026-01-29 17:19:51.801552369 +0000 UTC m=+0.041949395 container create 5a7e4c24df3cd528e3b6f6800e710fee81a288e059fb29a77c8aca38dbb8b71d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_visvesvaraya, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 12:19:51 np0005601226 systemd[1]: Started libpod-conmon-5a7e4c24df3cd528e3b6f6800e710fee81a288e059fb29a77c8aca38dbb8b71d.scope.
Jan 29 12:19:51 np0005601226 podman[251279]: 2026-01-29 17:19:51.780970277 +0000 UTC m=+0.021367323 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:19:51 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:19:51 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d6e50ab4b610e7bdb45f6cc3011ed6c9297c6161292cae77d65455b1df71a8b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:19:51 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d6e50ab4b610e7bdb45f6cc3011ed6c9297c6161292cae77d65455b1df71a8b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:19:51 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d6e50ab4b610e7bdb45f6cc3011ed6c9297c6161292cae77d65455b1df71a8b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:19:51 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d6e50ab4b610e7bdb45f6cc3011ed6c9297c6161292cae77d65455b1df71a8b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:19:51 np0005601226 podman[251279]: 2026-01-29 17:19:51.927620638 +0000 UTC m=+0.168017674 container init 5a7e4c24df3cd528e3b6f6800e710fee81a288e059fb29a77c8aca38dbb8b71d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:19:51 np0005601226 podman[251279]: 2026-01-29 17:19:51.933738194 +0000 UTC m=+0.174135230 container start 5a7e4c24df3cd528e3b6f6800e710fee81a288e059fb29a77c8aca38dbb8b71d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 29 12:19:52 np0005601226 podman[251279]: 2026-01-29 17:19:52.013952353 +0000 UTC m=+0.254349379 container attach 5a7e4c24df3cd528e3b6f6800e710fee81a288e059fb29a77c8aca38dbb8b71d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_visvesvaraya, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]: {
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:    "0": [
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:        {
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:            "devices": [
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:                "/dev/loop3"
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:            ],
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:            "lv_name": "ceph_lv0",
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:            "lv_size": "21470642176",
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:            "lv_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:            "name": "ceph_lv0",
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:            "tags": {
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:                "ceph.block_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:                "ceph.cluster_name": "ceph",
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:                "ceph.crush_device_class": "",
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:                "ceph.encrypted": "0",
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:                "ceph.objectstore": "bluestore",
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:                "ceph.osd_fsid": "0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1",
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:                "ceph.osd_id": "0",
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:                "ceph.type": "block",
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:                "ceph.vdo": "0",
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:                "ceph.with_tpm": "0"
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:            },
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:            "type": "block",
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:            "vg_name": "ceph_vg0"
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:        }
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:    ],
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:    "1": [
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:        {
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:            "devices": [
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:                "/dev/loop4"
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:            ],
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:            "lv_name": "ceph_lv1",
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:            "lv_size": "21470642176",
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b59b9ee3-7bef-4274-a8bf-0f9cce011ae7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:            "lv_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:            "name": "ceph_lv1",
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:            "tags": {
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:                "ceph.block_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:                "ceph.cluster_name": "ceph",
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:                "ceph.crush_device_class": "",
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:                "ceph.encrypted": "0",
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:                "ceph.objectstore": "bluestore",
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:                "ceph.osd_fsid": "b59b9ee3-7bef-4274-a8bf-0f9cce011ae7",
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:                "ceph.osd_id": "1",
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:                "ceph.type": "block",
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:                "ceph.vdo": "0",
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:                "ceph.with_tpm": "0"
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:            },
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:            "type": "block",
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:            "vg_name": "ceph_vg1"
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:        }
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:    ],
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:    "2": [
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:        {
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:            "devices": [
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:                "/dev/loop5"
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:            ],
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:            "lv_name": "ceph_lv2",
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:            "lv_size": "21470642176",
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=791d3808-828d-4f85-a3de-28df49f6a6ef,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:            "lv_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:            "name": "ceph_lv2",
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:            "tags": {
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:                "ceph.block_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:                "ceph.cluster_name": "ceph",
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:                "ceph.crush_device_class": "",
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:                "ceph.encrypted": "0",
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:                "ceph.objectstore": "bluestore",
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:                "ceph.osd_fsid": "791d3808-828d-4f85-a3de-28df49f6a6ef",
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:                "ceph.osd_id": "2",
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:                "ceph.type": "block",
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:                "ceph.vdo": "0",
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:                "ceph.with_tpm": "0"
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:            },
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:            "type": "block",
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:            "vg_name": "ceph_vg2"
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:        }
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]:    ]
Jan 29 12:19:52 np0005601226 jolly_visvesvaraya[251296]: }
Jan 29 12:19:52 np0005601226 systemd[1]: libpod-5a7e4c24df3cd528e3b6f6800e710fee81a288e059fb29a77c8aca38dbb8b71d.scope: Deactivated successfully.
Jan 29 12:19:52 np0005601226 conmon[251296]: conmon 5a7e4c24df3cd528e3b6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5a7e4c24df3cd528e3b6f6800e710fee81a288e059fb29a77c8aca38dbb8b71d.scope/container/memory.events
Jan 29 12:19:52 np0005601226 podman[251279]: 2026-01-29 17:19:52.219466318 +0000 UTC m=+0.459863344 container died 5a7e4c24df3cd528e3b6f6800e710fee81a288e059fb29a77c8aca38dbb8b71d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_visvesvaraya, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 29 12:19:52 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:19:52 np0005601226 systemd[1]: var-lib-containers-storage-overlay-4d6e50ab4b610e7bdb45f6cc3011ed6c9297c6161292cae77d65455b1df71a8b-merged.mount: Deactivated successfully.
Jan 29 12:19:53 np0005601226 podman[251279]: 2026-01-29 17:19:53.091156766 +0000 UTC m=+1.331553792 container remove 5a7e4c24df3cd528e3b6f6800e710fee81a288e059fb29a77c8aca38dbb8b71d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_visvesvaraya, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:19:53 np0005601226 systemd[1]: libpod-conmon-5a7e4c24df3cd528e3b6f6800e710fee81a288e059fb29a77c8aca38dbb8b71d.scope: Deactivated successfully.
Jan 29 12:19:53 np0005601226 podman[251377]: 2026-01-29 17:19:53.479748566 +0000 UTC m=+0.036912488 container create cc49af1aa4970ec492a496b95a7e471c622e96fac18a6cf7e39cb5de556a1dfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:19:53 np0005601226 systemd[1]: Started libpod-conmon-cc49af1aa4970ec492a496b95a7e471c622e96fac18a6cf7e39cb5de556a1dfa.scope.
Jan 29 12:19:53 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:19:53 np0005601226 podman[251377]: 2026-01-29 17:19:53.461967331 +0000 UTC m=+0.019131273 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:19:53 np0005601226 podman[251377]: 2026-01-29 17:19:53.558899315 +0000 UTC m=+0.116063257 container init cc49af1aa4970ec492a496b95a7e471c622e96fac18a6cf7e39cb5de556a1dfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_germain, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 29 12:19:53 np0005601226 podman[251377]: 2026-01-29 17:19:53.565474625 +0000 UTC m=+0.122638547 container start cc49af1aa4970ec492a496b95a7e471c622e96fac18a6cf7e39cb5de556a1dfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 29 12:19:53 np0005601226 ecstatic_germain[251393]: 167 167
Jan 29 12:19:53 np0005601226 systemd[1]: libpod-cc49af1aa4970ec492a496b95a7e471c622e96fac18a6cf7e39cb5de556a1dfa.scope: Deactivated successfully.
Jan 29 12:19:53 np0005601226 podman[251377]: 2026-01-29 17:19:53.574404198 +0000 UTC m=+0.131568120 container attach cc49af1aa4970ec492a496b95a7e471c622e96fac18a6cf7e39cb5de556a1dfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_germain, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:19:53 np0005601226 podman[251377]: 2026-01-29 17:19:53.575153239 +0000 UTC m=+0.132317151 container died cc49af1aa4970ec492a496b95a7e471c622e96fac18a6cf7e39cb5de556a1dfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_germain, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:19:53 np0005601226 systemd[1]: var-lib-containers-storage-overlay-ed0a49b4cd60222fad40ea7b344bf7d33562ab2105ee93be03eaaf9a9a5abcef-merged.mount: Deactivated successfully.
Jan 29 12:19:53 np0005601226 podman[251377]: 2026-01-29 17:19:53.618617685 +0000 UTC m=+0.175781607 container remove cc49af1aa4970ec492a496b95a7e471c622e96fac18a6cf7e39cb5de556a1dfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_germain, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 29 12:19:53 np0005601226 systemd[1]: libpod-conmon-cc49af1aa4970ec492a496b95a7e471c622e96fac18a6cf7e39cb5de556a1dfa.scope: Deactivated successfully.
Jan 29 12:19:53 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1101: 305 pgs: 305 active+clean; 116 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 410 KiB/s rd, 4.9 MiB/s wr, 130 op/s
Jan 29 12:19:53 np0005601226 podman[251418]: 2026-01-29 17:19:53.811628369 +0000 UTC m=+0.098438836 container create 186e117b083c59140e1b007f22f0d00e0683b1a9ac7bde990463d93912787dca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_feynman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:19:53 np0005601226 podman[251418]: 2026-01-29 17:19:53.729982602 +0000 UTC m=+0.016793089 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:19:53 np0005601226 systemd[1]: Started libpod-conmon-186e117b083c59140e1b007f22f0d00e0683b1a9ac7bde990463d93912787dca.scope.
Jan 29 12:19:53 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:19:53 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fa5ea46f241867b5f011adb15e069a1f424198e5b72d52846c68ced5e5434a7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:19:53 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fa5ea46f241867b5f011adb15e069a1f424198e5b72d52846c68ced5e5434a7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:19:53 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fa5ea46f241867b5f011adb15e069a1f424198e5b72d52846c68ced5e5434a7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:19:53 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fa5ea46f241867b5f011adb15e069a1f424198e5b72d52846c68ced5e5434a7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:19:53 np0005601226 podman[251418]: 2026-01-29 17:19:53.877053744 +0000 UTC m=+0.163864221 container init 186e117b083c59140e1b007f22f0d00e0683b1a9ac7bde990463d93912787dca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_feynman, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 29 12:19:53 np0005601226 podman[251418]: 2026-01-29 17:19:53.882067481 +0000 UTC m=+0.168877948 container start 186e117b083c59140e1b007f22f0d00e0683b1a9ac7bde990463d93912787dca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_feynman, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:19:53 np0005601226 podman[251418]: 2026-01-29 17:19:53.885398301 +0000 UTC m=+0.172208768 container attach 186e117b083c59140e1b007f22f0d00e0683b1a9ac7bde990463d93912787dca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_feynman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 29 12:19:54 np0005601226 nova_compute[239456]: 2026-01-29 17:19:54.495 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:19:54 np0005601226 lvm[251512]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 12:19:54 np0005601226 lvm[251513]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 12:19:54 np0005601226 lvm[251513]: VG ceph_vg1 finished
Jan 29 12:19:54 np0005601226 lvm[251512]: VG ceph_vg0 finished
Jan 29 12:19:54 np0005601226 lvm[251515]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 12:19:54 np0005601226 lvm[251515]: VG ceph_vg2 finished
Jan 29 12:19:54 np0005601226 suspicious_feynman[251434]: {}
Jan 29 12:19:54 np0005601226 systemd[1]: libpod-186e117b083c59140e1b007f22f0d00e0683b1a9ac7bde990463d93912787dca.scope: Deactivated successfully.
Jan 29 12:19:54 np0005601226 systemd[1]: libpod-186e117b083c59140e1b007f22f0d00e0683b1a9ac7bde990463d93912787dca.scope: Consumed 1.074s CPU time.
Jan 29 12:19:54 np0005601226 podman[251418]: 2026-01-29 17:19:54.652421144 +0000 UTC m=+0.939231621 container died 186e117b083c59140e1b007f22f0d00e0683b1a9ac7bde990463d93912787dca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_feynman, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:19:54 np0005601226 systemd[1]: var-lib-containers-storage-overlay-7fa5ea46f241867b5f011adb15e069a1f424198e5b72d52846c68ced5e5434a7-merged.mount: Deactivated successfully.
Jan 29 12:19:54 np0005601226 podman[251418]: 2026-01-29 17:19:54.719878264 +0000 UTC m=+1.006688731 container remove 186e117b083c59140e1b007f22f0d00e0683b1a9ac7bde990463d93912787dca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 29 12:19:54 np0005601226 systemd[1]: libpod-conmon-186e117b083c59140e1b007f22f0d00e0683b1a9ac7bde990463d93912787dca.scope: Deactivated successfully.
Jan 29 12:19:54 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 12:19:54 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:19:54 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 12:19:54 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:19:55 np0005601226 nova_compute[239456]: 2026-01-29 17:19:55.007 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:19:55 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:19:55 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:19:55 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:55.210 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '7a:bb:ca', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ee:2a:91:86:08:da'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:19:55 np0005601226 nova_compute[239456]: 2026-01-29 17:19:55.211 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:19:55 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:19:55.211 155625 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 29 12:19:55 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1102: 305 pgs: 305 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 396 KiB/s rd, 2.6 MiB/s wr, 76 op/s
Jan 29 12:19:55 np0005601226 nova_compute[239456]: 2026-01-29 17:19:55.953 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:19:56 np0005601226 nova_compute[239456]: 2026-01-29 17:19:56.536 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:19:57 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:19:57 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1103: 305 pgs: 305 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 390 KiB/s rd, 2.5 MiB/s wr, 75 op/s
Jan 29 12:19:59 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1104: 305 pgs: 305 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 29 12:20:00 np0005601226 nova_compute[239456]: 2026-01-29 17:20:00.010 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:20:00 np0005601226 nova_compute[239456]: 2026-01-29 17:20:00.956 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:20:01 np0005601226 nova_compute[239456]: 2026-01-29 17:20:01.656 239460 DEBUG oslo_concurrency.lockutils [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Acquiring lock "d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:20:01 np0005601226 nova_compute[239456]: 2026-01-29 17:20:01.657 239460 DEBUG oslo_concurrency.lockutils [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Lock "d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:20:01 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1105: 305 pgs: 305 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 204 KiB/s rd, 280 KiB/s wr, 47 op/s
Jan 29 12:20:01 np0005601226 nova_compute[239456]: 2026-01-29 17:20:01.686 239460 DEBUG nova.compute.manager [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 29 12:20:01 np0005601226 nova_compute[239456]: 2026-01-29 17:20:01.759 239460 DEBUG oslo_concurrency.lockutils [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:20:01 np0005601226 nova_compute[239456]: 2026-01-29 17:20:01.759 239460 DEBUG oslo_concurrency.lockutils [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:20:01 np0005601226 nova_compute[239456]: 2026-01-29 17:20:01.769 239460 DEBUG nova.virt.hardware [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 29 12:20:01 np0005601226 nova_compute[239456]: 2026-01-29 17:20:01.769 239460 INFO nova.compute.claims [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 29 12:20:01 np0005601226 nova_compute[239456]: 2026-01-29 17:20:01.889 239460 DEBUG oslo_concurrency.processutils [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:20:02 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:20:02 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3153256208' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:20:02 np0005601226 nova_compute[239456]: 2026-01-29 17:20:02.422 239460 DEBUG oslo_concurrency.processutils [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:20:02 np0005601226 nova_compute[239456]: 2026-01-29 17:20:02.427 239460 DEBUG nova.compute.provider_tree [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:20:02 np0005601226 nova_compute[239456]: 2026-01-29 17:20:02.442 239460 DEBUG nova.scheduler.client.report [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:20:02 np0005601226 nova_compute[239456]: 2026-01-29 17:20:02.463 239460 DEBUG oslo_concurrency.lockutils [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.703s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:20:02 np0005601226 nova_compute[239456]: 2026-01-29 17:20:02.463 239460 DEBUG nova.compute.manager [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 29 12:20:02 np0005601226 nova_compute[239456]: 2026-01-29 17:20:02.517 239460 DEBUG nova.compute.manager [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 29 12:20:02 np0005601226 nova_compute[239456]: 2026-01-29 17:20:02.517 239460 DEBUG nova.network.neutron [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 29 12:20:02 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:20:02 np0005601226 nova_compute[239456]: 2026-01-29 17:20:02.538 239460 INFO nova.virt.libvirt.driver [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 29 12:20:02 np0005601226 nova_compute[239456]: 2026-01-29 17:20:02.556 239460 DEBUG nova.compute.manager [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 29 12:20:02 np0005601226 nova_compute[239456]: 2026-01-29 17:20:02.658 239460 DEBUG nova.compute.manager [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 29 12:20:02 np0005601226 nova_compute[239456]: 2026-01-29 17:20:02.660 239460 DEBUG nova.virt.libvirt.driver [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 29 12:20:02 np0005601226 nova_compute[239456]: 2026-01-29 17:20:02.660 239460 INFO nova.virt.libvirt.driver [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Creating image(s)#033[00m
Jan 29 12:20:02 np0005601226 nova_compute[239456]: 2026-01-29 17:20:02.684 239460 DEBUG nova.storage.rbd_utils [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] rbd image d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:20:02 np0005601226 nova_compute[239456]: 2026-01-29 17:20:02.704 239460 DEBUG nova.storage.rbd_utils [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] rbd image d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:20:02 np0005601226 nova_compute[239456]: 2026-01-29 17:20:02.723 239460 DEBUG nova.storage.rbd_utils [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] rbd image d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:20:02 np0005601226 nova_compute[239456]: 2026-01-29 17:20:02.726 239460 DEBUG oslo_concurrency.processutils [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/619359f3f53a439a222a6a2a89408201f4394e5d --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:20:02 np0005601226 nova_compute[239456]: 2026-01-29 17:20:02.773 239460 DEBUG oslo_concurrency.processutils [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/619359f3f53a439a222a6a2a89408201f4394e5d --force-share --output=json" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:20:02 np0005601226 nova_compute[239456]: 2026-01-29 17:20:02.774 239460 DEBUG oslo_concurrency.lockutils [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Acquiring lock "619359f3f53a439a222a6a2a89408201f4394e5d" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:20:02 np0005601226 nova_compute[239456]: 2026-01-29 17:20:02.774 239460 DEBUG oslo_concurrency.lockutils [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Lock "619359f3f53a439a222a6a2a89408201f4394e5d" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:20:02 np0005601226 nova_compute[239456]: 2026-01-29 17:20:02.774 239460 DEBUG oslo_concurrency.lockutils [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Lock "619359f3f53a439a222a6a2a89408201f4394e5d" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:20:02 np0005601226 nova_compute[239456]: 2026-01-29 17:20:02.792 239460 DEBUG nova.storage.rbd_utils [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] rbd image d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:20:02 np0005601226 nova_compute[239456]: 2026-01-29 17:20:02.795 239460 DEBUG oslo_concurrency.processutils [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/619359f3f53a439a222a6a2a89408201f4394e5d d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:20:03 np0005601226 nova_compute[239456]: 2026-01-29 17:20:03.492 239460 DEBUG nova.policy [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd3463a84af564b968e67b687bc895548', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '815af3cf993b45cc8f2cdf73bf1d552c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 29 12:20:03 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1106: 305 pgs: 305 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 38 KiB/s wr, 14 op/s
Jan 29 12:20:04 np0005601226 nova_compute[239456]: 2026-01-29 17:20:04.528 239460 DEBUG oslo_concurrency.processutils [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/619359f3f53a439a222a6a2a89408201f4394e5d d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.732s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:20:04 np0005601226 nova_compute[239456]: 2026-01-29 17:20:04.581 239460 DEBUG nova.storage.rbd_utils [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] resizing rbd image d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 29 12:20:04 np0005601226 nova_compute[239456]: 2026-01-29 17:20:04.756 239460 DEBUG nova.objects.instance [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Lazy-loading 'migration_context' on Instance uuid d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:20:04 np0005601226 nova_compute[239456]: 2026-01-29 17:20:04.769 239460 DEBUG nova.virt.libvirt.driver [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 29 12:20:04 np0005601226 nova_compute[239456]: 2026-01-29 17:20:04.769 239460 DEBUG nova.virt.libvirt.driver [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Ensure instance console log exists: /var/lib/nova/instances/d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 29 12:20:04 np0005601226 nova_compute[239456]: 2026-01-29 17:20:04.770 239460 DEBUG oslo_concurrency.lockutils [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:20:04 np0005601226 nova_compute[239456]: 2026-01-29 17:20:04.770 239460 DEBUG oslo_concurrency.lockutils [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:20:04 np0005601226 nova_compute[239456]: 2026-01-29 17:20:04.770 239460 DEBUG oslo_concurrency.lockutils [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:20:04 np0005601226 nova_compute[239456]: 2026-01-29 17:20:04.925 239460 DEBUG nova.network.neutron [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Successfully created port: 6be42760-adf3-45d0-ae0d-44d988848eb0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 29 12:20:05 np0005601226 nova_compute[239456]: 2026-01-29 17:20:05.013 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:20:05 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:20:05.214 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ea6bcc65-2563-4fe6-9039-bca7261f4cf7, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:20:05 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1107: 305 pgs: 305 active+clean; 145 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 827 KiB/s wr, 17 op/s
Jan 29 12:20:05 np0005601226 nova_compute[239456]: 2026-01-29 17:20:05.958 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:20:06 np0005601226 nova_compute[239456]: 2026-01-29 17:20:06.533 239460 DEBUG nova.network.neutron [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Successfully updated port: 6be42760-adf3-45d0-ae0d-44d988848eb0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 29 12:20:06 np0005601226 nova_compute[239456]: 2026-01-29 17:20:06.548 239460 DEBUG oslo_concurrency.lockutils [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Acquiring lock "refresh_cache-d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:20:06 np0005601226 nova_compute[239456]: 2026-01-29 17:20:06.548 239460 DEBUG oslo_concurrency.lockutils [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Acquired lock "refresh_cache-d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:20:06 np0005601226 nova_compute[239456]: 2026-01-29 17:20:06.548 239460 DEBUG nova.network.neutron [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 29 12:20:06 np0005601226 nova_compute[239456]: 2026-01-29 17:20:06.725 239460 DEBUG nova.compute.manager [req-2a88f234-5c90-42ca-898a-1288216cde74 req-ace83d3c-e608-41ec-b735-cf38b5946923 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Received event network-changed-6be42760-adf3-45d0-ae0d-44d988848eb0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:20:06 np0005601226 nova_compute[239456]: 2026-01-29 17:20:06.725 239460 DEBUG nova.compute.manager [req-2a88f234-5c90-42ca-898a-1288216cde74 req-ace83d3c-e608-41ec-b735-cf38b5946923 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Refreshing instance network info cache due to event network-changed-6be42760-adf3-45d0-ae0d-44d988848eb0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 29 12:20:06 np0005601226 nova_compute[239456]: 2026-01-29 17:20:06.725 239460 DEBUG oslo_concurrency.lockutils [req-2a88f234-5c90-42ca-898a-1288216cde74 req-ace83d3c-e608-41ec-b735-cf38b5946923 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "refresh_cache-d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:20:07 np0005601226 nova_compute[239456]: 2026-01-29 17:20:07.272 239460 DEBUG nova.network.neutron [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 29 12:20:07 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:20:07 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1108: 305 pgs: 305 active+clean; 145 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 7.2 KiB/s rd, 807 KiB/s wr, 14 op/s
Jan 29 12:20:08 np0005601226 nova_compute[239456]: 2026-01-29 17:20:08.264 239460 DEBUG nova.network.neutron [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Updating instance_info_cache with network_info: [{"id": "6be42760-adf3-45d0-ae0d-44d988848eb0", "address": "fa:16:3e:8f:4b:3a", "network": {"id": "765ab7c4-f6eb-4a45-8c1b-00dc61ad3441", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-718755595-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "815af3cf993b45cc8f2cdf73bf1d552c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6be42760-ad", "ovs_interfaceid": "6be42760-adf3-45d0-ae0d-44d988848eb0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:20:08 np0005601226 nova_compute[239456]: 2026-01-29 17:20:08.281 239460 DEBUG oslo_concurrency.lockutils [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Releasing lock "refresh_cache-d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:20:08 np0005601226 nova_compute[239456]: 2026-01-29 17:20:08.282 239460 DEBUG nova.compute.manager [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Instance network_info: |[{"id": "6be42760-adf3-45d0-ae0d-44d988848eb0", "address": "fa:16:3e:8f:4b:3a", "network": {"id": "765ab7c4-f6eb-4a45-8c1b-00dc61ad3441", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-718755595-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "815af3cf993b45cc8f2cdf73bf1d552c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6be42760-ad", "ovs_interfaceid": "6be42760-adf3-45d0-ae0d-44d988848eb0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 29 12:20:08 np0005601226 nova_compute[239456]: 2026-01-29 17:20:08.282 239460 DEBUG oslo_concurrency.lockutils [req-2a88f234-5c90-42ca-898a-1288216cde74 req-ace83d3c-e608-41ec-b735-cf38b5946923 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquired lock "refresh_cache-d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:20:08 np0005601226 nova_compute[239456]: 2026-01-29 17:20:08.282 239460 DEBUG nova.network.neutron [req-2a88f234-5c90-42ca-898a-1288216cde74 req-ace83d3c-e608-41ec-b735-cf38b5946923 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Refreshing network info cache for port 6be42760-adf3-45d0-ae0d-44d988848eb0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 29 12:20:08 np0005601226 nova_compute[239456]: 2026-01-29 17:20:08.284 239460 DEBUG nova.virt.libvirt.driver [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Start _get_guest_xml network_info=[{"id": "6be42760-adf3-45d0-ae0d-44d988848eb0", "address": "fa:16:3e:8f:4b:3a", "network": {"id": "765ab7c4-f6eb-4a45-8c1b-00dc61ad3441", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-718755595-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "815af3cf993b45cc8f2cdf73bf1d552c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6be42760-ad", "ovs_interfaceid": "6be42760-adf3-45d0-ae0d-44d988848eb0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-29T17:13:26Z,direct_url=<?>,disk_format='qcow2',id=71879218-5462-43bb-aba6-6319695b24fd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='218d87653c0f4776a3f1900d36945229',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-29T17:13:29Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'boot_index': 0, 'encrypted': False, 'image_id': '71879218-5462-43bb-aba6-6319695b24fd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 29 12:20:08 np0005601226 nova_compute[239456]: 2026-01-29 17:20:08.289 239460 WARNING nova.virt.libvirt.driver [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 29 12:20:08 np0005601226 nova_compute[239456]: 2026-01-29 17:20:08.293 239460 DEBUG nova.virt.libvirt.host [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 29 12:20:08 np0005601226 nova_compute[239456]: 2026-01-29 17:20:08.293 239460 DEBUG nova.virt.libvirt.host [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 29 12:20:08 np0005601226 nova_compute[239456]: 2026-01-29 17:20:08.299 239460 DEBUG nova.virt.libvirt.host [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 29 12:20:08 np0005601226 nova_compute[239456]: 2026-01-29 17:20:08.300 239460 DEBUG nova.virt.libvirt.host [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 29 12:20:08 np0005601226 nova_compute[239456]: 2026-01-29 17:20:08.300 239460 DEBUG nova.virt.libvirt.driver [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 29 12:20:08 np0005601226 nova_compute[239456]: 2026-01-29 17:20:08.300 239460 DEBUG nova.virt.hardware [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-29T17:13:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='e367d44e-e23e-4b8e-90d1-56d09c8403b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-29T17:13:26Z,direct_url=<?>,disk_format='qcow2',id=71879218-5462-43bb-aba6-6319695b24fd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='218d87653c0f4776a3f1900d36945229',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-29T17:13:29Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 29 12:20:08 np0005601226 nova_compute[239456]: 2026-01-29 17:20:08.301 239460 DEBUG nova.virt.hardware [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 29 12:20:08 np0005601226 nova_compute[239456]: 2026-01-29 17:20:08.301 239460 DEBUG nova.virt.hardware [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 29 12:20:08 np0005601226 nova_compute[239456]: 2026-01-29 17:20:08.301 239460 DEBUG nova.virt.hardware [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 29 12:20:08 np0005601226 nova_compute[239456]: 2026-01-29 17:20:08.301 239460 DEBUG nova.virt.hardware [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 29 12:20:08 np0005601226 nova_compute[239456]: 2026-01-29 17:20:08.301 239460 DEBUG nova.virt.hardware [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 29 12:20:08 np0005601226 nova_compute[239456]: 2026-01-29 17:20:08.302 239460 DEBUG nova.virt.hardware [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 29 12:20:08 np0005601226 nova_compute[239456]: 2026-01-29 17:20:08.302 239460 DEBUG nova.virt.hardware [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 29 12:20:08 np0005601226 nova_compute[239456]: 2026-01-29 17:20:08.302 239460 DEBUG nova.virt.hardware [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 29 12:20:08 np0005601226 nova_compute[239456]: 2026-01-29 17:20:08.302 239460 DEBUG nova.virt.hardware [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 29 12:20:08 np0005601226 nova_compute[239456]: 2026-01-29 17:20:08.302 239460 DEBUG nova.virt.hardware [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
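Editor's note: the topology lines above show Nova enumerating guest CPU topologies for 1 vCPU with no flavor/image constraints, arriving at the single `1:1:1` result under the default `65536` limits. The enumeration can be sketched as below. This is an illustrative simplification, not the actual `nova.virt.hardware._get_possible_cpu_topologies` implementation; the function name and structure here are assumptions for demonstration only.

```python
# Simplified sketch: enumerate (sockets, cores, threads) triples whose
# product equals the vCPU count, bounded by the per-dimension limit seen
# in the log ("limits were sockets=65536, cores=65536, threads=65536").
# Illustrative only -- not Nova's actual code.
from typing import List, Tuple

MAX_DIM = 65536  # default upper bound per dimension, as logged


def possible_topologies(vcpus: int) -> List[Tuple[int, int, int]]:
    """Return every (sockets, cores, threads) factorization of vcpus."""
    topos = []
    for sockets in range(1, min(vcpus, MAX_DIM) + 1):
        if vcpus % sockets:
            continue
        rem = vcpus // sockets
        for cores in range(1, min(rem, MAX_DIM) + 1):
            if rem % cores:
                continue
            threads = rem // cores
            if threads <= MAX_DIM:
                topos.append((sockets, cores, threads))
    return topos
```

For `vcpus=1` this yields only `(1, 1, 1)`, matching the "Got 1 possible topologies" and "Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)]" lines above.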
Jan 29 12:20:08 np0005601226 nova_compute[239456]: 2026-01-29 17:20:08.305 239460 DEBUG oslo_concurrency.processutils [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:20:08 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:20:08 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4039635645' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:20:08 np0005601226 nova_compute[239456]: 2026-01-29 17:20:08.821 239460 DEBUG oslo_concurrency.processutils [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:20:08 np0005601226 nova_compute[239456]: 2026-01-29 17:20:08.841 239460 DEBUG nova.storage.rbd_utils [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] rbd image d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:20:08 np0005601226 nova_compute[239456]: 2026-01-29 17:20:08.844 239460 DEBUG oslo_concurrency.processutils [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:20:09 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:20:09 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3047130549' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:20:09 np0005601226 nova_compute[239456]: 2026-01-29 17:20:09.408 239460 DEBUG oslo_concurrency.processutils [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
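Editor's note: the `ceph mon dump --format=json` calls above are how the libvirt driver discovers monitor addresses before building the RBD `<host>` elements in the domain XML. A minimal sketch of parsing that output follows; the sample JSON is a hypothetical, heavily trimmed stand-in (field names follow the mon dump schema, values are illustrative), and `monitor_hosts` is a demonstration helper, not a Nova function.

```python
import json

# Hypothetical, trimmed example of `ceph mon dump --format=json` output.
sample = '''
{
  "epoch": 1,
  "fsid": "00000000-0000-0000-0000-000000000000",
  "mons": [
    {"rank": 0, "name": "compute-0",
     "addr": "192.168.122.100:6789/0"}
  ]
}
'''


def monitor_hosts(mon_dump_json: str):
    """Extract (host, port) pairs, as the RBD <host> elements need them."""
    dump = json.loads(mon_dump_json)
    hosts = []
    for mon in dump["mons"]:
        addr = mon["addr"].split("/")[0]   # drop the trailing nonce ("/0")
        host, port = addr.rsplit(":", 1)
        hosts.append((host, int(port)))
    return hosts
```

Here the single `("192.168.122.100", 6789)` pair corresponds to the `<host name="192.168.122.100" port="6789"/>` elements in the generated domain XML below.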
Jan 29 12:20:09 np0005601226 nova_compute[239456]: 2026-01-29 17:20:09.410 239460 DEBUG nova.virt.libvirt.vif [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-29T17:20:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-1126750815',display_name='tempest-VolumesBackupsTest-instance-1126750815',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-1126750815',id=6,image_ref='71879218-5462-43bb-aba6-6319695b24fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMzDsCY+7iLKyxKR/RPqyhuejs3RxupkCpjwrcLLN6bwiFn7asDIiuGZ3fgfzWQBWbR6PuAecg7zh1hlNNafsXWsMe0hZXYH/C8lEs9aP+WdD0oobkGb2HMs4pRlFxTogQ==',key_name='tempest-keypair-2099588373',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='815af3cf993b45cc8f2cdf73bf1d552c',ramdisk_id='',reservation_id='r-1fm2umih',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='71879218-5462-43bb-aba6-6319695b24fd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-2142983406',owner_user_name='tempest-VolumesBackupsTest-2142983406-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-29T17:20:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d3463a84af564b968e67b687bc895548',uuid=d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6be42760-adf3-45d0-ae0d-44d988848eb0", "address": "fa:16:3e:8f:4b:3a", "network": {"id": "765ab7c4-f6eb-4a45-8c1b-00dc61ad3441", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-718755595-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "815af3cf993b45cc8f2cdf73bf1d552c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6be42760-ad", "ovs_interfaceid": "6be42760-adf3-45d0-ae0d-44d988848eb0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 29 12:20:09 np0005601226 nova_compute[239456]: 2026-01-29 17:20:09.410 239460 DEBUG nova.network.os_vif_util [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Converting VIF {"id": "6be42760-adf3-45d0-ae0d-44d988848eb0", "address": "fa:16:3e:8f:4b:3a", "network": {"id": "765ab7c4-f6eb-4a45-8c1b-00dc61ad3441", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-718755595-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "815af3cf993b45cc8f2cdf73bf1d552c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6be42760-ad", "ovs_interfaceid": "6be42760-adf3-45d0-ae0d-44d988848eb0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 29 12:20:09 np0005601226 nova_compute[239456]: 2026-01-29 17:20:09.411 239460 DEBUG nova.network.os_vif_util [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8f:4b:3a,bridge_name='br-int',has_traffic_filtering=True,id=6be42760-adf3-45d0-ae0d-44d988848eb0,network=Network(765ab7c4-f6eb-4a45-8c1b-00dc61ad3441),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6be42760-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 29 12:20:09 np0005601226 nova_compute[239456]: 2026-01-29 17:20:09.412 239460 DEBUG nova.objects.instance [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Lazy-loading 'pci_devices' on Instance uuid d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:20:09 np0005601226 nova_compute[239456]: 2026-01-29 17:20:09.425 239460 DEBUG nova.virt.libvirt.driver [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] End _get_guest_xml xml=<domain type="kvm">
Jan 29 12:20:09 np0005601226 nova_compute[239456]:  <uuid>d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455</uuid>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:  <name>instance-00000006</name>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:  <memory>131072</memory>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:  <vcpu>1</vcpu>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:  <metadata>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 29 12:20:09 np0005601226 nova_compute[239456]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:      <nova:name>tempest-VolumesBackupsTest-instance-1126750815</nova:name>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:      <nova:creationTime>2026-01-29 17:20:08</nova:creationTime>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:      <nova:flavor name="m1.nano">
Jan 29 12:20:09 np0005601226 nova_compute[239456]:        <nova:memory>128</nova:memory>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:        <nova:disk>1</nova:disk>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:        <nova:swap>0</nova:swap>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:        <nova:ephemeral>0</nova:ephemeral>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:        <nova:vcpus>1</nova:vcpus>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:      </nova:flavor>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:      <nova:owner>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:        <nova:user uuid="d3463a84af564b968e67b687bc895548">tempest-VolumesBackupsTest-2142983406-project-member</nova:user>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:        <nova:project uuid="815af3cf993b45cc8f2cdf73bf1d552c">tempest-VolumesBackupsTest-2142983406</nova:project>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:      </nova:owner>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:      <nova:root type="image" uuid="71879218-5462-43bb-aba6-6319695b24fd"/>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:      <nova:ports>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:        <nova:port uuid="6be42760-adf3-45d0-ae0d-44d988848eb0">
Jan 29 12:20:09 np0005601226 nova_compute[239456]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:        </nova:port>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:      </nova:ports>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:    </nova:instance>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:  </metadata>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:  <sysinfo type="smbios">
Jan 29 12:20:09 np0005601226 nova_compute[239456]:    <system>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:      <entry name="manufacturer">RDO</entry>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:      <entry name="product">OpenStack Compute</entry>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:      <entry name="serial">d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455</entry>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:      <entry name="uuid">d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455</entry>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:      <entry name="family">Virtual Machine</entry>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:    </system>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:  </sysinfo>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:  <os>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:    <boot dev="hd"/>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:    <smbios mode="sysinfo"/>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:  </os>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:  <features>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:    <acpi/>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:    <apic/>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:    <vmcoreinfo/>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:  </features>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:  <clock offset="utc">
Jan 29 12:20:09 np0005601226 nova_compute[239456]:    <timer name="pit" tickpolicy="delay"/>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:    <timer name="hpet" present="no"/>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:  </clock>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:  <cpu mode="host-model" match="exact">
Jan 29 12:20:09 np0005601226 nova_compute[239456]:    <topology sockets="1" cores="1" threads="1"/>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:  </cpu>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:  <devices>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:    <disk type="network" device="disk">
Jan 29 12:20:09 np0005601226 nova_compute[239456]:      <driver type="raw" cache="none"/>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:      <source protocol="rbd" name="vms/d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455_disk">
Jan 29 12:20:09 np0005601226 nova_compute[239456]:        <host name="192.168.122.100" port="6789"/>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:      </source>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:      <auth username="openstack">
Jan 29 12:20:09 np0005601226 nova_compute[239456]:        <secret type="ceph" uuid="cc5c72e3-31e0-58b9-8731-456117d38f4a"/>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:      </auth>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:      <target dev="vda" bus="virtio"/>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:    </disk>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:    <disk type="network" device="cdrom">
Jan 29 12:20:09 np0005601226 nova_compute[239456]:      <driver type="raw" cache="none"/>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:      <source protocol="rbd" name="vms/d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455_disk.config">
Jan 29 12:20:09 np0005601226 nova_compute[239456]:        <host name="192.168.122.100" port="6789"/>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:      </source>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:      <auth username="openstack">
Jan 29 12:20:09 np0005601226 nova_compute[239456]:        <secret type="ceph" uuid="cc5c72e3-31e0-58b9-8731-456117d38f4a"/>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:      </auth>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:      <target dev="sda" bus="sata"/>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:    </disk>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:    <interface type="ethernet">
Jan 29 12:20:09 np0005601226 nova_compute[239456]:      <mac address="fa:16:3e:8f:4b:3a"/>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:      <model type="virtio"/>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:      <driver name="vhost" rx_queue_size="512"/>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:      <mtu size="1442"/>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:      <target dev="tap6be42760-ad"/>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:    </interface>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:    <serial type="pty">
Jan 29 12:20:09 np0005601226 nova_compute[239456]:      <log file="/var/lib/nova/instances/d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455/console.log" append="off"/>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:    </serial>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:    <video>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:      <model type="virtio"/>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:    </video>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:    <input type="tablet" bus="usb"/>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:    <rng model="virtio">
Jan 29 12:20:09 np0005601226 nova_compute[239456]:      <backend model="random">/dev/urandom</backend>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:    </rng>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root"/>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:    <controller type="usb" index="0"/>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:    <memballoon model="virtio">
Jan 29 12:20:09 np0005601226 nova_compute[239456]:      <stats period="10"/>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:    </memballoon>
Jan 29 12:20:09 np0005601226 nova_compute[239456]:  </devices>
Jan 29 12:20:09 np0005601226 nova_compute[239456]: </domain>
Jan 29 12:20:09 np0005601226 nova_compute[239456]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
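Editor's note: when debugging a dumped domain XML like the one above, it is often handy to pull out the RBD disk mappings programmatically. A self-contained sketch using the stdlib follows; the XML here is a minimal excerpt of the logged domain, and `rbd_sources` is an illustrative helper, not part of Nova or libvirt.

```python
import xml.etree.ElementTree as ET

# Minimal excerpt of the domain XML logged above.
domain_xml = """
<domain type="kvm">
  <devices>
    <disk type="network" device="disk">
      <source protocol="rbd" name="vms/d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455_disk">
        <host name="192.168.122.100" port="6789"/>
      </source>
      <target dev="vda" bus="virtio"/>
    </disk>
    <disk type="network" device="cdrom">
      <source protocol="rbd" name="vms/d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455_disk.config">
        <host name="192.168.122.100" port="6789"/>
      </source>
      <target dev="sda" bus="sata"/>
    </disk>
  </devices>
</domain>
"""


def rbd_sources(xml_text: str):
    """Map each guest target device to its backing RBD image name."""
    root = ET.fromstring(xml_text)
    out = {}
    for disk in root.iter("disk"):
        src = disk.find("source")
        tgt = disk.find("target")
        if src is not None and src.get("protocol") == "rbd":
            out[tgt.get("dev")] = src.get("name")
    return out
```

This recovers the two disks seen above: `vda` backed by the instance's `_disk` image in the `vms` pool, and the `sda` config-drive CD-ROM backed by `_disk.config`.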
Jan 29 12:20:09 np0005601226 nova_compute[239456]: 2026-01-29 17:20:09.426 239460 DEBUG nova.compute.manager [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Preparing to wait for external event network-vif-plugged-6be42760-adf3-45d0-ae0d-44d988848eb0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 29 12:20:09 np0005601226 nova_compute[239456]: 2026-01-29 17:20:09.427 239460 DEBUG oslo_concurrency.lockutils [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Acquiring lock "d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:20:09 np0005601226 nova_compute[239456]: 2026-01-29 17:20:09.427 239460 DEBUG oslo_concurrency.lockutils [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Lock "d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:20:09 np0005601226 nova_compute[239456]: 2026-01-29 17:20:09.427 239460 DEBUG oslo_concurrency.lockutils [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Lock "d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:20:09 np0005601226 nova_compute[239456]: 2026-01-29 17:20:09.428 239460 DEBUG nova.virt.libvirt.vif [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-29T17:20:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-1126750815',display_name='tempest-VolumesBackupsTest-instance-1126750815',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-1126750815',id=6,image_ref='71879218-5462-43bb-aba6-6319695b24fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMzDsCY+7iLKyxKR/RPqyhuejs3RxupkCpjwrcLLN6bwiFn7asDIiuGZ3fgfzWQBWbR6PuAecg7zh1hlNNafsXWsMe0hZXYH/C8lEs9aP+WdD0oobkGb2HMs4pRlFxTogQ==',key_name='tempest-keypair-2099588373',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='815af3cf993b45cc8f2cdf73bf1d552c',ramdisk_id='',reservation_id='r-1fm2umih',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='71879218-5462-43bb-aba6-6319695b24fd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-2142983406',owner_user_name='tempest-VolumesBackupsTest-2142983406-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-29T17:20:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d3463a84af564b968e67b687bc895548',uuid=d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6be42760-adf3-45d0-ae0d-44d988848eb0", "address": "fa:16:3e:8f:4b:3a", "network": {"id": "765ab7c4-f6eb-4a45-8c1b-00dc61ad3441", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-718755595-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "815af3cf993b45cc8f2cdf73bf1d552c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6be42760-ad", "ovs_interfaceid": "6be42760-adf3-45d0-ae0d-44d988848eb0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 29 12:20:09 np0005601226 nova_compute[239456]: 2026-01-29 17:20:09.428 239460 DEBUG nova.network.os_vif_util [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Converting VIF {"id": "6be42760-adf3-45d0-ae0d-44d988848eb0", "address": "fa:16:3e:8f:4b:3a", "network": {"id": "765ab7c4-f6eb-4a45-8c1b-00dc61ad3441", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-718755595-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "815af3cf993b45cc8f2cdf73bf1d552c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6be42760-ad", "ovs_interfaceid": "6be42760-adf3-45d0-ae0d-44d988848eb0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 29 12:20:09 np0005601226 nova_compute[239456]: 2026-01-29 17:20:09.429 239460 DEBUG nova.network.os_vif_util [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8f:4b:3a,bridge_name='br-int',has_traffic_filtering=True,id=6be42760-adf3-45d0-ae0d-44d988848eb0,network=Network(765ab7c4-f6eb-4a45-8c1b-00dc61ad3441),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6be42760-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 29 12:20:09 np0005601226 nova_compute[239456]: 2026-01-29 17:20:09.429 239460 DEBUG os_vif [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8f:4b:3a,bridge_name='br-int',has_traffic_filtering=True,id=6be42760-adf3-45d0-ae0d-44d988848eb0,network=Network(765ab7c4-f6eb-4a45-8c1b-00dc61ad3441),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6be42760-ad') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 29 12:20:09 np0005601226 nova_compute[239456]: 2026-01-29 17:20:09.430 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:20:09 np0005601226 nova_compute[239456]: 2026-01-29 17:20:09.430 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:20:09 np0005601226 nova_compute[239456]: 2026-01-29 17:20:09.431 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 29 12:20:09 np0005601226 nova_compute[239456]: 2026-01-29 17:20:09.434 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:20:09 np0005601226 nova_compute[239456]: 2026-01-29 17:20:09.434 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6be42760-ad, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:20:09 np0005601226 nova_compute[239456]: 2026-01-29 17:20:09.434 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6be42760-ad, col_values=(('external_ids', {'iface-id': '6be42760-adf3-45d0-ae0d-44d988848eb0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8f:4b:3a', 'vm-uuid': 'd64d6fd1-4f7b-4765-8b1c-1b7e6d42c455'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:20:09 np0005601226 nova_compute[239456]: 2026-01-29 17:20:09.436 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:20:09 np0005601226 NetworkManager[49020]: <info>  [1769707209.4371] manager: (tap6be42760-ad): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/45)
Jan 29 12:20:09 np0005601226 nova_compute[239456]: 2026-01-29 17:20:09.438 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 29 12:20:09 np0005601226 nova_compute[239456]: 2026-01-29 17:20:09.442 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:20:09 np0005601226 nova_compute[239456]: 2026-01-29 17:20:09.443 239460 INFO os_vif [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8f:4b:3a,bridge_name='br-int',has_traffic_filtering=True,id=6be42760-adf3-45d0-ae0d-44d988848eb0,network=Network(765ab7c4-f6eb-4a45-8c1b-00dc61ad3441),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6be42760-ad')#033[00m
Jan 29 12:20:09 np0005601226 nova_compute[239456]: 2026-01-29 17:20:09.462 239460 DEBUG nova.network.neutron [req-2a88f234-5c90-42ca-898a-1288216cde74 req-ace83d3c-e608-41ec-b735-cf38b5946923 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Updated VIF entry in instance network info cache for port 6be42760-adf3-45d0-ae0d-44d988848eb0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 29 12:20:09 np0005601226 nova_compute[239456]: 2026-01-29 17:20:09.462 239460 DEBUG nova.network.neutron [req-2a88f234-5c90-42ca-898a-1288216cde74 req-ace83d3c-e608-41ec-b735-cf38b5946923 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Updating instance_info_cache with network_info: [{"id": "6be42760-adf3-45d0-ae0d-44d988848eb0", "address": "fa:16:3e:8f:4b:3a", "network": {"id": "765ab7c4-f6eb-4a45-8c1b-00dc61ad3441", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-718755595-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "815af3cf993b45cc8f2cdf73bf1d552c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6be42760-ad", "ovs_interfaceid": "6be42760-adf3-45d0-ae0d-44d988848eb0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:20:09 np0005601226 nova_compute[239456]: 2026-01-29 17:20:09.476 239460 DEBUG oslo_concurrency.lockutils [req-2a88f234-5c90-42ca-898a-1288216cde74 req-ace83d3c-e608-41ec-b735-cf38b5946923 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Releasing lock "refresh_cache-d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:20:09 np0005601226 nova_compute[239456]: 2026-01-29 17:20:09.510 239460 DEBUG nova.virt.libvirt.driver [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:20:09 np0005601226 nova_compute[239456]: 2026-01-29 17:20:09.510 239460 DEBUG nova.virt.libvirt.driver [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:20:09 np0005601226 nova_compute[239456]: 2026-01-29 17:20:09.510 239460 DEBUG nova.virt.libvirt.driver [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] No VIF found with MAC fa:16:3e:8f:4b:3a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 29 12:20:09 np0005601226 nova_compute[239456]: 2026-01-29 17:20:09.511 239460 INFO nova.virt.libvirt.driver [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Using config drive#033[00m
Jan 29 12:20:09 np0005601226 podman[251811]: 2026-01-29 17:20:09.523872927 +0000 UTC m=+0.049727187 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 29 12:20:09 np0005601226 nova_compute[239456]: 2026-01-29 17:20:09.531 239460 DEBUG nova.storage.rbd_utils [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] rbd image d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:20:09 np0005601226 podman[251812]: 2026-01-29 17:20:09.568991268 +0000 UTC m=+0.093184473 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 29 12:20:09 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1109: 305 pgs: 305 active+clean; 167 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 29 12:20:09 np0005601226 nova_compute[239456]: 2026-01-29 17:20:09.783 239460 INFO nova.virt.libvirt.driver [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Creating config drive at /var/lib/nova/instances/d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455/disk.config#033[00m
Jan 29 12:20:09 np0005601226 nova_compute[239456]: 2026-01-29 17:20:09.788 239460 DEBUG oslo_concurrency.processutils [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnrhweko5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:20:09 np0005601226 nova_compute[239456]: 2026-01-29 17:20:09.909 239460 DEBUG oslo_concurrency.processutils [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnrhweko5" returned: 0 in 0.121s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:20:09 np0005601226 nova_compute[239456]: 2026-01-29 17:20:09.934 239460 DEBUG nova.storage.rbd_utils [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] rbd image d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:20:09 np0005601226 nova_compute[239456]: 2026-01-29 17:20:09.937 239460 DEBUG oslo_concurrency.processutils [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455/disk.config d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:20:10 np0005601226 nova_compute[239456]: 2026-01-29 17:20:10.414 239460 DEBUG oslo_concurrency.processutils [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455/disk.config d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:20:10 np0005601226 nova_compute[239456]: 2026-01-29 17:20:10.415 239460 INFO nova.virt.libvirt.driver [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Deleting local config drive /var/lib/nova/instances/d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455/disk.config because it was imported into RBD.#033[00m
Jan 29 12:20:10 np0005601226 kernel: tap6be42760-ad: entered promiscuous mode
Jan 29 12:20:10 np0005601226 NetworkManager[49020]: <info>  [1769707210.4467] manager: (tap6be42760-ad): new Tun device (/org/freedesktop/NetworkManager/Devices/46)
Jan 29 12:20:10 np0005601226 ovn_controller[145556]: 2026-01-29T17:20:10Z|00068|binding|INFO|Claiming lport 6be42760-adf3-45d0-ae0d-44d988848eb0 for this chassis.
Jan 29 12:20:10 np0005601226 ovn_controller[145556]: 2026-01-29T17:20:10Z|00069|binding|INFO|6be42760-adf3-45d0-ae0d-44d988848eb0: Claiming fa:16:3e:8f:4b:3a 10.100.0.6
Jan 29 12:20:10 np0005601226 nova_compute[239456]: 2026-01-29 17:20:10.447 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:20:10 np0005601226 nova_compute[239456]: 2026-01-29 17:20:10.455 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:20:10 np0005601226 ovn_controller[145556]: 2026-01-29T17:20:10Z|00070|binding|INFO|Setting lport 6be42760-adf3-45d0-ae0d-44d988848eb0 ovn-installed in OVS
Jan 29 12:20:10 np0005601226 ovn_controller[145556]: 2026-01-29T17:20:10Z|00071|binding|INFO|Setting lport 6be42760-adf3-45d0-ae0d-44d988848eb0 up in Southbound
Jan 29 12:20:10 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:20:10.454 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8f:4b:3a 10.100.0.6'], port_security=['fa:16:3e:8f:4b:3a 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'd64d6fd1-4f7b-4765-8b1c-1b7e6d42c455', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-765ab7c4-f6eb-4a45-8c1b-00dc61ad3441', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '815af3cf993b45cc8f2cdf73bf1d552c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '08229a17-4a48-4b26-bd20-8db0c8a3185f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3ddf8c3b-2084-4923-8e76-31ca07b64cbd, chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], logical_port=6be42760-adf3-45d0-ae0d-44d988848eb0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:20:10 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:20:10.455 155625 INFO neutron.agent.ovn.metadata.agent [-] Port 6be42760-adf3-45d0-ae0d-44d988848eb0 in datapath 765ab7c4-f6eb-4a45-8c1b-00dc61ad3441 bound to our chassis#033[00m
Jan 29 12:20:10 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:20:10.457 155625 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 765ab7c4-f6eb-4a45-8c1b-00dc61ad3441#033[00m
Jan 29 12:20:10 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:20:10.465 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[7ed2220c-6fd1-4d8b-b1a6-ea97bc83ec72]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:20:10 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:20:10.466 155625 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap765ab7c4-f1 in ovnmeta-765ab7c4-f6eb-4a45-8c1b-00dc61ad3441 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 29 12:20:10 np0005601226 systemd-udevd[251927]: Network interface NamePolicy= disabled on kernel command line.
Jan 29 12:20:10 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:20:10.468 246354 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap765ab7c4-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 29 12:20:10 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:20:10.468 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[5f02b993-5ecd-4541-9d7f-ef2d22dfdcfb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:20:10 np0005601226 systemd-machined[207561]: New machine qemu-6-instance-00000006.
Jan 29 12:20:10 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:20:10.470 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[d077b5fd-5089-4033-8da8-da71e1c07f32]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:20:10 np0005601226 NetworkManager[49020]: <info>  [1769707210.4790] device (tap6be42760-ad): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 29 12:20:10 np0005601226 NetworkManager[49020]: <info>  [1769707210.4797] device (tap6be42760-ad): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 29 12:20:10 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:20:10.479 156164 DEBUG oslo.privsep.daemon [-] privsep: reply[9e069d07-954c-461e-9393-bf9453ff8b44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:20:10 np0005601226 systemd[1]: Started Virtual Machine qemu-6-instance-00000006.
Jan 29 12:20:10 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:20:10.491 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[a63a2b4d-3925-4082-8bdb-506d17165713]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:20:10 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:20:10.512 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[164426f2-1c9d-434a-ad65-758bdb485d52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:20:10 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:20:10.518 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[73a69714-25db-4df0-bfee-38d3a4924c32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:20:10 np0005601226 NetworkManager[49020]: <info>  [1769707210.5189] manager: (tap765ab7c4-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/47)
Jan 29 12:20:10 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:20:10.541 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[85fb8533-de00-455e-84aa-81186dfdfacd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:20:10 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:20:10.543 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[7dd5ca25-e4b8-4d0d-bab8-45a17bc2c391]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:20:10 np0005601226 NetworkManager[49020]: <info>  [1769707210.5565] device (tap765ab7c4-f0): carrier: link connected
Jan 29 12:20:10 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:20:10.560 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[947d8930-ed88-436f-b5f6-c814b40045d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:20:10 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:20:10.572 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[16afd799-54a4-4375-b634-5a1bb043fd95]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap765ab7c4-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:73:dc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 456405, 'reachable_time': 41050, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251960, 'error': None, 'target': 'ovnmeta-765ab7c4-f6eb-4a45-8c1b-00dc61ad3441', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:20:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:20:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:20:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:20:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:20:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:20:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:20:10 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:20:10.585 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[dc19550d-c652-4d74-b309-abfd679f6880]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4b:73dc'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 456405, 'tstamp': 456405}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251961, 'error': None, 'target': 'ovnmeta-765ab7c4-f6eb-4a45-8c1b-00dc61ad3441', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:20:10 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:20:10.599 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[5466bad8-dd5a-4431-be0a-52acf1ee6582]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap765ab7c4-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:73:dc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 456405, 'reachable_time': 41050, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 251962, 'error': None, 'target': 'ovnmeta-765ab7c4-f6eb-4a45-8c1b-00dc61ad3441', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:20:10 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:20:10.625 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[36174e4b-c5bf-4e57-b207-85b8b8ba2733]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:20:10 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:20:10.664 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[9dd4e97d-75e2-4ffd-aef9-35206ed69993]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:20:10 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:20:10.666 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap765ab7c4-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:20:10 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:20:10.666 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 29 12:20:10 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:20:10.666 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap765ab7c4-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:20:10 np0005601226 kernel: tap765ab7c4-f0: entered promiscuous mode
Jan 29 12:20:10 np0005601226 NetworkManager[49020]: <info>  [1769707210.6691] manager: (tap765ab7c4-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Jan 29 12:20:10 np0005601226 nova_compute[239456]: 2026-01-29 17:20:10.669 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:20:10 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:20:10.672 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap765ab7c4-f0, col_values=(('external_ids', {'iface-id': '07f2e2bc-3dba-4506-9241-0e092dfbeda9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:20:10 np0005601226 ovn_controller[145556]: 2026-01-29T17:20:10Z|00072|binding|INFO|Releasing lport 07f2e2bc-3dba-4506-9241-0e092dfbeda9 from this chassis (sb_readonly=0)
Jan 29 12:20:10 np0005601226 nova_compute[239456]: 2026-01-29 17:20:10.673 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:20:10 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:20:10.675 155625 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/765ab7c4-f6eb-4a45-8c1b-00dc61ad3441.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/765ab7c4-f6eb-4a45-8c1b-00dc61ad3441.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 29 12:20:10 np0005601226 nova_compute[239456]: 2026-01-29 17:20:10.678 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:20:10 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:20:10.679 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[1f1ee3bd-79d4-4e7b-b7aa-31fa6d8106ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:20:10 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:20:10.680 155625 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 29 12:20:10 np0005601226 ovn_metadata_agent[155620]: global
Jan 29 12:20:10 np0005601226 ovn_metadata_agent[155620]:    log         /dev/log local0 debug
Jan 29 12:20:10 np0005601226 ovn_metadata_agent[155620]:    log-tag     haproxy-metadata-proxy-765ab7c4-f6eb-4a45-8c1b-00dc61ad3441
Jan 29 12:20:10 np0005601226 ovn_metadata_agent[155620]:    user        root
Jan 29 12:20:10 np0005601226 ovn_metadata_agent[155620]:    group       root
Jan 29 12:20:10 np0005601226 ovn_metadata_agent[155620]:    maxconn     1024
Jan 29 12:20:10 np0005601226 ovn_metadata_agent[155620]:    pidfile     /var/lib/neutron/external/pids/765ab7c4-f6eb-4a45-8c1b-00dc61ad3441.pid.haproxy
Jan 29 12:20:10 np0005601226 ovn_metadata_agent[155620]:    daemon
Jan 29 12:20:10 np0005601226 ovn_metadata_agent[155620]: 
Jan 29 12:20:10 np0005601226 ovn_metadata_agent[155620]: defaults
Jan 29 12:20:10 np0005601226 ovn_metadata_agent[155620]:    log global
Jan 29 12:20:10 np0005601226 ovn_metadata_agent[155620]:    mode http
Jan 29 12:20:10 np0005601226 ovn_metadata_agent[155620]:    option httplog
Jan 29 12:20:10 np0005601226 ovn_metadata_agent[155620]:    option dontlognull
Jan 29 12:20:10 np0005601226 ovn_metadata_agent[155620]:    option http-server-close
Jan 29 12:20:10 np0005601226 ovn_metadata_agent[155620]:    option forwardfor
Jan 29 12:20:10 np0005601226 ovn_metadata_agent[155620]:    retries                 3
Jan 29 12:20:10 np0005601226 ovn_metadata_agent[155620]:    timeout http-request    30s
Jan 29 12:20:10 np0005601226 ovn_metadata_agent[155620]:    timeout connect         30s
Jan 29 12:20:10 np0005601226 ovn_metadata_agent[155620]:    timeout client          32s
Jan 29 12:20:10 np0005601226 ovn_metadata_agent[155620]:    timeout server          32s
Jan 29 12:20:10 np0005601226 ovn_metadata_agent[155620]:    timeout http-keep-alive 30s
Jan 29 12:20:10 np0005601226 ovn_metadata_agent[155620]: 
Jan 29 12:20:10 np0005601226 ovn_metadata_agent[155620]: 
Jan 29 12:20:10 np0005601226 ovn_metadata_agent[155620]: listen listener
Jan 29 12:20:10 np0005601226 ovn_metadata_agent[155620]:    bind 169.254.169.254:80
Jan 29 12:20:10 np0005601226 ovn_metadata_agent[155620]:    server metadata /var/lib/neutron/metadata_proxy
Jan 29 12:20:10 np0005601226 ovn_metadata_agent[155620]:    http-request add-header X-OVN-Network-ID 765ab7c4-f6eb-4a45-8c1b-00dc61ad3441
Jan 29 12:20:10 np0005601226 ovn_metadata_agent[155620]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 29 12:20:10 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:20:10.681 155625 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-765ab7c4-f6eb-4a45-8c1b-00dc61ad3441', 'env', 'PROCESS_TAG=haproxy-765ab7c4-f6eb-4a45-8c1b-00dc61ad3441', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/765ab7c4-f6eb-4a45-8c1b-00dc61ad3441.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 29 12:20:10 np0005601226 nova_compute[239456]: 2026-01-29 17:20:10.724 239460 DEBUG nova.compute.manager [req-30393d06-bada-42e1-8f25-c3b2c9e7075e req-92cf6eda-6649-4e0a-bb99-1d0d20b8f6a7 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Received event network-vif-plugged-6be42760-adf3-45d0-ae0d-44d988848eb0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:20:10 np0005601226 nova_compute[239456]: 2026-01-29 17:20:10.724 239460 DEBUG oslo_concurrency.lockutils [req-30393d06-bada-42e1-8f25-c3b2c9e7075e req-92cf6eda-6649-4e0a-bb99-1d0d20b8f6a7 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:20:10 np0005601226 nova_compute[239456]: 2026-01-29 17:20:10.725 239460 DEBUG oslo_concurrency.lockutils [req-30393d06-bada-42e1-8f25-c3b2c9e7075e req-92cf6eda-6649-4e0a-bb99-1d0d20b8f6a7 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:20:10 np0005601226 nova_compute[239456]: 2026-01-29 17:20:10.725 239460 DEBUG oslo_concurrency.lockutils [req-30393d06-bada-42e1-8f25-c3b2c9e7075e req-92cf6eda-6649-4e0a-bb99-1d0d20b8f6a7 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:20:10 np0005601226 nova_compute[239456]: 2026-01-29 17:20:10.725 239460 DEBUG nova.compute.manager [req-30393d06-bada-42e1-8f25-c3b2c9e7075e req-92cf6eda-6649-4e0a-bb99-1d0d20b8f6a7 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Processing event network-vif-plugged-6be42760-adf3-45d0-ae0d-44d988848eb0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 29 12:20:10 np0005601226 nova_compute[239456]: 2026-01-29 17:20:10.960 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:20:11 np0005601226 podman[251994]: 2026-01-29 17:20:10.956191277 +0000 UTC m=+0.015657948 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 29 12:20:11 np0005601226 podman[251994]: 2026-01-29 17:20:11.085491285 +0000 UTC m=+0.144957936 container create aaf77e603efec493bafad2d2aa04ed17c59ebc543de9909e0139eabe04979619 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-765ab7c4-f6eb-4a45-8c1b-00dc61ad3441, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Jan 29 12:20:11 np0005601226 systemd[1]: Started libpod-conmon-aaf77e603efec493bafad2d2aa04ed17c59ebc543de9909e0139eabe04979619.scope.
Jan 29 12:20:11 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:20:11 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/213c63dfbedd898219df26a3552dda46baf1b98afc2992dd32493a5268a83008/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 29 12:20:11 np0005601226 podman[251994]: 2026-01-29 17:20:11.189625515 +0000 UTC m=+0.249092196 container init aaf77e603efec493bafad2d2aa04ed17c59ebc543de9909e0139eabe04979619 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-765ab7c4-f6eb-4a45-8c1b-00dc61ad3441, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_managed=true)
Jan 29 12:20:11 np0005601226 podman[251994]: 2026-01-29 17:20:11.193435889 +0000 UTC m=+0.252902540 container start aaf77e603efec493bafad2d2aa04ed17c59ebc543de9909e0139eabe04979619 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-765ab7c4-f6eb-4a45-8c1b-00dc61ad3441, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 29 12:20:11 np0005601226 neutron-haproxy-ovnmeta-765ab7c4-f6eb-4a45-8c1b-00dc61ad3441[252024]: [NOTICE]   (252049) : New worker (252055) forked
Jan 29 12:20:11 np0005601226 neutron-haproxy-ovnmeta-765ab7c4-f6eb-4a45-8c1b-00dc61ad3441[252024]: [NOTICE]   (252049) : Loading success.
Jan 29 12:20:11 np0005601226 nova_compute[239456]: 2026-01-29 17:20:11.281 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769707211.2810876, d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:20:11 np0005601226 nova_compute[239456]: 2026-01-29 17:20:11.281 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] VM Started (Lifecycle Event)#033[00m
Jan 29 12:20:11 np0005601226 nova_compute[239456]: 2026-01-29 17:20:11.284 239460 DEBUG nova.compute.manager [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 29 12:20:11 np0005601226 nova_compute[239456]: 2026-01-29 17:20:11.287 239460 DEBUG nova.virt.libvirt.driver [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 29 12:20:11 np0005601226 nova_compute[239456]: 2026-01-29 17:20:11.290 239460 INFO nova.virt.libvirt.driver [-] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Instance spawned successfully.#033[00m
Jan 29 12:20:11 np0005601226 nova_compute[239456]: 2026-01-29 17:20:11.290 239460 DEBUG nova.virt.libvirt.driver [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 29 12:20:11 np0005601226 nova_compute[239456]: 2026-01-29 17:20:11.304 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:20:11 np0005601226 nova_compute[239456]: 2026-01-29 17:20:11.309 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 29 12:20:11 np0005601226 nova_compute[239456]: 2026-01-29 17:20:11.313 239460 DEBUG nova.virt.libvirt.driver [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:20:11 np0005601226 nova_compute[239456]: 2026-01-29 17:20:11.313 239460 DEBUG nova.virt.libvirt.driver [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:20:11 np0005601226 nova_compute[239456]: 2026-01-29 17:20:11.313 239460 DEBUG nova.virt.libvirt.driver [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:20:11 np0005601226 nova_compute[239456]: 2026-01-29 17:20:11.314 239460 DEBUG nova.virt.libvirt.driver [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:20:11 np0005601226 nova_compute[239456]: 2026-01-29 17:20:11.314 239460 DEBUG nova.virt.libvirt.driver [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:20:11 np0005601226 nova_compute[239456]: 2026-01-29 17:20:11.314 239460 DEBUG nova.virt.libvirt.driver [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:20:11 np0005601226 nova_compute[239456]: 2026-01-29 17:20:11.336 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 29 12:20:11 np0005601226 nova_compute[239456]: 2026-01-29 17:20:11.336 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769707211.2819011, d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:20:11 np0005601226 nova_compute[239456]: 2026-01-29 17:20:11.336 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] VM Paused (Lifecycle Event)#033[00m
Jan 29 12:20:11 np0005601226 nova_compute[239456]: 2026-01-29 17:20:11.360 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:20:11 np0005601226 nova_compute[239456]: 2026-01-29 17:20:11.363 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769707211.2867615, d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:20:11 np0005601226 nova_compute[239456]: 2026-01-29 17:20:11.363 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] VM Resumed (Lifecycle Event)#033[00m
Jan 29 12:20:11 np0005601226 nova_compute[239456]: 2026-01-29 17:20:11.369 239460 INFO nova.compute.manager [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Took 8.71 seconds to spawn the instance on the hypervisor.#033[00m
Jan 29 12:20:11 np0005601226 nova_compute[239456]: 2026-01-29 17:20:11.370 239460 DEBUG nova.compute.manager [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:20:11 np0005601226 nova_compute[239456]: 2026-01-29 17:20:11.383 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:20:11 np0005601226 nova_compute[239456]: 2026-01-29 17:20:11.385 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 29 12:20:11 np0005601226 nova_compute[239456]: 2026-01-29 17:20:11.418 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 29 12:20:11 np0005601226 nova_compute[239456]: 2026-01-29 17:20:11.439 239460 INFO nova.compute.manager [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Took 9.71 seconds to build instance.#033[00m
Jan 29 12:20:11 np0005601226 nova_compute[239456]: 2026-01-29 17:20:11.455 239460 DEBUG oslo_concurrency.lockutils [None req-e403b5b7-5236-4927-9aa1-1394dccd9981 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Lock "d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.799s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:20:11 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1110: 305 pgs: 305 active+clean; 167 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 29 12:20:11 np0005601226 nova_compute[239456]: 2026-01-29 17:20:11.871 239460 DEBUG oslo_concurrency.lockutils [None req-6ce5b635-ff93-4c63-9c6c-8624b88a2001 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Acquiring lock "dca948a3-675d-4cc4-a21b-c2f72cbe307e" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:20:11 np0005601226 nova_compute[239456]: 2026-01-29 17:20:11.871 239460 DEBUG oslo_concurrency.lockutils [None req-6ce5b635-ff93-4c63-9c6c-8624b88a2001 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Lock "dca948a3-675d-4cc4-a21b-c2f72cbe307e" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:20:11 np0005601226 nova_compute[239456]: 2026-01-29 17:20:11.893 239460 DEBUG nova.objects.instance [None req-6ce5b635-ff93-4c63-9c6c-8624b88a2001 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Lazy-loading 'flavor' on Instance uuid dca948a3-675d-4cc4-a21b-c2f72cbe307e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:20:11 np0005601226 nova_compute[239456]: 2026-01-29 17:20:11.919 239460 INFO nova.virt.libvirt.driver [None req-6ce5b635-ff93-4c63-9c6c-8624b88a2001 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] Ignoring supplied device name: /dev/vdb#033[00m
Jan 29 12:20:11 np0005601226 nova_compute[239456]: 2026-01-29 17:20:11.935 239460 DEBUG oslo_concurrency.lockutils [None req-6ce5b635-ff93-4c63-9c6c-8624b88a2001 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Lock "dca948a3-675d-4cc4-a21b-c2f72cbe307e" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.064s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:20:12 np0005601226 nova_compute[239456]: 2026-01-29 17:20:12.275 239460 DEBUG oslo_concurrency.lockutils [None req-6ce5b635-ff93-4c63-9c6c-8624b88a2001 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Acquiring lock "dca948a3-675d-4cc4-a21b-c2f72cbe307e" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:20:12 np0005601226 nova_compute[239456]: 2026-01-29 17:20:12.275 239460 DEBUG oslo_concurrency.lockutils [None req-6ce5b635-ff93-4c63-9c6c-8624b88a2001 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Lock "dca948a3-675d-4cc4-a21b-c2f72cbe307e" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:20:12 np0005601226 nova_compute[239456]: 2026-01-29 17:20:12.275 239460 INFO nova.compute.manager [None req-6ce5b635-ff93-4c63-9c6c-8624b88a2001 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] Attaching volume 9a93ba0f-8e9c-45ea-b945-66d0b6b9a61c to /dev/vdb#033[00m
Jan 29 12:20:12 np0005601226 nova_compute[239456]: 2026-01-29 17:20:12.497 239460 DEBUG os_brick.utils [None req-6ce5b635-ff93-4c63-9c6c-8624b88a2001 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 29 12:20:12 np0005601226 nova_compute[239456]: 2026-01-29 17:20:12.498 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:20:12 np0005601226 nova_compute[239456]: 2026-01-29 17:20:12.511 249612 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:20:12 np0005601226 nova_compute[239456]: 2026-01-29 17:20:12.511 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[612e9647-fe71-4e0b-aec8-c3c1332ff059]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:20:12 np0005601226 nova_compute[239456]: 2026-01-29 17:20:12.513 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:20:12 np0005601226 nova_compute[239456]: 2026-01-29 17:20:12.520 249612 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:20:12 np0005601226 nova_compute[239456]: 2026-01-29 17:20:12.520 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[3f5a1438-58ed-4774-983d-29a573adaf56]: (4, ('InitiatorName=iqn.1994-05.com.redhat:29fa340538f', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:20:12 np0005601226 nova_compute[239456]: 2026-01-29 17:20:12.522 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:20:12 np0005601226 nova_compute[239456]: 2026-01-29 17:20:12.528 249612 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:20:12 np0005601226 nova_compute[239456]: 2026-01-29 17:20:12.528 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[e338a901-f799-4fde-adbe-e31aa9c5c555]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:20:12 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:20:12 np0005601226 nova_compute[239456]: 2026-01-29 17:20:12.530 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[8c1bec7d-c0a4-4c18-ab39-85342476bc59]: (4, '3d58286e-1b14-486e-8cad-0bdb2d2969c4') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:20:12 np0005601226 nova_compute[239456]: 2026-01-29 17:20:12.530 239460 DEBUG oslo_concurrency.processutils [None req-6ce5b635-ff93-4c63-9c6c-8624b88a2001 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:20:12 np0005601226 nova_compute[239456]: 2026-01-29 17:20:12.543 239460 DEBUG oslo_concurrency.processutils [None req-6ce5b635-ff93-4c63-9c6c-8624b88a2001 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] CMD "nvme version" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:20:12 np0005601226 nova_compute[239456]: 2026-01-29 17:20:12.545 239460 DEBUG os_brick.initiator.connectors.lightos [None req-6ce5b635-ff93-4c63-9c6c-8624b88a2001 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 29 12:20:12 np0005601226 nova_compute[239456]: 2026-01-29 17:20:12.545 239460 DEBUG os_brick.initiator.connectors.lightos [None req-6ce5b635-ff93-4c63-9c6c-8624b88a2001 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 29 12:20:12 np0005601226 nova_compute[239456]: 2026-01-29 17:20:12.545 239460 DEBUG os_brick.initiator.connectors.lightos [None req-6ce5b635-ff93-4c63-9c6c-8624b88a2001 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 29 12:20:12 np0005601226 nova_compute[239456]: 2026-01-29 17:20:12.546 239460 DEBUG os_brick.utils [None req-6ce5b635-ff93-4c63-9c6c-8624b88a2001 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] <== get_connector_properties: return (48ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:29fa340538f', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '3d58286e-1b14-486e-8cad-0bdb2d2969c4', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 29 12:20:12 np0005601226 nova_compute[239456]: 2026-01-29 17:20:12.546 239460 DEBUG nova.virt.block_device [None req-6ce5b635-ff93-4c63-9c6c-8624b88a2001 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] Updating existing volume attachment record: 7b497212-8e97-447e-8737-d4bca8213809 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 29 12:20:12 np0005601226 nova_compute[239456]: 2026-01-29 17:20:12.794 239460 DEBUG nova.compute.manager [req-ae5dfd03-aeab-4639-81bc-757563a439ec req-2d54a459-ba39-4f38-baa7-4af81c25871d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Received event network-vif-plugged-6be42760-adf3-45d0-ae0d-44d988848eb0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:20:12 np0005601226 nova_compute[239456]: 2026-01-29 17:20:12.795 239460 DEBUG oslo_concurrency.lockutils [req-ae5dfd03-aeab-4639-81bc-757563a439ec req-2d54a459-ba39-4f38-baa7-4af81c25871d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:20:12 np0005601226 nova_compute[239456]: 2026-01-29 17:20:12.795 239460 DEBUG oslo_concurrency.lockutils [req-ae5dfd03-aeab-4639-81bc-757563a439ec req-2d54a459-ba39-4f38-baa7-4af81c25871d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:20:12 np0005601226 nova_compute[239456]: 2026-01-29 17:20:12.795 239460 DEBUG oslo_concurrency.lockutils [req-ae5dfd03-aeab-4639-81bc-757563a439ec req-2d54a459-ba39-4f38-baa7-4af81c25871d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:20:12 np0005601226 nova_compute[239456]: 2026-01-29 17:20:12.795 239460 DEBUG nova.compute.manager [req-ae5dfd03-aeab-4639-81bc-757563a439ec req-2d54a459-ba39-4f38-baa7-4af81c25871d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] No waiting events found dispatching network-vif-plugged-6be42760-adf3-45d0-ae0d-44d988848eb0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 29 12:20:12 np0005601226 nova_compute[239456]: 2026-01-29 17:20:12.796 239460 WARNING nova.compute.manager [req-ae5dfd03-aeab-4639-81bc-757563a439ec req-2d54a459-ba39-4f38-baa7-4af81c25871d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Received unexpected event network-vif-plugged-6be42760-adf3-45d0-ae0d-44d988848eb0 for instance with vm_state active and task_state None.#033[00m
Jan 29 12:20:13 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:20:13 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2467800633' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:20:13 np0005601226 nova_compute[239456]: 2026-01-29 17:20:13.398 239460 DEBUG nova.compute.manager [req-bbef9055-b587-43f3-8876-e4fbb90e2223 req-d5836cb6-899f-468e-8644-c8ab39c69263 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Received event network-changed-6be42760-adf3-45d0-ae0d-44d988848eb0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:20:13 np0005601226 nova_compute[239456]: 2026-01-29 17:20:13.398 239460 DEBUG nova.compute.manager [req-bbef9055-b587-43f3-8876-e4fbb90e2223 req-d5836cb6-899f-468e-8644-c8ab39c69263 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Refreshing instance network info cache due to event network-changed-6be42760-adf3-45d0-ae0d-44d988848eb0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 29 12:20:13 np0005601226 nova_compute[239456]: 2026-01-29 17:20:13.399 239460 DEBUG oslo_concurrency.lockutils [req-bbef9055-b587-43f3-8876-e4fbb90e2223 req-d5836cb6-899f-468e-8644-c8ab39c69263 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "refresh_cache-d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:20:13 np0005601226 nova_compute[239456]: 2026-01-29 17:20:13.399 239460 DEBUG oslo_concurrency.lockutils [req-bbef9055-b587-43f3-8876-e4fbb90e2223 req-d5836cb6-899f-468e-8644-c8ab39c69263 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquired lock "refresh_cache-d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:20:13 np0005601226 nova_compute[239456]: 2026-01-29 17:20:13.399 239460 DEBUG nova.network.neutron [req-bbef9055-b587-43f3-8876-e4fbb90e2223 req-d5836cb6-899f-468e-8644-c8ab39c69263 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Refreshing network info cache for port 6be42760-adf3-45d0-ae0d-44d988848eb0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 29 12:20:13 np0005601226 nova_compute[239456]: 2026-01-29 17:20:13.411 239460 DEBUG nova.objects.instance [None req-6ce5b635-ff93-4c63-9c6c-8624b88a2001 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Lazy-loading 'flavor' on Instance uuid dca948a3-675d-4cc4-a21b-c2f72cbe307e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:20:13 np0005601226 nova_compute[239456]: 2026-01-29 17:20:13.446 239460 DEBUG nova.virt.libvirt.driver [None req-6ce5b635-ff93-4c63-9c6c-8624b88a2001 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] Attempting to attach volume 9a93ba0f-8e9c-45ea-b945-66d0b6b9a61c with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Jan 29 12:20:13 np0005601226 nova_compute[239456]: 2026-01-29 17:20:13.448 239460 DEBUG nova.virt.libvirt.guest [None req-6ce5b635-ff93-4c63-9c6c-8624b88a2001 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] attach device xml: <disk type="network" device="disk">
Jan 29 12:20:13 np0005601226 nova_compute[239456]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 29 12:20:13 np0005601226 nova_compute[239456]:  <source protocol="rbd" name="volumes/volume-9a93ba0f-8e9c-45ea-b945-66d0b6b9a61c">
Jan 29 12:20:13 np0005601226 nova_compute[239456]:    <host name="192.168.122.100" port="6789"/>
Jan 29 12:20:13 np0005601226 nova_compute[239456]:  </source>
Jan 29 12:20:13 np0005601226 nova_compute[239456]:  <auth username="openstack">
Jan 29 12:20:13 np0005601226 nova_compute[239456]:    <secret type="ceph" uuid="cc5c72e3-31e0-58b9-8731-456117d38f4a"/>
Jan 29 12:20:13 np0005601226 nova_compute[239456]:  </auth>
Jan 29 12:20:13 np0005601226 nova_compute[239456]:  <target dev="vdb" bus="virtio"/>
Jan 29 12:20:13 np0005601226 nova_compute[239456]:  <serial>9a93ba0f-8e9c-45ea-b945-66d0b6b9a61c</serial>
Jan 29 12:20:13 np0005601226 nova_compute[239456]: </disk>
Jan 29 12:20:13 np0005601226 nova_compute[239456]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Jan 29 12:20:13 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1111: 305 pgs: 305 active+clean; 167 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 67 op/s
Jan 29 12:20:13 np0005601226 nova_compute[239456]: 2026-01-29 17:20:13.754 239460 DEBUG nova.virt.libvirt.driver [None req-6ce5b635-ff93-4c63-9c6c-8624b88a2001 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:20:13 np0005601226 nova_compute[239456]: 2026-01-29 17:20:13.755 239460 DEBUG nova.virt.libvirt.driver [None req-6ce5b635-ff93-4c63-9c6c-8624b88a2001 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:20:13 np0005601226 nova_compute[239456]: 2026-01-29 17:20:13.755 239460 DEBUG nova.virt.libvirt.driver [None req-6ce5b635-ff93-4c63-9c6c-8624b88a2001 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:20:13 np0005601226 nova_compute[239456]: 2026-01-29 17:20:13.755 239460 DEBUG nova.virt.libvirt.driver [None req-6ce5b635-ff93-4c63-9c6c-8624b88a2001 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] No VIF found with MAC fa:16:3e:a8:bd:99, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 29 12:20:14 np0005601226 nova_compute[239456]: 2026-01-29 17:20:14.009 239460 DEBUG oslo_concurrency.lockutils [None req-6ce5b635-ff93-4c63-9c6c-8624b88a2001 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Lock "dca948a3-675d-4cc4-a21b-c2f72cbe307e" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.734s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:20:14 np0005601226 nova_compute[239456]: 2026-01-29 17:20:14.436 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:20:14 np0005601226 nova_compute[239456]: 2026-01-29 17:20:14.847 239460 DEBUG nova.network.neutron [req-bbef9055-b587-43f3-8876-e4fbb90e2223 req-d5836cb6-899f-468e-8644-c8ab39c69263 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Updated VIF entry in instance network info cache for port 6be42760-adf3-45d0-ae0d-44d988848eb0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 29 12:20:14 np0005601226 nova_compute[239456]: 2026-01-29 17:20:14.847 239460 DEBUG nova.network.neutron [req-bbef9055-b587-43f3-8876-e4fbb90e2223 req-d5836cb6-899f-468e-8644-c8ab39c69263 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Updating instance_info_cache with network_info: [{"id": "6be42760-adf3-45d0-ae0d-44d988848eb0", "address": "fa:16:3e:8f:4b:3a", "network": {"id": "765ab7c4-f6eb-4a45-8c1b-00dc61ad3441", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-718755595-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "815af3cf993b45cc8f2cdf73bf1d552c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6be42760-ad", "ovs_interfaceid": "6be42760-adf3-45d0-ae0d-44d988848eb0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:20:14 np0005601226 nova_compute[239456]: 2026-01-29 17:20:14.879 239460 DEBUG oslo_concurrency.lockutils [req-bbef9055-b587-43f3-8876-e4fbb90e2223 req-d5836cb6-899f-468e-8644-c8ab39c69263 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Releasing lock "refresh_cache-d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:20:15 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1112: 305 pgs: 305 active+clean; 167 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 106 op/s
Jan 29 12:20:15 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e203 do_prune osdmap full prune enabled
Jan 29 12:20:15 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e204 e204: 3 total, 3 up, 3 in
Jan 29 12:20:15 np0005601226 nova_compute[239456]: 2026-01-29 17:20:15.963 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:20:16 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e204: 3 total, 3 up, 3 in
Jan 29 12:20:17 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e204 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:20:17 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1114: 305 pgs: 305 active+clean; 167 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.2 MiB/s wr, 112 op/s
Jan 29 12:20:18 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e204 do_prune osdmap full prune enabled
Jan 29 12:20:18 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e205 e205: 3 total, 3 up, 3 in
Jan 29 12:20:18 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e205: 3 total, 3 up, 3 in
Jan 29 12:20:19 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e205 do_prune osdmap full prune enabled
Jan 29 12:20:19 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e206 e206: 3 total, 3 up, 3 in
Jan 29 12:20:19 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e206: 3 total, 3 up, 3 in
Jan 29 12:20:19 np0005601226 nova_compute[239456]: 2026-01-29 17:20:19.438 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:20:19 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1117: 305 pgs: 305 active+clean; 167 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.5 KiB/s wr, 82 op/s
Jan 29 12:20:19 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:20:19 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1798566803' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:20:19 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:20:19 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1798566803' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:20:20 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e206 do_prune osdmap full prune enabled
Jan 29 12:20:20 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e207 e207: 3 total, 3 up, 3 in
Jan 29 12:20:20 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e207: 3 total, 3 up, 3 in
Jan 29 12:20:20 np0005601226 nova_compute[239456]: 2026-01-29 17:20:20.964 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:20:21 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1119: 305 pgs: 305 active+clean; 167 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 7.8 KiB/s wr, 76 op/s
Jan 29 12:20:22 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:20:23 np0005601226 nova_compute[239456]: 2026-01-29 17:20:23.603 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:20:23 np0005601226 nova_compute[239456]: 2026-01-29 17:20:23.603 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:20:23 np0005601226 nova_compute[239456]: 2026-01-29 17:20:23.633 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:20:23 np0005601226 nova_compute[239456]: 2026-01-29 17:20:23.634 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:20:23 np0005601226 nova_compute[239456]: 2026-01-29 17:20:23.634 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:20:23 np0005601226 nova_compute[239456]: 2026-01-29 17:20:23.634 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 29 12:20:23 np0005601226 nova_compute[239456]: 2026-01-29 17:20:23.635 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:20:23 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1120: 305 pgs: 305 active+clean; 174 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 82 KiB/s rd, 1.0 MiB/s wr, 86 op/s
Jan 29 12:20:23 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e207 do_prune osdmap full prune enabled
Jan 29 12:20:24 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e208 e208: 3 total, 3 up, 3 in
Jan 29 12:20:24 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e208: 3 total, 3 up, 3 in
Jan 29 12:20:24 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:20:24 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1945144886' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:20:24 np0005601226 nova_compute[239456]: 2026-01-29 17:20:24.160 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:20:24 np0005601226 nova_compute[239456]: 2026-01-29 17:20:24.223 239460 DEBUG nova.virt.libvirt.driver [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 29 12:20:24 np0005601226 nova_compute[239456]: 2026-01-29 17:20:24.223 239460 DEBUG nova.virt.libvirt.driver [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 29 12:20:24 np0005601226 nova_compute[239456]: 2026-01-29 17:20:24.225 239460 DEBUG nova.virt.libvirt.driver [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 29 12:20:24 np0005601226 nova_compute[239456]: 2026-01-29 17:20:24.226 239460 DEBUG nova.virt.libvirt.driver [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 29 12:20:24 np0005601226 nova_compute[239456]: 2026-01-29 17:20:24.226 239460 DEBUG nova.virt.libvirt.driver [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 29 12:20:24 np0005601226 nova_compute[239456]: 2026-01-29 17:20:24.355 239460 WARNING nova.virt.libvirt.driver [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 29 12:20:24 np0005601226 nova_compute[239456]: 2026-01-29 17:20:24.356 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4282MB free_disk=59.91599634569138GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 29 12:20:24 np0005601226 nova_compute[239456]: 2026-01-29 17:20:24.356 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:20:24 np0005601226 nova_compute[239456]: 2026-01-29 17:20:24.357 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:20:24 np0005601226 nova_compute[239456]: 2026-01-29 17:20:24.422 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Instance dca948a3-675d-4cc4-a21b-c2f72cbe307e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 29 12:20:24 np0005601226 nova_compute[239456]: 2026-01-29 17:20:24.422 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Instance d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 29 12:20:24 np0005601226 nova_compute[239456]: 2026-01-29 17:20:24.423 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 29 12:20:24 np0005601226 nova_compute[239456]: 2026-01-29 17:20:24.423 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 29 12:20:24 np0005601226 nova_compute[239456]: 2026-01-29 17:20:24.440 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:20:24 np0005601226 nova_compute[239456]: 2026-01-29 17:20:24.470 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:20:24 np0005601226 ovn_controller[145556]: 2026-01-29T17:20:24Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8f:4b:3a 10.100.0.6
Jan 29 12:20:24 np0005601226 ovn_controller[145556]: 2026-01-29T17:20:24Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8f:4b:3a 10.100.0.6
Jan 29 12:20:24 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:20:24 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2758440587' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:20:24 np0005601226 nova_compute[239456]: 2026-01-29 17:20:24.958 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:20:24 np0005601226 nova_compute[239456]: 2026-01-29 17:20:24.963 239460 DEBUG nova.compute.provider_tree [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:20:25 np0005601226 nova_compute[239456]: 2026-01-29 17:20:25.109 239460 DEBUG nova.scheduler.client.report [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:20:25 np0005601226 nova_compute[239456]: 2026-01-29 17:20:25.175 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 29 12:20:25 np0005601226 nova_compute[239456]: 2026-01-29 17:20:25.176 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.819s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:20:25 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1122: 305 pgs: 305 active+clean; 192 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 332 KiB/s rd, 3.8 MiB/s wr, 178 op/s
Jan 29 12:20:25 np0005601226 nova_compute[239456]: 2026-01-29 17:20:25.965 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:20:26 np0005601226 nova_compute[239456]: 2026-01-29 17:20:26.119 239460 DEBUG oslo_concurrency.lockutils [None req-5bf7d2fe-2d2d-4f1b-8557-eee1a66c9459 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Acquiring lock "dca948a3-675d-4cc4-a21b-c2f72cbe307e" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:20:26 np0005601226 nova_compute[239456]: 2026-01-29 17:20:26.120 239460 DEBUG oslo_concurrency.lockutils [None req-5bf7d2fe-2d2d-4f1b-8557-eee1a66c9459 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Lock "dca948a3-675d-4cc4-a21b-c2f72cbe307e" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:20:26 np0005601226 nova_compute[239456]: 2026-01-29 17:20:26.144 239460 INFO nova.compute.manager [None req-5bf7d2fe-2d2d-4f1b-8557-eee1a66c9459 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] Detaching volume 9a93ba0f-8e9c-45ea-b945-66d0b6b9a61c#033[00m
Jan 29 12:20:26 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:20:26 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2390403442' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:20:26 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:20:26 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2390403442' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:20:26 np0005601226 nova_compute[239456]: 2026-01-29 17:20:26.288 239460 DEBUG oslo_concurrency.lockutils [None req-3af85969-4135-4c32-899f-8bb5da64a8e6 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Acquiring lock "dca948a3-675d-4cc4-a21b-c2f72cbe307e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:20:26 np0005601226 nova_compute[239456]: 2026-01-29 17:20:26.290 239460 INFO nova.virt.block_device [None req-5bf7d2fe-2d2d-4f1b-8557-eee1a66c9459 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] Attempting to driver detach volume 9a93ba0f-8e9c-45ea-b945-66d0b6b9a61c from mountpoint /dev/vdb#033[00m
Jan 29 12:20:26 np0005601226 nova_compute[239456]: 2026-01-29 17:20:26.298 239460 DEBUG nova.virt.libvirt.driver [None req-5bf7d2fe-2d2d-4f1b-8557-eee1a66c9459 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Attempting to detach device vdb from instance dca948a3-675d-4cc4-a21b-c2f72cbe307e from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Jan 29 12:20:26 np0005601226 nova_compute[239456]: 2026-01-29 17:20:26.299 239460 DEBUG nova.virt.libvirt.guest [None req-5bf7d2fe-2d2d-4f1b-8557-eee1a66c9459 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] detach device xml: <disk type="network" device="disk">
Jan 29 12:20:26 np0005601226 nova_compute[239456]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 29 12:20:26 np0005601226 nova_compute[239456]:  <source protocol="rbd" name="volumes/volume-9a93ba0f-8e9c-45ea-b945-66d0b6b9a61c">
Jan 29 12:20:26 np0005601226 nova_compute[239456]:    <host name="192.168.122.100" port="6789"/>
Jan 29 12:20:26 np0005601226 nova_compute[239456]:  </source>
Jan 29 12:20:26 np0005601226 nova_compute[239456]:  <target dev="vdb" bus="virtio"/>
Jan 29 12:20:26 np0005601226 nova_compute[239456]:  <serial>9a93ba0f-8e9c-45ea-b945-66d0b6b9a61c</serial>
Jan 29 12:20:26 np0005601226 nova_compute[239456]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 29 12:20:26 np0005601226 nova_compute[239456]: </disk>
Jan 29 12:20:26 np0005601226 nova_compute[239456]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 29 12:20:26 np0005601226 nova_compute[239456]: 2026-01-29 17:20:26.309 239460 INFO nova.virt.libvirt.driver [None req-5bf7d2fe-2d2d-4f1b-8557-eee1a66c9459 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Successfully detached device vdb from instance dca948a3-675d-4cc4-a21b-c2f72cbe307e from the persistent domain config.#033[00m
Jan 29 12:20:26 np0005601226 nova_compute[239456]: 2026-01-29 17:20:26.309 239460 DEBUG nova.virt.libvirt.driver [None req-5bf7d2fe-2d2d-4f1b-8557-eee1a66c9459 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance dca948a3-675d-4cc4-a21b-c2f72cbe307e from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Jan 29 12:20:26 np0005601226 nova_compute[239456]: 2026-01-29 17:20:26.309 239460 DEBUG nova.virt.libvirt.guest [None req-5bf7d2fe-2d2d-4f1b-8557-eee1a66c9459 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] detach device xml: <disk type="network" device="disk">
Jan 29 12:20:26 np0005601226 nova_compute[239456]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 29 12:20:26 np0005601226 nova_compute[239456]:  <source protocol="rbd" name="volumes/volume-9a93ba0f-8e9c-45ea-b945-66d0b6b9a61c">
Jan 29 12:20:26 np0005601226 nova_compute[239456]:    <host name="192.168.122.100" port="6789"/>
Jan 29 12:20:26 np0005601226 nova_compute[239456]:  </source>
Jan 29 12:20:26 np0005601226 nova_compute[239456]:  <target dev="vdb" bus="virtio"/>
Jan 29 12:20:26 np0005601226 nova_compute[239456]:  <serial>9a93ba0f-8e9c-45ea-b945-66d0b6b9a61c</serial>
Jan 29 12:20:26 np0005601226 nova_compute[239456]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 29 12:20:26 np0005601226 nova_compute[239456]: </disk>
Jan 29 12:20:26 np0005601226 nova_compute[239456]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 29 12:20:26 np0005601226 nova_compute[239456]: 2026-01-29 17:20:26.433 239460 DEBUG nova.virt.libvirt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Received event <DeviceRemovedEvent: 1769707226.433183, dca948a3-675d-4cc4-a21b-c2f72cbe307e => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Jan 29 12:20:26 np0005601226 nova_compute[239456]: 2026-01-29 17:20:26.434 239460 DEBUG nova.virt.libvirt.driver [None req-5bf7d2fe-2d2d-4f1b-8557-eee1a66c9459 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance dca948a3-675d-4cc4-a21b-c2f72cbe307e _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Jan 29 12:20:26 np0005601226 nova_compute[239456]: 2026-01-29 17:20:26.436 239460 INFO nova.virt.libvirt.driver [None req-5bf7d2fe-2d2d-4f1b-8557-eee1a66c9459 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Successfully detached device vdb from instance dca948a3-675d-4cc4-a21b-c2f72cbe307e from the live domain config.#033[00m
Jan 29 12:20:26 np0005601226 nova_compute[239456]: 2026-01-29 17:20:26.626 239460 DEBUG nova.objects.instance [None req-5bf7d2fe-2d2d-4f1b-8557-eee1a66c9459 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Lazy-loading 'flavor' on Instance uuid dca948a3-675d-4cc4-a21b-c2f72cbe307e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:20:26 np0005601226 nova_compute[239456]: 2026-01-29 17:20:26.700 239460 DEBUG oslo_concurrency.lockutils [None req-5bf7d2fe-2d2d-4f1b-8557-eee1a66c9459 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Lock "dca948a3-675d-4cc4-a21b-c2f72cbe307e" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.580s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:20:26 np0005601226 nova_compute[239456]: 2026-01-29 17:20:26.701 239460 DEBUG oslo_concurrency.lockutils [None req-3af85969-4135-4c32-899f-8bb5da64a8e6 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Lock "dca948a3-675d-4cc4-a21b-c2f72cbe307e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.413s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:20:26 np0005601226 nova_compute[239456]: 2026-01-29 17:20:26.702 239460 DEBUG oslo_concurrency.lockutils [None req-3af85969-4135-4c32-899f-8bb5da64a8e6 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Acquiring lock "dca948a3-675d-4cc4-a21b-c2f72cbe307e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:20:26 np0005601226 nova_compute[239456]: 2026-01-29 17:20:26.702 239460 DEBUG oslo_concurrency.lockutils [None req-3af85969-4135-4c32-899f-8bb5da64a8e6 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Lock "dca948a3-675d-4cc4-a21b-c2f72cbe307e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:20:26 np0005601226 nova_compute[239456]: 2026-01-29 17:20:26.702 239460 DEBUG oslo_concurrency.lockutils [None req-3af85969-4135-4c32-899f-8bb5da64a8e6 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Lock "dca948a3-675d-4cc4-a21b-c2f72cbe307e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:20:26 np0005601226 nova_compute[239456]: 2026-01-29 17:20:26.703 239460 INFO nova.compute.manager [None req-3af85969-4135-4c32-899f-8bb5da64a8e6 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] Terminating instance#033[00m
Jan 29 12:20:26 np0005601226 nova_compute[239456]: 2026-01-29 17:20:26.704 239460 DEBUG nova.compute.manager [None req-3af85969-4135-4c32-899f-8bb5da64a8e6 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 29 12:20:26 np0005601226 kernel: tap3af9ad9d-90 (unregistering): left promiscuous mode
Jan 29 12:20:26 np0005601226 NetworkManager[49020]: <info>  [1769707226.8947] device (tap3af9ad9d-90): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 29 12:20:26 np0005601226 ovn_controller[145556]: 2026-01-29T17:20:26Z|00073|binding|INFO|Releasing lport 3af9ad9d-906f-4ed9-92cc-783df6775a8e from this chassis (sb_readonly=0)
Jan 29 12:20:26 np0005601226 ovn_controller[145556]: 2026-01-29T17:20:26Z|00074|binding|INFO|Setting lport 3af9ad9d-906f-4ed9-92cc-783df6775a8e down in Southbound
Jan 29 12:20:26 np0005601226 ovn_controller[145556]: 2026-01-29T17:20:26Z|00075|binding|INFO|Removing iface tap3af9ad9d-90 ovn-installed in OVS
Jan 29 12:20:26 np0005601226 nova_compute[239456]: 2026-01-29 17:20:26.942 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:20:26 np0005601226 nova_compute[239456]: 2026-01-29 17:20:26.947 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:20:26 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:20:26.950 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:bd:99 10.100.0.10'], port_security=['fa:16:3e:a8:bd:99 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'dca948a3-675d-4cc4-a21b-c2f72cbe307e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c65d65e6-04af-4892-ad96-3d83d148450f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7140162c4cd744d38e65ad1bcdadf016', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a54deeaf-8fce-4574-9b38-5606fce0457a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.226'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a5702b8d-5b0f-4c7d-bc4d-4e202a7e2b31, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], logical_port=3af9ad9d-906f-4ed9-92cc-783df6775a8e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:20:26 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:20:26.951 155625 INFO neutron.agent.ovn.metadata.agent [-] Port 3af9ad9d-906f-4ed9-92cc-783df6775a8e in datapath c65d65e6-04af-4892-ad96-3d83d148450f unbound from our chassis#033[00m
Jan 29 12:20:26 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:20:26.953 155625 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c65d65e6-04af-4892-ad96-3d83d148450f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 29 12:20:26 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:20:26.953 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[c9a107c9-5ea1-43fb-b44e-3dce2ce8ec2e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:20:26 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:20:26.954 155625 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c65d65e6-04af-4892-ad96-3d83d148450f namespace which is not needed anymore#033[00m
Jan 29 12:20:26 np0005601226 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Deactivated successfully.
Jan 29 12:20:26 np0005601226 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Consumed 13.274s CPU time.
Jan 29 12:20:26 np0005601226 systemd-machined[207561]: Machine qemu-5-instance-00000005 terminated.
Jan 29 12:20:27 np0005601226 neutron-haproxy-ovnmeta-c65d65e6-04af-4892-ad96-3d83d148450f[250878]: [NOTICE]   (250884) : haproxy version is 2.8.14-c23fe91
Jan 29 12:20:27 np0005601226 neutron-haproxy-ovnmeta-c65d65e6-04af-4892-ad96-3d83d148450f[250878]: [NOTICE]   (250884) : path to executable is /usr/sbin/haproxy
Jan 29 12:20:27 np0005601226 neutron-haproxy-ovnmeta-c65d65e6-04af-4892-ad96-3d83d148450f[250878]: [WARNING]  (250884) : Exiting Master process...
Jan 29 12:20:27 np0005601226 neutron-haproxy-ovnmeta-c65d65e6-04af-4892-ad96-3d83d148450f[250878]: [ALERT]    (250884) : Current worker (250887) exited with code 143 (Terminated)
Jan 29 12:20:27 np0005601226 neutron-haproxy-ovnmeta-c65d65e6-04af-4892-ad96-3d83d148450f[250878]: [WARNING]  (250884) : All workers exited. Exiting... (0)
Jan 29 12:20:27 np0005601226 systemd[1]: libpod-7bf778abf479b0e9a1c9fd1d0ce3f0aa0eaf4c16f666f669245eb64282d6442f.scope: Deactivated successfully.
Jan 29 12:20:27 np0005601226 podman[252164]: 2026-01-29 17:20:27.109816155 +0000 UTC m=+0.091789714 container died 7bf778abf479b0e9a1c9fd1d0ce3f0aa0eaf4c16f666f669245eb64282d6442f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c65d65e6-04af-4892-ad96-3d83d148450f, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 29 12:20:27 np0005601226 nova_compute[239456]: 2026-01-29 17:20:27.141 239460 INFO nova.virt.libvirt.driver [-] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] Instance destroyed successfully.#033[00m
Jan 29 12:20:27 np0005601226 nova_compute[239456]: 2026-01-29 17:20:27.143 239460 DEBUG nova.objects.instance [None req-3af85969-4135-4c32-899f-8bb5da64a8e6 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Lazy-loading 'resources' on Instance uuid dca948a3-675d-4cc4-a21b-c2f72cbe307e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:20:27 np0005601226 nova_compute[239456]: 2026-01-29 17:20:27.164 239460 DEBUG nova.compute.manager [req-e8f31f7a-43fc-4656-9374-fcd59b403422 req-412b0105-46f2-4612-af7c-239759b691b0 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] Received event network-vif-unplugged-3af9ad9d-906f-4ed9-92cc-783df6775a8e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:20:27 np0005601226 nova_compute[239456]: 2026-01-29 17:20:27.165 239460 DEBUG oslo_concurrency.lockutils [req-e8f31f7a-43fc-4656-9374-fcd59b403422 req-412b0105-46f2-4612-af7c-239759b691b0 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "dca948a3-675d-4cc4-a21b-c2f72cbe307e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:20:27 np0005601226 nova_compute[239456]: 2026-01-29 17:20:27.166 239460 DEBUG oslo_concurrency.lockutils [req-e8f31f7a-43fc-4656-9374-fcd59b403422 req-412b0105-46f2-4612-af7c-239759b691b0 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "dca948a3-675d-4cc4-a21b-c2f72cbe307e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:20:27 np0005601226 nova_compute[239456]: 2026-01-29 17:20:27.166 239460 DEBUG oslo_concurrency.lockutils [req-e8f31f7a-43fc-4656-9374-fcd59b403422 req-412b0105-46f2-4612-af7c-239759b691b0 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "dca948a3-675d-4cc4-a21b-c2f72cbe307e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:20:27 np0005601226 nova_compute[239456]: 2026-01-29 17:20:27.166 239460 DEBUG nova.compute.manager [req-e8f31f7a-43fc-4656-9374-fcd59b403422 req-412b0105-46f2-4612-af7c-239759b691b0 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] No waiting events found dispatching network-vif-unplugged-3af9ad9d-906f-4ed9-92cc-783df6775a8e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 29 12:20:27 np0005601226 nova_compute[239456]: 2026-01-29 17:20:27.166 239460 DEBUG nova.compute.manager [req-e8f31f7a-43fc-4656-9374-fcd59b403422 req-412b0105-46f2-4612-af7c-239759b691b0 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] Received event network-vif-unplugged-3af9ad9d-906f-4ed9-92cc-783df6775a8e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 29 12:20:27 np0005601226 nova_compute[239456]: 2026-01-29 17:20:27.168 239460 DEBUG nova.virt.libvirt.vif [None req-3af85969-4135-4c32-899f-8bb5da64a8e6 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-29T17:19:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-573201396',display_name='tempest-VolumesSnapshotTestJSON-instance-573201396',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-573201396',id=5,image_ref='71879218-5462-43bb-aba6-6319695b24fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP57F9qb6HTMT/dg+aHSslBWlwXsZcaffFbF5qLZpeZLt1faAk6NJ/UzAHXHt1SsCajCxlFNwojj/ACHu7g92aCv6V5JBo78DqFMSFFa88vlG4NWfNJ2XKEp1kmwuxVTOg==',key_name='tempest-keypair-1508051679',keypairs=<?>,launch_index=0,launched_at=2026-01-29T17:19:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7140162c4cd744d38e65ad1bcdadf016',ramdisk_id='',reservation_id='r-wvxfy812',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='71879218-5462-43bb-aba6-6319695b24fd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesSnapshotTestJSON-783985999',owner_user_name='tempest-VolumesSnapshotTestJSON-783985999-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-29T17:19:37Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aa90bbad088947a2a9866efeb934031e',uuid=dca948a3-675d-4cc4-a21b-c2f72cbe307e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3af9ad9d-906f-4ed9-92cc-783df6775a8e", "address": "fa:16:3e:a8:bd:99", "network": {"id": "c65d65e6-04af-4892-ad96-3d83d148450f", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-295489753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7140162c4cd744d38e65ad1bcdadf016", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3af9ad9d-90", "ovs_interfaceid": "3af9ad9d-906f-4ed9-92cc-783df6775a8e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 29 12:20:27 np0005601226 nova_compute[239456]: 2026-01-29 17:20:27.168 239460 DEBUG nova.network.os_vif_util [None req-3af85969-4135-4c32-899f-8bb5da64a8e6 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Converting VIF {"id": "3af9ad9d-906f-4ed9-92cc-783df6775a8e", "address": "fa:16:3e:a8:bd:99", "network": {"id": "c65d65e6-04af-4892-ad96-3d83d148450f", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-295489753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7140162c4cd744d38e65ad1bcdadf016", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3af9ad9d-90", "ovs_interfaceid": "3af9ad9d-906f-4ed9-92cc-783df6775a8e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 29 12:20:27 np0005601226 nova_compute[239456]: 2026-01-29 17:20:27.169 239460 DEBUG nova.network.os_vif_util [None req-3af85969-4135-4c32-899f-8bb5da64a8e6 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a8:bd:99,bridge_name='br-int',has_traffic_filtering=True,id=3af9ad9d-906f-4ed9-92cc-783df6775a8e,network=Network(c65d65e6-04af-4892-ad96-3d83d148450f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3af9ad9d-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 29 12:20:27 np0005601226 nova_compute[239456]: 2026-01-29 17:20:27.169 239460 DEBUG os_vif [None req-3af85969-4135-4c32-899f-8bb5da64a8e6 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a8:bd:99,bridge_name='br-int',has_traffic_filtering=True,id=3af9ad9d-906f-4ed9-92cc-783df6775a8e,network=Network(c65d65e6-04af-4892-ad96-3d83d148450f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3af9ad9d-90') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 29 12:20:27 np0005601226 nova_compute[239456]: 2026-01-29 17:20:27.171 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:20:27 np0005601226 nova_compute[239456]: 2026-01-29 17:20:27.171 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3af9ad9d-90, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:20:27 np0005601226 nova_compute[239456]: 2026-01-29 17:20:27.173 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:20:27 np0005601226 nova_compute[239456]: 2026-01-29 17:20:27.174 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:20:27 np0005601226 nova_compute[239456]: 2026-01-29 17:20:27.176 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:20:27 np0005601226 nova_compute[239456]: 2026-01-29 17:20:27.177 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 29 12:20:27 np0005601226 nova_compute[239456]: 2026-01-29 17:20:27.177 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 29 12:20:27 np0005601226 nova_compute[239456]: 2026-01-29 17:20:27.178 239460 INFO os_vif [None req-3af85969-4135-4c32-899f-8bb5da64a8e6 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a8:bd:99,bridge_name='br-int',has_traffic_filtering=True,id=3af9ad9d-906f-4ed9-92cc-783df6775a8e,network=Network(c65d65e6-04af-4892-ad96-3d83d148450f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3af9ad9d-90')#033[00m
Jan 29 12:20:27 np0005601226 nova_compute[239456]: 2026-01-29 17:20:27.224 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Jan 29 12:20:27 np0005601226 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7bf778abf479b0e9a1c9fd1d0ce3f0aa0eaf4c16f666f669245eb64282d6442f-userdata-shm.mount: Deactivated successfully.
Jan 29 12:20:27 np0005601226 systemd[1]: var-lib-containers-storage-overlay-59e2c7e54afddf2932a2b7de3280df5b2c20808fa973e96bbd1e587803498a5c-merged.mount: Deactivated successfully.
Jan 29 12:20:27 np0005601226 ceph-osd[85858]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 29 12:20:27 np0005601226 ceph-osd[85858]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 10K writes, 45K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s#012Cumulative WAL: 10K writes, 2783 syncs, 3.81 writes per sync, written: 0.03 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4789 writes, 20K keys, 4789 commit groups, 1.0 writes per commit group, ingest: 12.34 MB, 0.02 MB/s#012Interval WAL: 4789 writes, 1866 syncs, 2.57 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 29 12:20:27 np0005601226 podman[252164]: 2026-01-29 17:20:27.334413302 +0000 UTC m=+0.316386811 container cleanup 7bf778abf479b0e9a1c9fd1d0ce3f0aa0eaf4c16f666f669245eb64282d6442f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c65d65e6-04af-4892-ad96-3d83d148450f, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 29 12:20:27 np0005601226 nova_compute[239456]: 2026-01-29 17:20:27.350 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquiring lock "refresh_cache-d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:20:27 np0005601226 nova_compute[239456]: 2026-01-29 17:20:27.350 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquired lock "refresh_cache-d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:20:27 np0005601226 nova_compute[239456]: 2026-01-29 17:20:27.351 239460 DEBUG nova.network.neutron [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 29 12:20:27 np0005601226 nova_compute[239456]: 2026-01-29 17:20:27.351 239460 DEBUG nova.objects.instance [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lazy-loading 'info_cache' on Instance uuid d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:20:27 np0005601226 systemd[1]: libpod-conmon-7bf778abf479b0e9a1c9fd1d0ce3f0aa0eaf4c16f666f669245eb64282d6442f.scope: Deactivated successfully.
Jan 29 12:20:27 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e208 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:20:27 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e208 do_prune osdmap full prune enabled
Jan 29 12:20:27 np0005601226 podman[252220]: 2026-01-29 17:20:27.601378654 +0000 UTC m=+0.253730332 container remove 7bf778abf479b0e9a1c9fd1d0ce3f0aa0eaf4c16f666f669245eb64282d6442f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c65d65e6-04af-4892-ad96-3d83d148450f, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 29 12:20:27 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e209 e209: 3 total, 3 up, 3 in
Jan 29 12:20:27 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:20:27.605 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[0bd814c9-bd19-4220-9cee-98b957b347c3]: (4, ('Thu Jan 29 05:20:27 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-c65d65e6-04af-4892-ad96-3d83d148450f (7bf778abf479b0e9a1c9fd1d0ce3f0aa0eaf4c16f666f669245eb64282d6442f)\n7bf778abf479b0e9a1c9fd1d0ce3f0aa0eaf4c16f666f669245eb64282d6442f\nThu Jan 29 05:20:27 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-c65d65e6-04af-4892-ad96-3d83d148450f (7bf778abf479b0e9a1c9fd1d0ce3f0aa0eaf4c16f666f669245eb64282d6442f)\n7bf778abf479b0e9a1c9fd1d0ce3f0aa0eaf4c16f666f669245eb64282d6442f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:20:27 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:20:27.607 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[853697d7-91be-4b60-90ad-1225a56465e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:20:27 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:20:27.608 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc65d65e6-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:20:27 np0005601226 nova_compute[239456]: 2026-01-29 17:20:27.610 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:20:27 np0005601226 kernel: tapc65d65e6-00: left promiscuous mode
Jan 29 12:20:27 np0005601226 nova_compute[239456]: 2026-01-29 17:20:27.618 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:20:27 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:20:27.622 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[5d3bc249-3385-4709-afe4-f13d58ab74f7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:20:27 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e209: 3 total, 3 up, 3 in
Jan 29 12:20:27 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:20:27.640 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[cd87cb7f-649b-4580-ba10-4cf725ab0c8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:20:27 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:20:27.642 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[8acb77f6-3179-4ef9-89ba-e130f5cc9209]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:20:27 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:20:27.655 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[8860ca3d-1cd7-4500-ba0f-3a41208b04c3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 453015, 'reachable_time': 15076, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252235, 'error': None, 'target': 'ovnmeta-c65d65e6-04af-4892-ad96-3d83d148450f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:20:27 np0005601226 systemd[1]: run-netns-ovnmeta\x2dc65d65e6\x2d04af\x2d4892\x2dad96\x2d3d83d148450f.mount: Deactivated successfully.
Jan 29 12:20:27 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:20:27.658 156164 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c65d65e6-04af-4892-ad96-3d83d148450f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 29 12:20:27 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:20:27.659 156164 DEBUG oslo.privsep.daemon [-] privsep: reply[913d0c01-2835-484e-a55b-3fd5dfc673a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:20:27 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1124: 305 pgs: 305 active+clean; 192 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 260 KiB/s rd, 3.5 MiB/s wr, 105 op/s
Jan 29 12:20:28 np0005601226 nova_compute[239456]: 2026-01-29 17:20:28.852 239460 INFO nova.virt.libvirt.driver [None req-3af85969-4135-4c32-899f-8bb5da64a8e6 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] Deleting instance files /var/lib/nova/instances/dca948a3-675d-4cc4-a21b-c2f72cbe307e_del#033[00m
Jan 29 12:20:28 np0005601226 nova_compute[239456]: 2026-01-29 17:20:28.853 239460 INFO nova.virt.libvirt.driver [None req-3af85969-4135-4c32-899f-8bb5da64a8e6 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] Deletion of /var/lib/nova/instances/dca948a3-675d-4cc4-a21b-c2f72cbe307e_del complete#033[00m
Jan 29 12:20:28 np0005601226 nova_compute[239456]: 2026-01-29 17:20:28.929 239460 INFO nova.compute.manager [None req-3af85969-4135-4c32-899f-8bb5da64a8e6 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] Took 2.22 seconds to destroy the instance on the hypervisor.#033[00m
Jan 29 12:20:28 np0005601226 nova_compute[239456]: 2026-01-29 17:20:28.930 239460 DEBUG oslo.service.loopingcall [None req-3af85969-4135-4c32-899f-8bb5da64a8e6 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 29 12:20:28 np0005601226 nova_compute[239456]: 2026-01-29 17:20:28.931 239460 DEBUG nova.compute.manager [-] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 29 12:20:28 np0005601226 nova_compute[239456]: 2026-01-29 17:20:28.931 239460 DEBUG nova.network.neutron [-] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 29 12:20:29 np0005601226 nova_compute[239456]: 2026-01-29 17:20:29.254 239460 DEBUG nova.compute.manager [req-21c4c3f4-87c2-4d52-8c1e-59a396476a4b req-4ee2c41b-3677-47f0-8695-6b58f43e80a2 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] Received event network-vif-plugged-3af9ad9d-906f-4ed9-92cc-783df6775a8e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:20:29 np0005601226 nova_compute[239456]: 2026-01-29 17:20:29.254 239460 DEBUG oslo_concurrency.lockutils [req-21c4c3f4-87c2-4d52-8c1e-59a396476a4b req-4ee2c41b-3677-47f0-8695-6b58f43e80a2 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "dca948a3-675d-4cc4-a21b-c2f72cbe307e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:20:29 np0005601226 nova_compute[239456]: 2026-01-29 17:20:29.254 239460 DEBUG oslo_concurrency.lockutils [req-21c4c3f4-87c2-4d52-8c1e-59a396476a4b req-4ee2c41b-3677-47f0-8695-6b58f43e80a2 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "dca948a3-675d-4cc4-a21b-c2f72cbe307e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:20:29 np0005601226 nova_compute[239456]: 2026-01-29 17:20:29.255 239460 DEBUG oslo_concurrency.lockutils [req-21c4c3f4-87c2-4d52-8c1e-59a396476a4b req-4ee2c41b-3677-47f0-8695-6b58f43e80a2 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "dca948a3-675d-4cc4-a21b-c2f72cbe307e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:20:29 np0005601226 nova_compute[239456]: 2026-01-29 17:20:29.255 239460 DEBUG nova.compute.manager [req-21c4c3f4-87c2-4d52-8c1e-59a396476a4b req-4ee2c41b-3677-47f0-8695-6b58f43e80a2 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] No waiting events found dispatching network-vif-plugged-3af9ad9d-906f-4ed9-92cc-783df6775a8e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 29 12:20:29 np0005601226 nova_compute[239456]: 2026-01-29 17:20:29.255 239460 WARNING nova.compute.manager [req-21c4c3f4-87c2-4d52-8c1e-59a396476a4b req-4ee2c41b-3677-47f0-8695-6b58f43e80a2 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] Received unexpected event network-vif-plugged-3af9ad9d-906f-4ed9-92cc-783df6775a8e for instance with vm_state active and task_state deleting.#033[00m
Jan 29 12:20:29 np0005601226 nova_compute[239456]: 2026-01-29 17:20:29.529 239460 DEBUG nova.network.neutron [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Updating instance_info_cache with network_info: [{"id": "6be42760-adf3-45d0-ae0d-44d988848eb0", "address": "fa:16:3e:8f:4b:3a", "network": {"id": "765ab7c4-f6eb-4a45-8c1b-00dc61ad3441", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-718755595-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "815af3cf993b45cc8f2cdf73bf1d552c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6be42760-ad", "ovs_interfaceid": "6be42760-adf3-45d0-ae0d-44d988848eb0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:20:29 np0005601226 nova_compute[239456]: 2026-01-29 17:20:29.542 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Releasing lock "refresh_cache-d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:20:29 np0005601226 nova_compute[239456]: 2026-01-29 17:20:29.543 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 29 12:20:29 np0005601226 nova_compute[239456]: 2026-01-29 17:20:29.543 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:20:29 np0005601226 nova_compute[239456]: 2026-01-29 17:20:29.543 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:20:29 np0005601226 nova_compute[239456]: 2026-01-29 17:20:29.603 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:20:29 np0005601226 nova_compute[239456]: 2026-01-29 17:20:29.604 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:20:29 np0005601226 nova_compute[239456]: 2026-01-29 17:20:29.604 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 29 12:20:29 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1125: 305 pgs: 305 active+clean; 143 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 492 KiB/s rd, 3.2 MiB/s wr, 164 op/s
Jan 29 12:20:30 np0005601226 nova_compute[239456]: 2026-01-29 17:20:30.137 239460 DEBUG nova.network.neutron [-] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:20:30 np0005601226 nova_compute[239456]: 2026-01-29 17:20:30.169 239460 INFO nova.compute.manager [-] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] Took 1.24 seconds to deallocate network for instance.#033[00m
Jan 29 12:20:30 np0005601226 nova_compute[239456]: 2026-01-29 17:20:30.433 239460 WARNING nova.volume.cinder [None req-3af85969-4135-4c32-899f-8bb5da64a8e6 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Attachment 7b497212-8e97-447e-8737-d4bca8213809 does not exist. Ignoring.: cinderclient.exceptions.NotFound: Volume attachment could not be found with filter: attachment_id = 7b497212-8e97-447e-8737-d4bca8213809. (HTTP 404) (Request-ID: req-838c977d-fa2d-460a-82b6-051d5ffe15fb)#033[00m
Jan 29 12:20:30 np0005601226 nova_compute[239456]: 2026-01-29 17:20:30.434 239460 INFO nova.compute.manager [None req-3af85969-4135-4c32-899f-8bb5da64a8e6 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] Took 0.26 seconds to detach 1 volumes for instance.#033[00m
Jan 29 12:20:30 np0005601226 nova_compute[239456]: 2026-01-29 17:20:30.486 239460 DEBUG oslo_concurrency.lockutils [None req-3af85969-4135-4c32-899f-8bb5da64a8e6 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:20:30 np0005601226 nova_compute[239456]: 2026-01-29 17:20:30.487 239460 DEBUG oslo_concurrency.lockutils [None req-3af85969-4135-4c32-899f-8bb5da64a8e6 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:20:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:20:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4265926125' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:20:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:20:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4265926125' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:20:30 np0005601226 nova_compute[239456]: 2026-01-29 17:20:30.540 239460 DEBUG oslo_concurrency.processutils [None req-3af85969-4135-4c32-899f-8bb5da64a8e6 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:20:30 np0005601226 nova_compute[239456]: 2026-01-29 17:20:30.603 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:20:30 np0005601226 nova_compute[239456]: 2026-01-29 17:20:30.968 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:20:31 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:20:31 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/562614458' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:20:31 np0005601226 nova_compute[239456]: 2026-01-29 17:20:31.075 239460 DEBUG oslo_concurrency.processutils [None req-3af85969-4135-4c32-899f-8bb5da64a8e6 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:20:31 np0005601226 nova_compute[239456]: 2026-01-29 17:20:31.082 239460 DEBUG nova.compute.provider_tree [None req-3af85969-4135-4c32-899f-8bb5da64a8e6 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:20:31 np0005601226 nova_compute[239456]: 2026-01-29 17:20:31.099 239460 DEBUG nova.scheduler.client.report [None req-3af85969-4135-4c32-899f-8bb5da64a8e6 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:20:31 np0005601226 nova_compute[239456]: 2026-01-29 17:20:31.119 239460 DEBUG oslo_concurrency.lockutils [None req-3af85969-4135-4c32-899f-8bb5da64a8e6 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.632s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:20:31 np0005601226 nova_compute[239456]: 2026-01-29 17:20:31.141 239460 INFO nova.scheduler.client.report [None req-3af85969-4135-4c32-899f-8bb5da64a8e6 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Deleted allocations for instance dca948a3-675d-4cc4-a21b-c2f72cbe307e#033[00m
Jan 29 12:20:31 np0005601226 nova_compute[239456]: 2026-01-29 17:20:31.199 239460 DEBUG oslo_concurrency.lockutils [None req-3af85969-4135-4c32-899f-8bb5da64a8e6 aa90bbad088947a2a9866efeb934031e 7140162c4cd744d38e65ad1bcdadf016 - - default default] Lock "dca948a3-675d-4cc4-a21b-c2f72cbe307e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.498s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:20:31 np0005601226 nova_compute[239456]: 2026-01-29 17:20:31.329 239460 DEBUG nova.compute.manager [req-84b8627f-a062-4359-929c-ca1e99116e46 req-9a660ac7-d3ca-46a8-af51-badbb4538368 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] Received event network-vif-deleted-3af9ad9d-906f-4ed9-92cc-783df6775a8e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:20:31 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1126: 305 pgs: 305 active+clean; 121 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 490 KiB/s rd, 2.5 MiB/s wr, 174 op/s
Jan 29 12:20:31 np0005601226 ceph-osd[86917]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 29 12:20:31 np0005601226 ceph-osd[86917]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.2 total, 600.0 interval#012Cumulative writes: 11K writes, 51K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s#012Cumulative WAL: 11K writes, 3273 syncs, 3.64 writes per sync, written: 0.03 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4895 writes, 21K keys, 4895 commit groups, 1.0 writes per commit group, ingest: 11.75 MB, 0.02 MB/s#012Interval WAL: 4895 writes, 1988 syncs, 2.46 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 29 12:20:32 np0005601226 nova_compute[239456]: 2026-01-29 17:20:32.174 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:20:32 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:20:32 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e209 do_prune osdmap full prune enabled
Jan 29 12:20:32 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e210 e210: 3 total, 3 up, 3 in
Jan 29 12:20:32 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e210: 3 total, 3 up, 3 in
Jan 29 12:20:33 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1128: 305 pgs: 305 active+clean; 121 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 292 KiB/s rd, 172 KiB/s wr, 103 op/s
Jan 29 12:20:35 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e210 do_prune osdmap full prune enabled
Jan 29 12:20:35 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e211 e211: 3 total, 3 up, 3 in
Jan 29 12:20:35 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e211: 3 total, 3 up, 3 in
Jan 29 12:20:35 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1130: 305 pgs: 305 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 294 KiB/s rd, 191 KiB/s wr, 107 op/s
Jan 29 12:20:35 np0005601226 nova_compute[239456]: 2026-01-29 17:20:35.970 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:20:36 np0005601226 ceph-osd[87958]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 29 12:20:36 np0005601226 ceph-osd[87958]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.3 total, 600.0 interval#012Cumulative writes: 8694 writes, 39K keys, 8694 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s#012Cumulative WAL: 8694 writes, 2022 syncs, 4.30 writes per sync, written: 0.03 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2925 writes, 15K keys, 2925 commit groups, 1.0 writes per commit group, ingest: 7.55 MB, 0.01 MB/s#012Interval WAL: 2925 writes, 1148 syncs, 2.55 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 29 12:20:37 np0005601226 nova_compute[239456]: 2026-01-29 17:20:37.176 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:20:37 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e211 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:20:37 np0005601226 nova_compute[239456]: 2026-01-29 17:20:37.604 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:20:37 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1131: 305 pgs: 305 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 50 KiB/s wr, 35 op/s
Jan 29 12:20:38 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:20:38 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1639512695' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:20:38 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:20:38 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1639512695' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:20:39 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:20:39 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4203532102' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:20:39 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:20:39 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4203532102' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:20:39 np0005601226 ceph-mgr[75527]: [devicehealth INFO root] Check health
Jan 29 12:20:39 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1132: 305 pgs: 305 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 23 KiB/s wr, 90 op/s
Jan 29 12:20:39 np0005601226 podman[252261]: 2026-01-29 17:20:39.8731139 +0000 UTC m=+0.045562636 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 29 12:20:39 np0005601226 podman[252262]: 2026-01-29 17:20:39.906843755 +0000 UTC m=+0.079442066 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, 
org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 29 12:20:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:20:40.282 155625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:20:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:20:40.283 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:20:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:20:40.283 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:20:40 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e211 do_prune osdmap full prune enabled
Jan 29 12:20:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:20:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:20:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:20:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:20:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2026-01-29_17:20:40
Jan 29 12:20:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 12:20:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] do_upmap
Jan 29 12:20:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] pools ['volumes', 'default.rgw.meta', 'cephfs.cephfs.data', '.rgw.root', '.mgr', 'default.rgw.log', 'cephfs.cephfs.meta', 'vms', 'backups', 'images', 'default.rgw.control']
Jan 29 12:20:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 12:20:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:20:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:20:40 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e212 e212: 3 total, 3 up, 3 in
Jan 29 12:20:40 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e212: 3 total, 3 up, 3 in
Jan 29 12:20:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 12:20:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:20:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 12:20:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:20:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:20:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:20:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:20:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:20:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:20:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:20:40 np0005601226 nova_compute[239456]: 2026-01-29 17:20:40.971 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:20:41 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:20:41 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1196316387' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:20:41 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:20:41 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1196316387' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:20:41 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1134: 305 pgs: 305 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 23 KiB/s wr, 81 op/s
Jan 29 12:20:42 np0005601226 nova_compute[239456]: 2026-01-29 17:20:42.141 239460 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769707227.1403584, dca948a3-675d-4cc4-a21b-c2f72cbe307e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:20:42 np0005601226 nova_compute[239456]: 2026-01-29 17:20:42.141 239460 INFO nova.compute.manager [-] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] VM Stopped (Lifecycle Event)#033[00m
Jan 29 12:20:42 np0005601226 nova_compute[239456]: 2026-01-29 17:20:42.160 239460 DEBUG nova.compute.manager [None req-bd2f8378-c3f1-43d2-9990-d81e041ed72d - - - - - -] [instance: dca948a3-675d-4cc4-a21b-c2f72cbe307e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:20:42 np0005601226 nova_compute[239456]: 2026-01-29 17:20:42.179 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:20:42 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e212 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:20:43 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1135: 305 pgs: 305 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 79 KiB/s rd, 4.6 KiB/s wr, 104 op/s
Jan 29 12:20:45 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e212 do_prune osdmap full prune enabled
Jan 29 12:20:45 np0005601226 nova_compute[239456]: 2026-01-29 17:20:45.654 239460 DEBUG oslo_concurrency.lockutils [None req-a45582c3-575b-45c5-86ad-be45432a0bca d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Acquiring lock "d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:20:45 np0005601226 nova_compute[239456]: 2026-01-29 17:20:45.654 239460 DEBUG oslo_concurrency.lockutils [None req-a45582c3-575b-45c5-86ad-be45432a0bca d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Lock "d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:20:45 np0005601226 nova_compute[239456]: 2026-01-29 17:20:45.669 239460 DEBUG nova.objects.instance [None req-a45582c3-575b-45c5-86ad-be45432a0bca d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Lazy-loading 'flavor' on Instance uuid d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:20:45 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e213 e213: 3 total, 3 up, 3 in
Jan 29 12:20:45 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1136: 305 pgs: 305 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 71 KiB/s rd, 5.6 KiB/s wr, 96 op/s
Jan 29 12:20:45 np0005601226 nova_compute[239456]: 2026-01-29 17:20:45.687 239460 INFO nova.virt.libvirt.driver [None req-a45582c3-575b-45c5-86ad-be45432a0bca d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Ignoring supplied device name: /dev/vdb#033[00m
Jan 29 12:20:45 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e213: 3 total, 3 up, 3 in
Jan 29 12:20:45 np0005601226 nova_compute[239456]: 2026-01-29 17:20:45.701 239460 DEBUG oslo_concurrency.lockutils [None req-a45582c3-575b-45c5-86ad-be45432a0bca d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Lock "d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.047s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:20:45 np0005601226 nova_compute[239456]: 2026-01-29 17:20:45.973 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:20:46 np0005601226 nova_compute[239456]: 2026-01-29 17:20:46.101 239460 DEBUG oslo_concurrency.lockutils [None req-a45582c3-575b-45c5-86ad-be45432a0bca d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Acquiring lock "d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:20:46 np0005601226 nova_compute[239456]: 2026-01-29 17:20:46.102 239460 DEBUG oslo_concurrency.lockutils [None req-a45582c3-575b-45c5-86ad-be45432a0bca d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Lock "d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:20:46 np0005601226 nova_compute[239456]: 2026-01-29 17:20:46.102 239460 INFO nova.compute.manager [None req-a45582c3-575b-45c5-86ad-be45432a0bca d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Attaching volume c8678907-ffe6-402f-94bd-3e91b9827b5f to /dev/vdb#033[00m
Jan 29 12:20:46 np0005601226 nova_compute[239456]: 2026-01-29 17:20:46.334 239460 DEBUG os_brick.utils [None req-a45582c3-575b-45c5-86ad-be45432a0bca d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 29 12:20:46 np0005601226 nova_compute[239456]: 2026-01-29 17:20:46.335 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:20:46 np0005601226 nova_compute[239456]: 2026-01-29 17:20:46.344 249612 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:20:46 np0005601226 nova_compute[239456]: 2026-01-29 17:20:46.344 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[e1c14a2e-b3a5-4696-8ad2-df3f1e307d4c]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:20:46 np0005601226 nova_compute[239456]: 2026-01-29 17:20:46.345 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:20:46 np0005601226 nova_compute[239456]: 2026-01-29 17:20:46.351 249612 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:20:46 np0005601226 nova_compute[239456]: 2026-01-29 17:20:46.351 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[b127e348-a15d-4190-94a4-f3a51c1a41ad]: (4, ('InitiatorName=iqn.1994-05.com.redhat:29fa340538f', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:20:46 np0005601226 nova_compute[239456]: 2026-01-29 17:20:46.352 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:20:46 np0005601226 nova_compute[239456]: 2026-01-29 17:20:46.359 249612 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:20:46 np0005601226 nova_compute[239456]: 2026-01-29 17:20:46.359 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[18ff0a91-eec1-4bdf-80cf-f1ec955c7d3d]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:20:46 np0005601226 nova_compute[239456]: 2026-01-29 17:20:46.360 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[07cbbfdf-6c0f-4cbf-8088-acbf2329bbec]: (4, '3d58286e-1b14-486e-8cad-0bdb2d2969c4') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:20:46 np0005601226 nova_compute[239456]: 2026-01-29 17:20:46.361 239460 DEBUG oslo_concurrency.processutils [None req-a45582c3-575b-45c5-86ad-be45432a0bca d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:20:46 np0005601226 nova_compute[239456]: 2026-01-29 17:20:46.375 239460 DEBUG oslo_concurrency.processutils [None req-a45582c3-575b-45c5-86ad-be45432a0bca d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] CMD "nvme version" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:20:46 np0005601226 nova_compute[239456]: 2026-01-29 17:20:46.377 239460 DEBUG os_brick.initiator.connectors.lightos [None req-a45582c3-575b-45c5-86ad-be45432a0bca d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 29 12:20:46 np0005601226 nova_compute[239456]: 2026-01-29 17:20:46.378 239460 DEBUG os_brick.initiator.connectors.lightos [None req-a45582c3-575b-45c5-86ad-be45432a0bca d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 29 12:20:46 np0005601226 nova_compute[239456]: 2026-01-29 17:20:46.378 239460 DEBUG os_brick.initiator.connectors.lightos [None req-a45582c3-575b-45c5-86ad-be45432a0bca d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 29 12:20:46 np0005601226 nova_compute[239456]: 2026-01-29 17:20:46.378 239460 DEBUG os_brick.utils [None req-a45582c3-575b-45c5-86ad-be45432a0bca d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] <== get_connector_properties: return (44ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:29fa340538f', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '3d58286e-1b14-486e-8cad-0bdb2d2969c4', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 29 12:20:46 np0005601226 nova_compute[239456]: 2026-01-29 17:20:46.379 239460 DEBUG nova.virt.block_device [None req-a45582c3-575b-45c5-86ad-be45432a0bca d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Updating existing volume attachment record: a99b5dc4-9535-41f5-ac1f-1c2d8512fa4f _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 29 12:20:47 np0005601226 nova_compute[239456]: 2026-01-29 17:20:47.181 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:20:47 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:20:47 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3839533154' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:20:47 np0005601226 nova_compute[239456]: 2026-01-29 17:20:47.338 239460 DEBUG nova.objects.instance [None req-a45582c3-575b-45c5-86ad-be45432a0bca d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Lazy-loading 'flavor' on Instance uuid d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:20:47 np0005601226 nova_compute[239456]: 2026-01-29 17:20:47.360 239460 DEBUG nova.virt.libvirt.driver [None req-a45582c3-575b-45c5-86ad-be45432a0bca d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Attempting to attach volume c8678907-ffe6-402f-94bd-3e91b9827b5f with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Jan 29 12:20:47 np0005601226 nova_compute[239456]: 2026-01-29 17:20:47.364 239460 DEBUG nova.virt.libvirt.guest [None req-a45582c3-575b-45c5-86ad-be45432a0bca d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] attach device xml: <disk type="network" device="disk">
Jan 29 12:20:47 np0005601226 nova_compute[239456]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 29 12:20:47 np0005601226 nova_compute[239456]:  <source protocol="rbd" name="volumes/volume-c8678907-ffe6-402f-94bd-3e91b9827b5f">
Jan 29 12:20:47 np0005601226 nova_compute[239456]:    <host name="192.168.122.100" port="6789"/>
Jan 29 12:20:47 np0005601226 nova_compute[239456]:  </source>
Jan 29 12:20:47 np0005601226 nova_compute[239456]:  <auth username="openstack">
Jan 29 12:20:47 np0005601226 nova_compute[239456]:    <secret type="ceph" uuid="cc5c72e3-31e0-58b9-8731-456117d38f4a"/>
Jan 29 12:20:47 np0005601226 nova_compute[239456]:  </auth>
Jan 29 12:20:47 np0005601226 nova_compute[239456]:  <target dev="vdb" bus="virtio"/>
Jan 29 12:20:47 np0005601226 nova_compute[239456]:  <serial>c8678907-ffe6-402f-94bd-3e91b9827b5f</serial>
Jan 29 12:20:47 np0005601226 nova_compute[239456]: </disk>
Jan 29 12:20:47 np0005601226 nova_compute[239456]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Jan 29 12:20:47 np0005601226 nova_compute[239456]: 2026-01-29 17:20:47.441 239460 DEBUG nova.virt.libvirt.driver [None req-a45582c3-575b-45c5-86ad-be45432a0bca d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:20:47 np0005601226 nova_compute[239456]: 2026-01-29 17:20:47.442 239460 DEBUG nova.virt.libvirt.driver [None req-a45582c3-575b-45c5-86ad-be45432a0bca d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:20:47 np0005601226 nova_compute[239456]: 2026-01-29 17:20:47.442 239460 DEBUG nova.virt.libvirt.driver [None req-a45582c3-575b-45c5-86ad-be45432a0bca d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:20:47 np0005601226 nova_compute[239456]: 2026-01-29 17:20:47.442 239460 DEBUG nova.virt.libvirt.driver [None req-a45582c3-575b-45c5-86ad-be45432a0bca d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] No VIF found with MAC fa:16:3e:8f:4b:3a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 29 12:20:47 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:20:47 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e213 do_prune osdmap full prune enabled
Jan 29 12:20:47 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e214 e214: 3 total, 3 up, 3 in
Jan 29 12:20:47 np0005601226 nova_compute[239456]: 2026-01-29 17:20:47.612 239460 DEBUG oslo_concurrency.lockutils [None req-a45582c3-575b-45c5-86ad-be45432a0bca d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Lock "d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.511s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:20:47 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e214: 3 total, 3 up, 3 in
Jan 29 12:20:47 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1139: 305 pgs: 305 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 3.8 KiB/s wr, 48 op/s
Jan 29 12:20:48 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:20:48 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3705441273' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:20:48 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:20:48 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3705441273' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:20:49 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:20:49 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2851349279' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:20:49 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e214 do_prune osdmap full prune enabled
Jan 29 12:20:49 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e215 e215: 3 total, 3 up, 3 in
Jan 29 12:20:49 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e215: 3 total, 3 up, 3 in
Jan 29 12:20:49 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1141: 305 pgs: 305 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 5.8 KiB/s wr, 73 op/s
Jan 29 12:20:50 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e215 do_prune osdmap full prune enabled
Jan 29 12:20:50 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e216 e216: 3 total, 3 up, 3 in
Jan 29 12:20:50 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e216: 3 total, 3 up, 3 in
Jan 29 12:20:50 np0005601226 nova_compute[239456]: 2026-01-29 17:20:50.975 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:20:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 12:20:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:20:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 12:20:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:20:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007606565373991976 of space, bias 1.0, pg target 0.22819696121975927 quantized to 32 (current 32)
Jan 29 12:20:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:20:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 6.729064373064299e-06 of space, bias 1.0, pg target 0.00201871931191929 quantized to 32 (current 32)
Jan 29 12:20:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:20:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.2163896381199076e-07 of space, bias 1.0, pg target 3.649168914359723e-05 quantized to 32 (current 32)
Jan 29 12:20:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:20:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006660124500600312 of space, bias 1.0, pg target 0.19980373501800938 quantized to 32 (current 32)
Jan 29 12:20:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:20:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.6133347595934214e-06 of space, bias 4.0, pg target 0.0019360017115121057 quantized to 16 (current 16)
Jan 29 12:20:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:20:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:20:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:20:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 29 12:20:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:20:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 29 12:20:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:20:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:20:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:20:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 29 12:20:51 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1143: 305 pgs: 305 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 46 KiB/s rd, 3.0 KiB/s wr, 61 op/s
Jan 29 12:20:52 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:20:52 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/610473001' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:20:52 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:20:52 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/610473001' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:20:52 np0005601226 nova_compute[239456]: 2026-01-29 17:20:52.185 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:20:52 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:20:52 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e216 do_prune osdmap full prune enabled
Jan 29 12:20:52 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e217 e217: 3 total, 3 up, 3 in
Jan 29 12:20:52 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e217: 3 total, 3 up, 3 in
Jan 29 12:20:53 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:20:53 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3273262997' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:20:53 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:20:53 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3273262997' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:20:53 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1145: 305 pgs: 305 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 85 KiB/s rd, 4.7 KiB/s wr, 112 op/s
Jan 29 12:20:53 np0005601226 nova_compute[239456]: 2026-01-29 17:20:53.873 239460 DEBUG oslo_concurrency.lockutils [None req-c624be7c-ddf7-4dfa-96ac-d42013919e2a d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Acquiring lock "d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:20:53 np0005601226 nova_compute[239456]: 2026-01-29 17:20:53.873 239460 DEBUG oslo_concurrency.lockutils [None req-c624be7c-ddf7-4dfa-96ac-d42013919e2a d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Lock "d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:20:53 np0005601226 nova_compute[239456]: 2026-01-29 17:20:53.888 239460 INFO nova.compute.manager [None req-c624be7c-ddf7-4dfa-96ac-d42013919e2a d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Detaching volume c8678907-ffe6-402f-94bd-3e91b9827b5f#033[00m
Jan 29 12:20:54 np0005601226 nova_compute[239456]: 2026-01-29 17:20:54.237 239460 INFO nova.virt.block_device [None req-c624be7c-ddf7-4dfa-96ac-d42013919e2a d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Attempting to driver detach volume c8678907-ffe6-402f-94bd-3e91b9827b5f from mountpoint /dev/vdb#033[00m
Jan 29 12:20:54 np0005601226 nova_compute[239456]: 2026-01-29 17:20:54.246 239460 DEBUG nova.virt.libvirt.driver [None req-c624be7c-ddf7-4dfa-96ac-d42013919e2a d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Attempting to detach device vdb from instance d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Jan 29 12:20:54 np0005601226 nova_compute[239456]: 2026-01-29 17:20:54.247 239460 DEBUG nova.virt.libvirt.guest [None req-c624be7c-ddf7-4dfa-96ac-d42013919e2a d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] detach device xml: <disk type="network" device="disk">
Jan 29 12:20:54 np0005601226 nova_compute[239456]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 29 12:20:54 np0005601226 nova_compute[239456]:  <source protocol="rbd" name="volumes/volume-c8678907-ffe6-402f-94bd-3e91b9827b5f">
Jan 29 12:20:54 np0005601226 nova_compute[239456]:    <host name="192.168.122.100" port="6789"/>
Jan 29 12:20:54 np0005601226 nova_compute[239456]:  </source>
Jan 29 12:20:54 np0005601226 nova_compute[239456]:  <target dev="vdb" bus="virtio"/>
Jan 29 12:20:54 np0005601226 nova_compute[239456]:  <serial>c8678907-ffe6-402f-94bd-3e91b9827b5f</serial>
Jan 29 12:20:54 np0005601226 nova_compute[239456]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 29 12:20:54 np0005601226 nova_compute[239456]: </disk>
Jan 29 12:20:54 np0005601226 nova_compute[239456]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 29 12:20:54 np0005601226 nova_compute[239456]: 2026-01-29 17:20:54.253 239460 INFO nova.virt.libvirt.driver [None req-c624be7c-ddf7-4dfa-96ac-d42013919e2a d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Successfully detached device vdb from instance d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455 from the persistent domain config.#033[00m
Jan 29 12:20:54 np0005601226 nova_compute[239456]: 2026-01-29 17:20:54.253 239460 DEBUG nova.virt.libvirt.driver [None req-c624be7c-ddf7-4dfa-96ac-d42013919e2a d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Jan 29 12:20:54 np0005601226 nova_compute[239456]: 2026-01-29 17:20:54.254 239460 DEBUG nova.virt.libvirt.guest [None req-c624be7c-ddf7-4dfa-96ac-d42013919e2a d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] detach device xml: <disk type="network" device="disk">
Jan 29 12:20:54 np0005601226 nova_compute[239456]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 29 12:20:54 np0005601226 nova_compute[239456]:  <source protocol="rbd" name="volumes/volume-c8678907-ffe6-402f-94bd-3e91b9827b5f">
Jan 29 12:20:54 np0005601226 nova_compute[239456]:    <host name="192.168.122.100" port="6789"/>
Jan 29 12:20:54 np0005601226 nova_compute[239456]:  </source>
Jan 29 12:20:54 np0005601226 nova_compute[239456]:  <target dev="vdb" bus="virtio"/>
Jan 29 12:20:54 np0005601226 nova_compute[239456]:  <serial>c8678907-ffe6-402f-94bd-3e91b9827b5f</serial>
Jan 29 12:20:54 np0005601226 nova_compute[239456]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 29 12:20:54 np0005601226 nova_compute[239456]: </disk>
Jan 29 12:20:54 np0005601226 nova_compute[239456]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 29 12:20:54 np0005601226 nova_compute[239456]: 2026-01-29 17:20:54.352 239460 DEBUG nova.virt.libvirt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Received event <DeviceRemovedEvent: 1769707254.3517559, d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Jan 29 12:20:54 np0005601226 nova_compute[239456]: 2026-01-29 17:20:54.353 239460 DEBUG nova.virt.libvirt.driver [None req-c624be7c-ddf7-4dfa-96ac-d42013919e2a d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Jan 29 12:20:54 np0005601226 nova_compute[239456]: 2026-01-29 17:20:54.355 239460 INFO nova.virt.libvirt.driver [None req-c624be7c-ddf7-4dfa-96ac-d42013919e2a d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Successfully detached device vdb from instance d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455 from the live domain config.#033[00m
Jan 29 12:20:54 np0005601226 nova_compute[239456]: 2026-01-29 17:20:54.554 239460 DEBUG nova.objects.instance [None req-c624be7c-ddf7-4dfa-96ac-d42013919e2a d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Lazy-loading 'flavor' on Instance uuid d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:20:54 np0005601226 nova_compute[239456]: 2026-01-29 17:20:54.600 239460 DEBUG oslo_concurrency.lockutils [None req-c624be7c-ddf7-4dfa-96ac-d42013919e2a d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Lock "d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.727s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:20:55 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 12:20:55 np0005601226 nova_compute[239456]: 2026-01-29 17:20:55.280 239460 DEBUG oslo_concurrency.lockutils [None req-3dd8a2e4-934e-498a-b525-8097dd5c1b01 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Acquiring lock "d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:20:55 np0005601226 nova_compute[239456]: 2026-01-29 17:20:55.281 239460 DEBUG oslo_concurrency.lockutils [None req-3dd8a2e4-934e-498a-b525-8097dd5c1b01 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Lock "d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:20:55 np0005601226 nova_compute[239456]: 2026-01-29 17:20:55.281 239460 DEBUG oslo_concurrency.lockutils [None req-3dd8a2e4-934e-498a-b525-8097dd5c1b01 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Acquiring lock "d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:20:55 np0005601226 nova_compute[239456]: 2026-01-29 17:20:55.282 239460 DEBUG oslo_concurrency.lockutils [None req-3dd8a2e4-934e-498a-b525-8097dd5c1b01 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Lock "d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:20:55 np0005601226 nova_compute[239456]: 2026-01-29 17:20:55.282 239460 DEBUG oslo_concurrency.lockutils [None req-3dd8a2e4-934e-498a-b525-8097dd5c1b01 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Lock "d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:20:55 np0005601226 nova_compute[239456]: 2026-01-29 17:20:55.283 239460 INFO nova.compute.manager [None req-3dd8a2e4-934e-498a-b525-8097dd5c1b01 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Terminating instance#033[00m
Jan 29 12:20:55 np0005601226 nova_compute[239456]: 2026-01-29 17:20:55.284 239460 DEBUG nova.compute.manager [None req-3dd8a2e4-934e-498a-b525-8097dd5c1b01 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 29 12:20:55 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:20:55 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 12:20:55 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:20:55 np0005601226 kernel: tap6be42760-ad (unregistering): left promiscuous mode
Jan 29 12:20:55 np0005601226 NetworkManager[49020]: <info>  [1769707255.4798] device (tap6be42760-ad): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 29 12:20:55 np0005601226 nova_compute[239456]: 2026-01-29 17:20:55.479 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:20:55 np0005601226 ovn_controller[145556]: 2026-01-29T17:20:55Z|00076|binding|INFO|Releasing lport 6be42760-adf3-45d0-ae0d-44d988848eb0 from this chassis (sb_readonly=0)
Jan 29 12:20:55 np0005601226 nova_compute[239456]: 2026-01-29 17:20:55.488 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:20:55 np0005601226 ovn_controller[145556]: 2026-01-29T17:20:55Z|00077|binding|INFO|Setting lport 6be42760-adf3-45d0-ae0d-44d988848eb0 down in Southbound
Jan 29 12:20:55 np0005601226 ovn_controller[145556]: 2026-01-29T17:20:55Z|00078|binding|INFO|Removing iface tap6be42760-ad ovn-installed in OVS
Jan 29 12:20:55 np0005601226 nova_compute[239456]: 2026-01-29 17:20:55.490 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:20:55 np0005601226 nova_compute[239456]: 2026-01-29 17:20:55.494 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:20:55 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:20:55.494 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8f:4b:3a 10.100.0.6'], port_security=['fa:16:3e:8f:4b:3a 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'd64d6fd1-4f7b-4765-8b1c-1b7e6d42c455', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-765ab7c4-f6eb-4a45-8c1b-00dc61ad3441', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '815af3cf993b45cc8f2cdf73bf1d552c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '08229a17-4a48-4b26-bd20-8db0c8a3185f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.176'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3ddf8c3b-2084-4923-8e76-31ca07b64cbd, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], logical_port=6be42760-adf3-45d0-ae0d-44d988848eb0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:20:55 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:20:55.496 155625 INFO neutron.agent.ovn.metadata.agent [-] Port 6be42760-adf3-45d0-ae0d-44d988848eb0 in datapath 765ab7c4-f6eb-4a45-8c1b-00dc61ad3441 unbound from our chassis#033[00m
Jan 29 12:20:55 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:20:55.497 155625 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 765ab7c4-f6eb-4a45-8c1b-00dc61ad3441, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 29 12:20:55 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:20:55.497 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[d8261ee1-3d3a-4fea-87c4-4fbe974daed4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:20:55 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:20:55.498 155625 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-765ab7c4-f6eb-4a45-8c1b-00dc61ad3441 namespace which is not needed anymore#033[00m
Jan 29 12:20:55 np0005601226 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Deactivated successfully.
Jan 29 12:20:55 np0005601226 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Consumed 13.325s CPU time.
Jan 29 12:20:55 np0005601226 systemd-machined[207561]: Machine qemu-6-instance-00000006 terminated.
Jan 29 12:20:55 np0005601226 neutron-haproxy-ovnmeta-765ab7c4-f6eb-4a45-8c1b-00dc61ad3441[252024]: [NOTICE]   (252049) : haproxy version is 2.8.14-c23fe91
Jan 29 12:20:55 np0005601226 neutron-haproxy-ovnmeta-765ab7c4-f6eb-4a45-8c1b-00dc61ad3441[252024]: [NOTICE]   (252049) : path to executable is /usr/sbin/haproxy
Jan 29 12:20:55 np0005601226 neutron-haproxy-ovnmeta-765ab7c4-f6eb-4a45-8c1b-00dc61ad3441[252024]: [WARNING]  (252049) : Exiting Master process...
Jan 29 12:20:55 np0005601226 neutron-haproxy-ovnmeta-765ab7c4-f6eb-4a45-8c1b-00dc61ad3441[252024]: [ALERT]    (252049) : Current worker (252055) exited with code 143 (Terminated)
Jan 29 12:20:55 np0005601226 neutron-haproxy-ovnmeta-765ab7c4-f6eb-4a45-8c1b-00dc61ad3441[252024]: [WARNING]  (252049) : All workers exited. Exiting... (0)
Jan 29 12:20:55 np0005601226 systemd[1]: libpod-aaf77e603efec493bafad2d2aa04ed17c59ebc543de9909e0139eabe04979619.scope: Deactivated successfully.
Jan 29 12:20:55 np0005601226 podman[252476]: 2026-01-29 17:20:55.63692328 +0000 UTC m=+0.068269243 container died aaf77e603efec493bafad2d2aa04ed17c59ebc543de9909e0139eabe04979619 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-765ab7c4-f6eb-4a45-8c1b-00dc61ad3441, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 29 12:20:55 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1146: 305 pgs: 305 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 114 KiB/s rd, 8.8 KiB/s wr, 151 op/s
Jan 29 12:20:55 np0005601226 systemd[1]: var-lib-containers-storage-overlay-213c63dfbedd898219df26a3552dda46baf1b98afc2992dd32493a5268a83008-merged.mount: Deactivated successfully.
Jan 29 12:20:55 np0005601226 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-aaf77e603efec493bafad2d2aa04ed17c59ebc543de9909e0139eabe04979619-userdata-shm.mount: Deactivated successfully.
Jan 29 12:20:55 np0005601226 podman[252476]: 2026-01-29 17:20:55.70256565 +0000 UTC m=+0.133911613 container cleanup aaf77e603efec493bafad2d2aa04ed17c59ebc543de9909e0139eabe04979619 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-765ab7c4-f6eb-4a45-8c1b-00dc61ad3441, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 29 12:20:55 np0005601226 nova_compute[239456]: 2026-01-29 17:20:55.704 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:20:55 np0005601226 systemd[1]: libpod-conmon-aaf77e603efec493bafad2d2aa04ed17c59ebc543de9909e0139eabe04979619.scope: Deactivated successfully.
Jan 29 12:20:55 np0005601226 nova_compute[239456]: 2026-01-29 17:20:55.709 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:20:55 np0005601226 nova_compute[239456]: 2026-01-29 17:20:55.725 239460 INFO nova.virt.libvirt.driver [-] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Instance destroyed successfully.#033[00m
Jan 29 12:20:55 np0005601226 nova_compute[239456]: 2026-01-29 17:20:55.726 239460 DEBUG nova.objects.instance [None req-3dd8a2e4-934e-498a-b525-8097dd5c1b01 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Lazy-loading 'resources' on Instance uuid d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:20:55 np0005601226 nova_compute[239456]: 2026-01-29 17:20:55.742 239460 DEBUG nova.virt.libvirt.vif [None req-3dd8a2e4-934e-498a-b525-8097dd5c1b01 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-29T17:20:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-1126750815',display_name='tempest-VolumesBackupsTest-instance-1126750815',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-1126750815',id=6,image_ref='71879218-5462-43bb-aba6-6319695b24fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMzDsCY+7iLKyxKR/RPqyhuejs3RxupkCpjwrcLLN6bwiFn7asDIiuGZ3fgfzWQBWbR6PuAecg7zh1hlNNafsXWsMe0hZXYH/C8lEs9aP+WdD0oobkGb2HMs4pRlFxTogQ==',key_name='tempest-keypair-2099588373',keypairs=<?>,launch_index=0,launched_at=2026-01-29T17:20:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='815af3cf993b45cc8f2cdf73bf1d552c',ramdisk_id='',reservation_id='r-1fm2umih',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='71879218-5462-43bb-aba6-6319695b24fd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesBackupsTest-2142983406',owner_user_name='tempest-VolumesBackupsTest-2142983406-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-29T17:20:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d3463a84af564b968e67b687bc895548',uuid=d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6be42760-adf3-45d0-ae0d-44d988848eb0", "address": "fa:16:3e:8f:4b:3a", "network": {"id": "765ab7c4-f6eb-4a45-8c1b-00dc61ad3441", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-718755595-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "815af3cf993b45cc8f2cdf73bf1d552c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6be42760-ad", "ovs_interfaceid": "6be42760-adf3-45d0-ae0d-44d988848eb0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 29 12:20:55 np0005601226 nova_compute[239456]: 2026-01-29 17:20:55.743 239460 DEBUG nova.network.os_vif_util [None req-3dd8a2e4-934e-498a-b525-8097dd5c1b01 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Converting VIF {"id": "6be42760-adf3-45d0-ae0d-44d988848eb0", "address": "fa:16:3e:8f:4b:3a", "network": {"id": "765ab7c4-f6eb-4a45-8c1b-00dc61ad3441", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-718755595-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "815af3cf993b45cc8f2cdf73bf1d552c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6be42760-ad", "ovs_interfaceid": "6be42760-adf3-45d0-ae0d-44d988848eb0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 29 12:20:55 np0005601226 nova_compute[239456]: 2026-01-29 17:20:55.744 239460 DEBUG nova.network.os_vif_util [None req-3dd8a2e4-934e-498a-b525-8097dd5c1b01 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8f:4b:3a,bridge_name='br-int',has_traffic_filtering=True,id=6be42760-adf3-45d0-ae0d-44d988848eb0,network=Network(765ab7c4-f6eb-4a45-8c1b-00dc61ad3441),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6be42760-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 29 12:20:55 np0005601226 nova_compute[239456]: 2026-01-29 17:20:55.744 239460 DEBUG os_vif [None req-3dd8a2e4-934e-498a-b525-8097dd5c1b01 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8f:4b:3a,bridge_name='br-int',has_traffic_filtering=True,id=6be42760-adf3-45d0-ae0d-44d988848eb0,network=Network(765ab7c4-f6eb-4a45-8c1b-00dc61ad3441),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6be42760-ad') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 29 12:20:55 np0005601226 nova_compute[239456]: 2026-01-29 17:20:55.746 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:20:55 np0005601226 nova_compute[239456]: 2026-01-29 17:20:55.746 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6be42760-ad, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:20:55 np0005601226 nova_compute[239456]: 2026-01-29 17:20:55.748 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:20:55 np0005601226 nova_compute[239456]: 2026-01-29 17:20:55.750 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 29 12:20:55 np0005601226 nova_compute[239456]: 2026-01-29 17:20:55.751 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:20:55 np0005601226 nova_compute[239456]: 2026-01-29 17:20:55.753 239460 INFO os_vif [None req-3dd8a2e4-934e-498a-b525-8097dd5c1b01 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8f:4b:3a,bridge_name='br-int',has_traffic_filtering=True,id=6be42760-adf3-45d0-ae0d-44d988848eb0,network=Network(765ab7c4-f6eb-4a45-8c1b-00dc61ad3441),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6be42760-ad')#033[00m
Jan 29 12:20:55 np0005601226 podman[252512]: 2026-01-29 17:20:55.76855907 +0000 UTC m=+0.048396524 container remove aaf77e603efec493bafad2d2aa04ed17c59ebc543de9909e0139eabe04979619 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-765ab7c4-f6eb-4a45-8c1b-00dc61ad3441, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:20:55 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:20:55.771 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[698f3adf-d6bf-41c2-8b15-bf73f8a3849c]: (4, ('Thu Jan 29 05:20:55 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-765ab7c4-f6eb-4a45-8c1b-00dc61ad3441 (aaf77e603efec493bafad2d2aa04ed17c59ebc543de9909e0139eabe04979619)\naaf77e603efec493bafad2d2aa04ed17c59ebc543de9909e0139eabe04979619\nThu Jan 29 05:20:55 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-765ab7c4-f6eb-4a45-8c1b-00dc61ad3441 (aaf77e603efec493bafad2d2aa04ed17c59ebc543de9909e0139eabe04979619)\naaf77e603efec493bafad2d2aa04ed17c59ebc543de9909e0139eabe04979619\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:20:55 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:20:55.773 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[1deb38f4-3778-4988-af3c-6fc32430b710]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:20:55 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:20:55.775 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap765ab7c4-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:20:55 np0005601226 kernel: tap765ab7c4-f0: left promiscuous mode
Jan 29 12:20:55 np0005601226 nova_compute[239456]: 2026-01-29 17:20:55.779 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:20:55 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:20:55.781 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[328d33d0-b739-4647-889a-6fe0a7a3c374]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:20:55 np0005601226 nova_compute[239456]: 2026-01-29 17:20:55.784 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:20:55 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:20:55.790 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[488c7dc9-d2d7-4c47-ac7b-d010b5c3a0c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:20:55 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:20:55.792 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[09088fd4-4b04-49a7-8c68-95bc5fbff556]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:20:55 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:20:55.805 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[fbcceeab-18f7-4ffc-9521-4e7862dc1198]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 456401, 'reachable_time': 41809, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252557, 'error': None, 'target': 'ovnmeta-765ab7c4-f6eb-4a45-8c1b-00dc61ad3441', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:20:55 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:20:55.807 156164 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-765ab7c4-f6eb-4a45-8c1b-00dc61ad3441 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 29 12:20:55 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:20:55.807 156164 DEBUG oslo.privsep.daemon [-] privsep: reply[db0d9bed-e0bb-4a88-9a95-b41985c58f60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:20:55 np0005601226 systemd[1]: run-netns-ovnmeta\x2d765ab7c4\x2df6eb\x2d4a45\x2d8c1b\x2d00dc61ad3441.mount: Deactivated successfully.
Jan 29 12:20:55 np0005601226 nova_compute[239456]: 2026-01-29 17:20:55.854 239460 DEBUG nova.compute.manager [req-ef91af36-9f4e-4308-82d3-afb76939fd3b req-8e212633-2fff-41db-b470-19cf056daf8d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Received event network-vif-unplugged-6be42760-adf3-45d0-ae0d-44d988848eb0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:20:55 np0005601226 nova_compute[239456]: 2026-01-29 17:20:55.855 239460 DEBUG oslo_concurrency.lockutils [req-ef91af36-9f4e-4308-82d3-afb76939fd3b req-8e212633-2fff-41db-b470-19cf056daf8d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:20:55 np0005601226 nova_compute[239456]: 2026-01-29 17:20:55.855 239460 DEBUG oslo_concurrency.lockutils [req-ef91af36-9f4e-4308-82d3-afb76939fd3b req-8e212633-2fff-41db-b470-19cf056daf8d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:20:55 np0005601226 nova_compute[239456]: 2026-01-29 17:20:55.855 239460 DEBUG oslo_concurrency.lockutils [req-ef91af36-9f4e-4308-82d3-afb76939fd3b req-8e212633-2fff-41db-b470-19cf056daf8d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:20:55 np0005601226 nova_compute[239456]: 2026-01-29 17:20:55.855 239460 DEBUG nova.compute.manager [req-ef91af36-9f4e-4308-82d3-afb76939fd3b req-8e212633-2fff-41db-b470-19cf056daf8d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] No waiting events found dispatching network-vif-unplugged-6be42760-adf3-45d0-ae0d-44d988848eb0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 29 12:20:55 np0005601226 nova_compute[239456]: 2026-01-29 17:20:55.855 239460 DEBUG nova.compute.manager [req-ef91af36-9f4e-4308-82d3-afb76939fd3b req-8e212633-2fff-41db-b470-19cf056daf8d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Received event network-vif-unplugged-6be42760-adf3-45d0-ae0d-44d988848eb0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 29 12:20:55 np0005601226 nova_compute[239456]: 2026-01-29 17:20:55.977 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:20:56 np0005601226 nova_compute[239456]: 2026-01-29 17:20:56.063 239460 INFO nova.virt.libvirt.driver [None req-3dd8a2e4-934e-498a-b525-8097dd5c1b01 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Deleting instance files /var/lib/nova/instances/d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455_del#033[00m
Jan 29 12:20:56 np0005601226 nova_compute[239456]: 2026-01-29 17:20:56.064 239460 INFO nova.virt.libvirt.driver [None req-3dd8a2e4-934e-498a-b525-8097dd5c1b01 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Deletion of /var/lib/nova/instances/d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455_del complete#033[00m
Jan 29 12:20:56 np0005601226 nova_compute[239456]: 2026-01-29 17:20:56.107 239460 INFO nova.compute.manager [None req-3dd8a2e4-934e-498a-b525-8097dd5c1b01 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Took 0.82 seconds to destroy the instance on the hypervisor.#033[00m
Jan 29 12:20:56 np0005601226 nova_compute[239456]: 2026-01-29 17:20:56.107 239460 DEBUG oslo.service.loopingcall [None req-3dd8a2e4-934e-498a-b525-8097dd5c1b01 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 29 12:20:56 np0005601226 nova_compute[239456]: 2026-01-29 17:20:56.108 239460 DEBUG nova.compute.manager [-] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 29 12:20:56 np0005601226 nova_compute[239456]: 2026-01-29 17:20:56.108 239460 DEBUG nova.network.neutron [-] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 29 12:20:56 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 12:20:56 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:20:56 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 12:20:56 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:20:56 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:20:56 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:20:56 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:20:56 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:20:56 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:20:56.698 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '7a:bb:ca', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ee:2a:91:86:08:da'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:20:56 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:20:56.699 155625 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 29 12:20:56 np0005601226 nova_compute[239456]: 2026-01-29 17:20:56.743 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:20:56 np0005601226 podman[252707]: 2026-01-29 17:20:56.844229384 +0000 UTC m=+0.146468764 container create b204321def91eb46b273659d9ca2626dc950bdca3708503931e9c52f6b21018b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_aryabhata, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 29 12:20:56 np0005601226 podman[252707]: 2026-01-29 17:20:56.755410266 +0000 UTC m=+0.057649666 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:20:56 np0005601226 nova_compute[239456]: 2026-01-29 17:20:56.908 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:20:56 np0005601226 systemd[1]: Started libpod-conmon-b204321def91eb46b273659d9ca2626dc950bdca3708503931e9c52f6b21018b.scope.
Jan 29 12:20:56 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:20:56 np0005601226 nova_compute[239456]: 2026-01-29 17:20:56.948 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:20:56 np0005601226 podman[252707]: 2026-01-29 17:20:56.982054303 +0000 UTC m=+0.284293713 container init b204321def91eb46b273659d9ca2626dc950bdca3708503931e9c52f6b21018b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_aryabhata, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:20:56 np0005601226 podman[252707]: 2026-01-29 17:20:56.987734077 +0000 UTC m=+0.289973457 container start b204321def91eb46b273659d9ca2626dc950bdca3708503931e9c52f6b21018b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:20:56 np0005601226 angry_aryabhata[252723]: 167 167
Jan 29 12:20:56 np0005601226 systemd[1]: libpod-b204321def91eb46b273659d9ca2626dc950bdca3708503931e9c52f6b21018b.scope: Deactivated successfully.
Jan 29 12:20:57 np0005601226 nova_compute[239456]: 2026-01-29 17:20:57.227 239460 DEBUG nova.network.neutron [-] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:20:57 np0005601226 nova_compute[239456]: 2026-01-29 17:20:57.247 239460 INFO nova.compute.manager [-] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Took 1.14 seconds to deallocate network for instance.#033[00m
Jan 29 12:20:57 np0005601226 nova_compute[239456]: 2026-01-29 17:20:57.290 239460 DEBUG oslo_concurrency.lockutils [None req-3dd8a2e4-934e-498a-b525-8097dd5c1b01 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:20:57 np0005601226 nova_compute[239456]: 2026-01-29 17:20:57.290 239460 DEBUG oslo_concurrency.lockutils [None req-3dd8a2e4-934e-498a-b525-8097dd5c1b01 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:20:57 np0005601226 nova_compute[239456]: 2026-01-29 17:20:57.344 239460 DEBUG nova.compute.manager [req-d1adb3f1-8b2a-4ce4-b3a3-11b57f702632 req-3a0ccf53-9c0b-4ca1-bf8c-2be32043c2a2 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Received event network-vif-deleted-6be42760-adf3-45d0-ae0d-44d988848eb0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:20:57 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1147: 305 pgs: 305 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 85 KiB/s rd, 6.6 KiB/s wr, 114 op/s
Jan 29 12:20:57 np0005601226 nova_compute[239456]: 2026-01-29 17:20:57.881 239460 DEBUG oslo_concurrency.processutils [None req-3dd8a2e4-934e-498a-b525-8097dd5c1b01 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:20:57 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e217 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:20:57 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e217 do_prune osdmap full prune enabled
Jan 29 12:20:57 np0005601226 podman[252707]: 2026-01-29 17:20:57.961161908 +0000 UTC m=+1.263401288 container attach b204321def91eb46b273659d9ca2626dc950bdca3708503931e9c52f6b21018b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_aryabhata, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 29 12:20:57 np0005601226 podman[252707]: 2026-01-29 17:20:57.962240628 +0000 UTC m=+1.264480038 container died b204321def91eb46b273659d9ca2626dc950bdca3708503931e9c52f6b21018b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_aryabhata, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Jan 29 12:20:58 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e218 e218: 3 total, 3 up, 3 in
Jan 29 12:20:58 np0005601226 systemd[1]: var-lib-containers-storage-overlay-38c1a7a7e5bd29071e07551cadd300d81304edf6ae263aa1d90cc2d721bc732f-merged.mount: Deactivated successfully.
Jan 29 12:20:58 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e218: 3 total, 3 up, 3 in
Jan 29 12:20:58 np0005601226 nova_compute[239456]: 2026-01-29 17:20:58.423 239460 DEBUG nova.compute.manager [req-f5a508eb-f0a6-4622-a0ab-67b0dec757d2 req-311f933f-0c93-4c76-9139-dcda126ae6eb 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Received event network-vif-plugged-6be42760-adf3-45d0-ae0d-44d988848eb0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:20:58 np0005601226 nova_compute[239456]: 2026-01-29 17:20:58.424 239460 DEBUG oslo_concurrency.lockutils [req-f5a508eb-f0a6-4622-a0ab-67b0dec757d2 req-311f933f-0c93-4c76-9139-dcda126ae6eb 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:20:58 np0005601226 nova_compute[239456]: 2026-01-29 17:20:58.424 239460 DEBUG oslo_concurrency.lockutils [req-f5a508eb-f0a6-4622-a0ab-67b0dec757d2 req-311f933f-0c93-4c76-9139-dcda126ae6eb 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:20:58 np0005601226 nova_compute[239456]: 2026-01-29 17:20:58.425 239460 DEBUG oslo_concurrency.lockutils [req-f5a508eb-f0a6-4622-a0ab-67b0dec757d2 req-311f933f-0c93-4c76-9139-dcda126ae6eb 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:20:58 np0005601226 nova_compute[239456]: 2026-01-29 17:20:58.425 239460 DEBUG nova.compute.manager [req-f5a508eb-f0a6-4622-a0ab-67b0dec757d2 req-311f933f-0c93-4c76-9139-dcda126ae6eb 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] No waiting events found dispatching network-vif-plugged-6be42760-adf3-45d0-ae0d-44d988848eb0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 29 12:20:58 np0005601226 nova_compute[239456]: 2026-01-29 17:20:58.425 239460 WARNING nova.compute.manager [req-f5a508eb-f0a6-4622-a0ab-67b0dec757d2 req-311f933f-0c93-4c76-9139-dcda126ae6eb 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Received unexpected event network-vif-plugged-6be42760-adf3-45d0-ae0d-44d988848eb0 for instance with vm_state deleted and task_state None.#033[00m
Jan 29 12:20:58 np0005601226 podman[252707]: 2026-01-29 17:20:58.437711333 +0000 UTC m=+1.739950713 container remove b204321def91eb46b273659d9ca2626dc950bdca3708503931e9c52f6b21018b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_aryabhata, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 29 12:20:58 np0005601226 systemd[1]: libpod-conmon-b204321def91eb46b273659d9ca2626dc950bdca3708503931e9c52f6b21018b.scope: Deactivated successfully.
Jan 29 12:20:58 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:20:58 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2236146623' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:20:58 np0005601226 nova_compute[239456]: 2026-01-29 17:20:58.607 239460 DEBUG oslo_concurrency.processutils [None req-3dd8a2e4-934e-498a-b525-8097dd5c1b01 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.726s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:20:58 np0005601226 nova_compute[239456]: 2026-01-29 17:20:58.611 239460 DEBUG nova.compute.provider_tree [None req-3dd8a2e4-934e-498a-b525-8097dd5c1b01 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:20:58 np0005601226 nova_compute[239456]: 2026-01-29 17:20:58.636 239460 DEBUG nova.scheduler.client.report [None req-3dd8a2e4-934e-498a-b525-8097dd5c1b01 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:20:58 np0005601226 nova_compute[239456]: 2026-01-29 17:20:58.664 239460 DEBUG oslo_concurrency.lockutils [None req-3dd8a2e4-934e-498a-b525-8097dd5c1b01 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.374s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:20:58 np0005601226 podman[252769]: 2026-01-29 17:20:58.569868218 +0000 UTC m=+0.018985697 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:20:58 np0005601226 nova_compute[239456]: 2026-01-29 17:20:58.708 239460 INFO nova.scheduler.client.report [None req-3dd8a2e4-934e-498a-b525-8097dd5c1b01 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Deleted allocations for instance d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455#033[00m
Jan 29 12:20:58 np0005601226 nova_compute[239456]: 2026-01-29 17:20:58.793 239460 DEBUG oslo_concurrency.lockutils [None req-3dd8a2e4-934e-498a-b525-8097dd5c1b01 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Lock "d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.511s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:20:58 np0005601226 podman[252769]: 2026-01-29 17:20:58.838476122 +0000 UTC m=+0.287593581 container create e18bddebc6670b4bc2a865756c3002c005e96e073526031b81f1e77e32ae7751 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_ride, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 29 12:20:59 np0005601226 systemd[1]: Started libpod-conmon-e18bddebc6670b4bc2a865756c3002c005e96e073526031b81f1e77e32ae7751.scope.
Jan 29 12:20:59 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:20:59 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0aac6ddb868daac0ed54e4b2543898e96954285d112efec84a271ad5969ca42/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:20:59 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0aac6ddb868daac0ed54e4b2543898e96954285d112efec84a271ad5969ca42/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:20:59 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0aac6ddb868daac0ed54e4b2543898e96954285d112efec84a271ad5969ca42/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:20:59 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0aac6ddb868daac0ed54e4b2543898e96954285d112efec84a271ad5969ca42/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:20:59 np0005601226 podman[252769]: 2026-01-29 17:20:59.252595514 +0000 UTC m=+0.701712993 container init e18bddebc6670b4bc2a865756c3002c005e96e073526031b81f1e77e32ae7751 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 29 12:20:59 np0005601226 podman[252769]: 2026-01-29 17:20:59.258376031 +0000 UTC m=+0.707493490 container start e18bddebc6670b4bc2a865756c3002c005e96e073526031b81f1e77e32ae7751 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_ride, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 29 12:20:59 np0005601226 podman[252769]: 2026-01-29 17:20:59.324494105 +0000 UTC m=+0.773611574 container attach e18bddebc6670b4bc2a865756c3002c005e96e073526031b81f1e77e32ae7751 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_ride, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:20:59 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1149: 305 pgs: 305 active+clean; 41 MiB data, 233 MiB used, 60 GiB / 60 GiB avail; 114 KiB/s rd, 8.4 KiB/s wr, 155 op/s
Jan 29 12:20:59 np0005601226 reverent_ride[252788]: [
Jan 29 12:20:59 np0005601226 reverent_ride[252788]:    {
Jan 29 12:20:59 np0005601226 reverent_ride[252788]:        "available": false,
Jan 29 12:20:59 np0005601226 reverent_ride[252788]:        "being_replaced": false,
Jan 29 12:20:59 np0005601226 reverent_ride[252788]:        "ceph_device_lvm": false,
Jan 29 12:20:59 np0005601226 reverent_ride[252788]:        "device_id": "QEMU_DVD-ROM_QM00001",
Jan 29 12:20:59 np0005601226 reverent_ride[252788]:        "lsm_data": {},
Jan 29 12:20:59 np0005601226 reverent_ride[252788]:        "lvs": [],
Jan 29 12:20:59 np0005601226 reverent_ride[252788]:        "path": "/dev/sr0",
Jan 29 12:20:59 np0005601226 reverent_ride[252788]:        "rejected_reasons": [
Jan 29 12:20:59 np0005601226 reverent_ride[252788]:            "Has a FileSystem",
Jan 29 12:20:59 np0005601226 reverent_ride[252788]:            "Insufficient space (<5GB)"
Jan 29 12:20:59 np0005601226 reverent_ride[252788]:        ],
Jan 29 12:20:59 np0005601226 reverent_ride[252788]:        "sys_api": {
Jan 29 12:20:59 np0005601226 reverent_ride[252788]:            "actuators": null,
Jan 29 12:20:59 np0005601226 reverent_ride[252788]:            "device_nodes": [
Jan 29 12:20:59 np0005601226 reverent_ride[252788]:                "sr0"
Jan 29 12:20:59 np0005601226 reverent_ride[252788]:            ],
Jan 29 12:20:59 np0005601226 reverent_ride[252788]:            "devname": "sr0",
Jan 29 12:20:59 np0005601226 reverent_ride[252788]:            "human_readable_size": "482.00 KB",
Jan 29 12:20:59 np0005601226 reverent_ride[252788]:            "id_bus": "ata",
Jan 29 12:20:59 np0005601226 reverent_ride[252788]:            "model": "QEMU DVD-ROM",
Jan 29 12:20:59 np0005601226 reverent_ride[252788]:            "nr_requests": "2",
Jan 29 12:20:59 np0005601226 reverent_ride[252788]:            "parent": "/dev/sr0",
Jan 29 12:20:59 np0005601226 reverent_ride[252788]:            "partitions": {},
Jan 29 12:20:59 np0005601226 reverent_ride[252788]:            "path": "/dev/sr0",
Jan 29 12:20:59 np0005601226 reverent_ride[252788]:            "removable": "1",
Jan 29 12:20:59 np0005601226 reverent_ride[252788]:            "rev": "2.5+",
Jan 29 12:20:59 np0005601226 reverent_ride[252788]:            "ro": "0",
Jan 29 12:20:59 np0005601226 reverent_ride[252788]:            "rotational": "1",
Jan 29 12:20:59 np0005601226 reverent_ride[252788]:            "sas_address": "",
Jan 29 12:20:59 np0005601226 reverent_ride[252788]:            "sas_device_handle": "",
Jan 29 12:20:59 np0005601226 reverent_ride[252788]:            "scheduler_mode": "mq-deadline",
Jan 29 12:20:59 np0005601226 reverent_ride[252788]:            "sectors": 0,
Jan 29 12:20:59 np0005601226 reverent_ride[252788]:            "sectorsize": "2048",
Jan 29 12:20:59 np0005601226 reverent_ride[252788]:            "size": 493568.0,
Jan 29 12:20:59 np0005601226 reverent_ride[252788]:            "support_discard": "2048",
Jan 29 12:20:59 np0005601226 reverent_ride[252788]:            "type": "disk",
Jan 29 12:20:59 np0005601226 reverent_ride[252788]:            "vendor": "QEMU"
Jan 29 12:20:59 np0005601226 reverent_ride[252788]:        }
Jan 29 12:20:59 np0005601226 reverent_ride[252788]:    }
Jan 29 12:20:59 np0005601226 reverent_ride[252788]: ]
Jan 29 12:20:59 np0005601226 systemd[1]: libpod-e18bddebc6670b4bc2a865756c3002c005e96e073526031b81f1e77e32ae7751.scope: Deactivated successfully.
Jan 29 12:20:59 np0005601226 podman[252769]: 2026-01-29 17:20:59.818678927 +0000 UTC m=+1.267796386 container died e18bddebc6670b4bc2a865756c3002c005e96e073526031b81f1e77e32ae7751 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_ride, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:21:00 np0005601226 systemd[1]: var-lib-containers-storage-overlay-a0aac6ddb868daac0ed54e4b2543898e96954285d112efec84a271ad5969ca42-merged.mount: Deactivated successfully.
Jan 29 12:21:00 np0005601226 podman[252769]: 2026-01-29 17:21:00.654071625 +0000 UTC m=+2.103189084 container remove e18bddebc6670b4bc2a865756c3002c005e96e073526031b81f1e77e32ae7751 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Jan 29 12:21:00 np0005601226 systemd[1]: libpod-conmon-e18bddebc6670b4bc2a865756c3002c005e96e073526031b81f1e77e32ae7751.scope: Deactivated successfully.
Jan 29 12:21:00 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 12:21:00 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:21:00 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 12:21:00 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:21:00 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 12:21:00 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 12:21:00 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 12:21:00 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 12:21:00 np0005601226 nova_compute[239456]: 2026-01-29 17:21:00.750 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:21:00 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 12:21:00 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:21:00 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 12:21:00 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 12:21:00 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 12:21:00 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 12:21:00 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 12:21:00 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 12:21:00 np0005601226 nova_compute[239456]: 2026-01-29 17:21:00.978 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:21:01 np0005601226 podman[253718]: 2026-01-29 17:21:01.195096719 +0000 UTC m=+0.068217421 container create 51b5d3327cfcf3cb7da2082a7f907278df6fe39e6e2145af109cd9571135b38f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:21:01 np0005601226 podman[253718]: 2026-01-29 17:21:01.151091275 +0000 UTC m=+0.024212047 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:21:01 np0005601226 systemd[1]: Started libpod-conmon-51b5d3327cfcf3cb7da2082a7f907278df6fe39e6e2145af109cd9571135b38f.scope.
Jan 29 12:21:01 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:21:01 np0005601226 podman[253718]: 2026-01-29 17:21:01.42555771 +0000 UTC m=+0.298678432 container init 51b5d3327cfcf3cb7da2082a7f907278df6fe39e6e2145af109cd9571135b38f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_wu, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 29 12:21:01 np0005601226 podman[253718]: 2026-01-29 17:21:01.431258134 +0000 UTC m=+0.304378836 container start 51b5d3327cfcf3cb7da2082a7f907278df6fe39e6e2145af109cd9571135b38f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_wu, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3)
Jan 29 12:21:01 np0005601226 condescending_wu[253734]: 167 167
Jan 29 12:21:01 np0005601226 systemd[1]: libpod-51b5d3327cfcf3cb7da2082a7f907278df6fe39e6e2145af109cd9571135b38f.scope: Deactivated successfully.
Jan 29 12:21:01 np0005601226 conmon[253734]: conmon 51b5d3327cfcf3cb7da2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-51b5d3327cfcf3cb7da2082a7f907278df6fe39e6e2145af109cd9571135b38f.scope/container/memory.events
Jan 29 12:21:01 np0005601226 podman[253718]: 2026-01-29 17:21:01.541600957 +0000 UTC m=+0.414721679 container attach 51b5d3327cfcf3cb7da2082a7f907278df6fe39e6e2145af109cd9571135b38f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 29 12:21:01 np0005601226 podman[253718]: 2026-01-29 17:21:01.5424378 +0000 UTC m=+0.415558502 container died 51b5d3327cfcf3cb7da2082a7f907278df6fe39e6e2145af109cd9571135b38f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_wu, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:21:01 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1150: 305 pgs: 305 active+clean; 41 MiB data, 233 MiB used, 60 GiB / 60 GiB avail; 104 KiB/s rd, 7.6 KiB/s wr, 142 op/s
Jan 29 12:21:01 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:21:01 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:21:01 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 12:21:01 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:21:01 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 12:21:01 np0005601226 systemd[1]: var-lib-containers-storage-overlay-fa368df1be37d19d455ef6a6fc5fb64013defd9e028b113af41db236736f288d-merged.mount: Deactivated successfully.
Jan 29 12:21:02 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e218 do_prune osdmap full prune enabled
Jan 29 12:21:02 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e219 e219: 3 total, 3 up, 3 in
Jan 29 12:21:02 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e219: 3 total, 3 up, 3 in
Jan 29 12:21:02 np0005601226 podman[253718]: 2026-01-29 17:21:02.216794829 +0000 UTC m=+1.089915531 container remove 51b5d3327cfcf3cb7da2082a7f907278df6fe39e6e2145af109cd9571135b38f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_wu, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:21:02 np0005601226 systemd[1]: libpod-conmon-51b5d3327cfcf3cb7da2082a7f907278df6fe39e6e2145af109cd9571135b38f.scope: Deactivated successfully.
Jan 29 12:21:02 np0005601226 podman[253758]: 2026-01-29 17:21:02.316931455 +0000 UTC m=+0.021453253 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:21:02 np0005601226 podman[253758]: 2026-01-29 17:21:02.559181066 +0000 UTC m=+0.263702784 container create 7e89bd81be23585de01c752ecd87a0c90a3719784f922133412324adae143109 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_chatterjee, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3)
Jan 29 12:21:02 np0005601226 systemd[1]: Started libpod-conmon-7e89bd81be23585de01c752ecd87a0c90a3719784f922133412324adae143109.scope.
Jan 29 12:21:02 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:21:02 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26ad6bdb80a335752dde000c6b1354393a031e126f8f22f8be607d6470e06407/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:21:02 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26ad6bdb80a335752dde000c6b1354393a031e126f8f22f8be607d6470e06407/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:21:02 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26ad6bdb80a335752dde000c6b1354393a031e126f8f22f8be607d6470e06407/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:21:02 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26ad6bdb80a335752dde000c6b1354393a031e126f8f22f8be607d6470e06407/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:21:02 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26ad6bdb80a335752dde000c6b1354393a031e126f8f22f8be607d6470e06407/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 12:21:02 np0005601226 podman[253758]: 2026-01-29 17:21:02.703761667 +0000 UTC m=+0.408283405 container init 7e89bd81be23585de01c752ecd87a0c90a3719784f922133412324adae143109 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:21:02 np0005601226 podman[253758]: 2026-01-29 17:21:02.709285317 +0000 UTC m=+0.413807025 container start 7e89bd81be23585de01c752ecd87a0c90a3719784f922133412324adae143109 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 29 12:21:02 np0005601226 podman[253758]: 2026-01-29 17:21:02.756734904 +0000 UTC m=+0.461256622 container attach 7e89bd81be23585de01c752ecd87a0c90a3719784f922133412324adae143109 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:21:02 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:21:02 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3854137792' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:21:02 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:21:03 np0005601226 priceless_chatterjee[253775]: --> passed data devices: 0 physical, 3 LVM
Jan 29 12:21:03 np0005601226 priceless_chatterjee[253775]: --> All data devices are unavailable
Jan 29 12:21:03 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:21:03 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3444324062' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:21:03 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:21:03 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3444324062' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:21:03 np0005601226 systemd[1]: libpod-7e89bd81be23585de01c752ecd87a0c90a3719784f922133412324adae143109.scope: Deactivated successfully.
Jan 29 12:21:03 np0005601226 podman[253758]: 2026-01-29 17:21:03.119952435 +0000 UTC m=+0.824474163 container died 7e89bd81be23585de01c752ecd87a0c90a3719784f922133412324adae143109 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 29 12:21:03 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e219 do_prune osdmap full prune enabled
Jan 29 12:21:03 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e220 e220: 3 total, 3 up, 3 in
Jan 29 12:21:03 np0005601226 systemd[1]: var-lib-containers-storage-overlay-26ad6bdb80a335752dde000c6b1354393a031e126f8f22f8be607d6470e06407-merged.mount: Deactivated successfully.
Jan 29 12:21:03 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e220: 3 total, 3 up, 3 in
Jan 29 12:21:03 np0005601226 podman[253758]: 2026-01-29 17:21:03.198780333 +0000 UTC m=+0.903302051 container remove 7e89bd81be23585de01c752ecd87a0c90a3719784f922133412324adae143109 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:21:03 np0005601226 systemd[1]: libpod-conmon-7e89bd81be23585de01c752ecd87a0c90a3719784f922133412324adae143109.scope: Deactivated successfully.
Jan 29 12:21:03 np0005601226 podman[253870]: 2026-01-29 17:21:03.571968804 +0000 UTC m=+0.033279303 container create cea56f2473f2eba7fd4d791013bb56524c61601fb9f509cd7c8ddc50c2b2aab5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_hellman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 29 12:21:03 np0005601226 systemd[1]: Started libpod-conmon-cea56f2473f2eba7fd4d791013bb56524c61601fb9f509cd7c8ddc50c2b2aab5.scope.
Jan 29 12:21:03 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:21:03 np0005601226 podman[253870]: 2026-01-29 17:21:03.626525344 +0000 UTC m=+0.087835853 container init cea56f2473f2eba7fd4d791013bb56524c61601fb9f509cd7c8ddc50c2b2aab5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:21:03 np0005601226 podman[253870]: 2026-01-29 17:21:03.631153059 +0000 UTC m=+0.092463558 container start cea56f2473f2eba7fd4d791013bb56524c61601fb9f509cd7c8ddc50c2b2aab5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_hellman, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:21:03 np0005601226 fervent_hellman[253887]: 167 167
Jan 29 12:21:03 np0005601226 systemd[1]: libpod-cea56f2473f2eba7fd4d791013bb56524c61601fb9f509cd7c8ddc50c2b2aab5.scope: Deactivated successfully.
Jan 29 12:21:03 np0005601226 podman[253870]: 2026-01-29 17:21:03.634802649 +0000 UTC m=+0.096113178 container attach cea56f2473f2eba7fd4d791013bb56524c61601fb9f509cd7c8ddc50c2b2aab5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_hellman, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 29 12:21:03 np0005601226 podman[253870]: 2026-01-29 17:21:03.635731934 +0000 UTC m=+0.097042443 container died cea56f2473f2eba7fd4d791013bb56524c61601fb9f509cd7c8ddc50c2b2aab5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:21:03 np0005601226 systemd[1]: var-lib-containers-storage-overlay-ee41e2d53af976ca50872eec9663c2af2747ce3b7fb14f81e2834a62bf356f30-merged.mount: Deactivated successfully.
Jan 29 12:21:03 np0005601226 podman[253870]: 2026-01-29 17:21:03.558415286 +0000 UTC m=+0.019725805 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:21:03 np0005601226 podman[253870]: 2026-01-29 17:21:03.669933451 +0000 UTC m=+0.131243950 container remove cea56f2473f2eba7fd4d791013bb56524c61601fb9f509cd7c8ddc50c2b2aab5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_hellman, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3)
Jan 29 12:21:03 np0005601226 systemd[1]: libpod-conmon-cea56f2473f2eba7fd4d791013bb56524c61601fb9f509cd7c8ddc50c2b2aab5.scope: Deactivated successfully.
Jan 29 12:21:03 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1153: 305 pgs: 305 active+clean; 41 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 3.5 KiB/s wr, 74 op/s
Jan 29 12:21:03 np0005601226 podman[253909]: 2026-01-29 17:21:03.781296482 +0000 UTC m=+0.033297404 container create f53cc834b633eb4f56cd9db5b7a3de2b085a491b1d5bbc4b04ad5033d5728a1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_beaver, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 29 12:21:03 np0005601226 systemd[1]: Started libpod-conmon-f53cc834b633eb4f56cd9db5b7a3de2b085a491b1d5bbc4b04ad5033d5728a1c.scope.
Jan 29 12:21:03 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:21:03 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66fbf1eb0efc83962958ea6ca0176daab6e72df020a6e421b0b6f5b7af65f80e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:21:03 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66fbf1eb0efc83962958ea6ca0176daab6e72df020a6e421b0b6f5b7af65f80e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:21:03 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66fbf1eb0efc83962958ea6ca0176daab6e72df020a6e421b0b6f5b7af65f80e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:21:03 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66fbf1eb0efc83962958ea6ca0176daab6e72df020a6e421b0b6f5b7af65f80e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:21:03 np0005601226 podman[253909]: 2026-01-29 17:21:03.840921939 +0000 UTC m=+0.092922881 container init f53cc834b633eb4f56cd9db5b7a3de2b085a491b1d5bbc4b04ad5033d5728a1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_beaver, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 29 12:21:03 np0005601226 podman[253909]: 2026-01-29 17:21:03.845898784 +0000 UTC m=+0.097899706 container start f53cc834b633eb4f56cd9db5b7a3de2b085a491b1d5bbc4b04ad5033d5728a1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_beaver, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True)
Jan 29 12:21:03 np0005601226 podman[253909]: 2026-01-29 17:21:03.858310741 +0000 UTC m=+0.110311683 container attach f53cc834b633eb4f56cd9db5b7a3de2b085a491b1d5bbc4b04ad5033d5728a1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_beaver, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:21:03 np0005601226 podman[253909]: 2026-01-29 17:21:03.766659895 +0000 UTC m=+0.018660837 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]: {
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:    "0": [
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:        {
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:            "devices": [
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:                "/dev/loop3"
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:            ],
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:            "lv_name": "ceph_lv0",
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:            "lv_size": "21470642176",
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:            "lv_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:            "name": "ceph_lv0",
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:            "tags": {
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:                "ceph.block_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:                "ceph.cluster_name": "ceph",
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:                "ceph.crush_device_class": "",
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:                "ceph.encrypted": "0",
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:                "ceph.objectstore": "bluestore",
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:                "ceph.osd_fsid": "0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1",
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:                "ceph.osd_id": "0",
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:                "ceph.type": "block",
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:                "ceph.vdo": "0",
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:                "ceph.with_tpm": "0"
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:            },
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:            "type": "block",
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:            "vg_name": "ceph_vg0"
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:        }
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:    ],
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:    "1": [
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:        {
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:            "devices": [
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:                "/dev/loop4"
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:            ],
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:            "lv_name": "ceph_lv1",
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:            "lv_size": "21470642176",
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b59b9ee3-7bef-4274-a8bf-0f9cce011ae7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:            "lv_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:            "name": "ceph_lv1",
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:            "tags": {
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:                "ceph.block_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:                "ceph.cluster_name": "ceph",
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:                "ceph.crush_device_class": "",
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:                "ceph.encrypted": "0",
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:                "ceph.objectstore": "bluestore",
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:                "ceph.osd_fsid": "b59b9ee3-7bef-4274-a8bf-0f9cce011ae7",
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:                "ceph.osd_id": "1",
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:                "ceph.type": "block",
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:                "ceph.vdo": "0",
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:                "ceph.with_tpm": "0"
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:            },
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:            "type": "block",
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:            "vg_name": "ceph_vg1"
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:        }
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:    ],
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:    "2": [
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:        {
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:            "devices": [
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:                "/dev/loop5"
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:            ],
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:            "lv_name": "ceph_lv2",
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:            "lv_size": "21470642176",
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=791d3808-828d-4f85-a3de-28df49f6a6ef,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:            "lv_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:            "name": "ceph_lv2",
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:            "tags": {
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:                "ceph.block_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:                "ceph.cluster_name": "ceph",
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:                "ceph.crush_device_class": "",
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:                "ceph.encrypted": "0",
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:                "ceph.objectstore": "bluestore",
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:                "ceph.osd_fsid": "791d3808-828d-4f85-a3de-28df49f6a6ef",
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:                "ceph.osd_id": "2",
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:                "ceph.type": "block",
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:                "ceph.vdo": "0",
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:                "ceph.with_tpm": "0"
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:            },
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:            "type": "block",
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:            "vg_name": "ceph_vg2"
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:        }
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]:    ]
Jan 29 12:21:04 np0005601226 romantic_beaver[253925]: }
Jan 29 12:21:04 np0005601226 systemd[1]: libpod-f53cc834b633eb4f56cd9db5b7a3de2b085a491b1d5bbc4b04ad5033d5728a1c.scope: Deactivated successfully.
Jan 29 12:21:04 np0005601226 podman[253909]: 2026-01-29 17:21:04.110957413 +0000 UTC m=+0.362958335 container died f53cc834b633eb4f56cd9db5b7a3de2b085a491b1d5bbc4b04ad5033d5728a1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 12:21:04 np0005601226 systemd[1]: var-lib-containers-storage-overlay-66fbf1eb0efc83962958ea6ca0176daab6e72df020a6e421b0b6f5b7af65f80e-merged.mount: Deactivated successfully.
Jan 29 12:21:04 np0005601226 podman[253909]: 2026-01-29 17:21:04.177978101 +0000 UTC m=+0.429979023 container remove f53cc834b633eb4f56cd9db5b7a3de2b085a491b1d5bbc4b04ad5033d5728a1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_beaver, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:21:04 np0005601226 systemd[1]: libpod-conmon-f53cc834b633eb4f56cd9db5b7a3de2b085a491b1d5bbc4b04ad5033d5728a1c.scope: Deactivated successfully.
Jan 29 12:21:04 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e220 do_prune osdmap full prune enabled
Jan 29 12:21:04 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e221 e221: 3 total, 3 up, 3 in
Jan 29 12:21:04 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e221: 3 total, 3 up, 3 in
Jan 29 12:21:04 np0005601226 podman[254006]: 2026-01-29 17:21:04.601412915 +0000 UTC m=+0.039409760 container create 99e81c068f37cd86b20d7b734d5faeb3ce238de343eaec71da34b987c4a7e2d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030)
Jan 29 12:21:04 np0005601226 systemd[1]: Started libpod-conmon-99e81c068f37cd86b20d7b734d5faeb3ce238de343eaec71da34b987c4a7e2d2.scope.
Jan 29 12:21:04 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:21:04 np0005601226 podman[254006]: 2026-01-29 17:21:04.677611132 +0000 UTC m=+0.115607997 container init 99e81c068f37cd86b20d7b734d5faeb3ce238de343eaec71da34b987c4a7e2d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 29 12:21:04 np0005601226 podman[254006]: 2026-01-29 17:21:04.584285111 +0000 UTC m=+0.022281976 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:21:04 np0005601226 podman[254006]: 2026-01-29 17:21:04.685504746 +0000 UTC m=+0.123501591 container start 99e81c068f37cd86b20d7b734d5faeb3ce238de343eaec71da34b987c4a7e2d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_jones, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 12:21:04 np0005601226 nervous_jones[254022]: 167 167
Jan 29 12:21:04 np0005601226 systemd[1]: libpod-99e81c068f37cd86b20d7b734d5faeb3ce238de343eaec71da34b987c4a7e2d2.scope: Deactivated successfully.
Jan 29 12:21:04 np0005601226 podman[254006]: 2026-01-29 17:21:04.693348019 +0000 UTC m=+0.131344894 container attach 99e81c068f37cd86b20d7b734d5faeb3ce238de343eaec71da34b987c4a7e2d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:21:04 np0005601226 podman[254006]: 2026-01-29 17:21:04.695465077 +0000 UTC m=+0.133461942 container died 99e81c068f37cd86b20d7b734d5faeb3ce238de343eaec71da34b987c4a7e2d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_jones, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 29 12:21:04 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:21:04.701 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ea6bcc65-2563-4fe6-9039-bca7261f4cf7, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:21:04 np0005601226 systemd[1]: var-lib-containers-storage-overlay-4d45eadd8c0a6381f008fc7e0afb468182dbe5033ca080b9f7d59b77b49cc88e-merged.mount: Deactivated successfully.
Jan 29 12:21:04 np0005601226 podman[254006]: 2026-01-29 17:21:04.743751296 +0000 UTC m=+0.181748141 container remove 99e81c068f37cd86b20d7b734d5faeb3ce238de343eaec71da34b987c4a7e2d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_jones, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 29 12:21:04 np0005601226 systemd[1]: libpod-conmon-99e81c068f37cd86b20d7b734d5faeb3ce238de343eaec71da34b987c4a7e2d2.scope: Deactivated successfully.
Jan 29 12:21:04 np0005601226 podman[254045]: 2026-01-29 17:21:04.864150641 +0000 UTC m=+0.042279337 container create 91d7a027373c10281c7ac06fe3324ca05799901c66f8cb937c3b04914f4305f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 29 12:21:04 np0005601226 systemd[1]: Started libpod-conmon-91d7a027373c10281c7ac06fe3324ca05799901c66f8cb937c3b04914f4305f5.scope.
Jan 29 12:21:04 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:21:04 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1528115fa19b925c86144e6fd373297e4e5b83c33f7f5d7996a3ec650ab9897/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:21:04 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1528115fa19b925c86144e6fd373297e4e5b83c33f7f5d7996a3ec650ab9897/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:21:04 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1528115fa19b925c86144e6fd373297e4e5b83c33f7f5d7996a3ec650ab9897/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:21:04 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1528115fa19b925c86144e6fd373297e4e5b83c33f7f5d7996a3ec650ab9897/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:21:04 np0005601226 podman[254045]: 2026-01-29 17:21:04.925710591 +0000 UTC m=+0.103839297 container init 91d7a027373c10281c7ac06fe3324ca05799901c66f8cb937c3b04914f4305f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_brown, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 29 12:21:04 np0005601226 podman[254045]: 2026-01-29 17:21:04.931371654 +0000 UTC m=+0.109500330 container start 91d7a027373c10281c7ac06fe3324ca05799901c66f8cb937c3b04914f4305f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_brown, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:21:04 np0005601226 podman[254045]: 2026-01-29 17:21:04.937926793 +0000 UTC m=+0.116055489 container attach 91d7a027373c10281c7ac06fe3324ca05799901c66f8cb937c3b04914f4305f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_brown, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 29 12:21:04 np0005601226 podman[254045]: 2026-01-29 17:21:04.845712781 +0000 UTC m=+0.023841477 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:21:05 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e221 do_prune osdmap full prune enabled
Jan 29 12:21:05 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e222 e222: 3 total, 3 up, 3 in
Jan 29 12:21:05 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e222: 3 total, 3 up, 3 in
Jan 29 12:21:05 np0005601226 lvm[254142]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 12:21:05 np0005601226 lvm[254142]: VG ceph_vg1 finished
Jan 29 12:21:05 np0005601226 lvm[254139]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 12:21:05 np0005601226 lvm[254139]: VG ceph_vg0 finished
Jan 29 12:21:05 np0005601226 lvm[254144]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 12:21:05 np0005601226 lvm[254144]: VG ceph_vg2 finished
Jan 29 12:21:05 np0005601226 flamboyant_brown[254062]: {}
Jan 29 12:21:05 np0005601226 systemd[1]: libpod-91d7a027373c10281c7ac06fe3324ca05799901c66f8cb937c3b04914f4305f5.scope: Deactivated successfully.
Jan 29 12:21:05 np0005601226 podman[254045]: 2026-01-29 17:21:05.642712068 +0000 UTC m=+0.820840744 container died 91d7a027373c10281c7ac06fe3324ca05799901c66f8cb937c3b04914f4305f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_brown, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 29 12:21:05 np0005601226 systemd[1]: var-lib-containers-storage-overlay-c1528115fa19b925c86144e6fd373297e4e5b83c33f7f5d7996a3ec650ab9897-merged.mount: Deactivated successfully.
Jan 29 12:21:05 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1156: 305 pgs: 305 active+clean; 41 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 80 KiB/s rd, 6.5 KiB/s wr, 113 op/s
Jan 29 12:21:05 np0005601226 podman[254045]: 2026-01-29 17:21:05.697593796 +0000 UTC m=+0.875722472 container remove 91d7a027373c10281c7ac06fe3324ca05799901c66f8cb937c3b04914f4305f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_brown, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Jan 29 12:21:05 np0005601226 systemd[1]: libpod-conmon-91d7a027373c10281c7ac06fe3324ca05799901c66f8cb937c3b04914f4305f5.scope: Deactivated successfully.
Jan 29 12:21:05 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 12:21:05 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:21:05 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 12:21:05 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:21:05 np0005601226 nova_compute[239456]: 2026-01-29 17:21:05.754 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:21:05 np0005601226 nova_compute[239456]: 2026-01-29 17:21:05.979 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:21:06 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:21:06 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:21:06 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:21:06 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/861631095' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:21:07 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1157: 305 pgs: 305 active+clean; 41 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 4.7 KiB/s wr, 82 op/s
Jan 29 12:21:07 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e222 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:21:07 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e222 do_prune osdmap full prune enabled
Jan 29 12:21:07 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e223 e223: 3 total, 3 up, 3 in
Jan 29 12:21:07 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e223: 3 total, 3 up, 3 in
Jan 29 12:21:09 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e223 do_prune osdmap full prune enabled
Jan 29 12:21:09 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e224 e224: 3 total, 3 up, 3 in
Jan 29 12:21:09 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e224: 3 total, 3 up, 3 in
Jan 29 12:21:09 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1160: 305 pgs: 305 active+clean; 41 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 5.6 KiB/s wr, 136 op/s
Jan 29 12:21:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:21:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:21:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:21:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:21:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:21:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:21:10 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e224 do_prune osdmap full prune enabled
Jan 29 12:21:10 np0005601226 nova_compute[239456]: 2026-01-29 17:21:10.721 239460 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769707255.7194972, d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:21:10 np0005601226 nova_compute[239456]: 2026-01-29 17:21:10.721 239460 INFO nova.compute.manager [-] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] VM Stopped (Lifecycle Event)#033[00m
Jan 29 12:21:10 np0005601226 nova_compute[239456]: 2026-01-29 17:21:10.759 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:21:10 np0005601226 podman[254184]: 2026-01-29 17:21:10.875860162 +0000 UTC m=+0.046011009 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 29 12:21:10 np0005601226 nova_compute[239456]: 2026-01-29 17:21:10.881 239460 DEBUG nova.compute.manager [None req-3219a5fd-8198-4846-9681-72a506326bab - - - - - -] [instance: d64d6fd1-4f7b-4765-8b1c-1b7e6d42c455] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:21:10 np0005601226 podman[254185]: 2026-01-29 17:21:10.896668257 +0000 UTC m=+0.066710771 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 29 12:21:10 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e225 e225: 3 total, 3 up, 3 in
Jan 29 12:21:10 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e225: 3 total, 3 up, 3 in
Jan 29 12:21:10 np0005601226 nova_compute[239456]: 2026-01-29 17:21:10.982 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:21:11 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1162: 305 pgs: 305 active+clean; 41 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 5.2 KiB/s wr, 124 op/s
Jan 29 12:21:11 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e225 do_prune osdmap full prune enabled
Jan 29 12:21:12 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e226 e226: 3 total, 3 up, 3 in
Jan 29 12:21:12 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e226: 3 total, 3 up, 3 in
Jan 29 12:21:12 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:21:12 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/977924743' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:21:12 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:21:12 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/977924743' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:21:12 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:21:12 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e226 do_prune osdmap full prune enabled
Jan 29 12:21:13 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e227 e227: 3 total, 3 up, 3 in
Jan 29 12:21:13 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e227: 3 total, 3 up, 3 in
Jan 29 12:21:13 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1165: 305 pgs: 305 active+clean; 41 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 2.6 KiB/s wr, 52 op/s
Jan 29 12:21:14 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e227 do_prune osdmap full prune enabled
Jan 29 12:21:14 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e228 e228: 3 total, 3 up, 3 in
Jan 29 12:21:14 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e228: 3 total, 3 up, 3 in
Jan 29 12:21:15 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1167: 305 pgs: 305 active+clean; 60 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 143 KiB/s rd, 1.0 MiB/s wr, 199 op/s
Jan 29 12:21:15 np0005601226 nova_compute[239456]: 2026-01-29 17:21:15.761 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:21:15 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:21:15 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3796531920' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:21:15 np0005601226 nova_compute[239456]: 2026-01-29 17:21:15.982 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:21:16 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e228 do_prune osdmap full prune enabled
Jan 29 12:21:16 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e229 e229: 3 total, 3 up, 3 in
Jan 29 12:21:16 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e229: 3 total, 3 up, 3 in
Jan 29 12:21:17 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1169: 305 pgs: 305 active+clean; 60 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 123 KiB/s rd, 890 KiB/s wr, 170 op/s
Jan 29 12:21:17 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e229 do_prune osdmap full prune enabled
Jan 29 12:21:17 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e230 e230: 3 total, 3 up, 3 in
Jan 29 12:21:17 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e230: 3 total, 3 up, 3 in
Jan 29 12:21:17 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:21:17 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e230 do_prune osdmap full prune enabled
Jan 29 12:21:18 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:21:18 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1418322288' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:21:18 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:21:18 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1418322288' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:21:18 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e231 e231: 3 total, 3 up, 3 in
Jan 29 12:21:18 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e231: 3 total, 3 up, 3 in
Jan 29 12:21:18 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:21:18 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/998225624' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:21:18 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:21:18 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/998225624' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:21:18 np0005601226 ceph-mon[75233]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Jan 29 12:21:18 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:21:18.996780) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 29 12:21:18 np0005601226 ceph-mon[75233]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Jan 29 12:21:18 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769707278996839, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2306, "num_deletes": 262, "total_data_size": 3548341, "memory_usage": 3601760, "flush_reason": "Manual Compaction"}
Jan 29 12:21:18 np0005601226 ceph-mon[75233]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Jan 29 12:21:19 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769707279106292, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 3437836, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21448, "largest_seqno": 23753, "table_properties": {"data_size": 3427102, "index_size": 6972, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2693, "raw_key_size": 22498, "raw_average_key_size": 21, "raw_value_size": 3405504, "raw_average_value_size": 3185, "num_data_blocks": 307, "num_entries": 1069, "num_filter_entries": 1069, "num_deletions": 262, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769707110, "oldest_key_time": 1769707110, "file_creation_time": 1769707278, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "affa2982-d59d-4189-b5dd-817a80fada55", "db_session_id": "3LVBT2JQJ5HZ0LRVKGW6", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Jan 29 12:21:19 np0005601226 ceph-mon[75233]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 109554 microseconds, and 5850 cpu microseconds.
Jan 29 12:21:19 np0005601226 ceph-mon[75233]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 29 12:21:19 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:21:19.106335) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 3437836 bytes OK
Jan 29 12:21:19 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:21:19.106351) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Jan 29 12:21:19 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:21:19.115815) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Jan 29 12:21:19 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:21:19.115858) EVENT_LOG_v1 {"time_micros": 1769707279115850, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 29 12:21:19 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:21:19.115879) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 29 12:21:19 np0005601226 ceph-mon[75233]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 3538477, prev total WAL file size 3538477, number of live WAL files 2.
Jan 29 12:21:19 np0005601226 ceph-mon[75233]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 12:21:19 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:21:19.116896) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Jan 29 12:21:19 np0005601226 ceph-mon[75233]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 29 12:21:19 np0005601226 ceph-mon[75233]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(3357KB)], [50(7974KB)]
Jan 29 12:21:19 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769707279116948, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 11603920, "oldest_snapshot_seqno": -1}
Jan 29 12:21:19 np0005601226 ceph-mon[75233]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 5141 keys, 9814512 bytes, temperature: kUnknown
Jan 29 12:21:19 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769707279273373, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 9814512, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9776300, "index_size": 24278, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12869, "raw_key_size": 126688, "raw_average_key_size": 24, "raw_value_size": 9679812, "raw_average_value_size": 1882, "num_data_blocks": 1005, "num_entries": 5141, "num_filter_entries": 5141, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769705351, "oldest_key_time": 0, "file_creation_time": 1769707279, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "affa2982-d59d-4189-b5dd-817a80fada55", "db_session_id": "3LVBT2JQJ5HZ0LRVKGW6", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Jan 29 12:21:19 np0005601226 ceph-mon[75233]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 29 12:21:19 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:21:19.273834) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 9814512 bytes
Jan 29 12:21:19 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:21:19.284035) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 74.2 rd, 62.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 7.8 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(6.2) write-amplify(2.9) OK, records in: 5672, records dropped: 531 output_compression: NoCompression
Jan 29 12:21:19 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:21:19.284059) EVENT_LOG_v1 {"time_micros": 1769707279284049, "job": 26, "event": "compaction_finished", "compaction_time_micros": 156487, "compaction_time_cpu_micros": 17466, "output_level": 6, "num_output_files": 1, "total_output_size": 9814512, "num_input_records": 5672, "num_output_records": 5141, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 29 12:21:19 np0005601226 ceph-mon[75233]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 12:21:19 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769707279284683, "job": 26, "event": "table_file_deletion", "file_number": 52}
Jan 29 12:21:19 np0005601226 ceph-mon[75233]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 12:21:19 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769707279285493, "job": 26, "event": "table_file_deletion", "file_number": 50}
Jan 29 12:21:19 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:21:19.116787) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:21:19 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:21:19.285545) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:21:19 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:21:19.285550) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:21:19 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:21:19.285552) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:21:19 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:21:19.285553) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:21:19 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:21:19.285555) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:21:19 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1172: 305 pgs: 305 active+clean; 107 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 4.9 MiB/s wr, 150 op/s
Jan 29 12:21:19 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e231 do_prune osdmap full prune enabled
Jan 29 12:21:20 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e232 e232: 3 total, 3 up, 3 in
Jan 29 12:21:20 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e232: 3 total, 3 up, 3 in
Jan 29 12:21:20 np0005601226 nova_compute[239456]: 2026-01-29 17:21:20.766 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:21:20 np0005601226 nova_compute[239456]: 2026-01-29 17:21:20.985 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:21:21 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:21:21 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4185694372' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:21:21 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1174: 305 pgs: 305 active+clean; 107 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 5.1 MiB/s wr, 158 op/s
Jan 29 12:21:22 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e232 do_prune osdmap full prune enabled
Jan 29 12:21:22 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e233 e233: 3 total, 3 up, 3 in
Jan 29 12:21:22 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e233: 3 total, 3 up, 3 in
Jan 29 12:21:22 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:21:22 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/656820934' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:21:23 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e233 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:21:23 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e233 do_prune osdmap full prune enabled
Jan 29 12:21:23 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e234 e234: 3 total, 3 up, 3 in
Jan 29 12:21:23 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e234: 3 total, 3 up, 3 in
Jan 29 12:21:23 np0005601226 nova_compute[239456]: 2026-01-29 17:21:23.599 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:21:23 np0005601226 nova_compute[239456]: 2026-01-29 17:21:23.613 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:21:23 np0005601226 nova_compute[239456]: 2026-01-29 17:21:23.640 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:21:23 np0005601226 nova_compute[239456]: 2026-01-29 17:21:23.641 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:21:23 np0005601226 nova_compute[239456]: 2026-01-29 17:21:23.641 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:21:23 np0005601226 nova_compute[239456]: 2026-01-29 17:21:23.641 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 29 12:21:23 np0005601226 nova_compute[239456]: 2026-01-29 17:21:23.641 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:21:23 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1177: 305 pgs: 305 active+clean; 120 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 4.1 MiB/s wr, 134 op/s
Jan 29 12:21:24 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:21:24 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1023997652' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:21:24 np0005601226 nova_compute[239456]: 2026-01-29 17:21:24.585 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.943s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:21:24 np0005601226 nova_compute[239456]: 2026-01-29 17:21:24.718 239460 WARNING nova.virt.libvirt.driver [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 29 12:21:24 np0005601226 nova_compute[239456]: 2026-01-29 17:21:24.719 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4653MB free_disk=59.988245147280395GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 29 12:21:24 np0005601226 nova_compute[239456]: 2026-01-29 17:21:24.719 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:21:24 np0005601226 nova_compute[239456]: 2026-01-29 17:21:24.720 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:21:24 np0005601226 nova_compute[239456]: 2026-01-29 17:21:24.780 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 29 12:21:24 np0005601226 nova_compute[239456]: 2026-01-29 17:21:24.781 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 29 12:21:24 np0005601226 nova_compute[239456]: 2026-01-29 17:21:24.801 239460 DEBUG nova.scheduler.client.report [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Refreshing inventories for resource provider 79259295-532c-4a51-8f50-027529735b0c _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 29 12:21:24 np0005601226 nova_compute[239456]: 2026-01-29 17:21:24.826 239460 DEBUG nova.scheduler.client.report [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Updating ProviderTree inventory for provider 79259295-532c-4a51-8f50-027529735b0c from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 29 12:21:24 np0005601226 nova_compute[239456]: 2026-01-29 17:21:24.827 239460 DEBUG nova.compute.provider_tree [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Updating inventory in ProviderTree for provider 79259295-532c-4a51-8f50-027529735b0c with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 29 12:21:24 np0005601226 nova_compute[239456]: 2026-01-29 17:21:24.855 239460 DEBUG nova.scheduler.client.report [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Refreshing aggregate associations for resource provider 79259295-532c-4a51-8f50-027529735b0c, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 29 12:21:24 np0005601226 nova_compute[239456]: 2026-01-29 17:21:24.881 239460 DEBUG nova.scheduler.client.report [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Refreshing trait associations for resource provider 79259295-532c-4a51-8f50-027529735b0c, traits: HW_CPU_X86_SSE4A,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NODE,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AMD_SVM,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_ABM,HW_CPU_X86_MMX,HW_CPU_X86_AVX,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SVM,COMPUTE_RESCUE_BFV,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_BMI2,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE41,HW_CPU_X86_F16C,COMPUTE_DEVICE_TAGGING,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_ISO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 29 12:21:24 np0005601226 nova_compute[239456]: 2026-01-29 17:21:24.895 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:21:25 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:21:25 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2048542840' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:21:25 np0005601226 nova_compute[239456]: 2026-01-29 17:21:25.406 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:21:25 np0005601226 nova_compute[239456]: 2026-01-29 17:21:25.410 239460 DEBUG nova.compute.provider_tree [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:21:25 np0005601226 nova_compute[239456]: 2026-01-29 17:21:25.428 239460 DEBUG nova.scheduler.client.report [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:21:25 np0005601226 nova_compute[239456]: 2026-01-29 17:21:25.448 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 29 12:21:25 np0005601226 nova_compute[239456]: 2026-01-29 17:21:25.448 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.728s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:21:25 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1178: 305 pgs: 305 active+clean; 134 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 5.6 MiB/s rd, 2.1 MiB/s wr, 95 op/s
Jan 29 12:21:25 np0005601226 nova_compute[239456]: 2026-01-29 17:21:25.770 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:21:25 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e234 do_prune osdmap full prune enabled
Jan 29 12:21:25 np0005601226 nova_compute[239456]: 2026-01-29 17:21:25.987 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:21:26 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e235 e235: 3 total, 3 up, 3 in
Jan 29 12:21:26 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e235: 3 total, 3 up, 3 in
Jan 29 12:21:27 np0005601226 nova_compute[239456]: 2026-01-29 17:21:27.439 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:21:27 np0005601226 nova_compute[239456]: 2026-01-29 17:21:27.440 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 29 12:21:27 np0005601226 nova_compute[239456]: 2026-01-29 17:21:27.440 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 29 12:21:27 np0005601226 nova_compute[239456]: 2026-01-29 17:21:27.464 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 29 12:21:27 np0005601226 nova_compute[239456]: 2026-01-29 17:21:27.464 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:21:27 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e235 do_prune osdmap full prune enabled
Jan 29 12:21:27 np0005601226 nova_compute[239456]: 2026-01-29 17:21:27.603 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:21:27 np0005601226 nova_compute[239456]: 2026-01-29 17:21:27.603 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:21:27 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1180: 305 pgs: 305 active+clean; 134 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 5.6 MiB/s rd, 2.1 MiB/s wr, 95 op/s
Jan 29 12:21:27 np0005601226 ovn_controller[145556]: 2026-01-29T17:21:27Z|00079|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Jan 29 12:21:27 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e236 e236: 3 total, 3 up, 3 in
Jan 29 12:21:27 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e236: 3 total, 3 up, 3 in
Jan 29 12:21:28 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:21:28 np0005601226 nova_compute[239456]: 2026-01-29 17:21:28.599 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:21:29 np0005601226 nova_compute[239456]: 2026-01-29 17:21:29.604 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:21:29 np0005601226 nova_compute[239456]: 2026-01-29 17:21:29.604 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 29 12:21:29 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1182: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 180 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 4.6 MiB/s wr, 141 op/s
Jan 29 12:21:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e236 do_prune osdmap full prune enabled
Jan 29 12:21:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e237 e237: 3 total, 3 up, 3 in
Jan 29 12:21:30 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e237: 3 total, 3 up, 3 in
Jan 29 12:21:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:21:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1935856554' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:21:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:21:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1935856554' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:21:30 np0005601226 nova_compute[239456]: 2026-01-29 17:21:30.773 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:21:30 np0005601226 nova_compute[239456]: 2026-01-29 17:21:30.990 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:21:31 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1184: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 180 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 495 KiB/s rd, 3.5 MiB/s wr, 86 op/s
Jan 29 12:21:32 np0005601226 nova_compute[239456]: 2026-01-29 17:21:32.604 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:21:32 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:21:32 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/461795353' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:21:32 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:21:32 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/461795353' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:21:33 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:21:33 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e237 do_prune osdmap full prune enabled
Jan 29 12:21:33 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e238 e238: 3 total, 3 up, 3 in
Jan 29 12:21:33 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e238: 3 total, 3 up, 3 in
Jan 29 12:21:33 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1186: 305 pgs: 305 active+clean; 168 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 521 KiB/s rd, 3.5 MiB/s wr, 128 op/s
Jan 29 12:21:33 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:21:33 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2951231671' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:21:33 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:21:33 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2951231671' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:21:33 np0005601226 nova_compute[239456]: 2026-01-29 17:21:33.965 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:21:33 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:21:33.964 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '7a:bb:ca', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ee:2a:91:86:08:da'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:21:33 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:21:33.965 155625 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 29 12:21:34 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e238 do_prune osdmap full prune enabled
Jan 29 12:21:34 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e239 e239: 3 total, 3 up, 3 in
Jan 29 12:21:34 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e239: 3 total, 3 up, 3 in
Jan 29 12:21:35 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:21:35 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2248313754' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:21:35 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:21:35 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2248313754' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:21:35 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1188: 305 pgs: 305 active+clean; 134 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 93 KiB/s rd, 3.0 KiB/s wr, 141 op/s
Jan 29 12:21:35 np0005601226 nova_compute[239456]: 2026-01-29 17:21:35.776 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:21:35 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:21:35.967 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ea6bcc65-2563-4fe6-9039-bca7261f4cf7, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:21:35 np0005601226 nova_compute[239456]: 2026-01-29 17:21:35.990 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:21:36 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e239 do_prune osdmap full prune enabled
Jan 29 12:21:36 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e240 e240: 3 total, 3 up, 3 in
Jan 29 12:21:36 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e240: 3 total, 3 up, 3 in
Jan 29 12:21:37 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:21:37 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/780747386' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:21:37 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:21:37 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/780747386' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:21:37 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1190: 305 pgs: 305 active+clean; 134 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 93 KiB/s rd, 3.0 KiB/s wr, 141 op/s
Jan 29 12:21:38 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:21:38 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e240 do_prune osdmap full prune enabled
Jan 29 12:21:38 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e241 e241: 3 total, 3 up, 3 in
Jan 29 12:21:38 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e241: 3 total, 3 up, 3 in
Jan 29 12:21:38 np0005601226 nova_compute[239456]: 2026-01-29 17:21:38.604 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:21:39 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1192: 305 pgs: 305 active+clean; 41 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 207 KiB/s rd, 8.8 KiB/s wr, 314 op/s
Jan 29 12:21:40 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:21:40 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1672784521' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:21:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:21:40.283 155625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:21:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:21:40.284 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:21:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:21:40.284 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:21:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:21:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:21:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:21:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:21:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2026-01-29_17:21:40
Jan 29 12:21:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 12:21:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] do_upmap
Jan 29 12:21:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.mgr', 'backups', 'cephfs.cephfs.data', 'vms', 'images', 'default.rgw.control', 'default.rgw.log', 'default.rgw.meta', '.rgw.root', 'volumes']
Jan 29 12:21:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 12:21:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:21:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:21:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 12:21:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:21:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 12:21:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:21:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:21:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:21:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:21:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:21:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:21:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:21:40 np0005601226 nova_compute[239456]: 2026-01-29 17:21:40.780 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:21:40 np0005601226 nova_compute[239456]: 2026-01-29 17:21:40.990 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:21:41 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e241 do_prune osdmap full prune enabled
Jan 29 12:21:41 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e242 e242: 3 total, 3 up, 3 in
Jan 29 12:21:41 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e242: 3 total, 3 up, 3 in
Jan 29 12:21:41 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1194: 305 pgs: 305 active+clean; 41 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 141 KiB/s rd, 5.8 KiB/s wr, 214 op/s
Jan 29 12:21:41 np0005601226 podman[254272]: 2026-01-29 17:21:41.936985884 +0000 UTC m=+0.109653054 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 29 12:21:41 np0005601226 podman[254273]: 2026-01-29 17:21:41.953539754 +0000 UTC m=+0.124666452 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:21:42 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e242 do_prune osdmap full prune enabled
Jan 29 12:21:42 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e243 e243: 3 total, 3 up, 3 in
Jan 29 12:21:42 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e243: 3 total, 3 up, 3 in
Jan 29 12:21:43 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:21:43 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e243 do_prune osdmap full prune enabled
Jan 29 12:21:43 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e244 e244: 3 total, 3 up, 3 in
Jan 29 12:21:43 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e244: 3 total, 3 up, 3 in
Jan 29 12:21:43 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1197: 305 pgs: 305 active+clean; 41 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 87 KiB/s rd, 5.5 KiB/s wr, 140 op/s
Jan 29 12:21:45 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1198: 305 pgs: 305 active+clean; 41 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 2.7 KiB/s wr, 31 op/s
Jan 29 12:21:45 np0005601226 nova_compute[239456]: 2026-01-29 17:21:45.784 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:21:45 np0005601226 nova_compute[239456]: 2026-01-29 17:21:45.992 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:21:46 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:21:46 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/833919019' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:21:47 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e244 do_prune osdmap full prune enabled
Jan 29 12:21:47 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1199: 305 pgs: 305 active+clean; 41 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 2.7 KiB/s wr, 31 op/s
Jan 29 12:21:47 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e245 e245: 3 total, 3 up, 3 in
Jan 29 12:21:47 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e245: 3 total, 3 up, 3 in
Jan 29 12:21:48 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:21:49 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1201: 305 pgs: 305 active+clean; 41 MiB data, 235 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 5.1 KiB/s wr, 82 op/s
Jan 29 12:21:50 np0005601226 nova_compute[239456]: 2026-01-29 17:21:50.832 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:21:50 np0005601226 nova_compute[239456]: 2026-01-29 17:21:50.994 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:21:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 12:21:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:21:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 12:21:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:21:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.019847889993545e-07 of space, bias 1.0, pg target 0.00018059543669980634 quantized to 32 (current 32)
Jan 29 12:21:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:21:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 8.544240634702353e-06 of space, bias 1.0, pg target 0.002563272190410706 quantized to 32 (current 32)
Jan 29 12:21:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:21:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.812868728732088e-07 of space, bias 1.0, pg target 0.00020438606186196263 quantized to 32 (current 32)
Jan 29 12:21:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:21:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006660442609390167 of space, bias 1.0, pg target 0.199813278281705 quantized to 32 (current 32)
Jan 29 12:21:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:21:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4154521517124214e-06 of space, bias 4.0, pg target 0.0016985425820549057 quantized to 16 (current 16)
Jan 29 12:21:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:21:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:21:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:21:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 29 12:21:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:21:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 29 12:21:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:21:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:21:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:21:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 29 12:21:51 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1202: 305 pgs: 305 active+clean; 41 MiB data, 235 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 3.9 KiB/s wr, 66 op/s
Jan 29 12:21:51 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:21:51 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3258998901' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:21:51 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:21:51 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3258998901' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:21:52 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:21:52 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3034137687' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:21:52 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:21:52 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3034137687' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:21:52 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e245 do_prune osdmap full prune enabled
Jan 29 12:21:52 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e246 e246: 3 total, 3 up, 3 in
Jan 29 12:21:52 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e246: 3 total, 3 up, 3 in
Jan 29 12:21:53 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:21:53 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1204: 305 pgs: 305 active+clean; 41 MiB data, 235 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 2.4 KiB/s wr, 52 op/s
Jan 29 12:21:55 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1205: 305 pgs: 305 active+clean; 41 MiB data, 235 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 3.7 KiB/s wr, 110 op/s
Jan 29 12:21:55 np0005601226 nova_compute[239456]: 2026-01-29 17:21:55.835 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:21:55 np0005601226 nova_compute[239456]: 2026-01-29 17:21:55.993 239460 DEBUG oslo_concurrency.lockutils [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Acquiring lock "656165e5-9250-4055-8194-45e769830100" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:21:55 np0005601226 nova_compute[239456]: 2026-01-29 17:21:55.993 239460 DEBUG oslo_concurrency.lockutils [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Lock "656165e5-9250-4055-8194-45e769830100" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:21:55 np0005601226 nova_compute[239456]: 2026-01-29 17:21:55.995 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:21:56 np0005601226 nova_compute[239456]: 2026-01-29 17:21:56.009 239460 DEBUG nova.compute.manager [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] [instance: 656165e5-9250-4055-8194-45e769830100] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 29 12:21:56 np0005601226 nova_compute[239456]: 2026-01-29 17:21:56.086 239460 DEBUG oslo_concurrency.lockutils [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:21:56 np0005601226 nova_compute[239456]: 2026-01-29 17:21:56.086 239460 DEBUG oslo_concurrency.lockutils [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:21:56 np0005601226 nova_compute[239456]: 2026-01-29 17:21:56.096 239460 DEBUG nova.virt.hardware [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 29 12:21:56 np0005601226 nova_compute[239456]: 2026-01-29 17:21:56.096 239460 INFO nova.compute.claims [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] [instance: 656165e5-9250-4055-8194-45e769830100] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 29 12:21:56 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:21:56 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2031740240' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:21:56 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:21:56 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2031740240' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:21:56 np0005601226 nova_compute[239456]: 2026-01-29 17:21:56.199 239460 DEBUG oslo_concurrency.processutils [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:21:56 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:21:56 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2226199055' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:21:56 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:21:56 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2226199055' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:21:56 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:21:56 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2343711216' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:21:56 np0005601226 nova_compute[239456]: 2026-01-29 17:21:56.700 239460 DEBUG oslo_concurrency.processutils [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:21:56 np0005601226 nova_compute[239456]: 2026-01-29 17:21:56.706 239460 DEBUG nova.compute.provider_tree [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:21:56 np0005601226 nova_compute[239456]: 2026-01-29 17:21:56.722 239460 DEBUG nova.scheduler.client.report [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:21:56 np0005601226 nova_compute[239456]: 2026-01-29 17:21:56.743 239460 DEBUG oslo_concurrency.lockutils [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.656s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:21:56 np0005601226 nova_compute[239456]: 2026-01-29 17:21:56.743 239460 DEBUG nova.compute.manager [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] [instance: 656165e5-9250-4055-8194-45e769830100] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 29 12:21:56 np0005601226 nova_compute[239456]: 2026-01-29 17:21:56.784 239460 DEBUG nova.compute.manager [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] [instance: 656165e5-9250-4055-8194-45e769830100] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 29 12:21:56 np0005601226 nova_compute[239456]: 2026-01-29 17:21:56.784 239460 DEBUG nova.network.neutron [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] [instance: 656165e5-9250-4055-8194-45e769830100] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 29 12:21:56 np0005601226 nova_compute[239456]: 2026-01-29 17:21:56.802 239460 INFO nova.virt.libvirt.driver [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] [instance: 656165e5-9250-4055-8194-45e769830100] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 29 12:21:56 np0005601226 nova_compute[239456]: 2026-01-29 17:21:56.820 239460 DEBUG nova.compute.manager [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] [instance: 656165e5-9250-4055-8194-45e769830100] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 29 12:21:56 np0005601226 nova_compute[239456]: 2026-01-29 17:21:56.917 239460 DEBUG nova.compute.manager [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] [instance: 656165e5-9250-4055-8194-45e769830100] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 29 12:21:56 np0005601226 nova_compute[239456]: 2026-01-29 17:21:56.918 239460 DEBUG nova.virt.libvirt.driver [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] [instance: 656165e5-9250-4055-8194-45e769830100] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 29 12:21:56 np0005601226 nova_compute[239456]: 2026-01-29 17:21:56.918 239460 INFO nova.virt.libvirt.driver [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] [instance: 656165e5-9250-4055-8194-45e769830100] Creating image(s)#033[00m
Jan 29 12:21:56 np0005601226 nova_compute[239456]: 2026-01-29 17:21:56.935 239460 DEBUG nova.storage.rbd_utils [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] rbd image 656165e5-9250-4055-8194-45e769830100_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:21:56 np0005601226 nova_compute[239456]: 2026-01-29 17:21:56.955 239460 DEBUG nova.storage.rbd_utils [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] rbd image 656165e5-9250-4055-8194-45e769830100_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:21:56 np0005601226 nova_compute[239456]: 2026-01-29 17:21:56.973 239460 DEBUG nova.storage.rbd_utils [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] rbd image 656165e5-9250-4055-8194-45e769830100_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:21:56 np0005601226 nova_compute[239456]: 2026-01-29 17:21:56.976 239460 DEBUG oslo_concurrency.processutils [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/619359f3f53a439a222a6a2a89408201f4394e5d --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:21:57 np0005601226 nova_compute[239456]: 2026-01-29 17:21:57.022 239460 DEBUG oslo_concurrency.processutils [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/619359f3f53a439a222a6a2a89408201f4394e5d --force-share --output=json" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:21:57 np0005601226 nova_compute[239456]: 2026-01-29 17:21:57.023 239460 DEBUG oslo_concurrency.lockutils [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Acquiring lock "619359f3f53a439a222a6a2a89408201f4394e5d" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:21:57 np0005601226 nova_compute[239456]: 2026-01-29 17:21:57.023 239460 DEBUG oslo_concurrency.lockutils [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Lock "619359f3f53a439a222a6a2a89408201f4394e5d" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:21:57 np0005601226 nova_compute[239456]: 2026-01-29 17:21:57.023 239460 DEBUG oslo_concurrency.lockutils [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Lock "619359f3f53a439a222a6a2a89408201f4394e5d" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:21:57 np0005601226 nova_compute[239456]: 2026-01-29 17:21:57.041 239460 DEBUG nova.storage.rbd_utils [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] rbd image 656165e5-9250-4055-8194-45e769830100_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:21:57 np0005601226 nova_compute[239456]: 2026-01-29 17:21:57.044 239460 DEBUG oslo_concurrency.processutils [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/619359f3f53a439a222a6a2a89408201f4394e5d 656165e5-9250-4055-8194-45e769830100_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:21:57 np0005601226 nova_compute[239456]: 2026-01-29 17:21:57.192 239460 DEBUG nova.policy [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '74a4d39ed5f246a285b523d04bd13f4f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6e0cefcde775417f910c6b8d8982c845', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 29 12:21:57 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e246 do_prune osdmap full prune enabled
Jan 29 12:21:57 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e247 e247: 3 total, 3 up, 3 in
Jan 29 12:21:57 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e247: 3 total, 3 up, 3 in
Jan 29 12:21:57 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1207: 305 pgs: 305 active+clean; 41 MiB data, 235 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 1.4 KiB/s wr, 63 op/s
Jan 29 12:21:57 np0005601226 nova_compute[239456]: 2026-01-29 17:21:57.939 239460 DEBUG oslo_concurrency.processutils [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/619359f3f53a439a222a6a2a89408201f4394e5d 656165e5-9250-4055-8194-45e769830100_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.895s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:21:57 np0005601226 nova_compute[239456]: 2026-01-29 17:21:57.992 239460 DEBUG nova.storage.rbd_utils [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] resizing rbd image 656165e5-9250-4055-8194-45e769830100_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 29 12:21:58 np0005601226 nova_compute[239456]: 2026-01-29 17:21:58.270 239460 DEBUG nova.objects.instance [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Lazy-loading 'migration_context' on Instance uuid 656165e5-9250-4055-8194-45e769830100 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:21:58 np0005601226 nova_compute[239456]: 2026-01-29 17:21:58.289 239460 DEBUG nova.virt.libvirt.driver [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] [instance: 656165e5-9250-4055-8194-45e769830100] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 29 12:21:58 np0005601226 nova_compute[239456]: 2026-01-29 17:21:58.290 239460 DEBUG nova.virt.libvirt.driver [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] [instance: 656165e5-9250-4055-8194-45e769830100] Ensure instance console log exists: /var/lib/nova/instances/656165e5-9250-4055-8194-45e769830100/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 29 12:21:58 np0005601226 nova_compute[239456]: 2026-01-29 17:21:58.290 239460 DEBUG oslo_concurrency.lockutils [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:21:58 np0005601226 nova_compute[239456]: 2026-01-29 17:21:58.291 239460 DEBUG oslo_concurrency.lockutils [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:21:58 np0005601226 nova_compute[239456]: 2026-01-29 17:21:58.291 239460 DEBUG oslo_concurrency.lockutils [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:21:58 np0005601226 nova_compute[239456]: 2026-01-29 17:21:58.396 239460 DEBUG nova.network.neutron [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] [instance: 656165e5-9250-4055-8194-45e769830100] Successfully created port: f793e3fd-9b6a-4e49-af85-bae055fa6d70 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 29 12:21:58 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:21:58 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e247 do_prune osdmap full prune enabled
Jan 29 12:21:58 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:21:58 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/956379371' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:21:58 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e248 e248: 3 total, 3 up, 3 in
Jan 29 12:21:58 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e248: 3 total, 3 up, 3 in
Jan 29 12:21:58 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:21:58 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/956379371' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:21:59 np0005601226 nova_compute[239456]: 2026-01-29 17:21:59.195 239460 DEBUG nova.network.neutron [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] [instance: 656165e5-9250-4055-8194-45e769830100] Successfully updated port: f793e3fd-9b6a-4e49-af85-bae055fa6d70 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 29 12:21:59 np0005601226 nova_compute[239456]: 2026-01-29 17:21:59.214 239460 DEBUG oslo_concurrency.lockutils [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Acquiring lock "refresh_cache-656165e5-9250-4055-8194-45e769830100" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:21:59 np0005601226 nova_compute[239456]: 2026-01-29 17:21:59.214 239460 DEBUG oslo_concurrency.lockutils [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Acquired lock "refresh_cache-656165e5-9250-4055-8194-45e769830100" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:21:59 np0005601226 nova_compute[239456]: 2026-01-29 17:21:59.215 239460 DEBUG nova.network.neutron [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] [instance: 656165e5-9250-4055-8194-45e769830100] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 29 12:21:59 np0005601226 nova_compute[239456]: 2026-01-29 17:21:59.292 239460 DEBUG nova.compute.manager [req-e68b3439-3797-49a5-8411-01a0ff17845b req-f9b160d8-0812-4c24-95e7-563ec02b5df5 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 656165e5-9250-4055-8194-45e769830100] Received event network-changed-f793e3fd-9b6a-4e49-af85-bae055fa6d70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:21:59 np0005601226 nova_compute[239456]: 2026-01-29 17:21:59.292 239460 DEBUG nova.compute.manager [req-e68b3439-3797-49a5-8411-01a0ff17845b req-f9b160d8-0812-4c24-95e7-563ec02b5df5 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 656165e5-9250-4055-8194-45e769830100] Refreshing instance network info cache due to event network-changed-f793e3fd-9b6a-4e49-af85-bae055fa6d70. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 29 12:21:59 np0005601226 nova_compute[239456]: 2026-01-29 17:21:59.292 239460 DEBUG oslo_concurrency.lockutils [req-e68b3439-3797-49a5-8411-01a0ff17845b req-f9b160d8-0812-4c24-95e7-563ec02b5df5 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "refresh_cache-656165e5-9250-4055-8194-45e769830100" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:21:59 np0005601226 nova_compute[239456]: 2026-01-29 17:21:59.378 239460 DEBUG nova.network.neutron [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] [instance: 656165e5-9250-4055-8194-45e769830100] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 29 12:21:59 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e248 do_prune osdmap full prune enabled
Jan 29 12:21:59 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e249 e249: 3 total, 3 up, 3 in
Jan 29 12:21:59 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e249: 3 total, 3 up, 3 in
Jan 29 12:21:59 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1210: 305 pgs: 305 active+clean; 88 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 159 KiB/s rd, 3.6 MiB/s wr, 219 op/s
Jan 29 12:22:00 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:22:00 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2893529922' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:22:00 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:22:00 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2893529922' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:22:00 np0005601226 nova_compute[239456]: 2026-01-29 17:22:00.407 239460 DEBUG nova.network.neutron [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] [instance: 656165e5-9250-4055-8194-45e769830100] Updating instance_info_cache with network_info: [{"id": "f793e3fd-9b6a-4e49-af85-bae055fa6d70", "address": "fa:16:3e:ae:e0:37", "network": {"id": "ee3f1e72-8c27-4871-b363-434386faae30", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2027070641-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e0cefcde775417f910c6b8d8982c845", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf793e3fd-9b", "ovs_interfaceid": "f793e3fd-9b6a-4e49-af85-bae055fa6d70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:22:00 np0005601226 nova_compute[239456]: 2026-01-29 17:22:00.432 239460 DEBUG oslo_concurrency.lockutils [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Releasing lock "refresh_cache-656165e5-9250-4055-8194-45e769830100" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:22:00 np0005601226 nova_compute[239456]: 2026-01-29 17:22:00.432 239460 DEBUG nova.compute.manager [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] [instance: 656165e5-9250-4055-8194-45e769830100] Instance network_info: |[{"id": "f793e3fd-9b6a-4e49-af85-bae055fa6d70", "address": "fa:16:3e:ae:e0:37", "network": {"id": "ee3f1e72-8c27-4871-b363-434386faae30", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2027070641-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e0cefcde775417f910c6b8d8982c845", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf793e3fd-9b", "ovs_interfaceid": "f793e3fd-9b6a-4e49-af85-bae055fa6d70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 29 12:22:00 np0005601226 nova_compute[239456]: 2026-01-29 17:22:00.433 239460 DEBUG oslo_concurrency.lockutils [req-e68b3439-3797-49a5-8411-01a0ff17845b req-f9b160d8-0812-4c24-95e7-563ec02b5df5 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquired lock "refresh_cache-656165e5-9250-4055-8194-45e769830100" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:22:00 np0005601226 nova_compute[239456]: 2026-01-29 17:22:00.433 239460 DEBUG nova.network.neutron [req-e68b3439-3797-49a5-8411-01a0ff17845b req-f9b160d8-0812-4c24-95e7-563ec02b5df5 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 656165e5-9250-4055-8194-45e769830100] Refreshing network info cache for port f793e3fd-9b6a-4e49-af85-bae055fa6d70 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 29 12:22:00 np0005601226 nova_compute[239456]: 2026-01-29 17:22:00.435 239460 DEBUG nova.virt.libvirt.driver [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] [instance: 656165e5-9250-4055-8194-45e769830100] Start _get_guest_xml network_info=[{"id": "f793e3fd-9b6a-4e49-af85-bae055fa6d70", "address": "fa:16:3e:ae:e0:37", "network": {"id": "ee3f1e72-8c27-4871-b363-434386faae30", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2027070641-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e0cefcde775417f910c6b8d8982c845", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf793e3fd-9b", "ovs_interfaceid": "f793e3fd-9b6a-4e49-af85-bae055fa6d70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-29T17:13:26Z,direct_url=<?>,disk_format='qcow2',id=71879218-5462-43bb-aba6-6319695b24fd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='218d87653c0f4776a3f1900d36945229',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-29T17:13:29Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'boot_index': 0, 'encrypted': False, 'image_id': '71879218-5462-43bb-aba6-6319695b24fd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 29 12:22:00 np0005601226 nova_compute[239456]: 2026-01-29 17:22:00.439 239460 WARNING nova.virt.libvirt.driver [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 29 12:22:00 np0005601226 nova_compute[239456]: 2026-01-29 17:22:00.443 239460 DEBUG nova.virt.libvirt.host [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 29 12:22:00 np0005601226 nova_compute[239456]: 2026-01-29 17:22:00.444 239460 DEBUG nova.virt.libvirt.host [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 29 12:22:00 np0005601226 nova_compute[239456]: 2026-01-29 17:22:00.448 239460 DEBUG nova.virt.libvirt.host [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 29 12:22:00 np0005601226 nova_compute[239456]: 2026-01-29 17:22:00.449 239460 DEBUG nova.virt.libvirt.host [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 29 12:22:00 np0005601226 nova_compute[239456]: 2026-01-29 17:22:00.449 239460 DEBUG nova.virt.libvirt.driver [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 29 12:22:00 np0005601226 nova_compute[239456]: 2026-01-29 17:22:00.450 239460 DEBUG nova.virt.hardware [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-29T17:13:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='e367d44e-e23e-4b8e-90d1-56d09c8403b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-29T17:13:26Z,direct_url=<?>,disk_format='qcow2',id=71879218-5462-43bb-aba6-6319695b24fd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='218d87653c0f4776a3f1900d36945229',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-29T17:13:29Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 29 12:22:00 np0005601226 nova_compute[239456]: 2026-01-29 17:22:00.450 239460 DEBUG nova.virt.hardware [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 29 12:22:00 np0005601226 nova_compute[239456]: 2026-01-29 17:22:00.451 239460 DEBUG nova.virt.hardware [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 29 12:22:00 np0005601226 nova_compute[239456]: 2026-01-29 17:22:00.451 239460 DEBUG nova.virt.hardware [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 29 12:22:00 np0005601226 nova_compute[239456]: 2026-01-29 17:22:00.451 239460 DEBUG nova.virt.hardware [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 29 12:22:00 np0005601226 nova_compute[239456]: 2026-01-29 17:22:00.451 239460 DEBUG nova.virt.hardware [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 29 12:22:00 np0005601226 nova_compute[239456]: 2026-01-29 17:22:00.452 239460 DEBUG nova.virt.hardware [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 29 12:22:00 np0005601226 nova_compute[239456]: 2026-01-29 17:22:00.452 239460 DEBUG nova.virt.hardware [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 29 12:22:00 np0005601226 nova_compute[239456]: 2026-01-29 17:22:00.452 239460 DEBUG nova.virt.hardware [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 29 12:22:00 np0005601226 nova_compute[239456]: 2026-01-29 17:22:00.452 239460 DEBUG nova.virt.hardware [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 29 12:22:00 np0005601226 nova_compute[239456]: 2026-01-29 17:22:00.453 239460 DEBUG nova.virt.hardware [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 29 12:22:00 np0005601226 nova_compute[239456]: 2026-01-29 17:22:00.456 239460 DEBUG oslo_concurrency.processutils [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:22:00 np0005601226 nova_compute[239456]: 2026-01-29 17:22:00.881 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:22:00 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:22:00 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2225773636' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:22:00 np0005601226 nova_compute[239456]: 2026-01-29 17:22:00.986 239460 DEBUG oslo_concurrency.processutils [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:22:01 np0005601226 nova_compute[239456]: 2026-01-29 17:22:01.007 239460 DEBUG nova.storage.rbd_utils [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] rbd image 656165e5-9250-4055-8194-45e769830100_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:22:01 np0005601226 nova_compute[239456]: 2026-01-29 17:22:01.011 239460 DEBUG oslo_concurrency.processutils [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:22:01 np0005601226 nova_compute[239456]: 2026-01-29 17:22:01.023 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:22:01 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:22:01 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3968428534' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:22:01 np0005601226 nova_compute[239456]: 2026-01-29 17:22:01.582 239460 DEBUG oslo_concurrency.processutils [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:22:01 np0005601226 nova_compute[239456]: 2026-01-29 17:22:01.584 239460 DEBUG nova.virt.libvirt.vif [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-29T17:21:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-1351308516',display_name='tempest-TestEncryptedCinderVolumes-server-1351308516',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-1351308516',id=7,image_ref='71879218-5462-43bb-aba6-6319695b24fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPKlBXk1QwMphZcD06R+9MU50NB47/oBF0AqKb9wOktQB9Eg8YEK5V6F73w8pFIVMo8mtRPe024h67r7d8H4sUQbGBcrztjwARD6YyUSZK3JSpktNEbwcEv2v/40+5lZUg==',key_name='tempest-keypair-1140030069',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6e0cefcde775417f910c6b8d8982c845',ramdisk_id='',reservation_id='r-v5ugy9g0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='71879218-5462-43bb-aba6-6319695b24fd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-1346500371',owner_user_name='tempest-TestEncryptedCinderVolumes-1346500371-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-29T17:21:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='74a4d39ed5f246a285b523d04bd13f4f',uuid=656165e5-9250-4055-8194-45e769830100,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f793e3fd-9b6a-4e49-af85-bae055fa6d70", "address": "fa:16:3e:ae:e0:37", "network": {"id": "ee3f1e72-8c27-4871-b363-434386faae30", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2027070641-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e0cefcde775417f910c6b8d8982c845", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf793e3fd-9b", "ovs_interfaceid": "f793e3fd-9b6a-4e49-af85-bae055fa6d70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 29 12:22:01 np0005601226 nova_compute[239456]: 2026-01-29 17:22:01.584 239460 DEBUG nova.network.os_vif_util [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Converting VIF {"id": "f793e3fd-9b6a-4e49-af85-bae055fa6d70", "address": "fa:16:3e:ae:e0:37", "network": {"id": "ee3f1e72-8c27-4871-b363-434386faae30", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2027070641-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e0cefcde775417f910c6b8d8982c845", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf793e3fd-9b", "ovs_interfaceid": "f793e3fd-9b6a-4e49-af85-bae055fa6d70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 29 12:22:01 np0005601226 nova_compute[239456]: 2026-01-29 17:22:01.585 239460 DEBUG nova.network.os_vif_util [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ae:e0:37,bridge_name='br-int',has_traffic_filtering=True,id=f793e3fd-9b6a-4e49-af85-bae055fa6d70,network=Network(ee3f1e72-8c27-4871-b363-434386faae30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf793e3fd-9b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 29 12:22:01 np0005601226 nova_compute[239456]: 2026-01-29 17:22:01.586 239460 DEBUG nova.objects.instance [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Lazy-loading 'pci_devices' on Instance uuid 656165e5-9250-4055-8194-45e769830100 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:22:01 np0005601226 nova_compute[239456]: 2026-01-29 17:22:01.602 239460 DEBUG nova.virt.libvirt.driver [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] [instance: 656165e5-9250-4055-8194-45e769830100] End _get_guest_xml xml=<domain type="kvm">
Jan 29 12:22:01 np0005601226 nova_compute[239456]:  <uuid>656165e5-9250-4055-8194-45e769830100</uuid>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:  <name>instance-00000007</name>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:  <memory>131072</memory>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:  <vcpu>1</vcpu>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:  <metadata>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 29 12:22:01 np0005601226 nova_compute[239456]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:      <nova:name>tempest-TestEncryptedCinderVolumes-server-1351308516</nova:name>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:      <nova:creationTime>2026-01-29 17:22:00</nova:creationTime>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:      <nova:flavor name="m1.nano">
Jan 29 12:22:01 np0005601226 nova_compute[239456]:        <nova:memory>128</nova:memory>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:        <nova:disk>1</nova:disk>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:        <nova:swap>0</nova:swap>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:        <nova:ephemeral>0</nova:ephemeral>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:        <nova:vcpus>1</nova:vcpus>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:      </nova:flavor>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:      <nova:owner>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:        <nova:user uuid="74a4d39ed5f246a285b523d04bd13f4f">tempest-TestEncryptedCinderVolumes-1346500371-project-member</nova:user>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:        <nova:project uuid="6e0cefcde775417f910c6b8d8982c845">tempest-TestEncryptedCinderVolumes-1346500371</nova:project>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:      </nova:owner>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:      <nova:root type="image" uuid="71879218-5462-43bb-aba6-6319695b24fd"/>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:      <nova:ports>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:        <nova:port uuid="f793e3fd-9b6a-4e49-af85-bae055fa6d70">
Jan 29 12:22:01 np0005601226 nova_compute[239456]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:        </nova:port>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:      </nova:ports>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:    </nova:instance>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:  </metadata>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:  <sysinfo type="smbios">
Jan 29 12:22:01 np0005601226 nova_compute[239456]:    <system>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:      <entry name="manufacturer">RDO</entry>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:      <entry name="product">OpenStack Compute</entry>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:      <entry name="serial">656165e5-9250-4055-8194-45e769830100</entry>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:      <entry name="uuid">656165e5-9250-4055-8194-45e769830100</entry>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:      <entry name="family">Virtual Machine</entry>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:    </system>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:  </sysinfo>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:  <os>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:    <boot dev="hd"/>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:    <smbios mode="sysinfo"/>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:  </os>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:  <features>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:    <acpi/>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:    <apic/>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:    <vmcoreinfo/>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:  </features>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:  <clock offset="utc">
Jan 29 12:22:01 np0005601226 nova_compute[239456]:    <timer name="pit" tickpolicy="delay"/>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:    <timer name="hpet" present="no"/>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:  </clock>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:  <cpu mode="host-model" match="exact">
Jan 29 12:22:01 np0005601226 nova_compute[239456]:    <topology sockets="1" cores="1" threads="1"/>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:  </cpu>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:  <devices>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:    <disk type="network" device="disk">
Jan 29 12:22:01 np0005601226 nova_compute[239456]:      <driver type="raw" cache="none"/>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:      <source protocol="rbd" name="vms/656165e5-9250-4055-8194-45e769830100_disk">
Jan 29 12:22:01 np0005601226 nova_compute[239456]:        <host name="192.168.122.100" port="6789"/>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:      </source>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:      <auth username="openstack">
Jan 29 12:22:01 np0005601226 nova_compute[239456]:        <secret type="ceph" uuid="cc5c72e3-31e0-58b9-8731-456117d38f4a"/>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:      </auth>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:      <target dev="vda" bus="virtio"/>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:    </disk>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:    <disk type="network" device="cdrom">
Jan 29 12:22:01 np0005601226 nova_compute[239456]:      <driver type="raw" cache="none"/>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:      <source protocol="rbd" name="vms/656165e5-9250-4055-8194-45e769830100_disk.config">
Jan 29 12:22:01 np0005601226 nova_compute[239456]:        <host name="192.168.122.100" port="6789"/>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:      </source>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:      <auth username="openstack">
Jan 29 12:22:01 np0005601226 nova_compute[239456]:        <secret type="ceph" uuid="cc5c72e3-31e0-58b9-8731-456117d38f4a"/>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:      </auth>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:      <target dev="sda" bus="sata"/>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:    </disk>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:    <interface type="ethernet">
Jan 29 12:22:01 np0005601226 nova_compute[239456]:      <mac address="fa:16:3e:ae:e0:37"/>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:      <model type="virtio"/>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:      <driver name="vhost" rx_queue_size="512"/>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:      <mtu size="1442"/>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:      <target dev="tapf793e3fd-9b"/>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:    </interface>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:    <serial type="pty">
Jan 29 12:22:01 np0005601226 nova_compute[239456]:      <log file="/var/lib/nova/instances/656165e5-9250-4055-8194-45e769830100/console.log" append="off"/>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:    </serial>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:    <video>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:      <model type="virtio"/>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:    </video>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:    <input type="tablet" bus="usb"/>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:    <rng model="virtio">
Jan 29 12:22:01 np0005601226 nova_compute[239456]:      <backend model="random">/dev/urandom</backend>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:    </rng>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root"/>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:    <controller type="usb" index="0"/>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:    <memballoon model="virtio">
Jan 29 12:22:01 np0005601226 nova_compute[239456]:      <stats period="10"/>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:    </memballoon>
Jan 29 12:22:01 np0005601226 nova_compute[239456]:  </devices>
Jan 29 12:22:01 np0005601226 nova_compute[239456]: </domain>
Jan 29 12:22:01 np0005601226 nova_compute[239456]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 29 12:22:01 np0005601226 nova_compute[239456]: 2026-01-29 17:22:01.603 239460 DEBUG nova.compute.manager [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] [instance: 656165e5-9250-4055-8194-45e769830100] Preparing to wait for external event network-vif-plugged-f793e3fd-9b6a-4e49-af85-bae055fa6d70 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 29 12:22:01 np0005601226 nova_compute[239456]: 2026-01-29 17:22:01.604 239460 DEBUG oslo_concurrency.lockutils [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Acquiring lock "656165e5-9250-4055-8194-45e769830100-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:22:01 np0005601226 nova_compute[239456]: 2026-01-29 17:22:01.604 239460 DEBUG oslo_concurrency.lockutils [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Lock "656165e5-9250-4055-8194-45e769830100-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:22:01 np0005601226 nova_compute[239456]: 2026-01-29 17:22:01.604 239460 DEBUG oslo_concurrency.lockutils [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Lock "656165e5-9250-4055-8194-45e769830100-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:22:01 np0005601226 nova_compute[239456]: 2026-01-29 17:22:01.605 239460 DEBUG nova.virt.libvirt.vif [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-29T17:21:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-1351308516',display_name='tempest-TestEncryptedCinderVolumes-server-1351308516',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-1351308516',id=7,image_ref='71879218-5462-43bb-aba6-6319695b24fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPKlBXk1QwMphZcD06R+9MU50NB47/oBF0AqKb9wOktQB9Eg8YEK5V6F73w8pFIVMo8mtRPe024h67r7d8H4sUQbGBcrztjwARD6YyUSZK3JSpktNEbwcEv2v/40+5lZUg==',key_name='tempest-keypair-1140030069',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6e0cefcde775417f910c6b8d8982c845',ramdisk_id='',reservation_id='r-v5ugy9g0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='71879218-5462-43bb-aba6-6319695b24fd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-1346500371',owner_user_name='tempest-TestEncryptedCinderVolumes-1346500371-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-29T17:21:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='74a4d39ed5f246a285b523d04bd13f4f',uuid=656165e5-9250-4055-8194-45e769830100,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f793e3fd-9b6a-4e49-af85-bae055fa6d70", "address": "fa:16:3e:ae:e0:37", "network": {"id": "ee3f1e72-8c27-4871-b363-434386faae30", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2027070641-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e0cefcde775417f910c6b8d8982c845", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf793e3fd-9b", "ovs_interfaceid": "f793e3fd-9b6a-4e49-af85-bae055fa6d70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 29 12:22:01 np0005601226 nova_compute[239456]: 2026-01-29 17:22:01.605 239460 DEBUG nova.network.os_vif_util [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Converting VIF {"id": "f793e3fd-9b6a-4e49-af85-bae055fa6d70", "address": "fa:16:3e:ae:e0:37", "network": {"id": "ee3f1e72-8c27-4871-b363-434386faae30", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2027070641-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e0cefcde775417f910c6b8d8982c845", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf793e3fd-9b", "ovs_interfaceid": "f793e3fd-9b6a-4e49-af85-bae055fa6d70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 29 12:22:01 np0005601226 nova_compute[239456]: 2026-01-29 17:22:01.606 239460 DEBUG nova.network.os_vif_util [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ae:e0:37,bridge_name='br-int',has_traffic_filtering=True,id=f793e3fd-9b6a-4e49-af85-bae055fa6d70,network=Network(ee3f1e72-8c27-4871-b363-434386faae30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf793e3fd-9b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 29 12:22:01 np0005601226 nova_compute[239456]: 2026-01-29 17:22:01.606 239460 DEBUG os_vif [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:e0:37,bridge_name='br-int',has_traffic_filtering=True,id=f793e3fd-9b6a-4e49-af85-bae055fa6d70,network=Network(ee3f1e72-8c27-4871-b363-434386faae30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf793e3fd-9b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 29 12:22:01 np0005601226 nova_compute[239456]: 2026-01-29 17:22:01.606 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:22:01 np0005601226 nova_compute[239456]: 2026-01-29 17:22:01.607 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:22:01 np0005601226 nova_compute[239456]: 2026-01-29 17:22:01.607 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 29 12:22:01 np0005601226 nova_compute[239456]: 2026-01-29 17:22:01.610 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:22:01 np0005601226 nova_compute[239456]: 2026-01-29 17:22:01.610 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf793e3fd-9b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:22:01 np0005601226 nova_compute[239456]: 2026-01-29 17:22:01.610 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf793e3fd-9b, col_values=(('external_ids', {'iface-id': 'f793e3fd-9b6a-4e49-af85-bae055fa6d70', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ae:e0:37', 'vm-uuid': '656165e5-9250-4055-8194-45e769830100'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:22:01 np0005601226 nova_compute[239456]: 2026-01-29 17:22:01.611 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:22:01 np0005601226 NetworkManager[49020]: <info>  [1769707321.6124] manager: (tapf793e3fd-9b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/49)
Jan 29 12:22:01 np0005601226 nova_compute[239456]: 2026-01-29 17:22:01.613 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 29 12:22:01 np0005601226 nova_compute[239456]: 2026-01-29 17:22:01.616 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:22:01 np0005601226 nova_compute[239456]: 2026-01-29 17:22:01.616 239460 INFO os_vif [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:e0:37,bridge_name='br-int',has_traffic_filtering=True,id=f793e3fd-9b6a-4e49-af85-bae055fa6d70,network=Network(ee3f1e72-8c27-4871-b363-434386faae30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf793e3fd-9b')#033[00m
Jan 29 12:22:01 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1211: 305 pgs: 305 active+clean; 88 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 99 KiB/s rd, 3.6 MiB/s wr, 142 op/s
Jan 29 12:22:01 np0005601226 nova_compute[239456]: 2026-01-29 17:22:01.812 239460 DEBUG nova.virt.libvirt.driver [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:22:01 np0005601226 nova_compute[239456]: 2026-01-29 17:22:01.812 239460 DEBUG nova.virt.libvirt.driver [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:22:01 np0005601226 nova_compute[239456]: 2026-01-29 17:22:01.813 239460 DEBUG nova.virt.libvirt.driver [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] No VIF found with MAC fa:16:3e:ae:e0:37, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 29 12:22:01 np0005601226 nova_compute[239456]: 2026-01-29 17:22:01.813 239460 INFO nova.virt.libvirt.driver [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] [instance: 656165e5-9250-4055-8194-45e769830100] Using config drive#033[00m
Jan 29 12:22:01 np0005601226 nova_compute[239456]: 2026-01-29 17:22:01.884 239460 DEBUG nova.storage.rbd_utils [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] rbd image 656165e5-9250-4055-8194-45e769830100_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:22:02 np0005601226 nova_compute[239456]: 2026-01-29 17:22:02.536 239460 DEBUG nova.network.neutron [req-e68b3439-3797-49a5-8411-01a0ff17845b req-f9b160d8-0812-4c24-95e7-563ec02b5df5 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 656165e5-9250-4055-8194-45e769830100] Updated VIF entry in instance network info cache for port f793e3fd-9b6a-4e49-af85-bae055fa6d70. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 29 12:22:02 np0005601226 nova_compute[239456]: 2026-01-29 17:22:02.536 239460 DEBUG nova.network.neutron [req-e68b3439-3797-49a5-8411-01a0ff17845b req-f9b160d8-0812-4c24-95e7-563ec02b5df5 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 656165e5-9250-4055-8194-45e769830100] Updating instance_info_cache with network_info: [{"id": "f793e3fd-9b6a-4e49-af85-bae055fa6d70", "address": "fa:16:3e:ae:e0:37", "network": {"id": "ee3f1e72-8c27-4871-b363-434386faae30", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2027070641-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e0cefcde775417f910c6b8d8982c845", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf793e3fd-9b", "ovs_interfaceid": "f793e3fd-9b6a-4e49-af85-bae055fa6d70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:22:02 np0005601226 nova_compute[239456]: 2026-01-29 17:22:02.556 239460 DEBUG oslo_concurrency.lockutils [req-e68b3439-3797-49a5-8411-01a0ff17845b req-f9b160d8-0812-4c24-95e7-563ec02b5df5 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Releasing lock "refresh_cache-656165e5-9250-4055-8194-45e769830100" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:22:02 np0005601226 nova_compute[239456]: 2026-01-29 17:22:02.751 239460 INFO nova.virt.libvirt.driver [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] [instance: 656165e5-9250-4055-8194-45e769830100] Creating config drive at /var/lib/nova/instances/656165e5-9250-4055-8194-45e769830100/disk.config#033[00m
Jan 29 12:22:02 np0005601226 nova_compute[239456]: 2026-01-29 17:22:02.756 239460 DEBUG oslo_concurrency.processutils [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/656165e5-9250-4055-8194-45e769830100/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw64k_6yl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:22:02 np0005601226 nova_compute[239456]: 2026-01-29 17:22:02.873 239460 DEBUG oslo_concurrency.processutils [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/656165e5-9250-4055-8194-45e769830100/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw64k_6yl" returned: 0 in 0.117s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:22:02 np0005601226 nova_compute[239456]: 2026-01-29 17:22:02.989 239460 DEBUG nova.storage.rbd_utils [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] rbd image 656165e5-9250-4055-8194-45e769830100_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:22:02 np0005601226 nova_compute[239456]: 2026-01-29 17:22:02.992 239460 DEBUG oslo_concurrency.processutils [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/656165e5-9250-4055-8194-45e769830100/disk.config 656165e5-9250-4055-8194-45e769830100_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:22:03 np0005601226 nova_compute[239456]: 2026-01-29 17:22:03.378 239460 DEBUG oslo_concurrency.processutils [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/656165e5-9250-4055-8194-45e769830100/disk.config 656165e5-9250-4055-8194-45e769830100_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.386s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:22:03 np0005601226 nova_compute[239456]: 2026-01-29 17:22:03.379 239460 INFO nova.virt.libvirt.driver [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] [instance: 656165e5-9250-4055-8194-45e769830100] Deleting local config drive /var/lib/nova/instances/656165e5-9250-4055-8194-45e769830100/disk.config because it was imported into RBD.#033[00m
Jan 29 12:22:03 np0005601226 kernel: tapf793e3fd-9b: entered promiscuous mode
Jan 29 12:22:03 np0005601226 NetworkManager[49020]: <info>  [1769707323.4139] manager: (tapf793e3fd-9b): new Tun device (/org/freedesktop/NetworkManager/Devices/50)
Jan 29 12:22:03 np0005601226 nova_compute[239456]: 2026-01-29 17:22:03.415 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:22:03 np0005601226 ovn_controller[145556]: 2026-01-29T17:22:03Z|00080|binding|INFO|Claiming lport f793e3fd-9b6a-4e49-af85-bae055fa6d70 for this chassis.
Jan 29 12:22:03 np0005601226 ovn_controller[145556]: 2026-01-29T17:22:03Z|00081|binding|INFO|f793e3fd-9b6a-4e49-af85-bae055fa6d70: Claiming fa:16:3e:ae:e0:37 10.100.0.5
Jan 29 12:22:03 np0005601226 nova_compute[239456]: 2026-01-29 17:22:03.417 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:22:03 np0005601226 nova_compute[239456]: 2026-01-29 17:22:03.420 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:22:03 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:22:03.433 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ae:e0:37 10.100.0.5'], port_security=['fa:16:3e:ae:e0:37 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '656165e5-9250-4055-8194-45e769830100', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ee3f1e72-8c27-4871-b363-434386faae30', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6e0cefcde775417f910c6b8d8982c845', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a4dd114a-0224-4866-9a1e-851c6913de54', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=13543b3a-ab20-4b68-b24c-0987c63c7970, chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], logical_port=f793e3fd-9b6a-4e49-af85-bae055fa6d70) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:22:03 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:22:03.435 155625 INFO neutron.agent.ovn.metadata.agent [-] Port f793e3fd-9b6a-4e49-af85-bae055fa6d70 in datapath ee3f1e72-8c27-4871-b363-434386faae30 bound to our chassis#033[00m
Jan 29 12:22:03 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:22:03.436 155625 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ee3f1e72-8c27-4871-b363-434386faae30#033[00m
Jan 29 12:22:03 np0005601226 systemd-machined[207561]: New machine qemu-7-instance-00000007.
Jan 29 12:22:03 np0005601226 systemd-udevd[254636]: Network interface NamePolicy= disabled on kernel command line.
Jan 29 12:22:03 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:22:03.444 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[abf9a125-ca13-4265-a0c9-48046d58287a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:22:03 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:22:03.445 155625 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapee3f1e72-81 in ovnmeta-ee3f1e72-8c27-4871-b363-434386faae30 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 29 12:22:03 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:22:03.450 246354 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapee3f1e72-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 29 12:22:03 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:22:03.450 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[26089b26-8843-4aef-8132-b7638826a2c1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:22:03 np0005601226 NetworkManager[49020]: <info>  [1769707323.4512] device (tapf793e3fd-9b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 29 12:22:03 np0005601226 NetworkManager[49020]: <info>  [1769707323.4523] device (tapf793e3fd-9b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 29 12:22:03 np0005601226 nova_compute[239456]: 2026-01-29 17:22:03.452 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:22:03 np0005601226 systemd[1]: Started Virtual Machine qemu-7-instance-00000007.
Jan 29 12:22:03 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:22:03.451 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[3728039b-fec6-4e34-bae9-52f13c1a305c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:22:03 np0005601226 ovn_controller[145556]: 2026-01-29T17:22:03Z|00082|binding|INFO|Setting lport f793e3fd-9b6a-4e49-af85-bae055fa6d70 ovn-installed in OVS
Jan 29 12:22:03 np0005601226 ovn_controller[145556]: 2026-01-29T17:22:03Z|00083|binding|INFO|Setting lport f793e3fd-9b6a-4e49-af85-bae055fa6d70 up in Southbound
Jan 29 12:22:03 np0005601226 nova_compute[239456]: 2026-01-29 17:22:03.458 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:22:03 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:22:03.461 156164 DEBUG oslo.privsep.daemon [-] privsep: reply[3a5835a8-0e39-42fb-b0f5-461d4a1a69cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:22:03 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:22:03.480 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[4f0caef9-9dc4-4f43-8c2a-5d5767ca0c26]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:22:03 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:22:03.500 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[eeb8b3fa-c218-4df3-a904-1773ee1e22aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:22:03 np0005601226 NetworkManager[49020]: <info>  [1769707323.5068] manager: (tapee3f1e72-80): new Veth device (/org/freedesktop/NetworkManager/Devices/51)
Jan 29 12:22:03 np0005601226 systemd-udevd[254639]: Network interface NamePolicy= disabled on kernel command line.
Jan 29 12:22:03 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:22:03.506 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[ad515e9e-bd66-4d87-b321-6c341f51214d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:22:03 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:22:03.531 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[d6ff93f8-e778-44c8-9801-f9350bedba04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:22:03 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:22:03.534 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[5a093813-8eb7-4252-8ce1-db6bf5b4bff6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:22:03 np0005601226 NetworkManager[49020]: <info>  [1769707323.5503] device (tapee3f1e72-80): carrier: link connected
Jan 29 12:22:03 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:22:03.552 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[4fa30c79-4292-45b3-8876-ae0058717021]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:22:03 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:22:03.565 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[69e857b2-477e-44ad-befe-90ebf1e5c805]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapee3f1e72-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:eb:53:4a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 31], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 467705, 'reachable_time': 39122, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254669, 'error': None, 'target': 'ovnmeta-ee3f1e72-8c27-4871-b363-434386faae30', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:22:03 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:22:03.578 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[3e5134b5-7fbb-4c13-b02d-68107f389546]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feeb:534a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 467705, 'tstamp': 467705}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 254670, 'error': None, 'target': 'ovnmeta-ee3f1e72-8c27-4871-b363-434386faae30', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:22:03 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:22:03.594 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[bfcda2ee-ccc1-44fb-b953-de0621aec5f4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapee3f1e72-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:eb:53:4a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 31], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 467705, 'reachable_time': 39122, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 254671, 'error': None, 'target': 'ovnmeta-ee3f1e72-8c27-4871-b363-434386faae30', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:22:03 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:22:03 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e249 do_prune osdmap full prune enabled
Jan 29 12:22:03 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:22:03.619 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[c27bc55a-5095-4e49-84ae-967c9240171c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:22:03 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e250 e250: 3 total, 3 up, 3 in
Jan 29 12:22:03 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e250: 3 total, 3 up, 3 in
Jan 29 12:22:03 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:22:03.663 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[1467cab7-d8bd-4d59-a6c5-0f2700baf9f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:22:03 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:22:03.664 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee3f1e72-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:22:03 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:22:03.664 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 29 12:22:03 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:22:03.665 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee3f1e72-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:22:03 np0005601226 nova_compute[239456]: 2026-01-29 17:22:03.666 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:22:03 np0005601226 kernel: tapee3f1e72-80: entered promiscuous mode
Jan 29 12:22:03 np0005601226 NetworkManager[49020]: <info>  [1769707323.6703] manager: (tapee3f1e72-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/52)
Jan 29 12:22:03 np0005601226 nova_compute[239456]: 2026-01-29 17:22:03.670 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:22:03 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:22:03.673 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapee3f1e72-80, col_values=(('external_ids', {'iface-id': '3be1df4f-37e6-4098-8309-11bc33a623dc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:22:03 np0005601226 ovn_controller[145556]: 2026-01-29T17:22:03Z|00084|binding|INFO|Releasing lport 3be1df4f-37e6-4098-8309-11bc33a623dc from this chassis (sb_readonly=0)
Jan 29 12:22:03 np0005601226 nova_compute[239456]: 2026-01-29 17:22:03.676 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:22:03 np0005601226 nova_compute[239456]: 2026-01-29 17:22:03.678 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:22:03 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:22:03.679 155625 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ee3f1e72-8c27-4871-b363-434386faae30.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ee3f1e72-8c27-4871-b363-434386faae30.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 29 12:22:03 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:22:03.681 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[fb45b64d-c192-4034-90a6-54d49c457a56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:22:03 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:22:03.682 155625 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 29 12:22:03 np0005601226 ovn_metadata_agent[155620]: global
Jan 29 12:22:03 np0005601226 ovn_metadata_agent[155620]:    log         /dev/log local0 debug
Jan 29 12:22:03 np0005601226 ovn_metadata_agent[155620]:    log-tag     haproxy-metadata-proxy-ee3f1e72-8c27-4871-b363-434386faae30
Jan 29 12:22:03 np0005601226 ovn_metadata_agent[155620]:    user        root
Jan 29 12:22:03 np0005601226 ovn_metadata_agent[155620]:    group       root
Jan 29 12:22:03 np0005601226 ovn_metadata_agent[155620]:    maxconn     1024
Jan 29 12:22:03 np0005601226 ovn_metadata_agent[155620]:    pidfile     /var/lib/neutron/external/pids/ee3f1e72-8c27-4871-b363-434386faae30.pid.haproxy
Jan 29 12:22:03 np0005601226 ovn_metadata_agent[155620]:    daemon
Jan 29 12:22:03 np0005601226 ovn_metadata_agent[155620]: 
Jan 29 12:22:03 np0005601226 ovn_metadata_agent[155620]: defaults
Jan 29 12:22:03 np0005601226 ovn_metadata_agent[155620]:    log global
Jan 29 12:22:03 np0005601226 ovn_metadata_agent[155620]:    mode http
Jan 29 12:22:03 np0005601226 ovn_metadata_agent[155620]:    option httplog
Jan 29 12:22:03 np0005601226 ovn_metadata_agent[155620]:    option dontlognull
Jan 29 12:22:03 np0005601226 ovn_metadata_agent[155620]:    option http-server-close
Jan 29 12:22:03 np0005601226 ovn_metadata_agent[155620]:    option forwardfor
Jan 29 12:22:03 np0005601226 ovn_metadata_agent[155620]:    retries                 3
Jan 29 12:22:03 np0005601226 ovn_metadata_agent[155620]:    timeout http-request    30s
Jan 29 12:22:03 np0005601226 ovn_metadata_agent[155620]:    timeout connect         30s
Jan 29 12:22:03 np0005601226 ovn_metadata_agent[155620]:    timeout client          32s
Jan 29 12:22:03 np0005601226 ovn_metadata_agent[155620]:    timeout server          32s
Jan 29 12:22:03 np0005601226 ovn_metadata_agent[155620]:    timeout http-keep-alive 30s
Jan 29 12:22:03 np0005601226 ovn_metadata_agent[155620]: 
Jan 29 12:22:03 np0005601226 ovn_metadata_agent[155620]: 
Jan 29 12:22:03 np0005601226 ovn_metadata_agent[155620]: listen listener
Jan 29 12:22:03 np0005601226 ovn_metadata_agent[155620]:    bind 169.254.169.254:80
Jan 29 12:22:03 np0005601226 ovn_metadata_agent[155620]:    server metadata /var/lib/neutron/metadata_proxy
Jan 29 12:22:03 np0005601226 ovn_metadata_agent[155620]:    http-request add-header X-OVN-Network-ID ee3f1e72-8c27-4871-b363-434386faae30
Jan 29 12:22:03 np0005601226 ovn_metadata_agent[155620]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 29 12:22:03 np0005601226 nova_compute[239456]: 2026-01-29 17:22:03.682 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:22:03 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:22:03.683 155625 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ee3f1e72-8c27-4871-b363-434386faae30', 'env', 'PROCESS_TAG=haproxy-ee3f1e72-8c27-4871-b363-434386faae30', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ee3f1e72-8c27-4871-b363-434386faae30.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 29 12:22:03 np0005601226 nova_compute[239456]: 2026-01-29 17:22:03.692 239460 DEBUG nova.compute.manager [req-7fbfbce0-8895-49b2-b00d-d2984a2a81f7 req-3d7d14c3-df5c-430e-b611-bfecd3211528 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 656165e5-9250-4055-8194-45e769830100] Received event network-vif-plugged-f793e3fd-9b6a-4e49-af85-bae055fa6d70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:22:03 np0005601226 nova_compute[239456]: 2026-01-29 17:22:03.693 239460 DEBUG oslo_concurrency.lockutils [req-7fbfbce0-8895-49b2-b00d-d2984a2a81f7 req-3d7d14c3-df5c-430e-b611-bfecd3211528 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "656165e5-9250-4055-8194-45e769830100-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:22:03 np0005601226 nova_compute[239456]: 2026-01-29 17:22:03.693 239460 DEBUG oslo_concurrency.lockutils [req-7fbfbce0-8895-49b2-b00d-d2984a2a81f7 req-3d7d14c3-df5c-430e-b611-bfecd3211528 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "656165e5-9250-4055-8194-45e769830100-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:22:03 np0005601226 nova_compute[239456]: 2026-01-29 17:22:03.693 239460 DEBUG oslo_concurrency.lockutils [req-7fbfbce0-8895-49b2-b00d-d2984a2a81f7 req-3d7d14c3-df5c-430e-b611-bfecd3211528 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "656165e5-9250-4055-8194-45e769830100-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:22:03 np0005601226 nova_compute[239456]: 2026-01-29 17:22:03.693 239460 DEBUG nova.compute.manager [req-7fbfbce0-8895-49b2-b00d-d2984a2a81f7 req-3d7d14c3-df5c-430e-b611-bfecd3211528 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 656165e5-9250-4055-8194-45e769830100] Processing event network-vif-plugged-f793e3fd-9b6a-4e49-af85-bae055fa6d70 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 29 12:22:03 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1213: 305 pgs: 305 active+clean; 88 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 3.6 MiB/s wr, 154 op/s
Jan 29 12:22:04 np0005601226 nova_compute[239456]: 2026-01-29 17:22:04.034 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769707324.034154, 656165e5-9250-4055-8194-45e769830100 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:22:04 np0005601226 nova_compute[239456]: 2026-01-29 17:22:04.035 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 656165e5-9250-4055-8194-45e769830100] VM Started (Lifecycle Event)#033[00m
Jan 29 12:22:04 np0005601226 nova_compute[239456]: 2026-01-29 17:22:04.037 239460 DEBUG nova.compute.manager [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] [instance: 656165e5-9250-4055-8194-45e769830100] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 29 12:22:04 np0005601226 nova_compute[239456]: 2026-01-29 17:22:04.041 239460 DEBUG nova.virt.libvirt.driver [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] [instance: 656165e5-9250-4055-8194-45e769830100] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 29 12:22:04 np0005601226 nova_compute[239456]: 2026-01-29 17:22:04.044 239460 INFO nova.virt.libvirt.driver [-] [instance: 656165e5-9250-4055-8194-45e769830100] Instance spawned successfully.#033[00m
Jan 29 12:22:04 np0005601226 nova_compute[239456]: 2026-01-29 17:22:04.044 239460 DEBUG nova.virt.libvirt.driver [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] [instance: 656165e5-9250-4055-8194-45e769830100] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 29 12:22:04 np0005601226 nova_compute[239456]: 2026-01-29 17:22:04.057 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 656165e5-9250-4055-8194-45e769830100] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:22:04 np0005601226 nova_compute[239456]: 2026-01-29 17:22:04.060 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 656165e5-9250-4055-8194-45e769830100] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 29 12:22:04 np0005601226 podman[254738]: 2026-01-29 17:22:03.973437787 +0000 UTC m=+0.020982090 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 29 12:22:04 np0005601226 nova_compute[239456]: 2026-01-29 17:22:04.070 239460 DEBUG nova.virt.libvirt.driver [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] [instance: 656165e5-9250-4055-8194-45e769830100] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:22:04 np0005601226 nova_compute[239456]: 2026-01-29 17:22:04.071 239460 DEBUG nova.virt.libvirt.driver [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] [instance: 656165e5-9250-4055-8194-45e769830100] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:22:04 np0005601226 nova_compute[239456]: 2026-01-29 17:22:04.071 239460 DEBUG nova.virt.libvirt.driver [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] [instance: 656165e5-9250-4055-8194-45e769830100] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:22:04 np0005601226 nova_compute[239456]: 2026-01-29 17:22:04.071 239460 DEBUG nova.virt.libvirt.driver [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] [instance: 656165e5-9250-4055-8194-45e769830100] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:22:04 np0005601226 nova_compute[239456]: 2026-01-29 17:22:04.072 239460 DEBUG nova.virt.libvirt.driver [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] [instance: 656165e5-9250-4055-8194-45e769830100] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:22:04 np0005601226 nova_compute[239456]: 2026-01-29 17:22:04.072 239460 DEBUG nova.virt.libvirt.driver [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] [instance: 656165e5-9250-4055-8194-45e769830100] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:22:04 np0005601226 podman[254738]: 2026-01-29 17:22:04.073050919 +0000 UTC m=+0.120595202 container create f93f2a1c40ecf48548c2616c5ec3aa558698e7e9a98ff7fbee25f1b98a768a5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee3f1e72-8c27-4871-b363-434386faae30, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Jan 29 12:22:04 np0005601226 nova_compute[239456]: 2026-01-29 17:22:04.079 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 656165e5-9250-4055-8194-45e769830100] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 29 12:22:04 np0005601226 nova_compute[239456]: 2026-01-29 17:22:04.080 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769707324.034954, 656165e5-9250-4055-8194-45e769830100 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:22:04 np0005601226 nova_compute[239456]: 2026-01-29 17:22:04.080 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 656165e5-9250-4055-8194-45e769830100] VM Paused (Lifecycle Event)#033[00m
Jan 29 12:22:04 np0005601226 nova_compute[239456]: 2026-01-29 17:22:04.113 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 656165e5-9250-4055-8194-45e769830100] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:22:04 np0005601226 nova_compute[239456]: 2026-01-29 17:22:04.117 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769707324.0404449, 656165e5-9250-4055-8194-45e769830100 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:22:04 np0005601226 nova_compute[239456]: 2026-01-29 17:22:04.117 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 656165e5-9250-4055-8194-45e769830100] VM Resumed (Lifecycle Event)#033[00m
Jan 29 12:22:04 np0005601226 nova_compute[239456]: 2026-01-29 17:22:04.127 239460 INFO nova.compute.manager [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] [instance: 656165e5-9250-4055-8194-45e769830100] Took 7.21 seconds to spawn the instance on the hypervisor.#033[00m
Jan 29 12:22:04 np0005601226 nova_compute[239456]: 2026-01-29 17:22:04.128 239460 DEBUG nova.compute.manager [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] [instance: 656165e5-9250-4055-8194-45e769830100] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:22:04 np0005601226 nova_compute[239456]: 2026-01-29 17:22:04.135 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 656165e5-9250-4055-8194-45e769830100] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:22:04 np0005601226 nova_compute[239456]: 2026-01-29 17:22:04.138 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 656165e5-9250-4055-8194-45e769830100] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 29 12:22:04 np0005601226 systemd[1]: Started libpod-conmon-f93f2a1c40ecf48548c2616c5ec3aa558698e7e9a98ff7fbee25f1b98a768a5a.scope.
Jan 29 12:22:04 np0005601226 nova_compute[239456]: 2026-01-29 17:22:04.173 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 656165e5-9250-4055-8194-45e769830100] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 29 12:22:04 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:22:04 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8985c254510df92a958a60239de6cd3d7920bc0099ed369d8fb83a5419c3ae59/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 29 12:22:04 np0005601226 podman[254738]: 2026-01-29 17:22:04.201095771 +0000 UTC m=+0.248640084 container init f93f2a1c40ecf48548c2616c5ec3aa558698e7e9a98ff7fbee25f1b98a768a5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee3f1e72-8c27-4871-b363-434386faae30, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:22:04 np0005601226 nova_compute[239456]: 2026-01-29 17:22:04.201 239460 INFO nova.compute.manager [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] [instance: 656165e5-9250-4055-8194-45e769830100] Took 8.15 seconds to build instance.#033[00m
Jan 29 12:22:04 np0005601226 podman[254738]: 2026-01-29 17:22:04.205565272 +0000 UTC m=+0.253109555 container start f93f2a1c40ecf48548c2616c5ec3aa558698e7e9a98ff7fbee25f1b98a768a5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee3f1e72-8c27-4871-b363-434386faae30, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 29 12:22:04 np0005601226 neutron-haproxy-ovnmeta-ee3f1e72-8c27-4871-b363-434386faae30[254761]: [NOTICE]   (254765) : New worker (254767) forked
Jan 29 12:22:04 np0005601226 neutron-haproxy-ovnmeta-ee3f1e72-8c27-4871-b363-434386faae30[254761]: [NOTICE]   (254765) : Loading success.
Jan 29 12:22:04 np0005601226 nova_compute[239456]: 2026-01-29 17:22:04.225 239460 DEBUG oslo_concurrency.lockutils [None req-efff07e7-4424-4e04-a0e4-c24aef46085f 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Lock "656165e5-9250-4055-8194-45e769830100" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.231s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:22:05 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1214: 305 pgs: 305 active+clean; 88 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 84 KiB/s rd, 1.1 MiB/s wr, 125 op/s
Jan 29 12:22:05 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e250 do_prune osdmap full prune enabled
Jan 29 12:22:05 np0005601226 nova_compute[239456]: 2026-01-29 17:22:05.786 239460 DEBUG nova.compute.manager [req-52ad4076-c147-4d85-a878-99ec2814f9dd req-c77e6b70-2d4b-4f7a-bc48-d0efa29d1418 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 656165e5-9250-4055-8194-45e769830100] Received event network-vif-plugged-f793e3fd-9b6a-4e49-af85-bae055fa6d70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:22:05 np0005601226 nova_compute[239456]: 2026-01-29 17:22:05.786 239460 DEBUG oslo_concurrency.lockutils [req-52ad4076-c147-4d85-a878-99ec2814f9dd req-c77e6b70-2d4b-4f7a-bc48-d0efa29d1418 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "656165e5-9250-4055-8194-45e769830100-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:22:05 np0005601226 nova_compute[239456]: 2026-01-29 17:22:05.787 239460 DEBUG oslo_concurrency.lockutils [req-52ad4076-c147-4d85-a878-99ec2814f9dd req-c77e6b70-2d4b-4f7a-bc48-d0efa29d1418 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "656165e5-9250-4055-8194-45e769830100-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:22:05 np0005601226 nova_compute[239456]: 2026-01-29 17:22:05.787 239460 DEBUG oslo_concurrency.lockutils [req-52ad4076-c147-4d85-a878-99ec2814f9dd req-c77e6b70-2d4b-4f7a-bc48-d0efa29d1418 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "656165e5-9250-4055-8194-45e769830100-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:22:05 np0005601226 nova_compute[239456]: 2026-01-29 17:22:05.788 239460 DEBUG nova.compute.manager [req-52ad4076-c147-4d85-a878-99ec2814f9dd req-c77e6b70-2d4b-4f7a-bc48-d0efa29d1418 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 656165e5-9250-4055-8194-45e769830100] No waiting events found dispatching network-vif-plugged-f793e3fd-9b6a-4e49-af85-bae055fa6d70 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 29 12:22:05 np0005601226 nova_compute[239456]: 2026-01-29 17:22:05.788 239460 WARNING nova.compute.manager [req-52ad4076-c147-4d85-a878-99ec2814f9dd req-c77e6b70-2d4b-4f7a-bc48-d0efa29d1418 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 656165e5-9250-4055-8194-45e769830100] Received unexpected event network-vif-plugged-f793e3fd-9b6a-4e49-af85-bae055fa6d70 for instance with vm_state active and task_state None.#033[00m
Jan 29 12:22:05 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e251 e251: 3 total, 3 up, 3 in
Jan 29 12:22:05 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e251: 3 total, 3 up, 3 in
Jan 29 12:22:05 np0005601226 NetworkManager[49020]: <info>  [1769707325.9605] manager: (patch-provnet-87abb107-3ab5-4304-ac4c-e4e3c79e221f-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/53)
Jan 29 12:22:05 np0005601226 NetworkManager[49020]: <info>  [1769707325.9613] manager: (patch-br-int-to-provnet-87abb107-3ab5-4304-ac4c-e4e3c79e221f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/54)
Jan 29 12:22:05 np0005601226 nova_compute[239456]: 2026-01-29 17:22:05.961 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:22:06 np0005601226 nova_compute[239456]: 2026-01-29 17:22:06.010 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:22:06 np0005601226 nova_compute[239456]: 2026-01-29 17:22:06.012 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:22:06 np0005601226 ovn_controller[145556]: 2026-01-29T17:22:06Z|00085|binding|INFO|Releasing lport 3be1df4f-37e6-4098-8309-11bc33a623dc from this chassis (sb_readonly=0)
Jan 29 12:22:06 np0005601226 nova_compute[239456]: 2026-01-29 17:22:06.025 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:22:06 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 29 12:22:06 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 29 12:22:06 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 12:22:06 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 12:22:06 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 12:22:06 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 12:22:06 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 12:22:06 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:22:06 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 12:22:06 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 12:22:06 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 12:22:06 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 12:22:06 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 12:22:06 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 12:22:06 np0005601226 nova_compute[239456]: 2026-01-29 17:22:06.613 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:22:06 np0005601226 nova_compute[239456]: 2026-01-29 17:22:06.878 239460 DEBUG oslo_concurrency.lockutils [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Acquiring lock "ec6929dc-4a2e-4a7f-9c40-413a310539c6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:22:06 np0005601226 nova_compute[239456]: 2026-01-29 17:22:06.879 239460 DEBUG oslo_concurrency.lockutils [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Lock "ec6929dc-4a2e-4a7f-9c40-413a310539c6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:22:06 np0005601226 nova_compute[239456]: 2026-01-29 17:22:06.901 239460 DEBUG nova.compute.manager [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 29 12:22:06 np0005601226 podman[254918]: 2026-01-29 17:22:06.817116634 +0000 UTC m=+0.017965139 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:22:06 np0005601226 nova_compute[239456]: 2026-01-29 17:22:06.983 239460 DEBUG oslo_concurrency.lockutils [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:22:06 np0005601226 nova_compute[239456]: 2026-01-29 17:22:06.983 239460 DEBUG oslo_concurrency.lockutils [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:22:06 np0005601226 nova_compute[239456]: 2026-01-29 17:22:06.991 239460 DEBUG nova.virt.hardware [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 29 12:22:06 np0005601226 nova_compute[239456]: 2026-01-29 17:22:06.992 239460 INFO nova.compute.claims [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 29 12:22:07 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 29 12:22:07 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 12:22:07 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:22:07 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 12:22:07 np0005601226 podman[254918]: 2026-01-29 17:22:07.068714387 +0000 UTC m=+0.269562882 container create 90b3f1d40306d6cddd15b6c5c667c37679945476ce2df4bed97c6109f19a6155 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_ptolemy, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:22:07 np0005601226 nova_compute[239456]: 2026-01-29 17:22:07.114 239460 DEBUG oslo_concurrency.processutils [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:22:07 np0005601226 systemd[1]: Started libpod-conmon-90b3f1d40306d6cddd15b6c5c667c37679945476ce2df4bed97c6109f19a6155.scope.
Jan 29 12:22:07 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:22:07 np0005601226 podman[254918]: 2026-01-29 17:22:07.294374167 +0000 UTC m=+0.495222662 container init 90b3f1d40306d6cddd15b6c5c667c37679945476ce2df4bed97c6109f19a6155 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_ptolemy, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 29 12:22:07 np0005601226 podman[254918]: 2026-01-29 17:22:07.299747213 +0000 UTC m=+0.500595708 container start 90b3f1d40306d6cddd15b6c5c667c37679945476ce2df4bed97c6109f19a6155 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 29 12:22:07 np0005601226 dazzling_ptolemy[254935]: 167 167
Jan 29 12:22:07 np0005601226 systemd[1]: libpod-90b3f1d40306d6cddd15b6c5c667c37679945476ce2df4bed97c6109f19a6155.scope: Deactivated successfully.
Jan 29 12:22:07 np0005601226 podman[254918]: 2026-01-29 17:22:07.323931999 +0000 UTC m=+0.524780484 container attach 90b3f1d40306d6cddd15b6c5c667c37679945476ce2df4bed97c6109f19a6155 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_ptolemy, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:22:07 np0005601226 podman[254918]: 2026-01-29 17:22:07.324601877 +0000 UTC m=+0.525450372 container died 90b3f1d40306d6cddd15b6c5c667c37679945476ce2df4bed97c6109f19a6155 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_ptolemy, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:22:07 np0005601226 systemd[1]: var-lib-containers-storage-overlay-d6b195d762fec74e803c31a1a6afcde5ed6163ee9fdb6cc64437ef144ea79a81-merged.mount: Deactivated successfully.
Jan 29 12:22:07 np0005601226 podman[254918]: 2026-01-29 17:22:07.488437721 +0000 UTC m=+0.689286216 container remove 90b3f1d40306d6cddd15b6c5c667c37679945476ce2df4bed97c6109f19a6155 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_ptolemy, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 29 12:22:07 np0005601226 systemd[1]: libpod-conmon-90b3f1d40306d6cddd15b6c5c667c37679945476ce2df4bed97c6109f19a6155.scope: Deactivated successfully.
Jan 29 12:22:07 np0005601226 podman[254977]: 2026-01-29 17:22:07.62260359 +0000 UTC m=+0.045507126 container create 152d329086e65ef0c33d932d4545cefca41fe69b22dbb3f8e60458edc20fd51f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_thompson, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 29 12:22:07 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:22:07 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2241357151' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:22:07 np0005601226 podman[254977]: 2026-01-29 17:22:07.597983842 +0000 UTC m=+0.020887398 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:22:07 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1216: 305 pgs: 305 active+clean; 88 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 23 KiB/s wr, 38 op/s
Jan 29 12:22:07 np0005601226 systemd[1]: Started libpod-conmon-152d329086e65ef0c33d932d4545cefca41fe69b22dbb3f8e60458edc20fd51f.scope.
Jan 29 12:22:07 np0005601226 nova_compute[239456]: 2026-01-29 17:22:07.722 239460 DEBUG oslo_concurrency.processutils [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.608s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:22:07 np0005601226 nova_compute[239456]: 2026-01-29 17:22:07.731 239460 DEBUG nova.compute.provider_tree [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:22:07 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:22:07 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a9ada1b970ecc413d9dcade7089c5a553324bddc1eba56ed0066eb6c6269d03/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:22:07 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a9ada1b970ecc413d9dcade7089c5a553324bddc1eba56ed0066eb6c6269d03/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:22:07 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a9ada1b970ecc413d9dcade7089c5a553324bddc1eba56ed0066eb6c6269d03/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:22:07 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a9ada1b970ecc413d9dcade7089c5a553324bddc1eba56ed0066eb6c6269d03/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:22:07 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a9ada1b970ecc413d9dcade7089c5a553324bddc1eba56ed0066eb6c6269d03/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 12:22:07 np0005601226 nova_compute[239456]: 2026-01-29 17:22:07.745 239460 DEBUG nova.scheduler.client.report [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:22:07 np0005601226 nova_compute[239456]: 2026-01-29 17:22:07.762 239460 DEBUG oslo_concurrency.lockutils [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.779s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:22:07 np0005601226 nova_compute[239456]: 2026-01-29 17:22:07.763 239460 DEBUG nova.compute.manager [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 29 12:22:07 np0005601226 podman[254977]: 2026-01-29 17:22:07.787191753 +0000 UTC m=+0.210095309 container init 152d329086e65ef0c33d932d4545cefca41fe69b22dbb3f8e60458edc20fd51f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_thompson, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:22:07 np0005601226 podman[254977]: 2026-01-29 17:22:07.794116521 +0000 UTC m=+0.217020057 container start 152d329086e65ef0c33d932d4545cefca41fe69b22dbb3f8e60458edc20fd51f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_thompson, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 29 12:22:07 np0005601226 podman[254977]: 2026-01-29 17:22:07.804450592 +0000 UTC m=+0.227354148 container attach 152d329086e65ef0c33d932d4545cefca41fe69b22dbb3f8e60458edc20fd51f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_thompson, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:22:07 np0005601226 nova_compute[239456]: 2026-01-29 17:22:07.812 239460 DEBUG nova.compute.manager [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 29 12:22:07 np0005601226 nova_compute[239456]: 2026-01-29 17:22:07.813 239460 DEBUG nova.network.neutron [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 29 12:22:07 np0005601226 nova_compute[239456]: 2026-01-29 17:22:07.839 239460 INFO nova.virt.libvirt.driver [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 29 12:22:08 np0005601226 nova_compute[239456]: 2026-01-29 17:22:08.016 239460 DEBUG nova.compute.manager [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 29 12:22:08 np0005601226 nova_compute[239456]: 2026-01-29 17:22:08.029 239460 DEBUG nova.compute.manager [req-91e9f351-4413-4745-a057-a111efeb9575 req-188f02ff-c8e5-4d06-866c-ee6f88b585a7 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 656165e5-9250-4055-8194-45e769830100] Received event network-changed-f793e3fd-9b6a-4e49-af85-bae055fa6d70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:22:08 np0005601226 nova_compute[239456]: 2026-01-29 17:22:08.029 239460 DEBUG nova.compute.manager [req-91e9f351-4413-4745-a057-a111efeb9575 req-188f02ff-c8e5-4d06-866c-ee6f88b585a7 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 656165e5-9250-4055-8194-45e769830100] Refreshing instance network info cache due to event network-changed-f793e3fd-9b6a-4e49-af85-bae055fa6d70. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 29 12:22:08 np0005601226 nova_compute[239456]: 2026-01-29 17:22:08.029 239460 DEBUG oslo_concurrency.lockutils [req-91e9f351-4413-4745-a057-a111efeb9575 req-188f02ff-c8e5-4d06-866c-ee6f88b585a7 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "refresh_cache-656165e5-9250-4055-8194-45e769830100" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:22:08 np0005601226 nova_compute[239456]: 2026-01-29 17:22:08.030 239460 DEBUG oslo_concurrency.lockutils [req-91e9f351-4413-4745-a057-a111efeb9575 req-188f02ff-c8e5-4d06-866c-ee6f88b585a7 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquired lock "refresh_cache-656165e5-9250-4055-8194-45e769830100" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:22:08 np0005601226 nova_compute[239456]: 2026-01-29 17:22:08.030 239460 DEBUG nova.network.neutron [req-91e9f351-4413-4745-a057-a111efeb9575 req-188f02ff-c8e5-4d06-866c-ee6f88b585a7 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 656165e5-9250-4055-8194-45e769830100] Refreshing network info cache for port f793e3fd-9b6a-4e49-af85-bae055fa6d70 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 29 12:22:08 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e251 do_prune osdmap full prune enabled
Jan 29 12:22:08 np0005601226 nova_compute[239456]: 2026-01-29 17:22:08.100 239460 DEBUG nova.compute.manager [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 29 12:22:08 np0005601226 nova_compute[239456]: 2026-01-29 17:22:08.101 239460 DEBUG nova.virt.libvirt.driver [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 29 12:22:08 np0005601226 nova_compute[239456]: 2026-01-29 17:22:08.101 239460 INFO nova.virt.libvirt.driver [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] Creating image(s)
Jan 29 12:22:08 np0005601226 nova_compute[239456]: 2026-01-29 17:22:08.120 239460 DEBUG nova.storage.rbd_utils [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] rbd image ec6929dc-4a2e-4a7f-9c40-413a310539c6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 29 12:22:08 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e252 e252: 3 total, 3 up, 3 in
Jan 29 12:22:08 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e252: 3 total, 3 up, 3 in
Jan 29 12:22:08 np0005601226 relaxed_thompson[254994]: --> passed data devices: 0 physical, 3 LVM
Jan 29 12:22:08 np0005601226 relaxed_thompson[254994]: --> All data devices are unavailable
Jan 29 12:22:08 np0005601226 nova_compute[239456]: 2026-01-29 17:22:08.169 239460 DEBUG nova.storage.rbd_utils [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] rbd image ec6929dc-4a2e-4a7f-9c40-413a310539c6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 29 12:22:08 np0005601226 nova_compute[239456]: 2026-01-29 17:22:08.199 239460 DEBUG nova.storage.rbd_utils [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] rbd image ec6929dc-4a2e-4a7f-9c40-413a310539c6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 29 12:22:08 np0005601226 nova_compute[239456]: 2026-01-29 17:22:08.203 239460 DEBUG oslo_concurrency.processutils [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/619359f3f53a439a222a6a2a89408201f4394e5d --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 29 12:22:08 np0005601226 systemd[1]: libpod-152d329086e65ef0c33d932d4545cefca41fe69b22dbb3f8e60458edc20fd51f.scope: Deactivated successfully.
Jan 29 12:22:08 np0005601226 podman[254977]: 2026-01-29 17:22:08.22252536 +0000 UTC m=+0.645428896 container died 152d329086e65ef0c33d932d4545cefca41fe69b22dbb3f8e60458edc20fd51f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_thompson, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 12:22:08 np0005601226 nova_compute[239456]: 2026-01-29 17:22:08.267 239460 DEBUG oslo_concurrency.processutils [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/619359f3f53a439a222a6a2a89408201f4394e5d --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 29 12:22:08 np0005601226 nova_compute[239456]: 2026-01-29 17:22:08.267 239460 DEBUG oslo_concurrency.lockutils [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Acquiring lock "619359f3f53a439a222a6a2a89408201f4394e5d" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 12:22:08 np0005601226 nova_compute[239456]: 2026-01-29 17:22:08.268 239460 DEBUG oslo_concurrency.lockutils [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Lock "619359f3f53a439a222a6a2a89408201f4394e5d" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 12:22:08 np0005601226 nova_compute[239456]: 2026-01-29 17:22:08.268 239460 DEBUG oslo_concurrency.lockutils [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Lock "619359f3f53a439a222a6a2a89408201f4394e5d" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 12:22:08 np0005601226 systemd[1]: var-lib-containers-storage-overlay-4a9ada1b970ecc413d9dcade7089c5a553324bddc1eba56ed0066eb6c6269d03-merged.mount: Deactivated successfully.
Jan 29 12:22:08 np0005601226 nova_compute[239456]: 2026-01-29 17:22:08.291 239460 DEBUG nova.storage.rbd_utils [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] rbd image ec6929dc-4a2e-4a7f-9c40-413a310539c6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 29 12:22:08 np0005601226 nova_compute[239456]: 2026-01-29 17:22:08.294 239460 DEBUG oslo_concurrency.processutils [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/619359f3f53a439a222a6a2a89408201f4394e5d ec6929dc-4a2e-4a7f-9c40-413a310539c6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 29 12:22:08 np0005601226 nova_compute[239456]: 2026-01-29 17:22:08.310 239460 DEBUG nova.policy [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd3463a84af564b968e67b687bc895548', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '815af3cf993b45cc8f2cdf73bf1d552c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 29 12:22:08 np0005601226 podman[254977]: 2026-01-29 17:22:08.325519204 +0000 UTC m=+0.748422740 container remove 152d329086e65ef0c33d932d4545cefca41fe69b22dbb3f8e60458edc20fd51f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_thompson, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:22:08 np0005601226 systemd[1]: libpod-conmon-152d329086e65ef0c33d932d4545cefca41fe69b22dbb3f8e60458edc20fd51f.scope: Deactivated successfully.
Jan 29 12:22:08 np0005601226 nova_compute[239456]: 2026-01-29 17:22:08.605 239460 DEBUG oslo_concurrency.processutils [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/619359f3f53a439a222a6a2a89408201f4394e5d ec6929dc-4a2e-4a7f-9c40-413a310539c6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.311s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 29 12:22:08 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:22:08 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e252 do_prune osdmap full prune enabled
Jan 29 12:22:08 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e253 e253: 3 total, 3 up, 3 in
Jan 29 12:22:08 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e253: 3 total, 3 up, 3 in
Jan 29 12:22:08 np0005601226 nova_compute[239456]: 2026-01-29 17:22:08.761 239460 DEBUG nova.storage.rbd_utils [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] resizing rbd image ec6929dc-4a2e-4a7f-9c40-413a310539c6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 29 12:22:08 np0005601226 podman[255213]: 2026-01-29 17:22:08.74312324 +0000 UTC m=+0.018443491 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:22:08 np0005601226 podman[255213]: 2026-01-29 17:22:08.852034074 +0000 UTC m=+0.127354295 container create d88f1fd20e0f2ed792dc1527aff4e8e7dcd7985b6be337b3ba3d46ff62547cb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_perlman, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 29 12:22:08 np0005601226 nova_compute[239456]: 2026-01-29 17:22:08.894 239460 DEBUG nova.network.neutron [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] Successfully created port: bf0c91eb-51aa-4985-9952-a05bb97d14ab _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 29 12:22:08 np0005601226 systemd[1]: Started libpod-conmon-d88f1fd20e0f2ed792dc1527aff4e8e7dcd7985b6be337b3ba3d46ff62547cb4.scope.
Jan 29 12:22:08 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:22:09 np0005601226 podman[255213]: 2026-01-29 17:22:09.013658378 +0000 UTC m=+0.288978619 container init d88f1fd20e0f2ed792dc1527aff4e8e7dcd7985b6be337b3ba3d46ff62547cb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_perlman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:22:09 np0005601226 podman[255213]: 2026-01-29 17:22:09.019231059 +0000 UTC m=+0.294551280 container start d88f1fd20e0f2ed792dc1527aff4e8e7dcd7985b6be337b3ba3d46ff62547cb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:22:09 np0005601226 optimistic_perlman[255250]: 167 167
Jan 29 12:22:09 np0005601226 systemd[1]: libpod-d88f1fd20e0f2ed792dc1527aff4e8e7dcd7985b6be337b3ba3d46ff62547cb4.scope: Deactivated successfully.
Jan 29 12:22:09 np0005601226 conmon[255250]: conmon d88f1fd20e0f2ed792dc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d88f1fd20e0f2ed792dc1527aff4e8e7dcd7985b6be337b3ba3d46ff62547cb4.scope/container/memory.events
Jan 29 12:22:09 np0005601226 podman[255213]: 2026-01-29 17:22:09.078029213 +0000 UTC m=+0.353349464 container attach d88f1fd20e0f2ed792dc1527aff4e8e7dcd7985b6be337b3ba3d46ff62547cb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_perlman, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 29 12:22:09 np0005601226 podman[255213]: 2026-01-29 17:22:09.079248577 +0000 UTC m=+0.354568798 container died d88f1fd20e0f2ed792dc1527aff4e8e7dcd7985b6be337b3ba3d46ff62547cb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_perlman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 29 12:22:09 np0005601226 systemd[1]: var-lib-containers-storage-overlay-e589ab2d2d82eba99bffdb6891828d59d41ecd5679661ad4ba59ca78cb2985d5-merged.mount: Deactivated successfully.
Jan 29 12:22:09 np0005601226 nova_compute[239456]: 2026-01-29 17:22:09.144 239460 DEBUG nova.objects.instance [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Lazy-loading 'migration_context' on Instance uuid ec6929dc-4a2e-4a7f-9c40-413a310539c6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 29 12:22:09 np0005601226 nova_compute[239456]: 2026-01-29 17:22:09.158 239460 DEBUG nova.virt.libvirt.driver [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 29 12:22:09 np0005601226 nova_compute[239456]: 2026-01-29 17:22:09.159 239460 DEBUG nova.virt.libvirt.driver [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] Ensure instance console log exists: /var/lib/nova/instances/ec6929dc-4a2e-4a7f-9c40-413a310539c6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 29 12:22:09 np0005601226 nova_compute[239456]: 2026-01-29 17:22:09.159 239460 DEBUG oslo_concurrency.lockutils [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 12:22:09 np0005601226 nova_compute[239456]: 2026-01-29 17:22:09.159 239460 DEBUG oslo_concurrency.lockutils [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 12:22:09 np0005601226 nova_compute[239456]: 2026-01-29 17:22:09.160 239460 DEBUG oslo_concurrency.lockutils [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 12:22:09 np0005601226 nova_compute[239456]: 2026-01-29 17:22:09.166 239460 DEBUG nova.network.neutron [req-91e9f351-4413-4745-a057-a111efeb9575 req-188f02ff-c8e5-4d06-866c-ee6f88b585a7 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 656165e5-9250-4055-8194-45e769830100] Updated VIF entry in instance network info cache for port f793e3fd-9b6a-4e49-af85-bae055fa6d70. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 29 12:22:09 np0005601226 nova_compute[239456]: 2026-01-29 17:22:09.166 239460 DEBUG nova.network.neutron [req-91e9f351-4413-4745-a057-a111efeb9575 req-188f02ff-c8e5-4d06-866c-ee6f88b585a7 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 656165e5-9250-4055-8194-45e769830100] Updating instance_info_cache with network_info: [{"id": "f793e3fd-9b6a-4e49-af85-bae055fa6d70", "address": "fa:16:3e:ae:e0:37", "network": {"id": "ee3f1e72-8c27-4871-b363-434386faae30", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2027070641-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e0cefcde775417f910c6b8d8982c845", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf793e3fd-9b", "ovs_interfaceid": "f793e3fd-9b6a-4e49-af85-bae055fa6d70", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 29 12:22:09 np0005601226 nova_compute[239456]: 2026-01-29 17:22:09.184 239460 DEBUG oslo_concurrency.lockutils [req-91e9f351-4413-4745-a057-a111efeb9575 req-188f02ff-c8e5-4d06-866c-ee6f88b585a7 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Releasing lock "refresh_cache-656165e5-9250-4055-8194-45e769830100" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 29 12:22:09 np0005601226 podman[255213]: 2026-01-29 17:22:09.185253362 +0000 UTC m=+0.460573583 container remove d88f1fd20e0f2ed792dc1527aff4e8e7dcd7985b6be337b3ba3d46ff62547cb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 29 12:22:09 np0005601226 systemd[1]: libpod-conmon-d88f1fd20e0f2ed792dc1527aff4e8e7dcd7985b6be337b3ba3d46ff62547cb4.scope: Deactivated successfully.
Jan 29 12:22:09 np0005601226 podman[255292]: 2026-01-29 17:22:09.334430357 +0000 UTC m=+0.042463692 container create 3b809aa648b8d187b1352b6a2eeba0382630db6ad469f922f00e96f53287de9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_tharp, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 29 12:22:09 np0005601226 podman[255292]: 2026-01-29 17:22:09.313166261 +0000 UTC m=+0.021199616 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:22:09 np0005601226 systemd[1]: Started libpod-conmon-3b809aa648b8d187b1352b6a2eeba0382630db6ad469f922f00e96f53287de9d.scope.
Jan 29 12:22:09 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:22:09 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b843f6dd0a8abd0597241b77d1c1f6c12318f210a35cbd1b1ac9517160b37fa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:22:09 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b843f6dd0a8abd0597241b77d1c1f6c12318f210a35cbd1b1ac9517160b37fa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:22:09 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b843f6dd0a8abd0597241b77d1c1f6c12318f210a35cbd1b1ac9517160b37fa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:22:09 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b843f6dd0a8abd0597241b77d1c1f6c12318f210a35cbd1b1ac9517160b37fa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:22:09 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:22:09 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1633478316' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:22:09 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:22:09 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1633478316' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:22:09 np0005601226 podman[255292]: 2026-01-29 17:22:09.449011215 +0000 UTC m=+0.157044570 container init 3b809aa648b8d187b1352b6a2eeba0382630db6ad469f922f00e96f53287de9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_tharp, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:22:09 np0005601226 podman[255292]: 2026-01-29 17:22:09.454121694 +0000 UTC m=+0.162155029 container start 3b809aa648b8d187b1352b6a2eeba0382630db6ad469f922f00e96f53287de9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_tharp, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 29 12:22:09 np0005601226 podman[255292]: 2026-01-29 17:22:09.461277698 +0000 UTC m=+0.169311063 container attach 3b809aa648b8d187b1352b6a2eeba0382630db6ad469f922f00e96f53287de9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_tharp, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 12:22:09 np0005601226 nova_compute[239456]: 2026-01-29 17:22:09.641 239460 DEBUG nova.network.neutron [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] Successfully updated port: bf0c91eb-51aa-4985-9952-a05bb97d14ab _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 29 12:22:09 np0005601226 nova_compute[239456]: 2026-01-29 17:22:09.656 239460 DEBUG oslo_concurrency.lockutils [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Acquiring lock "refresh_cache-ec6929dc-4a2e-4a7f-9c40-413a310539c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 29 12:22:09 np0005601226 nova_compute[239456]: 2026-01-29 17:22:09.657 239460 DEBUG oslo_concurrency.lockutils [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Acquired lock "refresh_cache-ec6929dc-4a2e-4a7f-9c40-413a310539c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 29 12:22:09 np0005601226 nova_compute[239456]: 2026-01-29 17:22:09.657 239460 DEBUG nova.network.neutron [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]: {
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:    "0": [
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:        {
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:            "devices": [
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:                "/dev/loop3"
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:            ],
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:            "lv_name": "ceph_lv0",
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:            "lv_size": "21470642176",
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:            "lv_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:            "name": "ceph_lv0",
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:            "tags": {
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:                "ceph.block_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:                "ceph.cluster_name": "ceph",
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:                "ceph.crush_device_class": "",
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:                "ceph.encrypted": "0",
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:                "ceph.objectstore": "bluestore",
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:                "ceph.osd_fsid": "0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1",
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:                "ceph.osd_id": "0",
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:                "ceph.type": "block",
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:                "ceph.vdo": "0",
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:                "ceph.with_tpm": "0"
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:            },
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:            "type": "block",
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:            "vg_name": "ceph_vg0"
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:        }
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:    ],
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:    "1": [
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:        {
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:            "devices": [
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:                "/dev/loop4"
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:            ],
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:            "lv_name": "ceph_lv1",
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:            "lv_size": "21470642176",
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b59b9ee3-7bef-4274-a8bf-0f9cce011ae7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:            "lv_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:            "name": "ceph_lv1",
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:            "tags": {
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:                "ceph.block_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:                "ceph.cluster_name": "ceph",
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:                "ceph.crush_device_class": "",
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:                "ceph.encrypted": "0",
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:                "ceph.objectstore": "bluestore",
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:                "ceph.osd_fsid": "b59b9ee3-7bef-4274-a8bf-0f9cce011ae7",
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:                "ceph.osd_id": "1",
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:                "ceph.type": "block",
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:                "ceph.vdo": "0",
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:                "ceph.with_tpm": "0"
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:            },
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:            "type": "block",
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:            "vg_name": "ceph_vg1"
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:        }
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:    ],
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:    "2": [
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:        {
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:            "devices": [
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:                "/dev/loop5"
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:            ],
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:            "lv_name": "ceph_lv2",
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:            "lv_size": "21470642176",
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=791d3808-828d-4f85-a3de-28df49f6a6ef,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:            "lv_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:            "name": "ceph_lv2",
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:            "tags": {
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:                "ceph.block_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:                "ceph.cluster_name": "ceph",
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:                "ceph.crush_device_class": "",
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:                "ceph.encrypted": "0",
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:                "ceph.objectstore": "bluestore",
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:                "ceph.osd_fsid": "791d3808-828d-4f85-a3de-28df49f6a6ef",
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:                "ceph.osd_id": "2",
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:                "ceph.type": "block",
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:                "ceph.vdo": "0",
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:                "ceph.with_tpm": "0"
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:            },
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:            "type": "block",
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:            "vg_name": "ceph_vg2"
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:        }
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]:    ]
Jan 29 12:22:09 np0005601226 quizzical_tharp[255308]: }
Jan 29 12:22:09 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1219: 305 pgs: 305 active+clean; 125 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.6 MiB/s wr, 273 op/s
Jan 29 12:22:09 np0005601226 systemd[1]: libpod-3b809aa648b8d187b1352b6a2eeba0382630db6ad469f922f00e96f53287de9d.scope: Deactivated successfully.
Jan 29 12:22:09 np0005601226 podman[255292]: 2026-01-29 17:22:09.731624561 +0000 UTC m=+0.439657896 container died 3b809aa648b8d187b1352b6a2eeba0382630db6ad469f922f00e96f53287de9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_tharp, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 29 12:22:09 np0005601226 systemd[1]: var-lib-containers-storage-overlay-8b843f6dd0a8abd0597241b77d1c1f6c12318f210a35cbd1b1ac9517160b37fa-merged.mount: Deactivated successfully.
Jan 29 12:22:09 np0005601226 podman[255292]: 2026-01-29 17:22:09.78065498 +0000 UTC m=+0.488688315 container remove 3b809aa648b8d187b1352b6a2eeba0382630db6ad469f922f00e96f53287de9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:22:09 np0005601226 systemd[1]: libpod-conmon-3b809aa648b8d187b1352b6a2eeba0382630db6ad469f922f00e96f53287de9d.scope: Deactivated successfully.
Jan 29 12:22:09 np0005601226 nova_compute[239456]: 2026-01-29 17:22:09.907 239460 DEBUG nova.network.neutron [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 29 12:22:10 np0005601226 nova_compute[239456]: 2026-01-29 17:22:10.133 239460 DEBUG nova.compute.manager [req-8554b9e8-944a-4eca-a719-9575d1482ca7 req-79fabd10-a067-41b4-8734-e031348bba3b 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] Received event network-changed-bf0c91eb-51aa-4985-9952-a05bb97d14ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:22:10 np0005601226 nova_compute[239456]: 2026-01-29 17:22:10.133 239460 DEBUG nova.compute.manager [req-8554b9e8-944a-4eca-a719-9575d1482ca7 req-79fabd10-a067-41b4-8734-e031348bba3b 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] Refreshing instance network info cache due to event network-changed-bf0c91eb-51aa-4985-9952-a05bb97d14ab. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 29 12:22:10 np0005601226 nova_compute[239456]: 2026-01-29 17:22:10.134 239460 DEBUG oslo_concurrency.lockutils [req-8554b9e8-944a-4eca-a719-9575d1482ca7 req-79fabd10-a067-41b4-8734-e031348bba3b 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "refresh_cache-ec6929dc-4a2e-4a7f-9c40-413a310539c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:22:10 np0005601226 podman[255392]: 2026-01-29 17:22:10.19358504 +0000 UTC m=+0.046210014 container create 5a09777011af6e33243ff75573e9c2779187e7a0f0e75d996ac1a71a9af9680a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 12:22:10 np0005601226 systemd[1]: Started libpod-conmon-5a09777011af6e33243ff75573e9c2779187e7a0f0e75d996ac1a71a9af9680a.scope.
Jan 29 12:22:10 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:22:10 np0005601226 podman[255392]: 2026-01-29 17:22:10.167254566 +0000 UTC m=+0.019879570 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:22:10 np0005601226 podman[255392]: 2026-01-29 17:22:10.274398702 +0000 UTC m=+0.127023706 container init 5a09777011af6e33243ff75573e9c2779187e7a0f0e75d996ac1a71a9af9680a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:22:10 np0005601226 podman[255392]: 2026-01-29 17:22:10.280037035 +0000 UTC m=+0.132662009 container start 5a09777011af6e33243ff75573e9c2779187e7a0f0e75d996ac1a71a9af9680a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_aryabhata, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:22:10 np0005601226 podman[255392]: 2026-01-29 17:22:10.283632002 +0000 UTC m=+0.136256976 container attach 5a09777011af6e33243ff75573e9c2779187e7a0f0e75d996ac1a71a9af9680a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_aryabhata, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 29 12:22:10 np0005601226 quirky_aryabhata[255409]: 167 167
Jan 29 12:22:10 np0005601226 systemd[1]: libpod-5a09777011af6e33243ff75573e9c2779187e7a0f0e75d996ac1a71a9af9680a.scope: Deactivated successfully.
Jan 29 12:22:10 np0005601226 conmon[255409]: conmon 5a09777011af6e33243f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5a09777011af6e33243ff75573e9c2779187e7a0f0e75d996ac1a71a9af9680a.scope/container/memory.events
Jan 29 12:22:10 np0005601226 podman[255392]: 2026-01-29 17:22:10.285013009 +0000 UTC m=+0.137637983 container died 5a09777011af6e33243ff75573e9c2779187e7a0f0e75d996ac1a71a9af9680a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_aryabhata, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 12:22:10 np0005601226 systemd[1]: var-lib-containers-storage-overlay-c67bb522faef4ac5df0af8f4345093c2ecf8f7f8873a5fa8374a35a5bc60a4a4-merged.mount: Deactivated successfully.
Jan 29 12:22:10 np0005601226 podman[255392]: 2026-01-29 17:22:10.322226329 +0000 UTC m=+0.174851303 container remove 5a09777011af6e33243ff75573e9c2779187e7a0f0e75d996ac1a71a9af9680a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:22:10 np0005601226 systemd[1]: libpod-conmon-5a09777011af6e33243ff75573e9c2779187e7a0f0e75d996ac1a71a9af9680a.scope: Deactivated successfully.
Jan 29 12:22:10 np0005601226 podman[255432]: 2026-01-29 17:22:10.48192329 +0000 UTC m=+0.045955477 container create 8e5e3592c479bc287fa1131959e3b76c8513951d2e946b469074ae7359e5a72b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_black, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:22:10 np0005601226 systemd[1]: Started libpod-conmon-8e5e3592c479bc287fa1131959e3b76c8513951d2e946b469074ae7359e5a72b.scope.
Jan 29 12:22:10 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:22:10 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/201d8911e48235b19c0592bcd360a17d19b2db9567aa9b84c3a6d928a9cf6e6e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:22:10 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/201d8911e48235b19c0592bcd360a17d19b2db9567aa9b84c3a6d928a9cf6e6e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:22:10 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/201d8911e48235b19c0592bcd360a17d19b2db9567aa9b84c3a6d928a9cf6e6e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:22:10 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/201d8911e48235b19c0592bcd360a17d19b2db9567aa9b84c3a6d928a9cf6e6e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:22:10 np0005601226 podman[255432]: 2026-01-29 17:22:10.457722924 +0000 UTC m=+0.021755131 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:22:10 np0005601226 podman[255432]: 2026-01-29 17:22:10.558505748 +0000 UTC m=+0.122537945 container init 8e5e3592c479bc287fa1131959e3b76c8513951d2e946b469074ae7359e5a72b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_black, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 12:22:10 np0005601226 podman[255432]: 2026-01-29 17:22:10.564216463 +0000 UTC m=+0.128248650 container start 8e5e3592c479bc287fa1131959e3b76c8513951d2e946b469074ae7359e5a72b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_black, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 12:22:10 np0005601226 podman[255432]: 2026-01-29 17:22:10.567605034 +0000 UTC m=+0.131637251 container attach 8e5e3592c479bc287fa1131959e3b76c8513951d2e946b469074ae7359e5a72b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_black, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 29 12:22:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:22:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:22:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:22:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:22:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:22:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:22:10 np0005601226 nova_compute[239456]: 2026-01-29 17:22:10.594 239460 DEBUG nova.network.neutron [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] Updating instance_info_cache with network_info: [{"id": "bf0c91eb-51aa-4985-9952-a05bb97d14ab", "address": "fa:16:3e:0f:2f:be", "network": {"id": "765ab7c4-f6eb-4a45-8c1b-00dc61ad3441", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-718755595-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "815af3cf993b45cc8f2cdf73bf1d552c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0c91eb-51", "ovs_interfaceid": "bf0c91eb-51aa-4985-9952-a05bb97d14ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:22:10 np0005601226 nova_compute[239456]: 2026-01-29 17:22:10.612 239460 DEBUG oslo_concurrency.lockutils [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Releasing lock "refresh_cache-ec6929dc-4a2e-4a7f-9c40-413a310539c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:22:10 np0005601226 nova_compute[239456]: 2026-01-29 17:22:10.613 239460 DEBUG nova.compute.manager [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] Instance network_info: |[{"id": "bf0c91eb-51aa-4985-9952-a05bb97d14ab", "address": "fa:16:3e:0f:2f:be", "network": {"id": "765ab7c4-f6eb-4a45-8c1b-00dc61ad3441", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-718755595-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "815af3cf993b45cc8f2cdf73bf1d552c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0c91eb-51", "ovs_interfaceid": "bf0c91eb-51aa-4985-9952-a05bb97d14ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 29 12:22:10 np0005601226 nova_compute[239456]: 2026-01-29 17:22:10.613 239460 DEBUG oslo_concurrency.lockutils [req-8554b9e8-944a-4eca-a719-9575d1482ca7 req-79fabd10-a067-41b4-8734-e031348bba3b 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquired lock "refresh_cache-ec6929dc-4a2e-4a7f-9c40-413a310539c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:22:10 np0005601226 nova_compute[239456]: 2026-01-29 17:22:10.614 239460 DEBUG nova.network.neutron [req-8554b9e8-944a-4eca-a719-9575d1482ca7 req-79fabd10-a067-41b4-8734-e031348bba3b 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] Refreshing network info cache for port bf0c91eb-51aa-4985-9952-a05bb97d14ab _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 29 12:22:10 np0005601226 nova_compute[239456]: 2026-01-29 17:22:10.618 239460 DEBUG nova.virt.libvirt.driver [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] Start _get_guest_xml network_info=[{"id": "bf0c91eb-51aa-4985-9952-a05bb97d14ab", "address": "fa:16:3e:0f:2f:be", "network": {"id": "765ab7c4-f6eb-4a45-8c1b-00dc61ad3441", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-718755595-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "815af3cf993b45cc8f2cdf73bf1d552c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0c91eb-51", "ovs_interfaceid": "bf0c91eb-51aa-4985-9952-a05bb97d14ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-29T17:13:26Z,direct_url=<?>,disk_format='qcow2',id=71879218-5462-43bb-aba6-6319695b24fd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='218d87653c0f4776a3f1900d36945229',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-29T17:13:29Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'boot_index': 0, 'encrypted': False, 'image_id': '71879218-5462-43bb-aba6-6319695b24fd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 29 12:22:10 np0005601226 nova_compute[239456]: 2026-01-29 17:22:10.623 239460 WARNING nova.virt.libvirt.driver [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 29 12:22:10 np0005601226 nova_compute[239456]: 2026-01-29 17:22:10.629 239460 DEBUG nova.virt.libvirt.host [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 29 12:22:10 np0005601226 nova_compute[239456]: 2026-01-29 17:22:10.630 239460 DEBUG nova.virt.libvirt.host [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 29 12:22:10 np0005601226 nova_compute[239456]: 2026-01-29 17:22:10.638 239460 DEBUG nova.virt.libvirt.host [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 29 12:22:10 np0005601226 nova_compute[239456]: 2026-01-29 17:22:10.639 239460 DEBUG nova.virt.libvirt.host [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 29 12:22:10 np0005601226 nova_compute[239456]: 2026-01-29 17:22:10.639 239460 DEBUG nova.virt.libvirt.driver [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 29 12:22:10 np0005601226 nova_compute[239456]: 2026-01-29 17:22:10.640 239460 DEBUG nova.virt.hardware [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-29T17:13:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='e367d44e-e23e-4b8e-90d1-56d09c8403b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-29T17:13:26Z,direct_url=<?>,disk_format='qcow2',id=71879218-5462-43bb-aba6-6319695b24fd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='218d87653c0f4776a3f1900d36945229',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-29T17:13:29Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 29 12:22:10 np0005601226 nova_compute[239456]: 2026-01-29 17:22:10.640 239460 DEBUG nova.virt.hardware [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 29 12:22:10 np0005601226 nova_compute[239456]: 2026-01-29 17:22:10.641 239460 DEBUG nova.virt.hardware [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 29 12:22:10 np0005601226 nova_compute[239456]: 2026-01-29 17:22:10.641 239460 DEBUG nova.virt.hardware [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 29 12:22:10 np0005601226 nova_compute[239456]: 2026-01-29 17:22:10.641 239460 DEBUG nova.virt.hardware [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 29 12:22:10 np0005601226 nova_compute[239456]: 2026-01-29 17:22:10.642 239460 DEBUG nova.virt.hardware [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 29 12:22:10 np0005601226 nova_compute[239456]: 2026-01-29 17:22:10.642 239460 DEBUG nova.virt.hardware [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 29 12:22:10 np0005601226 nova_compute[239456]: 2026-01-29 17:22:10.642 239460 DEBUG nova.virt.hardware [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 29 12:22:10 np0005601226 nova_compute[239456]: 2026-01-29 17:22:10.642 239460 DEBUG nova.virt.hardware [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 29 12:22:10 np0005601226 nova_compute[239456]: 2026-01-29 17:22:10.643 239460 DEBUG nova.virt.hardware [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 29 12:22:10 np0005601226 nova_compute[239456]: 2026-01-29 17:22:10.643 239460 DEBUG nova.virt.hardware [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 29 12:22:10 np0005601226 nova_compute[239456]: 2026-01-29 17:22:10.647 239460 DEBUG oslo_concurrency.processutils [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:22:11 np0005601226 nova_compute[239456]: 2026-01-29 17:22:11.013 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:22:11 np0005601226 lvm[255544]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 12:22:11 np0005601226 lvm[255544]: VG ceph_vg0 finished
Jan 29 12:22:11 np0005601226 lvm[255547]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 12:22:11 np0005601226 lvm[255547]: VG ceph_vg1 finished
Jan 29 12:22:11 np0005601226 lvm[255549]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 12:22:11 np0005601226 lvm[255549]: VG ceph_vg2 finished
Jan 29 12:22:11 np0005601226 lvm[255550]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 12:22:11 np0005601226 lvm[255550]: VG ceph_vg0 finished
Jan 29 12:22:11 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:22:11 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3316649074' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:22:11 np0005601226 great_black[255448]: {}
Jan 29 12:22:11 np0005601226 nova_compute[239456]: 2026-01-29 17:22:11.225 239460 DEBUG oslo_concurrency.processutils [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:22:11 np0005601226 systemd[1]: libpod-8e5e3592c479bc287fa1131959e3b76c8513951d2e946b469074ae7359e5a72b.scope: Deactivated successfully.
Jan 29 12:22:11 np0005601226 podman[255432]: 2026-01-29 17:22:11.254378471 +0000 UTC m=+0.818410658 container died 8e5e3592c479bc287fa1131959e3b76c8513951d2e946b469074ae7359e5a72b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_black, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 29 12:22:11 np0005601226 systemd[1]: libpod-8e5e3592c479bc287fa1131959e3b76c8513951d2e946b469074ae7359e5a72b.scope: Consumed 1.003s CPU time.
Jan 29 12:22:11 np0005601226 nova_compute[239456]: 2026-01-29 17:22:11.255 239460 DEBUG nova.storage.rbd_utils [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] rbd image ec6929dc-4a2e-4a7f-9c40-413a310539c6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:22:11 np0005601226 nova_compute[239456]: 2026-01-29 17:22:11.271 239460 DEBUG oslo_concurrency.processutils [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:22:11 np0005601226 systemd[1]: var-lib-containers-storage-overlay-201d8911e48235b19c0592bcd360a17d19b2db9567aa9b84c3a6d928a9cf6e6e-merged.mount: Deactivated successfully.
Jan 29 12:22:11 np0005601226 podman[255432]: 2026-01-29 17:22:11.3118447 +0000 UTC m=+0.875876887 container remove 8e5e3592c479bc287fa1131959e3b76c8513951d2e946b469074ae7359e5a72b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_black, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:22:11 np0005601226 systemd[1]: libpod-conmon-8e5e3592c479bc287fa1131959e3b76c8513951d2e946b469074ae7359e5a72b.scope: Deactivated successfully.
Jan 29 12:22:11 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 12:22:11 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:22:11 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 12:22:11 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:22:11 np0005601226 nova_compute[239456]: 2026-01-29 17:22:11.615 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:22:11 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1220: 305 pgs: 305 active+clean; 125 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.6 MiB/s wr, 234 op/s
Jan 29 12:22:11 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:22:11 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/207600403' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:22:11 np0005601226 nova_compute[239456]: 2026-01-29 17:22:11.851 239460 DEBUG oslo_concurrency.processutils [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:22:11 np0005601226 nova_compute[239456]: 2026-01-29 17:22:11.852 239460 DEBUG nova.virt.libvirt.vif [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-29T17:22:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-1982204810',display_name='tempest-VolumesBackupsTest-instance-1982204810',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-1982204810',id=8,image_ref='71879218-5462-43bb-aba6-6319695b24fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFig0RQ/1n7VfpfiFNE7GVFgBviP+LoGnNX+IrMAXhnF2bhHyaJz7sbYMOMXONeJP7S+Y3ZggjQCfeRI5OI3KuMILvXKYWprzYr93gmRI1/mhd/h9dDbo0WiH0640Qup6w==',key_name='tempest-keypair-528150633',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='815af3cf993b45cc8f2cdf73bf1d552c',ramdisk_id='',reservation_id='r-xbbmdpvu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='71879218-5462-43bb-aba6-6319695b24fd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-2142983406',owner_user_name='tempest-VolumesBackupsTest-2142983406-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-29T17:22:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d3463a84af564b968e67b687bc895548',uuid=ec6929dc-4a2e-4a7f-9c40-413a310539c6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bf0c91eb-51aa-4985-9952-a05bb97d14ab", "address": "fa:16:3e:0f:2f:be", "network": {"id": "765ab7c4-f6eb-4a45-8c1b-00dc61ad3441", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-718755595-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "815af3cf993b45cc8f2cdf73bf1d552c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0c91eb-51", "ovs_interfaceid": "bf0c91eb-51aa-4985-9952-a05bb97d14ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 29 12:22:11 np0005601226 nova_compute[239456]: 2026-01-29 17:22:11.853 239460 DEBUG nova.network.os_vif_util [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Converting VIF {"id": "bf0c91eb-51aa-4985-9952-a05bb97d14ab", "address": "fa:16:3e:0f:2f:be", "network": {"id": "765ab7c4-f6eb-4a45-8c1b-00dc61ad3441", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-718755595-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "815af3cf993b45cc8f2cdf73bf1d552c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0c91eb-51", "ovs_interfaceid": "bf0c91eb-51aa-4985-9952-a05bb97d14ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 29 12:22:11 np0005601226 nova_compute[239456]: 2026-01-29 17:22:11.853 239460 DEBUG nova.network.os_vif_util [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0f:2f:be,bridge_name='br-int',has_traffic_filtering=True,id=bf0c91eb-51aa-4985-9952-a05bb97d14ab,network=Network(765ab7c4-f6eb-4a45-8c1b-00dc61ad3441),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf0c91eb-51') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 29 12:22:11 np0005601226 nova_compute[239456]: 2026-01-29 17:22:11.855 239460 DEBUG nova.objects.instance [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Lazy-loading 'pci_devices' on Instance uuid ec6929dc-4a2e-4a7f-9c40-413a310539c6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:22:11 np0005601226 nova_compute[239456]: 2026-01-29 17:22:11.870 239460 DEBUG nova.virt.libvirt.driver [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] End _get_guest_xml xml=<domain type="kvm">
Jan 29 12:22:11 np0005601226 nova_compute[239456]:  <uuid>ec6929dc-4a2e-4a7f-9c40-413a310539c6</uuid>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:  <name>instance-00000008</name>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:  <memory>131072</memory>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:  <vcpu>1</vcpu>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:  <metadata>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 29 12:22:11 np0005601226 nova_compute[239456]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:      <nova:name>tempest-VolumesBackupsTest-instance-1982204810</nova:name>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:      <nova:creationTime>2026-01-29 17:22:10</nova:creationTime>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:      <nova:flavor name="m1.nano">
Jan 29 12:22:11 np0005601226 nova_compute[239456]:        <nova:memory>128</nova:memory>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:        <nova:disk>1</nova:disk>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:        <nova:swap>0</nova:swap>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:        <nova:ephemeral>0</nova:ephemeral>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:        <nova:vcpus>1</nova:vcpus>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:      </nova:flavor>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:      <nova:owner>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:        <nova:user uuid="d3463a84af564b968e67b687bc895548">tempest-VolumesBackupsTest-2142983406-project-member</nova:user>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:        <nova:project uuid="815af3cf993b45cc8f2cdf73bf1d552c">tempest-VolumesBackupsTest-2142983406</nova:project>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:      </nova:owner>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:      <nova:root type="image" uuid="71879218-5462-43bb-aba6-6319695b24fd"/>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:      <nova:ports>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:        <nova:port uuid="bf0c91eb-51aa-4985-9952-a05bb97d14ab">
Jan 29 12:22:11 np0005601226 nova_compute[239456]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:        </nova:port>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:      </nova:ports>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:    </nova:instance>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:  </metadata>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:  <sysinfo type="smbios">
Jan 29 12:22:11 np0005601226 nova_compute[239456]:    <system>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:      <entry name="manufacturer">RDO</entry>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:      <entry name="product">OpenStack Compute</entry>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:      <entry name="serial">ec6929dc-4a2e-4a7f-9c40-413a310539c6</entry>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:      <entry name="uuid">ec6929dc-4a2e-4a7f-9c40-413a310539c6</entry>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:      <entry name="family">Virtual Machine</entry>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:    </system>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:  </sysinfo>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:  <os>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:    <boot dev="hd"/>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:    <smbios mode="sysinfo"/>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:  </os>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:  <features>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:    <acpi/>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:    <apic/>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:    <vmcoreinfo/>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:  </features>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:  <clock offset="utc">
Jan 29 12:22:11 np0005601226 nova_compute[239456]:    <timer name="pit" tickpolicy="delay"/>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:    <timer name="hpet" present="no"/>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:  </clock>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:  <cpu mode="host-model" match="exact">
Jan 29 12:22:11 np0005601226 nova_compute[239456]:    <topology sockets="1" cores="1" threads="1"/>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:  </cpu>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:  <devices>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:    <disk type="network" device="disk">
Jan 29 12:22:11 np0005601226 nova_compute[239456]:      <driver type="raw" cache="none"/>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:      <source protocol="rbd" name="vms/ec6929dc-4a2e-4a7f-9c40-413a310539c6_disk">
Jan 29 12:22:11 np0005601226 nova_compute[239456]:        <host name="192.168.122.100" port="6789"/>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:      </source>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:      <auth username="openstack">
Jan 29 12:22:11 np0005601226 nova_compute[239456]:        <secret type="ceph" uuid="cc5c72e3-31e0-58b9-8731-456117d38f4a"/>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:      </auth>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:      <target dev="vda" bus="virtio"/>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:    </disk>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:    <disk type="network" device="cdrom">
Jan 29 12:22:11 np0005601226 nova_compute[239456]:      <driver type="raw" cache="none"/>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:      <source protocol="rbd" name="vms/ec6929dc-4a2e-4a7f-9c40-413a310539c6_disk.config">
Jan 29 12:22:11 np0005601226 nova_compute[239456]:        <host name="192.168.122.100" port="6789"/>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:      </source>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:      <auth username="openstack">
Jan 29 12:22:11 np0005601226 nova_compute[239456]:        <secret type="ceph" uuid="cc5c72e3-31e0-58b9-8731-456117d38f4a"/>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:      </auth>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:      <target dev="sda" bus="sata"/>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:    </disk>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:    <interface type="ethernet">
Jan 29 12:22:11 np0005601226 nova_compute[239456]:      <mac address="fa:16:3e:0f:2f:be"/>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:      <model type="virtio"/>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:      <driver name="vhost" rx_queue_size="512"/>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:      <mtu size="1442"/>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:      <target dev="tapbf0c91eb-51"/>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:    </interface>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:    <serial type="pty">
Jan 29 12:22:11 np0005601226 nova_compute[239456]:      <log file="/var/lib/nova/instances/ec6929dc-4a2e-4a7f-9c40-413a310539c6/console.log" append="off"/>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:    </serial>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:    <video>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:      <model type="virtio"/>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:    </video>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:    <input type="tablet" bus="usb"/>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:    <rng model="virtio">
Jan 29 12:22:11 np0005601226 nova_compute[239456]:      <backend model="random">/dev/urandom</backend>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:    </rng>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root"/>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:    <controller type="usb" index="0"/>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:    <memballoon model="virtio">
Jan 29 12:22:11 np0005601226 nova_compute[239456]:      <stats period="10"/>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:    </memballoon>
Jan 29 12:22:11 np0005601226 nova_compute[239456]:  </devices>
Jan 29 12:22:11 np0005601226 nova_compute[239456]: </domain>
Jan 29 12:22:11 np0005601226 nova_compute[239456]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 29 12:22:11 np0005601226 nova_compute[239456]: 2026-01-29 17:22:11.871 239460 DEBUG nova.compute.manager [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] Preparing to wait for external event network-vif-plugged-bf0c91eb-51aa-4985-9952-a05bb97d14ab prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 29 12:22:11 np0005601226 nova_compute[239456]: 2026-01-29 17:22:11.871 239460 DEBUG oslo_concurrency.lockutils [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Acquiring lock "ec6929dc-4a2e-4a7f-9c40-413a310539c6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:22:11 np0005601226 nova_compute[239456]: 2026-01-29 17:22:11.872 239460 DEBUG oslo_concurrency.lockutils [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Lock "ec6929dc-4a2e-4a7f-9c40-413a310539c6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:22:11 np0005601226 nova_compute[239456]: 2026-01-29 17:22:11.872 239460 DEBUG oslo_concurrency.lockutils [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Lock "ec6929dc-4a2e-4a7f-9c40-413a310539c6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:22:11 np0005601226 nova_compute[239456]: 2026-01-29 17:22:11.872 239460 DEBUG nova.virt.libvirt.vif [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-29T17:22:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-1982204810',display_name='tempest-VolumesBackupsTest-instance-1982204810',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-1982204810',id=8,image_ref='71879218-5462-43bb-aba6-6319695b24fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFig0RQ/1n7VfpfiFNE7GVFgBviP+LoGnNX+IrMAXhnF2bhHyaJz7sbYMOMXONeJP7S+Y3ZggjQCfeRI5OI3KuMILvXKYWprzYr93gmRI1/mhd/h9dDbo0WiH0640Qup6w==',key_name='tempest-keypair-528150633',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='815af3cf993b45cc8f2cdf73bf1d552c',ramdisk_id='',reservation_id='r-xbbmdpvu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='71879218-5462-43bb-aba6-6319695b24fd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-2142983406',owner_user_name='tempest-VolumesBackupsTest-2142983406-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-29T17:22:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d3463a84af564b968e67b687bc895548',uuid=ec6929dc-4a2e-4a7f-9c40-413a310539c6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bf0c91eb-51aa-4985-9952-a05bb97d14ab", "address": "fa:16:3e:0f:2f:be", "network": {"id": "765ab7c4-f6eb-4a45-8c1b-00dc61ad3441", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-718755595-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "815af3cf993b45cc8f2cdf73bf1d552c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0c91eb-51", "ovs_interfaceid": "bf0c91eb-51aa-4985-9952-a05bb97d14ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 29 12:22:11 np0005601226 nova_compute[239456]: 2026-01-29 17:22:11.873 239460 DEBUG nova.network.os_vif_util [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Converting VIF {"id": "bf0c91eb-51aa-4985-9952-a05bb97d14ab", "address": "fa:16:3e:0f:2f:be", "network": {"id": "765ab7c4-f6eb-4a45-8c1b-00dc61ad3441", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-718755595-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "815af3cf993b45cc8f2cdf73bf1d552c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0c91eb-51", "ovs_interfaceid": "bf0c91eb-51aa-4985-9952-a05bb97d14ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 29 12:22:11 np0005601226 nova_compute[239456]: 2026-01-29 17:22:11.873 239460 DEBUG nova.network.os_vif_util [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0f:2f:be,bridge_name='br-int',has_traffic_filtering=True,id=bf0c91eb-51aa-4985-9952-a05bb97d14ab,network=Network(765ab7c4-f6eb-4a45-8c1b-00dc61ad3441),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf0c91eb-51') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 29 12:22:11 np0005601226 nova_compute[239456]: 2026-01-29 17:22:11.874 239460 DEBUG os_vif [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0f:2f:be,bridge_name='br-int',has_traffic_filtering=True,id=bf0c91eb-51aa-4985-9952-a05bb97d14ab,network=Network(765ab7c4-f6eb-4a45-8c1b-00dc61ad3441),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf0c91eb-51') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 29 12:22:11 np0005601226 nova_compute[239456]: 2026-01-29 17:22:11.874 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:22:11 np0005601226 nova_compute[239456]: 2026-01-29 17:22:11.875 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:22:11 np0005601226 nova_compute[239456]: 2026-01-29 17:22:11.875 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 29 12:22:11 np0005601226 nova_compute[239456]: 2026-01-29 17:22:11.878 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:22:11 np0005601226 nova_compute[239456]: 2026-01-29 17:22:11.878 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbf0c91eb-51, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:22:11 np0005601226 nova_compute[239456]: 2026-01-29 17:22:11.879 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbf0c91eb-51, col_values=(('external_ids', {'iface-id': 'bf0c91eb-51aa-4985-9952-a05bb97d14ab', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0f:2f:be', 'vm-uuid': 'ec6929dc-4a2e-4a7f-9c40-413a310539c6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:22:11 np0005601226 nova_compute[239456]: 2026-01-29 17:22:11.880 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:22:11 np0005601226 NetworkManager[49020]: <info>  [1769707331.8815] manager: (tapbf0c91eb-51): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/55)
Jan 29 12:22:11 np0005601226 nova_compute[239456]: 2026-01-29 17:22:11.884 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 29 12:22:11 np0005601226 nova_compute[239456]: 2026-01-29 17:22:11.923 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:22:11 np0005601226 nova_compute[239456]: 2026-01-29 17:22:11.925 239460 INFO os_vif [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0f:2f:be,bridge_name='br-int',has_traffic_filtering=True,id=bf0c91eb-51aa-4985-9952-a05bb97d14ab,network=Network(765ab7c4-f6eb-4a45-8c1b-00dc61ad3441),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf0c91eb-51')#033[00m
Jan 29 12:22:11 np0005601226 nova_compute[239456]: 2026-01-29 17:22:11.975 239460 DEBUG nova.virt.libvirt.driver [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:22:11 np0005601226 nova_compute[239456]: 2026-01-29 17:22:11.975 239460 DEBUG nova.virt.libvirt.driver [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:22:11 np0005601226 nova_compute[239456]: 2026-01-29 17:22:11.975 239460 DEBUG nova.virt.libvirt.driver [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] No VIF found with MAC fa:16:3e:0f:2f:be, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 29 12:22:11 np0005601226 nova_compute[239456]: 2026-01-29 17:22:11.976 239460 INFO nova.virt.libvirt.driver [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] Using config drive#033[00m
Jan 29 12:22:11 np0005601226 nova_compute[239456]: 2026-01-29 17:22:11.994 239460 DEBUG nova.storage.rbd_utils [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] rbd image ec6929dc-4a2e-4a7f-9c40-413a310539c6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:22:12 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:22:12 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:22:12 np0005601226 podman[255652]: 2026-01-29 17:22:12.280491682 +0000 UTC m=+0.081947624 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 29 12:22:12 np0005601226 podman[255653]: 2026-01-29 17:22:12.295275642 +0000 UTC m=+0.097483134 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 29 12:22:12 np0005601226 nova_compute[239456]: 2026-01-29 17:22:12.489 239460 INFO nova.virt.libvirt.driver [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] Creating config drive at /var/lib/nova/instances/ec6929dc-4a2e-4a7f-9c40-413a310539c6/disk.config#033[00m
Jan 29 12:22:12 np0005601226 nova_compute[239456]: 2026-01-29 17:22:12.494 239460 DEBUG oslo_concurrency.processutils [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ec6929dc-4a2e-4a7f-9c40-413a310539c6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc2lhor4k execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:22:12 np0005601226 nova_compute[239456]: 2026-01-29 17:22:12.610 239460 DEBUG oslo_concurrency.processutils [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ec6929dc-4a2e-4a7f-9c40-413a310539c6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc2lhor4k" returned: 0 in 0.116s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:22:12 np0005601226 nova_compute[239456]: 2026-01-29 17:22:12.642 239460 DEBUG nova.storage.rbd_utils [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] rbd image ec6929dc-4a2e-4a7f-9c40-413a310539c6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:22:12 np0005601226 nova_compute[239456]: 2026-01-29 17:22:12.648 239460 DEBUG oslo_concurrency.processutils [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ec6929dc-4a2e-4a7f-9c40-413a310539c6/disk.config ec6929dc-4a2e-4a7f-9c40-413a310539c6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:22:12 np0005601226 nova_compute[239456]: 2026-01-29 17:22:12.767 239460 DEBUG oslo_concurrency.processutils [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ec6929dc-4a2e-4a7f-9c40-413a310539c6/disk.config ec6929dc-4a2e-4a7f-9c40-413a310539c6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.120s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:22:12 np0005601226 nova_compute[239456]: 2026-01-29 17:22:12.768 239460 INFO nova.virt.libvirt.driver [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] Deleting local config drive /var/lib/nova/instances/ec6929dc-4a2e-4a7f-9c40-413a310539c6/disk.config because it was imported into RBD.#033[00m
Jan 29 12:22:12 np0005601226 kernel: tapbf0c91eb-51: entered promiscuous mode
Jan 29 12:22:12 np0005601226 NetworkManager[49020]: <info>  [1769707332.8010] manager: (tapbf0c91eb-51): new Tun device (/org/freedesktop/NetworkManager/Devices/56)
Jan 29 12:22:12 np0005601226 systemd-udevd[255546]: Network interface NamePolicy= disabled on kernel command line.
Jan 29 12:22:12 np0005601226 nova_compute[239456]: 2026-01-29 17:22:12.802 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:22:12 np0005601226 ovn_controller[145556]: 2026-01-29T17:22:12Z|00086|binding|INFO|Claiming lport bf0c91eb-51aa-4985-9952-a05bb97d14ab for this chassis.
Jan 29 12:22:12 np0005601226 ovn_controller[145556]: 2026-01-29T17:22:12Z|00087|binding|INFO|bf0c91eb-51aa-4985-9952-a05bb97d14ab: Claiming fa:16:3e:0f:2f:be 10.100.0.7
Jan 29 12:22:12 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:22:12.811 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0f:2f:be 10.100.0.7'], port_security=['fa:16:3e:0f:2f:be 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'ec6929dc-4a2e-4a7f-9c40-413a310539c6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-765ab7c4-f6eb-4a45-8c1b-00dc61ad3441', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '815af3cf993b45cc8f2cdf73bf1d552c', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cc045e1c-80f9-47d6-8732-e3ba625b91d8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3ddf8c3b-2084-4923-8e76-31ca07b64cbd, chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], logical_port=bf0c91eb-51aa-4985-9952-a05bb97d14ab) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:22:12 np0005601226 ovn_controller[145556]: 2026-01-29T17:22:12Z|00088|binding|INFO|Setting lport bf0c91eb-51aa-4985-9952-a05bb97d14ab ovn-installed in OVS
Jan 29 12:22:12 np0005601226 ovn_controller[145556]: 2026-01-29T17:22:12Z|00089|binding|INFO|Setting lport bf0c91eb-51aa-4985-9952-a05bb97d14ab up in Southbound
Jan 29 12:22:12 np0005601226 nova_compute[239456]: 2026-01-29 17:22:12.813 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:22:12 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:22:12.812 155625 INFO neutron.agent.ovn.metadata.agent [-] Port bf0c91eb-51aa-4985-9952-a05bb97d14ab in datapath 765ab7c4-f6eb-4a45-8c1b-00dc61ad3441 bound to our chassis#033[00m
Jan 29 12:22:12 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:22:12.814 155625 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 765ab7c4-f6eb-4a45-8c1b-00dc61ad3441#033[00m
Jan 29 12:22:12 np0005601226 NetworkManager[49020]: <info>  [1769707332.8184] device (tapbf0c91eb-51): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 29 12:22:12 np0005601226 NetworkManager[49020]: <info>  [1769707332.8188] device (tapbf0c91eb-51): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 29 12:22:12 np0005601226 nova_compute[239456]: 2026-01-29 17:22:12.823 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:22:12 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:22:12.823 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[0f9ade71-4b01-4a8d-9050-082c1496be6f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:22:12 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:22:12.824 155625 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap765ab7c4-f1 in ovnmeta-765ab7c4-f6eb-4a45-8c1b-00dc61ad3441 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 29 12:22:12 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:22:12.826 246354 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap765ab7c4-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 29 12:22:12 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:22:12.826 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[11e77e48-52d7-41c9-b08f-bc2286f15345]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:22:12 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:22:12.827 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[2fca826b-bac3-441f-adef-7e83c5926589]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:22:12 np0005601226 nova_compute[239456]: 2026-01-29 17:22:12.829 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:22:12 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:22:12.835 156164 DEBUG oslo.privsep.daemon [-] privsep: reply[c587c6c3-b6f0-4af1-a693-45ad12fb8738]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:22:12 np0005601226 systemd-machined[207561]: New machine qemu-8-instance-00000008.
Jan 29 12:22:12 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:22:12.846 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[8f6f1113-0cb7-4e93-9bfe-b2b47888a302]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:22:12 np0005601226 systemd[1]: Started Virtual Machine qemu-8-instance-00000008.
Jan 29 12:22:12 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:22:12.871 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[ac0a358a-f9e5-4494-abde-572d5f162ee2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:22:12 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:22:12.875 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[c2f206c7-be7c-4d3d-a725-cad52ef44734]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:22:12 np0005601226 NetworkManager[49020]: <info>  [1769707332.8763] manager: (tap765ab7c4-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/57)
Jan 29 12:22:12 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:22:12.906 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[6482f969-42f8-49bf-86c3-81bc281414f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:22:12 np0005601226 nova_compute[239456]: 2026-01-29 17:22:12.907 239460 DEBUG nova.network.neutron [req-8554b9e8-944a-4eca-a719-9575d1482ca7 req-79fabd10-a067-41b4-8734-e031348bba3b 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] Updated VIF entry in instance network info cache for port bf0c91eb-51aa-4985-9952-a05bb97d14ab. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 29 12:22:12 np0005601226 nova_compute[239456]: 2026-01-29 17:22:12.908 239460 DEBUG nova.network.neutron [req-8554b9e8-944a-4eca-a719-9575d1482ca7 req-79fabd10-a067-41b4-8734-e031348bba3b 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] Updating instance_info_cache with network_info: [{"id": "bf0c91eb-51aa-4985-9952-a05bb97d14ab", "address": "fa:16:3e:0f:2f:be", "network": {"id": "765ab7c4-f6eb-4a45-8c1b-00dc61ad3441", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-718755595-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "815af3cf993b45cc8f2cdf73bf1d552c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0c91eb-51", "ovs_interfaceid": "bf0c91eb-51aa-4985-9952-a05bb97d14ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:22:12 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:22:12.908 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[1b3953e7-0804-4c14-9600-40c3193bf453]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:22:12 np0005601226 nova_compute[239456]: 2026-01-29 17:22:12.924 239460 DEBUG oslo_concurrency.lockutils [req-8554b9e8-944a-4eca-a719-9575d1482ca7 req-79fabd10-a067-41b4-8734-e031348bba3b 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Releasing lock "refresh_cache-ec6929dc-4a2e-4a7f-9c40-413a310539c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:22:12 np0005601226 NetworkManager[49020]: <info>  [1769707332.9266] device (tap765ab7c4-f0): carrier: link connected
Jan 29 12:22:12 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:22:12.930 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[e5d24f48-e09c-4765-9152-34b407c987f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:22:12 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:22:12.943 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[07319d87-035a-4e17-876b-a7877d2c7543]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap765ab7c4-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:73:dc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 33], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 468642, 'reachable_time': 40312, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 255778, 'error': None, 'target': 'ovnmeta-765ab7c4-f6eb-4a45-8c1b-00dc61ad3441', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:22:12 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:22:12.959 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[7e353d38-84d8-4c03-ad1a-5d61705a36b7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4b:73dc'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 468642, 'tstamp': 468642}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 255779, 'error': None, 'target': 'ovnmeta-765ab7c4-f6eb-4a45-8c1b-00dc61ad3441', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:22:12 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:22:12.971 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[4a097714-dc42-40c8-85cf-424a2423e391]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap765ab7c4-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:73:dc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 33], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 468642, 'reachable_time': 40312, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 255780, 'error': None, 'target': 'ovnmeta-765ab7c4-f6eb-4a45-8c1b-00dc61ad3441', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:22:12 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:22:12.990 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[fdbfddb0-c8b4-414b-b622-6f98df7889d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:22:13 np0005601226 nova_compute[239456]: 2026-01-29 17:22:13.027 239460 DEBUG nova.compute.manager [req-37c7ab2a-94e4-4c5b-a3ba-c728a3a2a453 req-49cd06b2-4236-45bc-8dac-af88a12151cc 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] Received event network-vif-plugged-bf0c91eb-51aa-4985-9952-a05bb97d14ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:22:13 np0005601226 nova_compute[239456]: 2026-01-29 17:22:13.028 239460 DEBUG oslo_concurrency.lockutils [req-37c7ab2a-94e4-4c5b-a3ba-c728a3a2a453 req-49cd06b2-4236-45bc-8dac-af88a12151cc 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "ec6929dc-4a2e-4a7f-9c40-413a310539c6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:22:13 np0005601226 nova_compute[239456]: 2026-01-29 17:22:13.028 239460 DEBUG oslo_concurrency.lockutils [req-37c7ab2a-94e4-4c5b-a3ba-c728a3a2a453 req-49cd06b2-4236-45bc-8dac-af88a12151cc 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "ec6929dc-4a2e-4a7f-9c40-413a310539c6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:22:13 np0005601226 nova_compute[239456]: 2026-01-29 17:22:13.028 239460 DEBUG oslo_concurrency.lockutils [req-37c7ab2a-94e4-4c5b-a3ba-c728a3a2a453 req-49cd06b2-4236-45bc-8dac-af88a12151cc 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "ec6929dc-4a2e-4a7f-9c40-413a310539c6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:22:13 np0005601226 nova_compute[239456]: 2026-01-29 17:22:13.028 239460 DEBUG nova.compute.manager [req-37c7ab2a-94e4-4c5b-a3ba-c728a3a2a453 req-49cd06b2-4236-45bc-8dac-af88a12151cc 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] Processing event network-vif-plugged-bf0c91eb-51aa-4985-9952-a05bb97d14ab _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 29 12:22:13 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:22:13.033 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[7c409dfb-5546-4032-8164-4432a83b74bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:22:13 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:22:13.034 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap765ab7c4-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:22:13 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:22:13.035 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 29 12:22:13 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:22:13.035 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap765ab7c4-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:22:13 np0005601226 nova_compute[239456]: 2026-01-29 17:22:13.069 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:22:13 np0005601226 kernel: tap765ab7c4-f0: entered promiscuous mode
Jan 29 12:22:13 np0005601226 NetworkManager[49020]: <info>  [1769707333.0701] manager: (tap765ab7c4-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/58)
Jan 29 12:22:13 np0005601226 nova_compute[239456]: 2026-01-29 17:22:13.072 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:22:13 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:22:13.074 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap765ab7c4-f0, col_values=(('external_ids', {'iface-id': '07f2e2bc-3dba-4506-9241-0e092dfbeda9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:22:13 np0005601226 nova_compute[239456]: 2026-01-29 17:22:13.075 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:22:13 np0005601226 ovn_controller[145556]: 2026-01-29T17:22:13Z|00090|binding|INFO|Releasing lport 07f2e2bc-3dba-4506-9241-0e092dfbeda9 from this chassis (sb_readonly=0)
Jan 29 12:22:13 np0005601226 nova_compute[239456]: 2026-01-29 17:22:13.076 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:22:13 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:22:13.077 155625 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/765ab7c4-f6eb-4a45-8c1b-00dc61ad3441.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/765ab7c4-f6eb-4a45-8c1b-00dc61ad3441.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 29 12:22:13 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:22:13.078 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[5a185512-4085-485e-957b-d41931fc6a6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:22:13 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:22:13.078 155625 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 29 12:22:13 np0005601226 ovn_metadata_agent[155620]: global
Jan 29 12:22:13 np0005601226 ovn_metadata_agent[155620]:    log         /dev/log local0 debug
Jan 29 12:22:13 np0005601226 ovn_metadata_agent[155620]:    log-tag     haproxy-metadata-proxy-765ab7c4-f6eb-4a45-8c1b-00dc61ad3441
Jan 29 12:22:13 np0005601226 ovn_metadata_agent[155620]:    user        root
Jan 29 12:22:13 np0005601226 ovn_metadata_agent[155620]:    group       root
Jan 29 12:22:13 np0005601226 ovn_metadata_agent[155620]:    maxconn     1024
Jan 29 12:22:13 np0005601226 ovn_metadata_agent[155620]:    pidfile     /var/lib/neutron/external/pids/765ab7c4-f6eb-4a45-8c1b-00dc61ad3441.pid.haproxy
Jan 29 12:22:13 np0005601226 ovn_metadata_agent[155620]:    daemon
Jan 29 12:22:13 np0005601226 ovn_metadata_agent[155620]: 
Jan 29 12:22:13 np0005601226 ovn_metadata_agent[155620]: defaults
Jan 29 12:22:13 np0005601226 ovn_metadata_agent[155620]:    log global
Jan 29 12:22:13 np0005601226 ovn_metadata_agent[155620]:    mode http
Jan 29 12:22:13 np0005601226 ovn_metadata_agent[155620]:    option httplog
Jan 29 12:22:13 np0005601226 ovn_metadata_agent[155620]:    option dontlognull
Jan 29 12:22:13 np0005601226 ovn_metadata_agent[155620]:    option http-server-close
Jan 29 12:22:13 np0005601226 ovn_metadata_agent[155620]:    option forwardfor
Jan 29 12:22:13 np0005601226 ovn_metadata_agent[155620]:    retries                 3
Jan 29 12:22:13 np0005601226 ovn_metadata_agent[155620]:    timeout http-request    30s
Jan 29 12:22:13 np0005601226 ovn_metadata_agent[155620]:    timeout connect         30s
Jan 29 12:22:13 np0005601226 ovn_metadata_agent[155620]:    timeout client          32s
Jan 29 12:22:13 np0005601226 ovn_metadata_agent[155620]:    timeout server          32s
Jan 29 12:22:13 np0005601226 ovn_metadata_agent[155620]:    timeout http-keep-alive 30s
Jan 29 12:22:13 np0005601226 ovn_metadata_agent[155620]: 
Jan 29 12:22:13 np0005601226 ovn_metadata_agent[155620]: 
Jan 29 12:22:13 np0005601226 ovn_metadata_agent[155620]: listen listener
Jan 29 12:22:13 np0005601226 ovn_metadata_agent[155620]:    bind 169.254.169.254:80
Jan 29 12:22:13 np0005601226 ovn_metadata_agent[155620]:    server metadata /var/lib/neutron/metadata_proxy
Jan 29 12:22:13 np0005601226 ovn_metadata_agent[155620]:    http-request add-header X-OVN-Network-ID 765ab7c4-f6eb-4a45-8c1b-00dc61ad3441
Jan 29 12:22:13 np0005601226 ovn_metadata_agent[155620]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 29 12:22:13 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:22:13.080 155625 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-765ab7c4-f6eb-4a45-8c1b-00dc61ad3441', 'env', 'PROCESS_TAG=haproxy-765ab7c4-f6eb-4a45-8c1b-00dc61ad3441', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/765ab7c4-f6eb-4a45-8c1b-00dc61ad3441.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 29 12:22:13 np0005601226 nova_compute[239456]: 2026-01-29 17:22:13.081 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:22:13 np0005601226 nova_compute[239456]: 2026-01-29 17:22:13.366 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769707333.365994, ec6929dc-4a2e-4a7f-9c40-413a310539c6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:22:13 np0005601226 nova_compute[239456]: 2026-01-29 17:22:13.367 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] VM Started (Lifecycle Event)#033[00m
Jan 29 12:22:13 np0005601226 nova_compute[239456]: 2026-01-29 17:22:13.370 239460 DEBUG nova.compute.manager [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 29 12:22:13 np0005601226 nova_compute[239456]: 2026-01-29 17:22:13.373 239460 DEBUG nova.virt.libvirt.driver [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 29 12:22:13 np0005601226 nova_compute[239456]: 2026-01-29 17:22:13.380 239460 INFO nova.virt.libvirt.driver [-] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] Instance spawned successfully.#033[00m
Jan 29 12:22:13 np0005601226 nova_compute[239456]: 2026-01-29 17:22:13.381 239460 DEBUG nova.virt.libvirt.driver [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 29 12:22:13 np0005601226 nova_compute[239456]: 2026-01-29 17:22:13.385 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:22:13 np0005601226 nova_compute[239456]: 2026-01-29 17:22:13.388 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 29 12:22:13 np0005601226 nova_compute[239456]: 2026-01-29 17:22:13.402 239460 DEBUG nova.virt.libvirt.driver [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:22:13 np0005601226 nova_compute[239456]: 2026-01-29 17:22:13.403 239460 DEBUG nova.virt.libvirt.driver [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:22:13 np0005601226 nova_compute[239456]: 2026-01-29 17:22:13.403 239460 DEBUG nova.virt.libvirt.driver [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:22:13 np0005601226 nova_compute[239456]: 2026-01-29 17:22:13.404 239460 DEBUG nova.virt.libvirt.driver [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:22:13 np0005601226 nova_compute[239456]: 2026-01-29 17:22:13.404 239460 DEBUG nova.virt.libvirt.driver [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:22:13 np0005601226 nova_compute[239456]: 2026-01-29 17:22:13.404 239460 DEBUG nova.virt.libvirt.driver [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:22:13 np0005601226 nova_compute[239456]: 2026-01-29 17:22:13.408 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 29 12:22:13 np0005601226 nova_compute[239456]: 2026-01-29 17:22:13.408 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769707333.366186, ec6929dc-4a2e-4a7f-9c40-413a310539c6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:22:13 np0005601226 nova_compute[239456]: 2026-01-29 17:22:13.409 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] VM Paused (Lifecycle Event)#033[00m
Jan 29 12:22:13 np0005601226 nova_compute[239456]: 2026-01-29 17:22:13.436 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:22:13 np0005601226 nova_compute[239456]: 2026-01-29 17:22:13.440 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769707333.3723478, ec6929dc-4a2e-4a7f-9c40-413a310539c6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:22:13 np0005601226 nova_compute[239456]: 2026-01-29 17:22:13.441 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] VM Resumed (Lifecycle Event)#033[00m
Jan 29 12:22:13 np0005601226 podman[255854]: 2026-01-29 17:22:13.447857952 +0000 UTC m=+0.080796101 container create a133d327b21b6f68a45b27b2e35d1159fb3aaa908a1883e56b39aede09a59114 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-765ab7c4-f6eb-4a45-8c1b-00dc61ad3441, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0)
Jan 29 12:22:13 np0005601226 nova_compute[239456]: 2026-01-29 17:22:13.466 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:22:13 np0005601226 nova_compute[239456]: 2026-01-29 17:22:13.473 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 29 12:22:13 np0005601226 nova_compute[239456]: 2026-01-29 17:22:13.480 239460 INFO nova.compute.manager [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] Took 5.38 seconds to spawn the instance on the hypervisor.#033[00m
Jan 29 12:22:13 np0005601226 nova_compute[239456]: 2026-01-29 17:22:13.482 239460 DEBUG nova.compute.manager [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:22:13 np0005601226 podman[255854]: 2026-01-29 17:22:13.39353572 +0000 UTC m=+0.026473879 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 29 12:22:13 np0005601226 nova_compute[239456]: 2026-01-29 17:22:13.494 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 29 12:22:13 np0005601226 systemd[1]: Started libpod-conmon-a133d327b21b6f68a45b27b2e35d1159fb3aaa908a1883e56b39aede09a59114.scope.
Jan 29 12:22:13 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:22:13 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2f2106ffd3e8d1e83738348846eb8c9e7a753af315209680f2d4e80cc748a42/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 29 12:22:13 np0005601226 podman[255854]: 2026-01-29 17:22:13.556272183 +0000 UTC m=+0.189210362 container init a133d327b21b6f68a45b27b2e35d1159fb3aaa908a1883e56b39aede09a59114 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-765ab7c4-f6eb-4a45-8c1b-00dc61ad3441, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 29 12:22:13 np0005601226 podman[255854]: 2026-01-29 17:22:13.561462263 +0000 UTC m=+0.194400412 container start a133d327b21b6f68a45b27b2e35d1159fb3aaa908a1883e56b39aede09a59114 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-765ab7c4-f6eb-4a45-8c1b-00dc61ad3441, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 29 12:22:13 np0005601226 nova_compute[239456]: 2026-01-29 17:22:13.561 239460 INFO nova.compute.manager [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] Took 6.61 seconds to build instance.#033[00m
Jan 29 12:22:13 np0005601226 nova_compute[239456]: 2026-01-29 17:22:13.576 239460 DEBUG oslo_concurrency.lockutils [None req-75a1c664-5d36-4a78-9b0b-f79449573cdf d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Lock "ec6929dc-4a2e-4a7f-9c40-413a310539c6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.697s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:22:13 np0005601226 neutron-haproxy-ovnmeta-765ab7c4-f6eb-4a45-8c1b-00dc61ad3441[255868]: [NOTICE]   (255872) : New worker (255874) forked
Jan 29 12:22:13 np0005601226 neutron-haproxy-ovnmeta-765ab7c4-f6eb-4a45-8c1b-00dc61ad3441[255868]: [NOTICE]   (255872) : Loading success.
Jan 29 12:22:13 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:22:13 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e253 do_prune osdmap full prune enabled
Jan 29 12:22:13 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e254 e254: 3 total, 3 up, 3 in
Jan 29 12:22:13 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e254: 3 total, 3 up, 3 in
Jan 29 12:22:13 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1222: 305 pgs: 305 active+clean; 134 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.5 MiB/s wr, 242 op/s
Jan 29 12:22:15 np0005601226 nova_compute[239456]: 2026-01-29 17:22:15.108 239460 DEBUG nova.compute.manager [req-fef45c09-4a28-4564-a4fd-16d3a04ed082 req-d7a4a52f-73ba-4423-91f4-01f5c127a14a 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] Received event network-vif-plugged-bf0c91eb-51aa-4985-9952-a05bb97d14ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:22:15 np0005601226 nova_compute[239456]: 2026-01-29 17:22:15.109 239460 DEBUG oslo_concurrency.lockutils [req-fef45c09-4a28-4564-a4fd-16d3a04ed082 req-d7a4a52f-73ba-4423-91f4-01f5c127a14a 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "ec6929dc-4a2e-4a7f-9c40-413a310539c6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:22:15 np0005601226 nova_compute[239456]: 2026-01-29 17:22:15.109 239460 DEBUG oslo_concurrency.lockutils [req-fef45c09-4a28-4564-a4fd-16d3a04ed082 req-d7a4a52f-73ba-4423-91f4-01f5c127a14a 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "ec6929dc-4a2e-4a7f-9c40-413a310539c6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:22:15 np0005601226 nova_compute[239456]: 2026-01-29 17:22:15.109 239460 DEBUG oslo_concurrency.lockutils [req-fef45c09-4a28-4564-a4fd-16d3a04ed082 req-d7a4a52f-73ba-4423-91f4-01f5c127a14a 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "ec6929dc-4a2e-4a7f-9c40-413a310539c6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:22:15 np0005601226 nova_compute[239456]: 2026-01-29 17:22:15.109 239460 DEBUG nova.compute.manager [req-fef45c09-4a28-4564-a4fd-16d3a04ed082 req-d7a4a52f-73ba-4423-91f4-01f5c127a14a 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] No waiting events found dispatching network-vif-plugged-bf0c91eb-51aa-4985-9952-a05bb97d14ab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 29 12:22:15 np0005601226 nova_compute[239456]: 2026-01-29 17:22:15.109 239460 WARNING nova.compute.manager [req-fef45c09-4a28-4564-a4fd-16d3a04ed082 req-d7a4a52f-73ba-4423-91f4-01f5c127a14a 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] Received unexpected event network-vif-plugged-bf0c91eb-51aa-4985-9952-a05bb97d14ab for instance with vm_state active and task_state None.#033[00m
Jan 29 12:22:15 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1223: 305 pgs: 305 active+clean; 135 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.8 MiB/s wr, 244 op/s
Jan 29 12:22:16 np0005601226 nova_compute[239456]: 2026-01-29 17:22:16.014 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:22:16 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:22:16 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4271290412' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:22:16 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:22:16 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4271290412' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:22:16 np0005601226 ovn_controller[145556]: 2026-01-29T17:22:16Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ae:e0:37 10.100.0.5
Jan 29 12:22:16 np0005601226 ovn_controller[145556]: 2026-01-29T17:22:16Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ae:e0:37 10.100.0.5
Jan 29 12:22:16 np0005601226 nova_compute[239456]: 2026-01-29 17:22:16.882 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:22:17 np0005601226 nova_compute[239456]: 2026-01-29 17:22:17.176 239460 DEBUG nova.compute.manager [req-0ee26c01-65d3-453e-ab2c-f92685cf020f req-76dd7886-b430-4952-9302-d6917fc143d7 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] Received event network-changed-bf0c91eb-51aa-4985-9952-a05bb97d14ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:22:17 np0005601226 nova_compute[239456]: 2026-01-29 17:22:17.176 239460 DEBUG nova.compute.manager [req-0ee26c01-65d3-453e-ab2c-f92685cf020f req-76dd7886-b430-4952-9302-d6917fc143d7 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] Refreshing instance network info cache due to event network-changed-bf0c91eb-51aa-4985-9952-a05bb97d14ab. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 29 12:22:17 np0005601226 nova_compute[239456]: 2026-01-29 17:22:17.176 239460 DEBUG oslo_concurrency.lockutils [req-0ee26c01-65d3-453e-ab2c-f92685cf020f req-76dd7886-b430-4952-9302-d6917fc143d7 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "refresh_cache-ec6929dc-4a2e-4a7f-9c40-413a310539c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:22:17 np0005601226 nova_compute[239456]: 2026-01-29 17:22:17.177 239460 DEBUG oslo_concurrency.lockutils [req-0ee26c01-65d3-453e-ab2c-f92685cf020f req-76dd7886-b430-4952-9302-d6917fc143d7 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquired lock "refresh_cache-ec6929dc-4a2e-4a7f-9c40-413a310539c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:22:17 np0005601226 nova_compute[239456]: 2026-01-29 17:22:17.177 239460 DEBUG nova.network.neutron [req-0ee26c01-65d3-453e-ab2c-f92685cf020f req-76dd7886-b430-4952-9302-d6917fc143d7 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] Refreshing network info cache for port bf0c91eb-51aa-4985-9952-a05bb97d14ab _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 29 12:22:17 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1224: 305 pgs: 305 active+clean; 135 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.4 MiB/s wr, 143 op/s
Jan 29 12:22:18 np0005601226 nova_compute[239456]: 2026-01-29 17:22:18.296 239460 DEBUG nova.network.neutron [req-0ee26c01-65d3-453e-ab2c-f92685cf020f req-76dd7886-b430-4952-9302-d6917fc143d7 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] Updated VIF entry in instance network info cache for port bf0c91eb-51aa-4985-9952-a05bb97d14ab. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 29 12:22:18 np0005601226 nova_compute[239456]: 2026-01-29 17:22:18.296 239460 DEBUG nova.network.neutron [req-0ee26c01-65d3-453e-ab2c-f92685cf020f req-76dd7886-b430-4952-9302-d6917fc143d7 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] Updating instance_info_cache with network_info: [{"id": "bf0c91eb-51aa-4985-9952-a05bb97d14ab", "address": "fa:16:3e:0f:2f:be", "network": {"id": "765ab7c4-f6eb-4a45-8c1b-00dc61ad3441", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-718755595-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "815af3cf993b45cc8f2cdf73bf1d552c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0c91eb-51", "ovs_interfaceid": "bf0c91eb-51aa-4985-9952-a05bb97d14ab", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:22:18 np0005601226 nova_compute[239456]: 2026-01-29 17:22:18.316 239460 DEBUG oslo_concurrency.lockutils [req-0ee26c01-65d3-453e-ab2c-f92685cf020f req-76dd7886-b430-4952-9302-d6917fc143d7 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Releasing lock "refresh_cache-ec6929dc-4a2e-4a7f-9c40-413a310539c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:22:18 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e254 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:22:19 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1225: 305 pgs: 305 active+clean; 167 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 3.2 MiB/s wr, 194 op/s
Jan 29 12:22:20 np0005601226 nova_compute[239456]: 2026-01-29 17:22:20.604 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:22:21 np0005601226 nova_compute[239456]: 2026-01-29 17:22:21.016 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:22:21 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1226: 305 pgs: 305 active+clean; 167 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 3.2 MiB/s wr, 194 op/s
Jan 29 12:22:21 np0005601226 nova_compute[239456]: 2026-01-29 17:22:21.884 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:22:22 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:22:22 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1300161831' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:22:22 np0005601226 nova_compute[239456]: 2026-01-29 17:22:22.429 239460 DEBUG oslo_concurrency.lockutils [None req-a9c353f8-5168-4d46-aeb6-c5fb61a6d17e 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Acquiring lock "656165e5-9250-4055-8194-45e769830100" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:22:22 np0005601226 nova_compute[239456]: 2026-01-29 17:22:22.429 239460 DEBUG oslo_concurrency.lockutils [None req-a9c353f8-5168-4d46-aeb6-c5fb61a6d17e 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Lock "656165e5-9250-4055-8194-45e769830100" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:22:22 np0005601226 nova_compute[239456]: 2026-01-29 17:22:22.442 239460 DEBUG nova.objects.instance [None req-a9c353f8-5168-4d46-aeb6-c5fb61a6d17e 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Lazy-loading 'flavor' on Instance uuid 656165e5-9250-4055-8194-45e769830100 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:22:22 np0005601226 nova_compute[239456]: 2026-01-29 17:22:22.478 239460 DEBUG oslo_concurrency.lockutils [None req-a9c353f8-5168-4d46-aeb6-c5fb61a6d17e 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Lock "656165e5-9250-4055-8194-45e769830100" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.049s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:22:22 np0005601226 nova_compute[239456]: 2026-01-29 17:22:22.683 239460 DEBUG oslo_concurrency.lockutils [None req-a9c353f8-5168-4d46-aeb6-c5fb61a6d17e 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Acquiring lock "656165e5-9250-4055-8194-45e769830100" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:22:22 np0005601226 nova_compute[239456]: 2026-01-29 17:22:22.684 239460 DEBUG oslo_concurrency.lockutils [None req-a9c353f8-5168-4d46-aeb6-c5fb61a6d17e 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Lock "656165e5-9250-4055-8194-45e769830100" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:22:22 np0005601226 nova_compute[239456]: 2026-01-29 17:22:22.685 239460 INFO nova.compute.manager [None req-a9c353f8-5168-4d46-aeb6-c5fb61a6d17e 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] [instance: 656165e5-9250-4055-8194-45e769830100] Attaching volume 9af089d5-c71f-42a0-9f21-7b8437dc5bf6 to /dev/vdb#033[00m
Jan 29 12:22:22 np0005601226 nova_compute[239456]: 2026-01-29 17:22:22.819 239460 DEBUG os_brick.utils [None req-a9c353f8-5168-4d46-aeb6-c5fb61a6d17e 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 29 12:22:22 np0005601226 nova_compute[239456]: 2026-01-29 17:22:22.821 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:22:22 np0005601226 nova_compute[239456]: 2026-01-29 17:22:22.830 249612 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:22:22 np0005601226 nova_compute[239456]: 2026-01-29 17:22:22.830 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[a6257b3c-939d-418d-848c-f0c0d2dbd6e0]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:22:22 np0005601226 nova_compute[239456]: 2026-01-29 17:22:22.832 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:22:22 np0005601226 nova_compute[239456]: 2026-01-29 17:22:22.837 249612 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:22:22 np0005601226 nova_compute[239456]: 2026-01-29 17:22:22.837 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[f4c62755-d794-4ac4-b8cc-a7999836940c]: (4, ('InitiatorName=iqn.1994-05.com.redhat:29fa340538f', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:22:22 np0005601226 nova_compute[239456]: 2026-01-29 17:22:22.839 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:22:22 np0005601226 nova_compute[239456]: 2026-01-29 17:22:22.844 249612 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:22:22 np0005601226 nova_compute[239456]: 2026-01-29 17:22:22.844 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[652fd301-f692-4667-9261-a11fe3d4578e]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:22:22 np0005601226 nova_compute[239456]: 2026-01-29 17:22:22.846 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[f886b1b4-86df-4fa2-8814-a8ebf9679191]: (4, '3d58286e-1b14-486e-8cad-0bdb2d2969c4') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:22:22 np0005601226 nova_compute[239456]: 2026-01-29 17:22:22.846 239460 DEBUG oslo_concurrency.processutils [None req-a9c353f8-5168-4d46-aeb6-c5fb61a6d17e 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:22:22 np0005601226 nova_compute[239456]: 2026-01-29 17:22:22.859 239460 DEBUG oslo_concurrency.processutils [None req-a9c353f8-5168-4d46-aeb6-c5fb61a6d17e 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] CMD "nvme version" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:22:22 np0005601226 nova_compute[239456]: 2026-01-29 17:22:22.861 239460 DEBUG os_brick.initiator.connectors.lightos [None req-a9c353f8-5168-4d46-aeb6-c5fb61a6d17e 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 29 12:22:22 np0005601226 nova_compute[239456]: 2026-01-29 17:22:22.862 239460 DEBUG os_brick.initiator.connectors.lightos [None req-a9c353f8-5168-4d46-aeb6-c5fb61a6d17e 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 29 12:22:22 np0005601226 nova_compute[239456]: 2026-01-29 17:22:22.862 239460 DEBUG os_brick.initiator.connectors.lightos [None req-a9c353f8-5168-4d46-aeb6-c5fb61a6d17e 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 29 12:22:22 np0005601226 nova_compute[239456]: 2026-01-29 17:22:22.863 239460 DEBUG os_brick.utils [None req-a9c353f8-5168-4d46-aeb6-c5fb61a6d17e 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] <== get_connector_properties: return (42ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:29fa340538f', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '3d58286e-1b14-486e-8cad-0bdb2d2969c4', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 29 12:22:22 np0005601226 nova_compute[239456]: 2026-01-29 17:22:22.863 239460 DEBUG nova.virt.block_device [None req-a9c353f8-5168-4d46-aeb6-c5fb61a6d17e 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] [instance: 656165e5-9250-4055-8194-45e769830100] Updating existing volume attachment record: e713a993-09c8-47ab-a60b-fbd9e30a7bc7 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 29 12:22:23 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e254 do_prune osdmap full prune enabled
Jan 29 12:22:23 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e255 e255: 3 total, 3 up, 3 in
Jan 29 12:22:23 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e255: 3 total, 3 up, 3 in
Jan 29 12:22:23 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:22:23 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4097821522' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:22:23 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:22:23 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1228: 305 pgs: 305 active+clean; 167 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.6 MiB/s wr, 191 op/s
Jan 29 12:22:23 np0005601226 nova_compute[239456]: 2026-01-29 17:22:23.867 239460 DEBUG os_brick.encryptors [None req-a9c353f8-5168-4d46-aeb6-c5fb61a6d17e 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Using volume encryption metadata '{'encryption_key_id': '088c15eb-1ddc-481a-a349-7250a2a71e1a', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-9af089d5-c71f-42a0-9f21-7b8437dc5bf6', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '9af089d5-c71f-42a0-9f21-7b8437dc5bf6', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '656165e5-9250-4055-8194-45e769830100', 'attached_at': '', 'detached_at': '', 'volume_id': '9af089d5-c71f-42a0-9f21-7b8437dc5bf6', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Jan 29 12:22:23 np0005601226 nova_compute[239456]: 2026-01-29 17:22:23.872 239460 DEBUG barbicanclient.client [None req-a9c353f8-5168-4d46-aeb6-c5fb61a6d17e 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163#033[00m
Jan 29 12:22:23 np0005601226 nova_compute[239456]: 2026-01-29 17:22:23.900 239460 DEBUG barbicanclient.v1.secrets [None req-a9c353f8-5168-4d46-aeb6-c5fb61a6d17e 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/088c15eb-1ddc-481a-a349-7250a2a71e1a get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514#033[00m
Jan 29 12:22:23 np0005601226 nova_compute[239456]: 2026-01-29 17:22:23.900 239460 INFO barbicanclient.base [None req-a9c353f8-5168-4d46-aeb6-c5fb61a6d17e 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Calculated Secrets uuid ref: secrets/088c15eb-1ddc-481a-a349-7250a2a71e1a#033[00m
Jan 29 12:22:23 np0005601226 nova_compute[239456]: 2026-01-29 17:22:23.924 239460 DEBUG barbicanclient.client [None req-a9c353f8-5168-4d46-aeb6-c5fb61a6d17e 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:22:23 np0005601226 nova_compute[239456]: 2026-01-29 17:22:23.924 239460 INFO barbicanclient.base [None req-a9c353f8-5168-4d46-aeb6-c5fb61a6d17e 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Calculated Secrets uuid ref: secrets/088c15eb-1ddc-481a-a349-7250a2a71e1a#033[00m
Jan 29 12:22:23 np0005601226 nova_compute[239456]: 2026-01-29 17:22:23.945 239460 DEBUG barbicanclient.client [None req-a9c353f8-5168-4d46-aeb6-c5fb61a6d17e 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:22:23 np0005601226 nova_compute[239456]: 2026-01-29 17:22:23.946 239460 INFO barbicanclient.base [None req-a9c353f8-5168-4d46-aeb6-c5fb61a6d17e 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Calculated Secrets uuid ref: secrets/088c15eb-1ddc-481a-a349-7250a2a71e1a#033[00m
Jan 29 12:22:23 np0005601226 nova_compute[239456]: 2026-01-29 17:22:23.965 239460 DEBUG barbicanclient.client [None req-a9c353f8-5168-4d46-aeb6-c5fb61a6d17e 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:22:23 np0005601226 nova_compute[239456]: 2026-01-29 17:22:23.965 239460 INFO barbicanclient.base [None req-a9c353f8-5168-4d46-aeb6-c5fb61a6d17e 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Calculated Secrets uuid ref: secrets/088c15eb-1ddc-481a-a349-7250a2a71e1a#033[00m
Jan 29 12:22:23 np0005601226 nova_compute[239456]: 2026-01-29 17:22:23.987 239460 DEBUG barbicanclient.client [None req-a9c353f8-5168-4d46-aeb6-c5fb61a6d17e 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:22:23 np0005601226 nova_compute[239456]: 2026-01-29 17:22:23.988 239460 INFO barbicanclient.base [None req-a9c353f8-5168-4d46-aeb6-c5fb61a6d17e 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Calculated Secrets uuid ref: secrets/088c15eb-1ddc-481a-a349-7250a2a71e1a#033[00m
Jan 29 12:22:24 np0005601226 nova_compute[239456]: 2026-01-29 17:22:24.015 239460 DEBUG barbicanclient.client [None req-a9c353f8-5168-4d46-aeb6-c5fb61a6d17e 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:22:24 np0005601226 nova_compute[239456]: 2026-01-29 17:22:24.016 239460 INFO barbicanclient.base [None req-a9c353f8-5168-4d46-aeb6-c5fb61a6d17e 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Calculated Secrets uuid ref: secrets/088c15eb-1ddc-481a-a349-7250a2a71e1a#033[00m
Jan 29 12:22:24 np0005601226 nova_compute[239456]: 2026-01-29 17:22:24.050 239460 DEBUG barbicanclient.client [None req-a9c353f8-5168-4d46-aeb6-c5fb61a6d17e 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:22:24 np0005601226 nova_compute[239456]: 2026-01-29 17:22:24.051 239460 INFO barbicanclient.base [None req-a9c353f8-5168-4d46-aeb6-c5fb61a6d17e 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Calculated Secrets uuid ref: secrets/088c15eb-1ddc-481a-a349-7250a2a71e1a#033[00m
Jan 29 12:22:24 np0005601226 nova_compute[239456]: 2026-01-29 17:22:24.072 239460 DEBUG barbicanclient.client [None req-a9c353f8-5168-4d46-aeb6-c5fb61a6d17e 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:22:24 np0005601226 nova_compute[239456]: 2026-01-29 17:22:24.073 239460 INFO barbicanclient.base [None req-a9c353f8-5168-4d46-aeb6-c5fb61a6d17e 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Calculated Secrets uuid ref: secrets/088c15eb-1ddc-481a-a349-7250a2a71e1a#033[00m
Jan 29 12:22:24 np0005601226 nova_compute[239456]: 2026-01-29 17:22:24.095 239460 DEBUG barbicanclient.client [None req-a9c353f8-5168-4d46-aeb6-c5fb61a6d17e 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:22:24 np0005601226 nova_compute[239456]: 2026-01-29 17:22:24.096 239460 INFO barbicanclient.base [None req-a9c353f8-5168-4d46-aeb6-c5fb61a6d17e 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Calculated Secrets uuid ref: secrets/088c15eb-1ddc-481a-a349-7250a2a71e1a#033[00m
Jan 29 12:22:24 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e255 do_prune osdmap full prune enabled
Jan 29 12:22:24 np0005601226 nova_compute[239456]: 2026-01-29 17:22:24.117 239460 DEBUG barbicanclient.client [None req-a9c353f8-5168-4d46-aeb6-c5fb61a6d17e 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:22:24 np0005601226 nova_compute[239456]: 2026-01-29 17:22:24.118 239460 INFO barbicanclient.base [None req-a9c353f8-5168-4d46-aeb6-c5fb61a6d17e 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Calculated Secrets uuid ref: secrets/088c15eb-1ddc-481a-a349-7250a2a71e1a#033[00m
Jan 29 12:22:24 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e256 e256: 3 total, 3 up, 3 in
Jan 29 12:22:24 np0005601226 nova_compute[239456]: 2026-01-29 17:22:24.149 239460 DEBUG barbicanclient.client [None req-a9c353f8-5168-4d46-aeb6-c5fb61a6d17e 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:22:24 np0005601226 nova_compute[239456]: 2026-01-29 17:22:24.149 239460 INFO barbicanclient.base [None req-a9c353f8-5168-4d46-aeb6-c5fb61a6d17e 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Calculated Secrets uuid ref: secrets/088c15eb-1ddc-481a-a349-7250a2a71e1a#033[00m
Jan 29 12:22:24 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e256: 3 total, 3 up, 3 in
Jan 29 12:22:24 np0005601226 nova_compute[239456]: 2026-01-29 17:22:24.174 239460 DEBUG barbicanclient.client [None req-a9c353f8-5168-4d46-aeb6-c5fb61a6d17e 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:22:24 np0005601226 nova_compute[239456]: 2026-01-29 17:22:24.175 239460 INFO barbicanclient.base [None req-a9c353f8-5168-4d46-aeb6-c5fb61a6d17e 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Calculated Secrets uuid ref: secrets/088c15eb-1ddc-481a-a349-7250a2a71e1a#033[00m
Jan 29 12:22:24 np0005601226 nova_compute[239456]: 2026-01-29 17:22:24.193 239460 DEBUG barbicanclient.client [None req-a9c353f8-5168-4d46-aeb6-c5fb61a6d17e 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:22:24 np0005601226 nova_compute[239456]: 2026-01-29 17:22:24.194 239460 INFO barbicanclient.base [None req-a9c353f8-5168-4d46-aeb6-c5fb61a6d17e 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Calculated Secrets uuid ref: secrets/088c15eb-1ddc-481a-a349-7250a2a71e1a#033[00m
Jan 29 12:22:24 np0005601226 nova_compute[239456]: 2026-01-29 17:22:24.217 239460 DEBUG barbicanclient.client [None req-a9c353f8-5168-4d46-aeb6-c5fb61a6d17e 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:22:24 np0005601226 nova_compute[239456]: 2026-01-29 17:22:24.217 239460 INFO barbicanclient.base [None req-a9c353f8-5168-4d46-aeb6-c5fb61a6d17e 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Calculated Secrets uuid ref: secrets/088c15eb-1ddc-481a-a349-7250a2a71e1a#033[00m
Jan 29 12:22:24 np0005601226 nova_compute[239456]: 2026-01-29 17:22:24.234 239460 DEBUG barbicanclient.client [None req-a9c353f8-5168-4d46-aeb6-c5fb61a6d17e 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:22:24 np0005601226 nova_compute[239456]: 2026-01-29 17:22:24.234 239460 INFO barbicanclient.base [None req-a9c353f8-5168-4d46-aeb6-c5fb61a6d17e 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Calculated Secrets uuid ref: secrets/088c15eb-1ddc-481a-a349-7250a2a71e1a#033[00m
Jan 29 12:22:24 np0005601226 nova_compute[239456]: 2026-01-29 17:22:24.259 239460 DEBUG barbicanclient.client [None req-a9c353f8-5168-4d46-aeb6-c5fb61a6d17e 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:22:24 np0005601226 nova_compute[239456]: 2026-01-29 17:22:24.260 239460 DEBUG nova.virt.libvirt.host [None req-a9c353f8-5168-4d46-aeb6-c5fb61a6d17e 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Secret XML: <secret ephemeral="no" private="no">
Jan 29 12:22:24 np0005601226 nova_compute[239456]:  <usage type="volume">
Jan 29 12:22:24 np0005601226 nova_compute[239456]:    <volume>9af089d5-c71f-42a0-9f21-7b8437dc5bf6</volume>
Jan 29 12:22:24 np0005601226 nova_compute[239456]:  </usage>
Jan 29 12:22:24 np0005601226 nova_compute[239456]: </secret>
Jan 29 12:22:24 np0005601226 nova_compute[239456]: create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131#033[00m
Jan 29 12:22:24 np0005601226 nova_compute[239456]: 2026-01-29 17:22:24.292 239460 DEBUG nova.objects.instance [None req-a9c353f8-5168-4d46-aeb6-c5fb61a6d17e 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Lazy-loading 'flavor' on Instance uuid 656165e5-9250-4055-8194-45e769830100 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:22:24 np0005601226 nova_compute[239456]: 2026-01-29 17:22:24.320 239460 DEBUG nova.virt.libvirt.driver [None req-a9c353f8-5168-4d46-aeb6-c5fb61a6d17e 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] [instance: 656165e5-9250-4055-8194-45e769830100] Attempting to attach volume 9af089d5-c71f-42a0-9f21-7b8437dc5bf6 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Jan 29 12:22:24 np0005601226 nova_compute[239456]: 2026-01-29 17:22:24.322 239460 DEBUG nova.virt.libvirt.guest [None req-a9c353f8-5168-4d46-aeb6-c5fb61a6d17e 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] attach device xml: <disk type="network" device="disk">
Jan 29 12:22:24 np0005601226 nova_compute[239456]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 29 12:22:24 np0005601226 nova_compute[239456]:  <source protocol="rbd" name="volumes/volume-9af089d5-c71f-42a0-9f21-7b8437dc5bf6">
Jan 29 12:22:24 np0005601226 nova_compute[239456]:    <host name="192.168.122.100" port="6789"/>
Jan 29 12:22:24 np0005601226 nova_compute[239456]:  </source>
Jan 29 12:22:24 np0005601226 nova_compute[239456]:  <auth username="openstack">
Jan 29 12:22:24 np0005601226 nova_compute[239456]:    <secret type="ceph" uuid="cc5c72e3-31e0-58b9-8731-456117d38f4a"/>
Jan 29 12:22:24 np0005601226 nova_compute[239456]:  </auth>
Jan 29 12:22:24 np0005601226 nova_compute[239456]:  <target dev="vdb" bus="virtio"/>
Jan 29 12:22:24 np0005601226 nova_compute[239456]:  <serial>9af089d5-c71f-42a0-9f21-7b8437dc5bf6</serial>
Jan 29 12:22:24 np0005601226 nova_compute[239456]:  <encryption format="luks">
Jan 29 12:22:24 np0005601226 nova_compute[239456]:    <secret type="passphrase" uuid="d7be83b6-da1a-4eac-9d7a-17118a237843"/>
Jan 29 12:22:24 np0005601226 nova_compute[239456]:  </encryption>
Jan 29 12:22:24 np0005601226 nova_compute[239456]: </disk>
Jan 29 12:22:24 np0005601226 nova_compute[239456]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Jan 29 12:22:25 np0005601226 nova_compute[239456]: 2026-01-29 17:22:25.620 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:22:25 np0005601226 nova_compute[239456]: 2026-01-29 17:22:25.691 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:22:25 np0005601226 nova_compute[239456]: 2026-01-29 17:22:25.692 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:22:25 np0005601226 nova_compute[239456]: 2026-01-29 17:22:25.693 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:22:25 np0005601226 nova_compute[239456]: 2026-01-29 17:22:25.693 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 29 12:22:25 np0005601226 nova_compute[239456]: 2026-01-29 17:22:25.695 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:22:25 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1230: 305 pgs: 305 active+clean; 167 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 3.2 MiB/s wr, 208 op/s
Jan 29 12:22:26 np0005601226 nova_compute[239456]: 2026-01-29 17:22:26.018 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:22:26 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e256 do_prune osdmap full prune enabled
Jan 29 12:22:26 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:22:26 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/866193429' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:22:26 np0005601226 nova_compute[239456]: 2026-01-29 17:22:26.295 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.600s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:22:26 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e257 e257: 3 total, 3 up, 3 in
Jan 29 12:22:26 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e257: 3 total, 3 up, 3 in
Jan 29 12:22:26 np0005601226 ovn_controller[145556]: 2026-01-29T17:22:26Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0f:2f:be 10.100.0.7
Jan 29 12:22:26 np0005601226 ovn_controller[145556]: 2026-01-29T17:22:26Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0f:2f:be 10.100.0.7
Jan 29 12:22:26 np0005601226 nova_compute[239456]: 2026-01-29 17:22:26.887 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:22:27 np0005601226 nova_compute[239456]: 2026-01-29 17:22:27.022 239460 DEBUG nova.virt.libvirt.driver [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 29 12:22:27 np0005601226 nova_compute[239456]: 2026-01-29 17:22:27.023 239460 DEBUG nova.virt.libvirt.driver [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 29 12:22:27 np0005601226 nova_compute[239456]: 2026-01-29 17:22:27.023 239460 DEBUG nova.virt.libvirt.driver [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 29 12:22:27 np0005601226 nova_compute[239456]: 2026-01-29 17:22:27.026 239460 DEBUG nova.virt.libvirt.driver [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 29 12:22:27 np0005601226 nova_compute[239456]: 2026-01-29 17:22:27.026 239460 DEBUG nova.virt.libvirt.driver [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 29 12:22:27 np0005601226 nova_compute[239456]: 2026-01-29 17:22:27.050 239460 DEBUG nova.virt.libvirt.driver [None req-a9c353f8-5168-4d46-aeb6-c5fb61a6d17e 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:22:27 np0005601226 nova_compute[239456]: 2026-01-29 17:22:27.051 239460 DEBUG nova.virt.libvirt.driver [None req-a9c353f8-5168-4d46-aeb6-c5fb61a6d17e 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:22:27 np0005601226 nova_compute[239456]: 2026-01-29 17:22:27.051 239460 DEBUG nova.virt.libvirt.driver [None req-a9c353f8-5168-4d46-aeb6-c5fb61a6d17e 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:22:27 np0005601226 nova_compute[239456]: 2026-01-29 17:22:27.051 239460 DEBUG nova.virt.libvirt.driver [None req-a9c353f8-5168-4d46-aeb6-c5fb61a6d17e 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] No VIF found with MAC fa:16:3e:ae:e0:37, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 29 12:22:27 np0005601226 nova_compute[239456]: 2026-01-29 17:22:27.173 239460 WARNING nova.virt.libvirt.driver [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 29 12:22:27 np0005601226 nova_compute[239456]: 2026-01-29 17:22:27.174 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4248MB free_disk=59.92175528779626GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 29 12:22:27 np0005601226 nova_compute[239456]: 2026-01-29 17:22:27.174 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:22:27 np0005601226 nova_compute[239456]: 2026-01-29 17:22:27.175 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:22:27 np0005601226 nova_compute[239456]: 2026-01-29 17:22:27.219 239460 DEBUG oslo_concurrency.lockutils [None req-a9c353f8-5168-4d46-aeb6-c5fb61a6d17e 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Lock "656165e5-9250-4055-8194-45e769830100" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 4.535s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:22:27 np0005601226 nova_compute[239456]: 2026-01-29 17:22:27.247 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Instance 656165e5-9250-4055-8194-45e769830100 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 29 12:22:27 np0005601226 nova_compute[239456]: 2026-01-29 17:22:27.248 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Instance ec6929dc-4a2e-4a7f-9c40-413a310539c6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 29 12:22:27 np0005601226 nova_compute[239456]: 2026-01-29 17:22:27.248 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 29 12:22:27 np0005601226 nova_compute[239456]: 2026-01-29 17:22:27.248 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 29 12:22:27 np0005601226 nova_compute[239456]: 2026-01-29 17:22:27.301 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:22:27 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1232: 305 pgs: 305 active+clean; 167 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 28 KiB/s wr, 26 op/s
Jan 29 12:22:27 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:22:27 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/552443759' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:22:27 np0005601226 nova_compute[239456]: 2026-01-29 17:22:27.811 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:22:27 np0005601226 nova_compute[239456]: 2026-01-29 17:22:27.816 239460 DEBUG nova.compute.provider_tree [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:22:27 np0005601226 nova_compute[239456]: 2026-01-29 17:22:27.835 239460 DEBUG nova.scheduler.client.report [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:22:27 np0005601226 nova_compute[239456]: 2026-01-29 17:22:27.862 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 29 12:22:27 np0005601226 nova_compute[239456]: 2026-01-29 17:22:27.862 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.687s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:22:28 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e257 do_prune osdmap full prune enabled
Jan 29 12:22:28 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e258 e258: 3 total, 3 up, 3 in
Jan 29 12:22:28 np0005601226 nova_compute[239456]: 2026-01-29 17:22:28.568 239460 DEBUG oslo_concurrency.lockutils [None req-88d6783b-4f07-42c9-ba51-0392bbfc9e2c 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Acquiring lock "656165e5-9250-4055-8194-45e769830100" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:22:28 np0005601226 nova_compute[239456]: 2026-01-29 17:22:28.568 239460 DEBUG oslo_concurrency.lockutils [None req-88d6783b-4f07-42c9-ba51-0392bbfc9e2c 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Lock "656165e5-9250-4055-8194-45e769830100" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:22:28 np0005601226 nova_compute[239456]: 2026-01-29 17:22:28.581 239460 INFO nova.compute.manager [None req-88d6783b-4f07-42c9-ba51-0392bbfc9e2c 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] [instance: 656165e5-9250-4055-8194-45e769830100] Detaching volume 9af089d5-c71f-42a0-9f21-7b8437dc5bf6#033[00m
Jan 29 12:22:28 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e258: 3 total, 3 up, 3 in
Jan 29 12:22:28 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:22:28 np0005601226 nova_compute[239456]: 2026-01-29 17:22:28.678 239460 INFO nova.virt.block_device [None req-88d6783b-4f07-42c9-ba51-0392bbfc9e2c 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] [instance: 656165e5-9250-4055-8194-45e769830100] Attempting to driver detach volume 9af089d5-c71f-42a0-9f21-7b8437dc5bf6 from mountpoint /dev/vdb#033[00m
Jan 29 12:22:28 np0005601226 nova_compute[239456]: 2026-01-29 17:22:28.767 239460 DEBUG os_brick.encryptors [None req-88d6783b-4f07-42c9-ba51-0392bbfc9e2c 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Using volume encryption metadata '{'encryption_key_id': '088c15eb-1ddc-481a-a349-7250a2a71e1a', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-9af089d5-c71f-42a0-9f21-7b8437dc5bf6', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '9af089d5-c71f-42a0-9f21-7b8437dc5bf6', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '656165e5-9250-4055-8194-45e769830100', 'attached_at': '', 'detached_at': '', 'volume_id': '9af089d5-c71f-42a0-9f21-7b8437dc5bf6', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Jan 29 12:22:28 np0005601226 nova_compute[239456]: 2026-01-29 17:22:28.775 239460 DEBUG nova.virt.libvirt.driver [None req-88d6783b-4f07-42c9-ba51-0392bbfc9e2c 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Attempting to detach device vdb from instance 656165e5-9250-4055-8194-45e769830100 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Jan 29 12:22:28 np0005601226 nova_compute[239456]: 2026-01-29 17:22:28.776 239460 DEBUG nova.virt.libvirt.guest [None req-88d6783b-4f07-42c9-ba51-0392bbfc9e2c 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] detach device xml: <disk type="network" device="disk">
Jan 29 12:22:28 np0005601226 nova_compute[239456]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 29 12:22:28 np0005601226 nova_compute[239456]:  <source protocol="rbd" name="volumes/volume-9af089d5-c71f-42a0-9f21-7b8437dc5bf6">
Jan 29 12:22:28 np0005601226 nova_compute[239456]:    <host name="192.168.122.100" port="6789"/>
Jan 29 12:22:28 np0005601226 nova_compute[239456]:  </source>
Jan 29 12:22:28 np0005601226 nova_compute[239456]:  <target dev="vdb" bus="virtio"/>
Jan 29 12:22:28 np0005601226 nova_compute[239456]:  <serial>9af089d5-c71f-42a0-9f21-7b8437dc5bf6</serial>
Jan 29 12:22:28 np0005601226 nova_compute[239456]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 29 12:22:28 np0005601226 nova_compute[239456]:  <encryption format="luks">
Jan 29 12:22:28 np0005601226 nova_compute[239456]:    <secret type="passphrase" uuid="d7be83b6-da1a-4eac-9d7a-17118a237843"/>
Jan 29 12:22:28 np0005601226 nova_compute[239456]:  </encryption>
Jan 29 12:22:28 np0005601226 nova_compute[239456]: </disk>
Jan 29 12:22:28 np0005601226 nova_compute[239456]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 29 12:22:28 np0005601226 nova_compute[239456]: 2026-01-29 17:22:28.835 239460 INFO nova.virt.libvirt.driver [None req-88d6783b-4f07-42c9-ba51-0392bbfc9e2c 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Successfully detached device vdb from instance 656165e5-9250-4055-8194-45e769830100 from the persistent domain config.#033[00m
Jan 29 12:22:28 np0005601226 nova_compute[239456]: 2026-01-29 17:22:28.836 239460 DEBUG nova.virt.libvirt.driver [None req-88d6783b-4f07-42c9-ba51-0392bbfc9e2c 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 656165e5-9250-4055-8194-45e769830100 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Jan 29 12:22:28 np0005601226 nova_compute[239456]: 2026-01-29 17:22:28.836 239460 DEBUG nova.virt.libvirt.guest [None req-88d6783b-4f07-42c9-ba51-0392bbfc9e2c 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] detach device xml: <disk type="network" device="disk">
Jan 29 12:22:28 np0005601226 nova_compute[239456]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 29 12:22:28 np0005601226 nova_compute[239456]:  <source protocol="rbd" name="volumes/volume-9af089d5-c71f-42a0-9f21-7b8437dc5bf6">
Jan 29 12:22:28 np0005601226 nova_compute[239456]:    <host name="192.168.122.100" port="6789"/>
Jan 29 12:22:28 np0005601226 nova_compute[239456]:  </source>
Jan 29 12:22:28 np0005601226 nova_compute[239456]:  <target dev="vdb" bus="virtio"/>
Jan 29 12:22:28 np0005601226 nova_compute[239456]:  <serial>9af089d5-c71f-42a0-9f21-7b8437dc5bf6</serial>
Jan 29 12:22:28 np0005601226 nova_compute[239456]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 29 12:22:28 np0005601226 nova_compute[239456]:  <encryption format="luks">
Jan 29 12:22:28 np0005601226 nova_compute[239456]:    <secret type="passphrase" uuid="d7be83b6-da1a-4eac-9d7a-17118a237843"/>
Jan 29 12:22:28 np0005601226 nova_compute[239456]:  </encryption>
Jan 29 12:22:28 np0005601226 nova_compute[239456]: </disk>
Jan 29 12:22:28 np0005601226 nova_compute[239456]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 29 12:22:28 np0005601226 nova_compute[239456]: 2026-01-29 17:22:28.965 239460 DEBUG nova.virt.libvirt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Received event <DeviceRemovedEvent: 1769707348.9651754, 656165e5-9250-4055-8194-45e769830100 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Jan 29 12:22:28 np0005601226 nova_compute[239456]: 2026-01-29 17:22:28.966 239460 DEBUG nova.virt.libvirt.driver [None req-88d6783b-4f07-42c9-ba51-0392bbfc9e2c 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 656165e5-9250-4055-8194-45e769830100 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Jan 29 12:22:28 np0005601226 nova_compute[239456]: 2026-01-29 17:22:28.968 239460 INFO nova.virt.libvirt.driver [None req-88d6783b-4f07-42c9-ba51-0392bbfc9e2c 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Successfully detached device vdb from instance 656165e5-9250-4055-8194-45e769830100 from the live domain config.#033[00m
Jan 29 12:22:29 np0005601226 nova_compute[239456]: 2026-01-29 17:22:29.108 239460 DEBUG nova.objects.instance [None req-88d6783b-4f07-42c9-ba51-0392bbfc9e2c 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Lazy-loading 'flavor' on Instance uuid 656165e5-9250-4055-8194-45e769830100 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:22:29 np0005601226 nova_compute[239456]: 2026-01-29 17:22:29.147 239460 DEBUG oslo_concurrency.lockutils [None req-88d6783b-4f07-42c9-ba51-0392bbfc9e2c 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Lock "656165e5-9250-4055-8194-45e769830100" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.579s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:22:29 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:22:29 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3431225165' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:22:29 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:22:29 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3431225165' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:22:29 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1234: 305 pgs: 305 active+clean; 200 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 649 KiB/s rd, 4.3 MiB/s wr, 222 op/s
Jan 29 12:22:29 np0005601226 nova_compute[239456]: 2026-01-29 17:22:29.846 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:22:29 np0005601226 nova_compute[239456]: 2026-01-29 17:22:29.847 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:22:29 np0005601226 nova_compute[239456]: 2026-01-29 17:22:29.847 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 29 12:22:29 np0005601226 nova_compute[239456]: 2026-01-29 17:22:29.847 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 29 12:22:30 np0005601226 nova_compute[239456]: 2026-01-29 17:22:30.278 239460 DEBUG oslo_concurrency.lockutils [None req-d0701816-6bae-4fc2-91ba-3dc194d04f10 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Acquiring lock "656165e5-9250-4055-8194-45e769830100" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:22:30 np0005601226 nova_compute[239456]: 2026-01-29 17:22:30.279 239460 DEBUG oslo_concurrency.lockutils [None req-d0701816-6bae-4fc2-91ba-3dc194d04f10 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Lock "656165e5-9250-4055-8194-45e769830100" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:22:30 np0005601226 nova_compute[239456]: 2026-01-29 17:22:30.279 239460 DEBUG oslo_concurrency.lockutils [None req-d0701816-6bae-4fc2-91ba-3dc194d04f10 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Acquiring lock "656165e5-9250-4055-8194-45e769830100-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:22:30 np0005601226 nova_compute[239456]: 2026-01-29 17:22:30.279 239460 DEBUG oslo_concurrency.lockutils [None req-d0701816-6bae-4fc2-91ba-3dc194d04f10 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Lock "656165e5-9250-4055-8194-45e769830100-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:22:30 np0005601226 nova_compute[239456]: 2026-01-29 17:22:30.279 239460 DEBUG oslo_concurrency.lockutils [None req-d0701816-6bae-4fc2-91ba-3dc194d04f10 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Lock "656165e5-9250-4055-8194-45e769830100-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:22:30 np0005601226 nova_compute[239456]: 2026-01-29 17:22:30.280 239460 INFO nova.compute.manager [None req-d0701816-6bae-4fc2-91ba-3dc194d04f10 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] [instance: 656165e5-9250-4055-8194-45e769830100] Terminating instance#033[00m
Jan 29 12:22:30 np0005601226 nova_compute[239456]: 2026-01-29 17:22:30.281 239460 DEBUG nova.compute.manager [None req-d0701816-6bae-4fc2-91ba-3dc194d04f10 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] [instance: 656165e5-9250-4055-8194-45e769830100] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 29 12:22:30 np0005601226 nova_compute[239456]: 2026-01-29 17:22:30.299 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquiring lock "refresh_cache-656165e5-9250-4055-8194-45e769830100" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:22:30 np0005601226 nova_compute[239456]: 2026-01-29 17:22:30.299 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquired lock "refresh_cache-656165e5-9250-4055-8194-45e769830100" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:22:30 np0005601226 nova_compute[239456]: 2026-01-29 17:22:30.299 239460 DEBUG nova.network.neutron [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] [instance: 656165e5-9250-4055-8194-45e769830100] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 29 12:22:30 np0005601226 nova_compute[239456]: 2026-01-29 17:22:30.300 239460 DEBUG nova.objects.instance [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 656165e5-9250-4055-8194-45e769830100 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:22:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:22:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3095714672' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:22:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:22:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3095714672' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:22:30 np0005601226 kernel: tapf793e3fd-9b (unregistering): left promiscuous mode
Jan 29 12:22:30 np0005601226 NetworkManager[49020]: <info>  [1769707350.7360] device (tapf793e3fd-9b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 29 12:22:30 np0005601226 ovn_controller[145556]: 2026-01-29T17:22:30Z|00091|binding|INFO|Releasing lport f793e3fd-9b6a-4e49-af85-bae055fa6d70 from this chassis (sb_readonly=0)
Jan 29 12:22:30 np0005601226 ovn_controller[145556]: 2026-01-29T17:22:30Z|00092|binding|INFO|Setting lport f793e3fd-9b6a-4e49-af85-bae055fa6d70 down in Southbound
Jan 29 12:22:30 np0005601226 nova_compute[239456]: 2026-01-29 17:22:30.779 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:22:30 np0005601226 ovn_controller[145556]: 2026-01-29T17:22:30Z|00093|binding|INFO|Removing iface tapf793e3fd-9b ovn-installed in OVS
Jan 29 12:22:30 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:22:30.785 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ae:e0:37 10.100.0.5'], port_security=['fa:16:3e:ae:e0:37 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '656165e5-9250-4055-8194-45e769830100', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ee3f1e72-8c27-4871-b363-434386faae30', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6e0cefcde775417f910c6b8d8982c845', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a4dd114a-0224-4866-9a1e-851c6913de54', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.177'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=13543b3a-ab20-4b68-b24c-0987c63c7970, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], logical_port=f793e3fd-9b6a-4e49-af85-bae055fa6d70) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:22:30 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:22:30.787 155625 INFO neutron.agent.ovn.metadata.agent [-] Port f793e3fd-9b6a-4e49-af85-bae055fa6d70 in datapath ee3f1e72-8c27-4871-b363-434386faae30 unbound from our chassis#033[00m
Jan 29 12:22:30 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:22:30.789 155625 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ee3f1e72-8c27-4871-b363-434386faae30, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 29 12:22:30 np0005601226 nova_compute[239456]: 2026-01-29 17:22:30.789 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:22:30 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:22:30.790 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[3aae1e2f-edf6-46ad-af4e-740930708c59]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:22:30 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:22:30.790 155625 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ee3f1e72-8c27-4871-b363-434386faae30 namespace which is not needed anymore#033[00m
Jan 29 12:22:30 np0005601226 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Deactivated successfully.
Jan 29 12:22:30 np0005601226 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Consumed 14.923s CPU time.
Jan 29 12:22:30 np0005601226 systemd-machined[207561]: Machine qemu-7-instance-00000007 terminated.
Jan 29 12:22:30 np0005601226 nova_compute[239456]: 2026-01-29 17:22:30.898 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:22:30 np0005601226 nova_compute[239456]: 2026-01-29 17:22:30.902 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:22:30 np0005601226 nova_compute[239456]: 2026-01-29 17:22:30.914 239460 INFO nova.virt.libvirt.driver [-] [instance: 656165e5-9250-4055-8194-45e769830100] Instance destroyed successfully.#033[00m
Jan 29 12:22:30 np0005601226 nova_compute[239456]: 2026-01-29 17:22:30.915 239460 DEBUG nova.objects.instance [None req-d0701816-6bae-4fc2-91ba-3dc194d04f10 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Lazy-loading 'resources' on Instance uuid 656165e5-9250-4055-8194-45e769830100 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:22:30 np0005601226 nova_compute[239456]: 2026-01-29 17:22:30.927 239460 DEBUG nova.virt.libvirt.vif [None req-d0701816-6bae-4fc2-91ba-3dc194d04f10 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-29T17:21:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-1351308516',display_name='tempest-TestEncryptedCinderVolumes-server-1351308516',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-1351308516',id=7,image_ref='71879218-5462-43bb-aba6-6319695b24fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPKlBXk1QwMphZcD06R+9MU50NB47/oBF0AqKb9wOktQB9Eg8YEK5V6F73w8pFIVMo8mtRPe024h67r7d8H4sUQbGBcrztjwARD6YyUSZK3JSpktNEbwcEv2v/40+5lZUg==',key_name='tempest-keypair-1140030069',keypairs=<?>,launch_index=0,launched_at=2026-01-29T17:22:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6e0cefcde775417f910c6b8d8982c845',ramdisk_id='',reservation_id='r-v5ugy9g0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='71879218-5462-43bb-aba6-6319695b24fd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestEncryptedCinderVolumes-1346500371',owner_user_name='tempest-TestEncryptedCinderVolumes-1346500371-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-29T17:22:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='74a4d39ed5f246a285b523d04bd13f4f',uuid=656165e5-9250-4055-8194-45e769830100,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f793e3fd-9b6a-4e49-af85-bae055fa6d70", "address": "fa:16:3e:ae:e0:37", "network": {"id": "ee3f1e72-8c27-4871-b363-434386faae30", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2027070641-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e0cefcde775417f910c6b8d8982c845", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf793e3fd-9b", "ovs_interfaceid": "f793e3fd-9b6a-4e49-af85-bae055fa6d70", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 29 12:22:30 np0005601226 nova_compute[239456]: 2026-01-29 17:22:30.927 239460 DEBUG nova.network.os_vif_util [None req-d0701816-6bae-4fc2-91ba-3dc194d04f10 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Converting VIF {"id": "f793e3fd-9b6a-4e49-af85-bae055fa6d70", "address": "fa:16:3e:ae:e0:37", "network": {"id": "ee3f1e72-8c27-4871-b363-434386faae30", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2027070641-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e0cefcde775417f910c6b8d8982c845", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf793e3fd-9b", "ovs_interfaceid": "f793e3fd-9b6a-4e49-af85-bae055fa6d70", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 29 12:22:30 np0005601226 nova_compute[239456]: 2026-01-29 17:22:30.928 239460 DEBUG nova.network.os_vif_util [None req-d0701816-6bae-4fc2-91ba-3dc194d04f10 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ae:e0:37,bridge_name='br-int',has_traffic_filtering=True,id=f793e3fd-9b6a-4e49-af85-bae055fa6d70,network=Network(ee3f1e72-8c27-4871-b363-434386faae30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf793e3fd-9b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 29 12:22:30 np0005601226 nova_compute[239456]: 2026-01-29 17:22:30.928 239460 DEBUG os_vif [None req-d0701816-6bae-4fc2-91ba-3dc194d04f10 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ae:e0:37,bridge_name='br-int',has_traffic_filtering=True,id=f793e3fd-9b6a-4e49-af85-bae055fa6d70,network=Network(ee3f1e72-8c27-4871-b363-434386faae30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf793e3fd-9b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 29 12:22:30 np0005601226 nova_compute[239456]: 2026-01-29 17:22:30.930 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:22:30 np0005601226 nova_compute[239456]: 2026-01-29 17:22:30.930 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf793e3fd-9b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:22:30 np0005601226 nova_compute[239456]: 2026-01-29 17:22:30.931 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:22:30 np0005601226 nova_compute[239456]: 2026-01-29 17:22:30.933 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:22:30 np0005601226 nova_compute[239456]: 2026-01-29 17:22:30.935 239460 INFO os_vif [None req-d0701816-6bae-4fc2-91ba-3dc194d04f10 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ae:e0:37,bridge_name='br-int',has_traffic_filtering=True,id=f793e3fd-9b6a-4e49-af85-bae055fa6d70,network=Network(ee3f1e72-8c27-4871-b363-434386faae30),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf793e3fd-9b')#033[00m
Jan 29 12:22:31 np0005601226 nova_compute[239456]: 2026-01-29 17:22:31.020 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:22:31 np0005601226 neutron-haproxy-ovnmeta-ee3f1e72-8c27-4871-b363-434386faae30[254761]: [NOTICE]   (254765) : haproxy version is 2.8.14-c23fe91
Jan 29 12:22:31 np0005601226 neutron-haproxy-ovnmeta-ee3f1e72-8c27-4871-b363-434386faae30[254761]: [NOTICE]   (254765) : path to executable is /usr/sbin/haproxy
Jan 29 12:22:31 np0005601226 neutron-haproxy-ovnmeta-ee3f1e72-8c27-4871-b363-434386faae30[254761]: [WARNING]  (254765) : Exiting Master process...
Jan 29 12:22:31 np0005601226 neutron-haproxy-ovnmeta-ee3f1e72-8c27-4871-b363-434386faae30[254761]: [ALERT]    (254765) : Current worker (254767) exited with code 143 (Terminated)
Jan 29 12:22:31 np0005601226 neutron-haproxy-ovnmeta-ee3f1e72-8c27-4871-b363-434386faae30[254761]: [WARNING]  (254765) : All workers exited. Exiting... (0)
Jan 29 12:22:31 np0005601226 systemd[1]: libpod-f93f2a1c40ecf48548c2616c5ec3aa558698e7e9a98ff7fbee25f1b98a768a5a.scope: Deactivated successfully.
Jan 29 12:22:31 np0005601226 podman[255979]: 2026-01-29 17:22:31.045223639 +0000 UTC m=+0.192506922 container died f93f2a1c40ecf48548c2616c5ec3aa558698e7e9a98ff7fbee25f1b98a768a5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee3f1e72-8c27-4871-b363-434386faae30, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 29 12:22:31 np0005601226 nova_compute[239456]: 2026-01-29 17:22:31.115 239460 DEBUG nova.compute.manager [req-4e6d829f-f2ac-42e3-b275-4ba80aea3180 req-a9178fdc-47aa-4512-89af-3be43b04a5cc 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 656165e5-9250-4055-8194-45e769830100] Received event network-vif-unplugged-f793e3fd-9b6a-4e49-af85-bae055fa6d70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:22:31 np0005601226 nova_compute[239456]: 2026-01-29 17:22:31.115 239460 DEBUG oslo_concurrency.lockutils [req-4e6d829f-f2ac-42e3-b275-4ba80aea3180 req-a9178fdc-47aa-4512-89af-3be43b04a5cc 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "656165e5-9250-4055-8194-45e769830100-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:22:31 np0005601226 nova_compute[239456]: 2026-01-29 17:22:31.115 239460 DEBUG oslo_concurrency.lockutils [req-4e6d829f-f2ac-42e3-b275-4ba80aea3180 req-a9178fdc-47aa-4512-89af-3be43b04a5cc 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "656165e5-9250-4055-8194-45e769830100-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:22:31 np0005601226 nova_compute[239456]: 2026-01-29 17:22:31.115 239460 DEBUG oslo_concurrency.lockutils [req-4e6d829f-f2ac-42e3-b275-4ba80aea3180 req-a9178fdc-47aa-4512-89af-3be43b04a5cc 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "656165e5-9250-4055-8194-45e769830100-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:22:31 np0005601226 nova_compute[239456]: 2026-01-29 17:22:31.115 239460 DEBUG nova.compute.manager [req-4e6d829f-f2ac-42e3-b275-4ba80aea3180 req-a9178fdc-47aa-4512-89af-3be43b04a5cc 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 656165e5-9250-4055-8194-45e769830100] No waiting events found dispatching network-vif-unplugged-f793e3fd-9b6a-4e49-af85-bae055fa6d70 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 29 12:22:31 np0005601226 nova_compute[239456]: 2026-01-29 17:22:31.115 239460 DEBUG nova.compute.manager [req-4e6d829f-f2ac-42e3-b275-4ba80aea3180 req-a9178fdc-47aa-4512-89af-3be43b04a5cc 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 656165e5-9250-4055-8194-45e769830100] Received event network-vif-unplugged-f793e3fd-9b6a-4e49-af85-bae055fa6d70 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 29 12:22:31 np0005601226 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f93f2a1c40ecf48548c2616c5ec3aa558698e7e9a98ff7fbee25f1b98a768a5a-userdata-shm.mount: Deactivated successfully.
Jan 29 12:22:31 np0005601226 systemd[1]: var-lib-containers-storage-overlay-8985c254510df92a958a60239de6cd3d7920bc0099ed369d8fb83a5419c3ae59-merged.mount: Deactivated successfully.
Jan 29 12:22:31 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1235: 305 pgs: 305 active+clean; 200 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 506 KiB/s rd, 3.4 MiB/s wr, 167 op/s
Jan 29 12:22:31 np0005601226 nova_compute[239456]: 2026-01-29 17:22:31.806 239460 DEBUG nova.network.neutron [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] [instance: 656165e5-9250-4055-8194-45e769830100] Updating instance_info_cache with network_info: [{"id": "f793e3fd-9b6a-4e49-af85-bae055fa6d70", "address": "fa:16:3e:ae:e0:37", "network": {"id": "ee3f1e72-8c27-4871-b363-434386faae30", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2027070641-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e0cefcde775417f910c6b8d8982c845", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf793e3fd-9b", "ovs_interfaceid": "f793e3fd-9b6a-4e49-af85-bae055fa6d70", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:22:31 np0005601226 nova_compute[239456]: 2026-01-29 17:22:31.826 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Releasing lock "refresh_cache-656165e5-9250-4055-8194-45e769830100" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:22:31 np0005601226 nova_compute[239456]: 2026-01-29 17:22:31.826 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] [instance: 656165e5-9250-4055-8194-45e769830100] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 29 12:22:31 np0005601226 nova_compute[239456]: 2026-01-29 17:22:31.826 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:22:31 np0005601226 nova_compute[239456]: 2026-01-29 17:22:31.827 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:22:31 np0005601226 nova_compute[239456]: 2026-01-29 17:22:31.828 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:22:31 np0005601226 nova_compute[239456]: 2026-01-29 17:22:31.828 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:22:31 np0005601226 nova_compute[239456]: 2026-01-29 17:22:31.828 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 29 12:22:31 np0005601226 podman[255979]: 2026-01-29 17:22:31.957618775 +0000 UTC m=+1.104902058 container cleanup f93f2a1c40ecf48548c2616c5ec3aa558698e7e9a98ff7fbee25f1b98a768a5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee3f1e72-8c27-4871-b363-434386faae30, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 29 12:22:31 np0005601226 systemd[1]: libpod-conmon-f93f2a1c40ecf48548c2616c5ec3aa558698e7e9a98ff7fbee25f1b98a768a5a.scope: Deactivated successfully.
Jan 29 12:22:32 np0005601226 podman[256038]: 2026-01-29 17:22:32.590993424 +0000 UTC m=+0.619380500 container remove f93f2a1c40ecf48548c2616c5ec3aa558698e7e9a98ff7fbee25f1b98a768a5a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee3f1e72-8c27-4871-b363-434386faae30, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 29 12:22:32 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:22:32.595 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[0686462d-86a0-4772-b874-ec7f3da80a27]: (4, ('Thu Jan 29 05:22:30 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-ee3f1e72-8c27-4871-b363-434386faae30 (f93f2a1c40ecf48548c2616c5ec3aa558698e7e9a98ff7fbee25f1b98a768a5a)\nf93f2a1c40ecf48548c2616c5ec3aa558698e7e9a98ff7fbee25f1b98a768a5a\nThu Jan 29 05:22:31 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-ee3f1e72-8c27-4871-b363-434386faae30 (f93f2a1c40ecf48548c2616c5ec3aa558698e7e9a98ff7fbee25f1b98a768a5a)\nf93f2a1c40ecf48548c2616c5ec3aa558698e7e9a98ff7fbee25f1b98a768a5a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:22:32 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:22:32.596 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[f53557ae-cd23-4b38-9725-8ddfe74a7978]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:22:32 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:22:32.597 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee3f1e72-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:22:32 np0005601226 nova_compute[239456]: 2026-01-29 17:22:32.598 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:22:32 np0005601226 kernel: tapee3f1e72-80: left promiscuous mode
Jan 29 12:22:32 np0005601226 nova_compute[239456]: 2026-01-29 17:22:32.604 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:22:32 np0005601226 nova_compute[239456]: 2026-01-29 17:22:32.604 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 29 12:22:32 np0005601226 nova_compute[239456]: 2026-01-29 17:22:32.605 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:22:32 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:22:32.607 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[91b70bec-eab1-4be3-a3c9-2e75f18d9cda]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:22:32 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:22:32.619 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[df91c0a1-d2b5-478a-98a0-50a143c74d43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:22:32 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:22:32.620 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[80069056-6289-43c0-998f-f3504cb068b4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:22:32 np0005601226 nova_compute[239456]: 2026-01-29 17:22:32.622 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 29 12:22:32 np0005601226 nova_compute[239456]: 2026-01-29 17:22:32.622 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:22:32 np0005601226 nova_compute[239456]: 2026-01-29 17:22:32.622 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 29 12:22:32 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:22:32.631 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[f6a0aadd-9857-4bf0-b757-f0c6287a9f98]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 467700, 'reachable_time': 38091, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256054, 'error': None, 'target': 'ovnmeta-ee3f1e72-8c27-4871-b363-434386faae30', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:22:32 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:22:32.632 156164 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ee3f1e72-8c27-4871-b363-434386faae30 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 29 12:22:32 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:22:32.633 156164 DEBUG oslo.privsep.daemon [-] privsep: reply[01ecdd30-9fa7-4a20-bf8c-bf6fdb3d6da9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:22:32 np0005601226 systemd[1]: run-netns-ovnmeta\x2dee3f1e72\x2d8c27\x2d4871\x2db363\x2d434386faae30.mount: Deactivated successfully.
Jan 29 12:22:32 np0005601226 nova_compute[239456]: 2026-01-29 17:22:32.771 239460 DEBUG oslo_concurrency.lockutils [None req-0f0a0e98-4fa1-483a-89f1-38ba90c7a97e d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Acquiring lock "ec6929dc-4a2e-4a7f-9c40-413a310539c6" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:22:32 np0005601226 nova_compute[239456]: 2026-01-29 17:22:32.771 239460 DEBUG oslo_concurrency.lockutils [None req-0f0a0e98-4fa1-483a-89f1-38ba90c7a97e d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Lock "ec6929dc-4a2e-4a7f-9c40-413a310539c6" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:22:32 np0005601226 nova_compute[239456]: 2026-01-29 17:22:32.784 239460 DEBUG nova.objects.instance [None req-0f0a0e98-4fa1-483a-89f1-38ba90c7a97e d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Lazy-loading 'flavor' on Instance uuid ec6929dc-4a2e-4a7f-9c40-413a310539c6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:22:32 np0005601226 nova_compute[239456]: 2026-01-29 17:22:32.807 239460 INFO nova.virt.libvirt.driver [None req-0f0a0e98-4fa1-483a-89f1-38ba90c7a97e d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] Ignoring supplied device name: /dev/vdb#033[00m
Jan 29 12:22:32 np0005601226 nova_compute[239456]: 2026-01-29 17:22:32.829 239460 DEBUG oslo_concurrency.lockutils [None req-0f0a0e98-4fa1-483a-89f1-38ba90c7a97e d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Lock "ec6929dc-4a2e-4a7f-9c40-413a310539c6" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.058s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:22:33 np0005601226 nova_compute[239456]: 2026-01-29 17:22:33.031 239460 DEBUG oslo_concurrency.lockutils [None req-0f0a0e98-4fa1-483a-89f1-38ba90c7a97e d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Acquiring lock "ec6929dc-4a2e-4a7f-9c40-413a310539c6" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:22:33 np0005601226 nova_compute[239456]: 2026-01-29 17:22:33.031 239460 DEBUG oslo_concurrency.lockutils [None req-0f0a0e98-4fa1-483a-89f1-38ba90c7a97e d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Lock "ec6929dc-4a2e-4a7f-9c40-413a310539c6" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:22:33 np0005601226 nova_compute[239456]: 2026-01-29 17:22:33.032 239460 INFO nova.compute.manager [None req-0f0a0e98-4fa1-483a-89f1-38ba90c7a97e d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] Attaching volume 85b108e8-43ec-4be3-9edb-af488c14f2f7 to /dev/vdb#033[00m
Jan 29 12:22:33 np0005601226 nova_compute[239456]: 2026-01-29 17:22:33.186 239460 DEBUG os_brick.utils [None req-0f0a0e98-4fa1-483a-89f1-38ba90c7a97e d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 29 12:22:33 np0005601226 nova_compute[239456]: 2026-01-29 17:22:33.188 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:22:33 np0005601226 nova_compute[239456]: 2026-01-29 17:22:33.196 249612 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:22:33 np0005601226 nova_compute[239456]: 2026-01-29 17:22:33.196 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[0ac784ce-9114-4a77-907f-051e840e3853]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:22:33 np0005601226 nova_compute[239456]: 2026-01-29 17:22:33.198 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:22:33 np0005601226 nova_compute[239456]: 2026-01-29 17:22:33.203 249612 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:22:33 np0005601226 nova_compute[239456]: 2026-01-29 17:22:33.203 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[9eb39880-afde-44c4-9028-59b3d746876c]: (4, ('InitiatorName=iqn.1994-05.com.redhat:29fa340538f', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:22:33 np0005601226 nova_compute[239456]: 2026-01-29 17:22:33.204 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:22:33 np0005601226 nova_compute[239456]: 2026-01-29 17:22:33.208 239460 DEBUG nova.compute.manager [req-971538fd-44d1-482d-85fd-3588ff4e0684 req-ae07aa59-cbe6-4a7f-8a9c-92356653db51 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 656165e5-9250-4055-8194-45e769830100] Received event network-vif-plugged-f793e3fd-9b6a-4e49-af85-bae055fa6d70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:22:33 np0005601226 nova_compute[239456]: 2026-01-29 17:22:33.208 239460 DEBUG oslo_concurrency.lockutils [req-971538fd-44d1-482d-85fd-3588ff4e0684 req-ae07aa59-cbe6-4a7f-8a9c-92356653db51 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "656165e5-9250-4055-8194-45e769830100-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:22:33 np0005601226 nova_compute[239456]: 2026-01-29 17:22:33.208 239460 DEBUG oslo_concurrency.lockutils [req-971538fd-44d1-482d-85fd-3588ff4e0684 req-ae07aa59-cbe6-4a7f-8a9c-92356653db51 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "656165e5-9250-4055-8194-45e769830100-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:22:33 np0005601226 nova_compute[239456]: 2026-01-29 17:22:33.208 239460 DEBUG oslo_concurrency.lockutils [req-971538fd-44d1-482d-85fd-3588ff4e0684 req-ae07aa59-cbe6-4a7f-8a9c-92356653db51 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "656165e5-9250-4055-8194-45e769830100-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:22:33 np0005601226 nova_compute[239456]: 2026-01-29 17:22:33.209 239460 DEBUG nova.compute.manager [req-971538fd-44d1-482d-85fd-3588ff4e0684 req-ae07aa59-cbe6-4a7f-8a9c-92356653db51 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 656165e5-9250-4055-8194-45e769830100] No waiting events found dispatching network-vif-plugged-f793e3fd-9b6a-4e49-af85-bae055fa6d70 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 29 12:22:33 np0005601226 nova_compute[239456]: 2026-01-29 17:22:33.209 239460 WARNING nova.compute.manager [req-971538fd-44d1-482d-85fd-3588ff4e0684 req-ae07aa59-cbe6-4a7f-8a9c-92356653db51 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 656165e5-9250-4055-8194-45e769830100] Received unexpected event network-vif-plugged-f793e3fd-9b6a-4e49-af85-bae055fa6d70 for instance with vm_state active and task_state deleting.#033[00m
Jan 29 12:22:33 np0005601226 nova_compute[239456]: 2026-01-29 17:22:33.209 249612 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:22:33 np0005601226 nova_compute[239456]: 2026-01-29 17:22:33.209 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[7b2917cd-5fe0-4cf9-b6a1-f4a319a1615e]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:22:33 np0005601226 nova_compute[239456]: 2026-01-29 17:22:33.210 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[aa4cf2fa-58a6-44d3-b1d5-c2ad23258c74]: (4, '3d58286e-1b14-486e-8cad-0bdb2d2969c4') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:22:33 np0005601226 nova_compute[239456]: 2026-01-29 17:22:33.210 239460 DEBUG oslo_concurrency.processutils [None req-0f0a0e98-4fa1-483a-89f1-38ba90c7a97e d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:22:33 np0005601226 nova_compute[239456]: 2026-01-29 17:22:33.222 239460 DEBUG oslo_concurrency.processutils [None req-0f0a0e98-4fa1-483a-89f1-38ba90c7a97e d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] CMD "nvme version" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:22:33 np0005601226 nova_compute[239456]: 2026-01-29 17:22:33.224 239460 DEBUG os_brick.initiator.connectors.lightos [None req-0f0a0e98-4fa1-483a-89f1-38ba90c7a97e d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 29 12:22:33 np0005601226 nova_compute[239456]: 2026-01-29 17:22:33.224 239460 DEBUG os_brick.initiator.connectors.lightos [None req-0f0a0e98-4fa1-483a-89f1-38ba90c7a97e d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 29 12:22:33 np0005601226 nova_compute[239456]: 2026-01-29 17:22:33.224 239460 DEBUG os_brick.initiator.connectors.lightos [None req-0f0a0e98-4fa1-483a-89f1-38ba90c7a97e d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 29 12:22:33 np0005601226 nova_compute[239456]: 2026-01-29 17:22:33.224 239460 DEBUG os_brick.utils [None req-0f0a0e98-4fa1-483a-89f1-38ba90c7a97e d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] <== get_connector_properties: return (37ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:29fa340538f', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '3d58286e-1b14-486e-8cad-0bdb2d2969c4', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 29 12:22:33 np0005601226 nova_compute[239456]: 2026-01-29 17:22:33.225 239460 DEBUG nova.virt.block_device [None req-0f0a0e98-4fa1-483a-89f1-38ba90c7a97e d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] Updating existing volume attachment record: 401d4aaa-2431-4de7-a17e-b1cde392287f _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 29 12:22:33 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:22:33 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e258 do_prune osdmap full prune enabled
Jan 29 12:22:33 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1236: 305 pgs: 305 active+clean; 168 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 481 KiB/s rd, 3.2 MiB/s wr, 160 op/s
Jan 29 12:22:33 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e259 e259: 3 total, 3 up, 3 in
Jan 29 12:22:33 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e259: 3 total, 3 up, 3 in
Jan 29 12:22:33 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:22:33 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/761483498' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:22:34 np0005601226 nova_compute[239456]: 2026-01-29 17:22:34.029 239460 DEBUG nova.objects.instance [None req-0f0a0e98-4fa1-483a-89f1-38ba90c7a97e d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Lazy-loading 'flavor' on Instance uuid ec6929dc-4a2e-4a7f-9c40-413a310539c6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:22:34 np0005601226 nova_compute[239456]: 2026-01-29 17:22:34.053 239460 DEBUG nova.virt.libvirt.driver [None req-0f0a0e98-4fa1-483a-89f1-38ba90c7a97e d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] Attempting to attach volume 85b108e8-43ec-4be3-9edb-af488c14f2f7 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Jan 29 12:22:34 np0005601226 nova_compute[239456]: 2026-01-29 17:22:34.055 239460 DEBUG nova.virt.libvirt.guest [None req-0f0a0e98-4fa1-483a-89f1-38ba90c7a97e d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] attach device xml: <disk type="network" device="disk">
Jan 29 12:22:34 np0005601226 nova_compute[239456]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 29 12:22:34 np0005601226 nova_compute[239456]:  <source protocol="rbd" name="volumes/volume-85b108e8-43ec-4be3-9edb-af488c14f2f7">
Jan 29 12:22:34 np0005601226 nova_compute[239456]:    <host name="192.168.122.100" port="6789"/>
Jan 29 12:22:34 np0005601226 nova_compute[239456]:  </source>
Jan 29 12:22:34 np0005601226 nova_compute[239456]:  <auth username="openstack">
Jan 29 12:22:34 np0005601226 nova_compute[239456]:    <secret type="ceph" uuid="cc5c72e3-31e0-58b9-8731-456117d38f4a"/>
Jan 29 12:22:34 np0005601226 nova_compute[239456]:  </auth>
Jan 29 12:22:34 np0005601226 nova_compute[239456]:  <target dev="vdb" bus="virtio"/>
Jan 29 12:22:34 np0005601226 nova_compute[239456]:  <serial>85b108e8-43ec-4be3-9edb-af488c14f2f7</serial>
Jan 29 12:22:34 np0005601226 nova_compute[239456]: </disk>
Jan 29 12:22:34 np0005601226 nova_compute[239456]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Jan 29 12:22:34 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:22:34.142 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '7a:bb:ca', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ee:2a:91:86:08:da'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 29 12:22:34 np0005601226 nova_compute[239456]: 2026-01-29 17:22:34.142 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:22:34 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:22:34.143 155625 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 29 12:22:34 np0005601226 nova_compute[239456]: 2026-01-29 17:22:34.229 239460 DEBUG nova.virt.libvirt.driver [None req-0f0a0e98-4fa1-483a-89f1-38ba90c7a97e d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 29 12:22:34 np0005601226 nova_compute[239456]: 2026-01-29 17:22:34.229 239460 DEBUG nova.virt.libvirt.driver [None req-0f0a0e98-4fa1-483a-89f1-38ba90c7a97e d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 29 12:22:34 np0005601226 nova_compute[239456]: 2026-01-29 17:22:34.229 239460 DEBUG nova.virt.libvirt.driver [None req-0f0a0e98-4fa1-483a-89f1-38ba90c7a97e d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 29 12:22:34 np0005601226 nova_compute[239456]: 2026-01-29 17:22:34.229 239460 DEBUG nova.virt.libvirt.driver [None req-0f0a0e98-4fa1-483a-89f1-38ba90c7a97e d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] No VIF found with MAC fa:16:3e:0f:2f:be, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 29 12:22:34 np0005601226 nova_compute[239456]: 2026-01-29 17:22:34.428 239460 DEBUG oslo_concurrency.lockutils [None req-0f0a0e98-4fa1-483a-89f1-38ba90c7a97e d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Lock "ec6929dc-4a2e-4a7f-9c40-413a310539c6" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.397s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 12:22:34 np0005601226 nova_compute[239456]: 2026-01-29 17:22:34.634 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 12:22:34 np0005601226 nova_compute[239456]: 2026-01-29 17:22:34.950 239460 INFO nova.virt.libvirt.driver [None req-d0701816-6bae-4fc2-91ba-3dc194d04f10 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] [instance: 656165e5-9250-4055-8194-45e769830100] Deleting instance files /var/lib/nova/instances/656165e5-9250-4055-8194-45e769830100_del
Jan 29 12:22:34 np0005601226 nova_compute[239456]: 2026-01-29 17:22:34.951 239460 INFO nova.virt.libvirt.driver [None req-d0701816-6bae-4fc2-91ba-3dc194d04f10 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] [instance: 656165e5-9250-4055-8194-45e769830100] Deletion of /var/lib/nova/instances/656165e5-9250-4055-8194-45e769830100_del complete
Jan 29 12:22:35 np0005601226 nova_compute[239456]: 2026-01-29 17:22:35.022 239460 INFO nova.compute.manager [None req-d0701816-6bae-4fc2-91ba-3dc194d04f10 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] [instance: 656165e5-9250-4055-8194-45e769830100] Took 4.74 seconds to destroy the instance on the hypervisor.
Jan 29 12:22:35 np0005601226 nova_compute[239456]: 2026-01-29 17:22:35.022 239460 DEBUG oslo.service.loopingcall [None req-d0701816-6bae-4fc2-91ba-3dc194d04f10 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 29 12:22:35 np0005601226 nova_compute[239456]: 2026-01-29 17:22:35.023 239460 DEBUG nova.compute.manager [-] [instance: 656165e5-9250-4055-8194-45e769830100] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 29 12:22:35 np0005601226 nova_compute[239456]: 2026-01-29 17:22:35.023 239460 DEBUG nova.network.neutron [-] [instance: 656165e5-9250-4055-8194-45e769830100] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 29 12:22:35 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1238: 305 pgs: 305 active+clean; 121 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 498 KiB/s rd, 3.2 MiB/s wr, 188 op/s
Jan 29 12:22:35 np0005601226 nova_compute[239456]: 2026-01-29 17:22:35.875 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:22:35 np0005601226 nova_compute[239456]: 2026-01-29 17:22:35.932 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:22:36 np0005601226 nova_compute[239456]: 2026-01-29 17:22:36.021 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:22:36 np0005601226 nova_compute[239456]: 2026-01-29 17:22:36.084 239460 DEBUG nova.network.neutron [-] [instance: 656165e5-9250-4055-8194-45e769830100] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 29 12:22:36 np0005601226 nova_compute[239456]: 2026-01-29 17:22:36.105 239460 INFO nova.compute.manager [-] [instance: 656165e5-9250-4055-8194-45e769830100] Took 1.08 seconds to deallocate network for instance.
Jan 29 12:22:36 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:22:36 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2925781949' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:22:36 np0005601226 nova_compute[239456]: 2026-01-29 17:22:36.149 239460 DEBUG oslo_concurrency.lockutils [None req-d0701816-6bae-4fc2-91ba-3dc194d04f10 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 12:22:36 np0005601226 nova_compute[239456]: 2026-01-29 17:22:36.150 239460 DEBUG oslo_concurrency.lockutils [None req-d0701816-6bae-4fc2-91ba-3dc194d04f10 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 12:22:36 np0005601226 nova_compute[239456]: 2026-01-29 17:22:36.167 239460 DEBUG nova.compute.manager [req-b95731fe-aeac-4885-8932-f7628fa16742 req-ea25529e-5ab3-4282-badc-ccddd1bdeca9 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 656165e5-9250-4055-8194-45e769830100] Received event network-vif-deleted-f793e3fd-9b6a-4e49-af85-bae055fa6d70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 29 12:22:36 np0005601226 nova_compute[239456]: 2026-01-29 17:22:36.202 239460 DEBUG oslo_concurrency.processutils [None req-d0701816-6bae-4fc2-91ba-3dc194d04f10 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 29 12:22:36 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:22:36 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1080410820' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:22:36 np0005601226 nova_compute[239456]: 2026-01-29 17:22:36.907 239460 DEBUG oslo_concurrency.processutils [None req-d0701816-6bae-4fc2-91ba-3dc194d04f10 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.705s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 29 12:22:36 np0005601226 nova_compute[239456]: 2026-01-29 17:22:36.912 239460 DEBUG nova.compute.provider_tree [None req-d0701816-6bae-4fc2-91ba-3dc194d04f10 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 29 12:22:36 np0005601226 nova_compute[239456]: 2026-01-29 17:22:36.926 239460 DEBUG nova.scheduler.client.report [None req-d0701816-6bae-4fc2-91ba-3dc194d04f10 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 29 12:22:36 np0005601226 nova_compute[239456]: 2026-01-29 17:22:36.951 239460 DEBUG oslo_concurrency.lockutils [None req-d0701816-6bae-4fc2-91ba-3dc194d04f10 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.801s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 12:22:36 np0005601226 nova_compute[239456]: 2026-01-29 17:22:36.979 239460 INFO nova.scheduler.client.report [None req-d0701816-6bae-4fc2-91ba-3dc194d04f10 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Deleted allocations for instance 656165e5-9250-4055-8194-45e769830100
Jan 29 12:22:37 np0005601226 nova_compute[239456]: 2026-01-29 17:22:37.047 239460 DEBUG oslo_concurrency.lockutils [None req-d0701816-6bae-4fc2-91ba-3dc194d04f10 74a4d39ed5f246a285b523d04bd13f4f 6e0cefcde775417f910c6b8d8982c845 - - default default] Lock "656165e5-9250-4055-8194-45e769830100" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.769s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 12:22:37 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1239: 305 pgs: 305 active+clean; 121 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 387 KiB/s rd, 1.9 MiB/s wr, 153 op/s
Jan 29 12:22:37 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e259 do_prune osdmap full prune enabled
Jan 29 12:22:38 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e260 e260: 3 total, 3 up, 3 in
Jan 29 12:22:38 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e260: 3 total, 3 up, 3 in
Jan 29 12:22:38 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:22:38 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e260 do_prune osdmap full prune enabled
Jan 29 12:22:38 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e261 e261: 3 total, 3 up, 3 in
Jan 29 12:22:38 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e261: 3 total, 3 up, 3 in
Jan 29 12:22:39 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:22:39.145 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ea6bcc65-2563-4fe6-9039-bca7261f4cf7, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 29 12:22:39 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1242: 305 pgs: 305 active+clean; 121 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 46 KiB/s rd, 30 KiB/s wr, 70 op/s
Jan 29 12:22:39 np0005601226 nova_compute[239456]: 2026-01-29 17:22:39.963 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:22:39 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e261 do_prune osdmap full prune enabled
Jan 29 12:22:40 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e262 e262: 3 total, 3 up, 3 in
Jan 29 12:22:40 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e262: 3 total, 3 up, 3 in
Jan 29 12:22:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:22:40.284 155625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 12:22:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:22:40.285 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 12:22:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:22:40.286 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 12:22:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:22:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:22:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:22:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:22:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2026-01-29_17:22:40
Jan 29 12:22:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 12:22:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] do_upmap
Jan 29 12:22:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.data', 'backups', 'vms', '.mgr', 'default.rgw.log', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control', 'images']
Jan 29 12:22:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:22:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:22:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 12:22:40 np0005601226 nova_compute[239456]: 2026-01-29 17:22:40.603 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 12:22:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 12:22:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:22:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 12:22:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:22:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:22:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:22:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:22:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:22:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:22:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:22:40 np0005601226 nova_compute[239456]: 2026-01-29 17:22:40.935 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:22:41 np0005601226 nova_compute[239456]: 2026-01-29 17:22:41.022 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:22:41 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1244: 305 pgs: 305 active+clean; 121 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 5.5 KiB/s wr, 34 op/s
Jan 29 12:22:42 np0005601226 podman[256105]: 2026-01-29 17:22:42.884367372 +0000 UTC m=+0.049926985 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 29 12:22:42 np0005601226 podman[256106]: 2026-01-29 17:22:42.965192014 +0000 UTC m=+0.134006625 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 29 12:22:43 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:22:43 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1346667015' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:22:43 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1245: 305 pgs: 305 active+clean; 121 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 6.5 KiB/s wr, 43 op/s
Jan 29 12:22:43 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e262 do_prune osdmap full prune enabled
Jan 29 12:22:44 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e263 e263: 3 total, 3 up, 3 in
Jan 29 12:22:44 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e263: 3 total, 3 up, 3 in
Jan 29 12:22:44 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:22:44 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1452301816' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:22:44 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:22:44 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1452301816' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:22:45 np0005601226 nova_compute[239456]: 2026-01-29 17:22:45.017 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:22:45 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e263 do_prune osdmap full prune enabled
Jan 29 12:22:45 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e264 e264: 3 total, 3 up, 3 in
Jan 29 12:22:45 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e264: 3 total, 3 up, 3 in
Jan 29 12:22:45 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1248: 305 pgs: 305 active+clean; 121 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 2.2 KiB/s wr, 56 op/s
Jan 29 12:22:45 np0005601226 nova_compute[239456]: 2026-01-29 17:22:45.914 239460 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769707350.9131107, 656165e5-9250-4055-8194-45e769830100 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 29 12:22:45 np0005601226 nova_compute[239456]: 2026-01-29 17:22:45.914 239460 INFO nova.compute.manager [-] [instance: 656165e5-9250-4055-8194-45e769830100] VM Stopped (Lifecycle Event)
Jan 29 12:22:45 np0005601226 nova_compute[239456]: 2026-01-29 17:22:45.937 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:22:45 np0005601226 nova_compute[239456]: 2026-01-29 17:22:45.940 239460 DEBUG nova.compute.manager [None req-bdc78186-f1ae-4d57-b85b-9a56181d5d9b - - - - - -] [instance: 656165e5-9250-4055-8194-45e769830100] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 29 12:22:46 np0005601226 nova_compute[239456]: 2026-01-29 17:22:46.023 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:22:47 np0005601226 ovn_controller[145556]: 2026-01-29T17:22:47Z|00094|binding|INFO|Releasing lport 07f2e2bc-3dba-4506-9241-0e092dfbeda9 from this chassis (sb_readonly=0)
Jan 29 12:22:47 np0005601226 nova_compute[239456]: 2026-01-29 17:22:47.132 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:22:47 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1249: 305 pgs: 305 active+clean; 121 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 1.7 KiB/s wr, 44 op/s
Jan 29 12:22:47 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e264 do_prune osdmap full prune enabled
Jan 29 12:22:47 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e265 e265: 3 total, 3 up, 3 in
Jan 29 12:22:47 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e265: 3 total, 3 up, 3 in
Jan 29 12:22:47 np0005601226 nova_compute[239456]: 2026-01-29 17:22:47.900 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:22:49 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:22:49 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1251: 305 pgs: 305 active+clean; 121 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 71 KiB/s rd, 5.5 KiB/s wr, 91 op/s
Jan 29 12:22:50 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:22:50 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3808655932' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:22:50 np0005601226 ovn_controller[145556]: 2026-01-29T17:22:50Z|00095|binding|INFO|Releasing lport 07f2e2bc-3dba-4506-9241-0e092dfbeda9 from this chassis (sb_readonly=0)
Jan 29 12:22:50 np0005601226 nova_compute[239456]: 2026-01-29 17:22:50.695 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:22:50 np0005601226 nova_compute[239456]: 2026-01-29 17:22:50.938 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:22:51 np0005601226 nova_compute[239456]: 2026-01-29 17:22:51.033 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:22:51 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e265 do_prune osdmap full prune enabled
Jan 29 12:22:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 12:22:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:22:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 12:22:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:22:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007607325947423648 of space, bias 1.0, pg target 0.22821977842270944 quantized to 32 (current 32)
Jan 29 12:22:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:22:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 8.083549554656785e-06 of space, bias 1.0, pg target 0.0024250648663970355 quantized to 32 (current 32)
Jan 29 12:22:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:22:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.19403352990175e-06 of space, bias 1.0, pg target 0.00035821005897052494 quantized to 32 (current 32)
Jan 29 12:22:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:22:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00066602513404643 of space, bias 1.0, pg target 0.199807540213929 quantized to 32 (current 32)
Jan 29 12:22:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:22:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.296374825300428e-06 of space, bias 4.0, pg target 0.0015556497903605137 quantized to 16 (current 16)
Jan 29 12:22:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:22:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:22:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:22:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 29 12:22:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:22:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 29 12:22:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:22:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:22:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:22:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 29 12:22:51 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e266 e266: 3 total, 3 up, 3 in
Jan 29 12:22:51 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e266: 3 total, 3 up, 3 in
Jan 29 12:22:51 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1253: 305 pgs: 305 active+clean; 121 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 4.1 KiB/s wr, 41 op/s
Jan 29 12:22:52 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e266 do_prune osdmap full prune enabled
Jan 29 12:22:52 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e267 e267: 3 total, 3 up, 3 in
Jan 29 12:22:52 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e267: 3 total, 3 up, 3 in
Jan 29 12:22:53 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1255: 305 pgs: 305 active+clean; 121 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 4.3 KiB/s wr, 46 op/s
Jan 29 12:22:54 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:22:54 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e267 do_prune osdmap full prune enabled
Jan 29 12:22:54 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e268 e268: 3 total, 3 up, 3 in
Jan 29 12:22:54 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e268: 3 total, 3 up, 3 in
Jan 29 12:22:55 np0005601226 nova_compute[239456]: 2026-01-29 17:22:55.099 239460 DEBUG oslo_concurrency.lockutils [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Acquiring lock "53e39297-e2d7-48cf-9623-7be3b0d6b2f3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:22:55 np0005601226 nova_compute[239456]: 2026-01-29 17:22:55.099 239460 DEBUG oslo_concurrency.lockutils [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Lock "53e39297-e2d7-48cf-9623-7be3b0d6b2f3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:22:55 np0005601226 nova_compute[239456]: 2026-01-29 17:22:55.113 239460 DEBUG nova.compute.manager [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 29 12:22:55 np0005601226 nova_compute[239456]: 2026-01-29 17:22:55.230 239460 DEBUG oslo_concurrency.lockutils [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:22:55 np0005601226 nova_compute[239456]: 2026-01-29 17:22:55.231 239460 DEBUG oslo_concurrency.lockutils [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:22:55 np0005601226 nova_compute[239456]: 2026-01-29 17:22:55.241 239460 DEBUG nova.virt.hardware [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 29 12:22:55 np0005601226 nova_compute[239456]: 2026-01-29 17:22:55.241 239460 INFO nova.compute.claims [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 29 12:22:55 np0005601226 nova_compute[239456]: 2026-01-29 17:22:55.385 239460 DEBUG oslo_concurrency.processutils [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:22:55 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e268 do_prune osdmap full prune enabled
Jan 29 12:22:55 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1257: 305 pgs: 305 active+clean; 121 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 1.7 KiB/s wr, 42 op/s
Jan 29 12:22:55 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e269 e269: 3 total, 3 up, 3 in
Jan 29 12:22:55 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e269: 3 total, 3 up, 3 in
Jan 29 12:22:55 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:22:55 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2098140898' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:22:55 np0005601226 nova_compute[239456]: 2026-01-29 17:22:55.941 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:22:55 np0005601226 nova_compute[239456]: 2026-01-29 17:22:55.943 239460 DEBUG oslo_concurrency.processutils [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:22:55 np0005601226 nova_compute[239456]: 2026-01-29 17:22:55.950 239460 DEBUG nova.compute.provider_tree [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:22:55 np0005601226 nova_compute[239456]: 2026-01-29 17:22:55.978 239460 DEBUG nova.scheduler.client.report [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:22:56 np0005601226 nova_compute[239456]: 2026-01-29 17:22:56.035 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:22:56 np0005601226 nova_compute[239456]: 2026-01-29 17:22:56.086 239460 DEBUG oslo_concurrency.lockutils [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.855s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:22:56 np0005601226 nova_compute[239456]: 2026-01-29 17:22:56.087 239460 DEBUG nova.compute.manager [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 29 12:22:56 np0005601226 nova_compute[239456]: 2026-01-29 17:22:56.184 239460 DEBUG nova.compute.manager [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 29 12:22:56 np0005601226 nova_compute[239456]: 2026-01-29 17:22:56.185 239460 DEBUG nova.network.neutron [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 29 12:22:56 np0005601226 nova_compute[239456]: 2026-01-29 17:22:56.211 239460 INFO nova.virt.libvirt.driver [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 29 12:22:56 np0005601226 nova_compute[239456]: 2026-01-29 17:22:56.230 239460 DEBUG nova.compute.manager [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 29 12:22:56 np0005601226 nova_compute[239456]: 2026-01-29 17:22:56.315 239460 DEBUG nova.compute.manager [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 29 12:22:56 np0005601226 nova_compute[239456]: 2026-01-29 17:22:56.317 239460 DEBUG nova.virt.libvirt.driver [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 29 12:22:56 np0005601226 nova_compute[239456]: 2026-01-29 17:22:56.317 239460 INFO nova.virt.libvirt.driver [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Creating image(s)#033[00m
Jan 29 12:22:56 np0005601226 nova_compute[239456]: 2026-01-29 17:22:56.481 239460 DEBUG nova.storage.rbd_utils [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] rbd image 53e39297-e2d7-48cf-9623-7be3b0d6b2f3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:22:56 np0005601226 nova_compute[239456]: 2026-01-29 17:22:56.500 239460 DEBUG nova.storage.rbd_utils [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] rbd image 53e39297-e2d7-48cf-9623-7be3b0d6b2f3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:22:56 np0005601226 nova_compute[239456]: 2026-01-29 17:22:56.516 239460 DEBUG nova.storage.rbd_utils [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] rbd image 53e39297-e2d7-48cf-9623-7be3b0d6b2f3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:22:56 np0005601226 nova_compute[239456]: 2026-01-29 17:22:56.519 239460 DEBUG oslo_concurrency.processutils [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/619359f3f53a439a222a6a2a89408201f4394e5d --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:22:56 np0005601226 nova_compute[239456]: 2026-01-29 17:22:56.594 239460 DEBUG oslo_concurrency.processutils [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/619359f3f53a439a222a6a2a89408201f4394e5d --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:22:56 np0005601226 nova_compute[239456]: 2026-01-29 17:22:56.595 239460 DEBUG oslo_concurrency.lockutils [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Acquiring lock "619359f3f53a439a222a6a2a89408201f4394e5d" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:22:56 np0005601226 nova_compute[239456]: 2026-01-29 17:22:56.595 239460 DEBUG oslo_concurrency.lockutils [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Lock "619359f3f53a439a222a6a2a89408201f4394e5d" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:22:56 np0005601226 nova_compute[239456]: 2026-01-29 17:22:56.595 239460 DEBUG oslo_concurrency.lockutils [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Lock "619359f3f53a439a222a6a2a89408201f4394e5d" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:22:56 np0005601226 nova_compute[239456]: 2026-01-29 17:22:56.617 239460 DEBUG nova.storage.rbd_utils [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] rbd image 53e39297-e2d7-48cf-9623-7be3b0d6b2f3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:22:56 np0005601226 nova_compute[239456]: 2026-01-29 17:22:56.620 239460 DEBUG oslo_concurrency.processutils [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/619359f3f53a439a222a6a2a89408201f4394e5d 53e39297-e2d7-48cf-9623-7be3b0d6b2f3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:22:56 np0005601226 nova_compute[239456]: 2026-01-29 17:22:56.648 239460 DEBUG nova.policy [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e94c4027707149bebaa91488b942641b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '39d4847e7fda4ce1b3f82fb1983ae222', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 29 12:22:57 np0005601226 nova_compute[239456]: 2026-01-29 17:22:57.377 239460 DEBUG nova.network.neutron [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Successfully created port: 68c13a19-1abc-4771-a498-863d2d0a28b1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 29 12:22:57 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1259: 305 pgs: 305 active+clean; 121 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 1.7 KiB/s wr, 42 op/s
Jan 29 12:22:58 np0005601226 nova_compute[239456]: 2026-01-29 17:22:58.259 239460 DEBUG nova.network.neutron [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Successfully updated port: 68c13a19-1abc-4771-a498-863d2d0a28b1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 29 12:22:58 np0005601226 nova_compute[239456]: 2026-01-29 17:22:58.278 239460 DEBUG oslo_concurrency.lockutils [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Acquiring lock "refresh_cache-53e39297-e2d7-48cf-9623-7be3b0d6b2f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:22:58 np0005601226 nova_compute[239456]: 2026-01-29 17:22:58.278 239460 DEBUG oslo_concurrency.lockutils [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Acquired lock "refresh_cache-53e39297-e2d7-48cf-9623-7be3b0d6b2f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:22:58 np0005601226 nova_compute[239456]: 2026-01-29 17:22:58.278 239460 DEBUG nova.network.neutron [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 29 12:22:58 np0005601226 nova_compute[239456]: 2026-01-29 17:22:58.372 239460 DEBUG nova.compute.manager [req-0dc0a484-563f-4fdf-b1ab-e16386bdb48e req-42606846-423c-4d12-b727-16f4f179130d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Received event network-changed-68c13a19-1abc-4771-a498-863d2d0a28b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:22:58 np0005601226 nova_compute[239456]: 2026-01-29 17:22:58.372 239460 DEBUG nova.compute.manager [req-0dc0a484-563f-4fdf-b1ab-e16386bdb48e req-42606846-423c-4d12-b727-16f4f179130d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Refreshing instance network info cache due to event network-changed-68c13a19-1abc-4771-a498-863d2d0a28b1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 29 12:22:58 np0005601226 nova_compute[239456]: 2026-01-29 17:22:58.372 239460 DEBUG oslo_concurrency.lockutils [req-0dc0a484-563f-4fdf-b1ab-e16386bdb48e req-42606846-423c-4d12-b727-16f4f179130d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "refresh_cache-53e39297-e2d7-48cf-9623-7be3b0d6b2f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:22:58 np0005601226 nova_compute[239456]: 2026-01-29 17:22:58.435 239460 DEBUG oslo_concurrency.processutils [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/619359f3f53a439a222a6a2a89408201f4394e5d 53e39297-e2d7-48cf-9623-7be3b0d6b2f3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.814s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:22:58 np0005601226 nova_compute[239456]: 2026-01-29 17:22:58.468 239460 DEBUG nova.network.neutron [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 29 12:22:58 np0005601226 nova_compute[239456]: 2026-01-29 17:22:58.507 239460 DEBUG nova.storage.rbd_utils [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] resizing rbd image 53e39297-e2d7-48cf-9623-7be3b0d6b2f3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 29 12:22:59 np0005601226 nova_compute[239456]: 2026-01-29 17:22:59.023 239460 DEBUG nova.objects.instance [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Lazy-loading 'migration_context' on Instance uuid 53e39297-e2d7-48cf-9623-7be3b0d6b2f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:22:59 np0005601226 nova_compute[239456]: 2026-01-29 17:22:59.035 239460 DEBUG nova.virt.libvirt.driver [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 29 12:22:59 np0005601226 nova_compute[239456]: 2026-01-29 17:22:59.035 239460 DEBUG nova.virt.libvirt.driver [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Ensure instance console log exists: /var/lib/nova/instances/53e39297-e2d7-48cf-9623-7be3b0d6b2f3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 29 12:22:59 np0005601226 nova_compute[239456]: 2026-01-29 17:22:59.036 239460 DEBUG oslo_concurrency.lockutils [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:22:59 np0005601226 nova_compute[239456]: 2026-01-29 17:22:59.036 239460 DEBUG oslo_concurrency.lockutils [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:22:59 np0005601226 nova_compute[239456]: 2026-01-29 17:22:59.036 239460 DEBUG oslo_concurrency.lockutils [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:22:59 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e269 do_prune osdmap full prune enabled
Jan 29 12:22:59 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e270 e270: 3 total, 3 up, 3 in
Jan 29 12:22:59 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e270: 3 total, 3 up, 3 in
Jan 29 12:22:59 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1261: 305 pgs: 305 active+clean; 163 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 77 KiB/s rd, 2.9 MiB/s wr, 110 op/s
Jan 29 12:22:59 np0005601226 nova_compute[239456]: 2026-01-29 17:22:59.796 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:22:59 np0005601226 nova_compute[239456]: 2026-01-29 17:22:59.909 239460 DEBUG nova.network.neutron [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Updating instance_info_cache with network_info: [{"id": "68c13a19-1abc-4771-a498-863d2d0a28b1", "address": "fa:16:3e:41:16:71", "network": {"id": "d83e49d6-7d57-4ee1-97b8-d6cdd3bd57ed", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-566903606-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d4847e7fda4ce1b3f82fb1983ae222", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68c13a19-1a", "ovs_interfaceid": "68c13a19-1abc-4771-a498-863d2d0a28b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:22:59 np0005601226 nova_compute[239456]: 2026-01-29 17:22:59.931 239460 DEBUG oslo_concurrency.lockutils [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Releasing lock "refresh_cache-53e39297-e2d7-48cf-9623-7be3b0d6b2f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:22:59 np0005601226 nova_compute[239456]: 2026-01-29 17:22:59.931 239460 DEBUG nova.compute.manager [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Instance network_info: |[{"id": "68c13a19-1abc-4771-a498-863d2d0a28b1", "address": "fa:16:3e:41:16:71", "network": {"id": "d83e49d6-7d57-4ee1-97b8-d6cdd3bd57ed", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-566903606-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d4847e7fda4ce1b3f82fb1983ae222", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68c13a19-1a", "ovs_interfaceid": "68c13a19-1abc-4771-a498-863d2d0a28b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 29 12:22:59 np0005601226 nova_compute[239456]: 2026-01-29 17:22:59.931 239460 DEBUG oslo_concurrency.lockutils [req-0dc0a484-563f-4fdf-b1ab-e16386bdb48e req-42606846-423c-4d12-b727-16f4f179130d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquired lock "refresh_cache-53e39297-e2d7-48cf-9623-7be3b0d6b2f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:22:59 np0005601226 nova_compute[239456]: 2026-01-29 17:22:59.932 239460 DEBUG nova.network.neutron [req-0dc0a484-563f-4fdf-b1ab-e16386bdb48e req-42606846-423c-4d12-b727-16f4f179130d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Refreshing network info cache for port 68c13a19-1abc-4771-a498-863d2d0a28b1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 29 12:22:59 np0005601226 nova_compute[239456]: 2026-01-29 17:22:59.934 239460 DEBUG nova.virt.libvirt.driver [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Start _get_guest_xml network_info=[{"id": "68c13a19-1abc-4771-a498-863d2d0a28b1", "address": "fa:16:3e:41:16:71", "network": {"id": "d83e49d6-7d57-4ee1-97b8-d6cdd3bd57ed", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-566903606-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d4847e7fda4ce1b3f82fb1983ae222", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68c13a19-1a", "ovs_interfaceid": "68c13a19-1abc-4771-a498-863d2d0a28b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-29T17:13:26Z,direct_url=<?>,disk_format='qcow2',id=71879218-5462-43bb-aba6-6319695b24fd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='218d87653c0f4776a3f1900d36945229',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-29T17:13:29Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'boot_index': 0, 'encrypted': False, 'image_id': '71879218-5462-43bb-aba6-6319695b24fd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 29 12:22:59 np0005601226 nova_compute[239456]: 2026-01-29 17:22:59.938 239460 WARNING nova.virt.libvirt.driver [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 29 12:22:59 np0005601226 nova_compute[239456]: 2026-01-29 17:22:59.942 239460 DEBUG nova.virt.libvirt.host [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 29 12:22:59 np0005601226 nova_compute[239456]: 2026-01-29 17:22:59.943 239460 DEBUG nova.virt.libvirt.host [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 29 12:22:59 np0005601226 nova_compute[239456]: 2026-01-29 17:22:59.946 239460 DEBUG nova.virt.libvirt.host [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 29 12:22:59 np0005601226 nova_compute[239456]: 2026-01-29 17:22:59.947 239460 DEBUG nova.virt.libvirt.host [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 29 12:22:59 np0005601226 nova_compute[239456]: 2026-01-29 17:22:59.947 239460 DEBUG nova.virt.libvirt.driver [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 29 12:22:59 np0005601226 nova_compute[239456]: 2026-01-29 17:22:59.947 239460 DEBUG nova.virt.hardware [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-29T17:13:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='e367d44e-e23e-4b8e-90d1-56d09c8403b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-29T17:13:26Z,direct_url=<?>,disk_format='qcow2',id=71879218-5462-43bb-aba6-6319695b24fd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='218d87653c0f4776a3f1900d36945229',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-29T17:13:29Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 29 12:22:59 np0005601226 nova_compute[239456]: 2026-01-29 17:22:59.948 239460 DEBUG nova.virt.hardware [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 29 12:22:59 np0005601226 nova_compute[239456]: 2026-01-29 17:22:59.948 239460 DEBUG nova.virt.hardware [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 29 12:22:59 np0005601226 nova_compute[239456]: 2026-01-29 17:22:59.948 239460 DEBUG nova.virt.hardware [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 29 12:22:59 np0005601226 nova_compute[239456]: 2026-01-29 17:22:59.949 239460 DEBUG nova.virt.hardware [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 29 12:22:59 np0005601226 nova_compute[239456]: 2026-01-29 17:22:59.949 239460 DEBUG nova.virt.hardware [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 29 12:22:59 np0005601226 nova_compute[239456]: 2026-01-29 17:22:59.949 239460 DEBUG nova.virt.hardware [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 29 12:22:59 np0005601226 nova_compute[239456]: 2026-01-29 17:22:59.949 239460 DEBUG nova.virt.hardware [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 29 12:22:59 np0005601226 nova_compute[239456]: 2026-01-29 17:22:59.949 239460 DEBUG nova.virt.hardware [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 29 12:22:59 np0005601226 nova_compute[239456]: 2026-01-29 17:22:59.949 239460 DEBUG nova.virt.hardware [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 29 12:22:59 np0005601226 nova_compute[239456]: 2026-01-29 17:22:59.950 239460 DEBUG nova.virt.hardware [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 29 12:22:59 np0005601226 nova_compute[239456]: 2026-01-29 17:22:59.952 239460 DEBUG oslo_concurrency.processutils [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:23:00 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:23:00 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/611447923' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:23:00 np0005601226 nova_compute[239456]: 2026-01-29 17:23:00.509 239460 DEBUG oslo_concurrency.processutils [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:23:00 np0005601226 nova_compute[239456]: 2026-01-29 17:23:00.532 239460 DEBUG nova.storage.rbd_utils [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] rbd image 53e39297-e2d7-48cf-9623-7be3b0d6b2f3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:23:00 np0005601226 nova_compute[239456]: 2026-01-29 17:23:00.538 239460 DEBUG oslo_concurrency.processutils [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:23:00 np0005601226 nova_compute[239456]: 2026-01-29 17:23:00.944 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:23:01 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:23:01 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/788714704' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:23:01 np0005601226 nova_compute[239456]: 2026-01-29 17:23:01.037 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:23:01 np0005601226 nova_compute[239456]: 2026-01-29 17:23:01.041 239460 DEBUG oslo_concurrency.processutils [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:23:01 np0005601226 nova_compute[239456]: 2026-01-29 17:23:01.043 239460 DEBUG nova.virt.libvirt.vif [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-29T17:22:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesExtendAttachedTest-instance-1473413740',display_name='tempest-VolumesExtendAttachedTest-instance-1473413740',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesextendattachedtest-instance-1473413740',id=9,image_ref='71879218-5462-43bb-aba6-6319695b24fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK4mEJSERpPIQVK3sAMeu17EWkufBq6o1JwD5SzDGHiO4Z/qUv1iUlgJH7z4vsuw0x6/IEDJafzxQjRMypF22CDgXJIieljJTYVV7/tjKuefzCG79wHpMe/YIqW+S8UZ6A==',key_name='tempest-keypair-802154011',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='39d4847e7fda4ce1b3f82fb1983ae222',ramdisk_id='',reservation_id='r-003wsn2j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='71879218-5462-43bb-aba6-6319695b24fd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesExtendAttachedTest-736874132',owner_user_name='tempest-VolumesExtendAttachedTest-736874132-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-29T17:22:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e94c4027707149bebaa91488b942641b',uuid=53e39297-e2d7-48cf-9623-7be3b0d6b2f3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "68c13a19-1abc-4771-a498-863d2d0a28b1", "address": "fa:16:3e:41:16:71", "network": {"id": "d83e49d6-7d57-4ee1-97b8-d6cdd3bd57ed", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-566903606-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d4847e7fda4ce1b3f82fb1983ae222", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68c13a19-1a", "ovs_interfaceid": "68c13a19-1abc-4771-a498-863d2d0a28b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 29 12:23:01 np0005601226 nova_compute[239456]: 2026-01-29 17:23:01.043 239460 DEBUG nova.network.os_vif_util [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Converting VIF {"id": "68c13a19-1abc-4771-a498-863d2d0a28b1", "address": "fa:16:3e:41:16:71", "network": {"id": "d83e49d6-7d57-4ee1-97b8-d6cdd3bd57ed", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-566903606-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d4847e7fda4ce1b3f82fb1983ae222", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68c13a19-1a", "ovs_interfaceid": "68c13a19-1abc-4771-a498-863d2d0a28b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 29 12:23:01 np0005601226 nova_compute[239456]: 2026-01-29 17:23:01.044 239460 DEBUG nova.network.os_vif_util [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:41:16:71,bridge_name='br-int',has_traffic_filtering=True,id=68c13a19-1abc-4771-a498-863d2d0a28b1,network=Network(d83e49d6-7d57-4ee1-97b8-d6cdd3bd57ed),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68c13a19-1a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 29 12:23:01 np0005601226 nova_compute[239456]: 2026-01-29 17:23:01.045 239460 DEBUG nova.objects.instance [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Lazy-loading 'pci_devices' on Instance uuid 53e39297-e2d7-48cf-9623-7be3b0d6b2f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:23:01 np0005601226 nova_compute[239456]: 2026-01-29 17:23:01.074 239460 DEBUG nova.virt.libvirt.driver [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] End _get_guest_xml xml=<domain type="kvm">
Jan 29 12:23:01 np0005601226 nova_compute[239456]:  <uuid>53e39297-e2d7-48cf-9623-7be3b0d6b2f3</uuid>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:  <name>instance-00000009</name>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:  <memory>131072</memory>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:  <vcpu>1</vcpu>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:  <metadata>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 29 12:23:01 np0005601226 nova_compute[239456]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:      <nova:name>tempest-VolumesExtendAttachedTest-instance-1473413740</nova:name>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:      <nova:creationTime>2026-01-29 17:22:59</nova:creationTime>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:      <nova:flavor name="m1.nano">
Jan 29 12:23:01 np0005601226 nova_compute[239456]:        <nova:memory>128</nova:memory>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:        <nova:disk>1</nova:disk>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:        <nova:swap>0</nova:swap>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:        <nova:ephemeral>0</nova:ephemeral>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:        <nova:vcpus>1</nova:vcpus>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:      </nova:flavor>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:      <nova:owner>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:        <nova:user uuid="e94c4027707149bebaa91488b942641b">tempest-VolumesExtendAttachedTest-736874132-project-member</nova:user>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:        <nova:project uuid="39d4847e7fda4ce1b3f82fb1983ae222">tempest-VolumesExtendAttachedTest-736874132</nova:project>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:      </nova:owner>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:      <nova:root type="image" uuid="71879218-5462-43bb-aba6-6319695b24fd"/>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:      <nova:ports>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:        <nova:port uuid="68c13a19-1abc-4771-a498-863d2d0a28b1">
Jan 29 12:23:01 np0005601226 nova_compute[239456]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:        </nova:port>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:      </nova:ports>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:    </nova:instance>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:  </metadata>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:  <sysinfo type="smbios">
Jan 29 12:23:01 np0005601226 nova_compute[239456]:    <system>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:      <entry name="manufacturer">RDO</entry>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:      <entry name="product">OpenStack Compute</entry>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:      <entry name="serial">53e39297-e2d7-48cf-9623-7be3b0d6b2f3</entry>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:      <entry name="uuid">53e39297-e2d7-48cf-9623-7be3b0d6b2f3</entry>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:      <entry name="family">Virtual Machine</entry>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:    </system>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:  </sysinfo>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:  <os>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:    <boot dev="hd"/>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:    <smbios mode="sysinfo"/>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:  </os>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:  <features>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:    <acpi/>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:    <apic/>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:    <vmcoreinfo/>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:  </features>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:  <clock offset="utc">
Jan 29 12:23:01 np0005601226 nova_compute[239456]:    <timer name="pit" tickpolicy="delay"/>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:    <timer name="hpet" present="no"/>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:  </clock>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:  <cpu mode="host-model" match="exact">
Jan 29 12:23:01 np0005601226 nova_compute[239456]:    <topology sockets="1" cores="1" threads="1"/>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:  </cpu>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:  <devices>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:    <disk type="network" device="disk">
Jan 29 12:23:01 np0005601226 nova_compute[239456]:      <driver type="raw" cache="none"/>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:      <source protocol="rbd" name="vms/53e39297-e2d7-48cf-9623-7be3b0d6b2f3_disk">
Jan 29 12:23:01 np0005601226 nova_compute[239456]:        <host name="192.168.122.100" port="6789"/>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:      </source>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:      <auth username="openstack">
Jan 29 12:23:01 np0005601226 nova_compute[239456]:        <secret type="ceph" uuid="cc5c72e3-31e0-58b9-8731-456117d38f4a"/>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:      </auth>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:      <target dev="vda" bus="virtio"/>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:    </disk>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:    <disk type="network" device="cdrom">
Jan 29 12:23:01 np0005601226 nova_compute[239456]:      <driver type="raw" cache="none"/>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:      <source protocol="rbd" name="vms/53e39297-e2d7-48cf-9623-7be3b0d6b2f3_disk.config">
Jan 29 12:23:01 np0005601226 nova_compute[239456]:        <host name="192.168.122.100" port="6789"/>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:      </source>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:      <auth username="openstack">
Jan 29 12:23:01 np0005601226 nova_compute[239456]:        <secret type="ceph" uuid="cc5c72e3-31e0-58b9-8731-456117d38f4a"/>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:      </auth>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:      <target dev="sda" bus="sata"/>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:    </disk>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:    <interface type="ethernet">
Jan 29 12:23:01 np0005601226 nova_compute[239456]:      <mac address="fa:16:3e:41:16:71"/>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:      <model type="virtio"/>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:      <driver name="vhost" rx_queue_size="512"/>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:      <mtu size="1442"/>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:      <target dev="tap68c13a19-1a"/>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:    </interface>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:    <serial type="pty">
Jan 29 12:23:01 np0005601226 nova_compute[239456]:      <log file="/var/lib/nova/instances/53e39297-e2d7-48cf-9623-7be3b0d6b2f3/console.log" append="off"/>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:    </serial>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:    <video>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:      <model type="virtio"/>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:    </video>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:    <input type="tablet" bus="usb"/>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:    <rng model="virtio">
Jan 29 12:23:01 np0005601226 nova_compute[239456]:      <backend model="random">/dev/urandom</backend>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:    </rng>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root"/>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:    <controller type="usb" index="0"/>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:    <memballoon model="virtio">
Jan 29 12:23:01 np0005601226 nova_compute[239456]:      <stats period="10"/>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:    </memballoon>
Jan 29 12:23:01 np0005601226 nova_compute[239456]:  </devices>
Jan 29 12:23:01 np0005601226 nova_compute[239456]: </domain>
Jan 29 12:23:01 np0005601226 nova_compute[239456]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 29 12:23:01 np0005601226 nova_compute[239456]: 2026-01-29 17:23:01.075 239460 DEBUG nova.compute.manager [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Preparing to wait for external event network-vif-plugged-68c13a19-1abc-4771-a498-863d2d0a28b1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 29 12:23:01 np0005601226 nova_compute[239456]: 2026-01-29 17:23:01.075 239460 DEBUG oslo_concurrency.lockutils [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Acquiring lock "53e39297-e2d7-48cf-9623-7be3b0d6b2f3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:23:01 np0005601226 nova_compute[239456]: 2026-01-29 17:23:01.075 239460 DEBUG oslo_concurrency.lockutils [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Lock "53e39297-e2d7-48cf-9623-7be3b0d6b2f3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:23:01 np0005601226 nova_compute[239456]: 2026-01-29 17:23:01.076 239460 DEBUG oslo_concurrency.lockutils [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Lock "53e39297-e2d7-48cf-9623-7be3b0d6b2f3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:23:01 np0005601226 nova_compute[239456]: 2026-01-29 17:23:01.076 239460 DEBUG nova.virt.libvirt.vif [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-29T17:22:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesExtendAttachedTest-instance-1473413740',display_name='tempest-VolumesExtendAttachedTest-instance-1473413740',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesextendattachedtest-instance-1473413740',id=9,image_ref='71879218-5462-43bb-aba6-6319695b24fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK4mEJSERpPIQVK3sAMeu17EWkufBq6o1JwD5SzDGHiO4Z/qUv1iUlgJH7z4vsuw0x6/IEDJafzxQjRMypF22CDgXJIieljJTYVV7/tjKuefzCG79wHpMe/YIqW+S8UZ6A==',key_name='tempest-keypair-802154011',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='39d4847e7fda4ce1b3f82fb1983ae222',ramdisk_id='',reservation_id='r-003wsn2j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='71879218-5462-43bb-aba6-6319695b24fd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesExtendAttachedTest-736874132',owner_user_name='tempest-VolumesExtendAttachedTest-736874132-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-29T17:22:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e94c4027707149bebaa91488b942641b',uuid=53e39297-e2d7-48cf-9623-7be3b0d6b2f3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "68c13a19-1abc-4771-a498-863d2d0a28b1", "address": "fa:16:3e:41:16:71", "network": {"id": "d83e49d6-7d57-4ee1-97b8-d6cdd3bd57ed", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-566903606-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d4847e7fda4ce1b3f82fb1983ae222", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68c13a19-1a", "ovs_interfaceid": "68c13a19-1abc-4771-a498-863d2d0a28b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 29 12:23:01 np0005601226 nova_compute[239456]: 2026-01-29 17:23:01.077 239460 DEBUG nova.network.os_vif_util [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Converting VIF {"id": "68c13a19-1abc-4771-a498-863d2d0a28b1", "address": "fa:16:3e:41:16:71", "network": {"id": "d83e49d6-7d57-4ee1-97b8-d6cdd3bd57ed", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-566903606-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d4847e7fda4ce1b3f82fb1983ae222", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68c13a19-1a", "ovs_interfaceid": "68c13a19-1abc-4771-a498-863d2d0a28b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 29 12:23:01 np0005601226 nova_compute[239456]: 2026-01-29 17:23:01.077 239460 DEBUG nova.network.os_vif_util [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:41:16:71,bridge_name='br-int',has_traffic_filtering=True,id=68c13a19-1abc-4771-a498-863d2d0a28b1,network=Network(d83e49d6-7d57-4ee1-97b8-d6cdd3bd57ed),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68c13a19-1a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 29 12:23:01 np0005601226 nova_compute[239456]: 2026-01-29 17:23:01.078 239460 DEBUG os_vif [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:41:16:71,bridge_name='br-int',has_traffic_filtering=True,id=68c13a19-1abc-4771-a498-863d2d0a28b1,network=Network(d83e49d6-7d57-4ee1-97b8-d6cdd3bd57ed),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68c13a19-1a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 29 12:23:01 np0005601226 nova_compute[239456]: 2026-01-29 17:23:01.078 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:23:01 np0005601226 nova_compute[239456]: 2026-01-29 17:23:01.079 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:23:01 np0005601226 nova_compute[239456]: 2026-01-29 17:23:01.079 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 29 12:23:01 np0005601226 nova_compute[239456]: 2026-01-29 17:23:01.082 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:23:01 np0005601226 nova_compute[239456]: 2026-01-29 17:23:01.083 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap68c13a19-1a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:23:01 np0005601226 nova_compute[239456]: 2026-01-29 17:23:01.083 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap68c13a19-1a, col_values=(('external_ids', {'iface-id': '68c13a19-1abc-4771-a498-863d2d0a28b1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:41:16:71', 'vm-uuid': '53e39297-e2d7-48cf-9623-7be3b0d6b2f3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:23:01 np0005601226 nova_compute[239456]: 2026-01-29 17:23:01.084 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:23:01 np0005601226 NetworkManager[49020]: <info>  [1769707381.0855] manager: (tap68c13a19-1a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/59)
Jan 29 12:23:01 np0005601226 nova_compute[239456]: 2026-01-29 17:23:01.087 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 29 12:23:01 np0005601226 nova_compute[239456]: 2026-01-29 17:23:01.089 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:23:01 np0005601226 nova_compute[239456]: 2026-01-29 17:23:01.090 239460 INFO os_vif [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:41:16:71,bridge_name='br-int',has_traffic_filtering=True,id=68c13a19-1abc-4771-a498-863d2d0a28b1,network=Network(d83e49d6-7d57-4ee1-97b8-d6cdd3bd57ed),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68c13a19-1a')#033[00m
Jan 29 12:23:01 np0005601226 nova_compute[239456]: 2026-01-29 17:23:01.213 239460 DEBUG nova.virt.libvirt.driver [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:23:01 np0005601226 nova_compute[239456]: 2026-01-29 17:23:01.215 239460 DEBUG nova.virt.libvirt.driver [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:23:01 np0005601226 nova_compute[239456]: 2026-01-29 17:23:01.216 239460 DEBUG nova.virt.libvirt.driver [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] No VIF found with MAC fa:16:3e:41:16:71, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 29 12:23:01 np0005601226 nova_compute[239456]: 2026-01-29 17:23:01.216 239460 INFO nova.virt.libvirt.driver [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Using config drive#033[00m
Jan 29 12:23:01 np0005601226 nova_compute[239456]: 2026-01-29 17:23:01.236 239460 DEBUG nova.storage.rbd_utils [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] rbd image 53e39297-e2d7-48cf-9623-7be3b0d6b2f3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:23:01 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1262: 305 pgs: 305 active+clean; 163 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 2.4 MiB/s wr, 58 op/s
Jan 29 12:23:01 np0005601226 nova_compute[239456]: 2026-01-29 17:23:01.834 239460 INFO nova.virt.libvirt.driver [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Creating config drive at /var/lib/nova/instances/53e39297-e2d7-48cf-9623-7be3b0d6b2f3/disk.config#033[00m
Jan 29 12:23:01 np0005601226 nova_compute[239456]: 2026-01-29 17:23:01.839 239460 DEBUG oslo_concurrency.processutils [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/53e39297-e2d7-48cf-9623-7be3b0d6b2f3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9xhg5zsm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:23:01 np0005601226 nova_compute[239456]: 2026-01-29 17:23:01.960 239460 DEBUG oslo_concurrency.processutils [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/53e39297-e2d7-48cf-9623-7be3b0d6b2f3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9xhg5zsm" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:23:02 np0005601226 nova_compute[239456]: 2026-01-29 17:23:02.029 239460 DEBUG nova.storage.rbd_utils [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] rbd image 53e39297-e2d7-48cf-9623-7be3b0d6b2f3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:23:02 np0005601226 nova_compute[239456]: 2026-01-29 17:23:02.032 239460 DEBUG oslo_concurrency.processutils [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/53e39297-e2d7-48cf-9623-7be3b0d6b2f3/disk.config 53e39297-e2d7-48cf-9623-7be3b0d6b2f3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:23:02 np0005601226 nova_compute[239456]: 2026-01-29 17:23:02.045 239460 DEBUG nova.network.neutron [req-0dc0a484-563f-4fdf-b1ab-e16386bdb48e req-42606846-423c-4d12-b727-16f4f179130d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Updated VIF entry in instance network info cache for port 68c13a19-1abc-4771-a498-863d2d0a28b1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 29 12:23:02 np0005601226 nova_compute[239456]: 2026-01-29 17:23:02.046 239460 DEBUG nova.network.neutron [req-0dc0a484-563f-4fdf-b1ab-e16386bdb48e req-42606846-423c-4d12-b727-16f4f179130d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Updating instance_info_cache with network_info: [{"id": "68c13a19-1abc-4771-a498-863d2d0a28b1", "address": "fa:16:3e:41:16:71", "network": {"id": "d83e49d6-7d57-4ee1-97b8-d6cdd3bd57ed", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-566903606-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d4847e7fda4ce1b3f82fb1983ae222", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68c13a19-1a", "ovs_interfaceid": "68c13a19-1abc-4771-a498-863d2d0a28b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:23:02 np0005601226 nova_compute[239456]: 2026-01-29 17:23:02.071 239460 DEBUG oslo_concurrency.lockutils [req-0dc0a484-563f-4fdf-b1ab-e16386bdb48e req-42606846-423c-4d12-b727-16f4f179130d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Releasing lock "refresh_cache-53e39297-e2d7-48cf-9623-7be3b0d6b2f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:23:02 np0005601226 nova_compute[239456]: 2026-01-29 17:23:02.810 239460 DEBUG oslo_concurrency.lockutils [None req-799b17c7-bf7a-4b53-8947-b619d32da8e5 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Acquiring lock "ec6929dc-4a2e-4a7f-9c40-413a310539c6" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:23:02 np0005601226 nova_compute[239456]: 2026-01-29 17:23:02.811 239460 DEBUG oslo_concurrency.lockutils [None req-799b17c7-bf7a-4b53-8947-b619d32da8e5 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Lock "ec6929dc-4a2e-4a7f-9c40-413a310539c6" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:23:02 np0005601226 nova_compute[239456]: 2026-01-29 17:23:02.827 239460 INFO nova.compute.manager [None req-799b17c7-bf7a-4b53-8947-b619d32da8e5 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] Detaching volume 85b108e8-43ec-4be3-9edb-af488c14f2f7#033[00m
Jan 29 12:23:02 np0005601226 nova_compute[239456]: 2026-01-29 17:23:02.910 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:23:02 np0005601226 nova_compute[239456]: 2026-01-29 17:23:02.998 239460 INFO nova.virt.block_device [None req-799b17c7-bf7a-4b53-8947-b619d32da8e5 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] Attempting to driver detach volume 85b108e8-43ec-4be3-9edb-af488c14f2f7 from mountpoint /dev/vdb#033[00m
Jan 29 12:23:03 np0005601226 nova_compute[239456]: 2026-01-29 17:23:03.006 239460 DEBUG nova.virt.libvirt.driver [None req-799b17c7-bf7a-4b53-8947-b619d32da8e5 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Attempting to detach device vdb from instance ec6929dc-4a2e-4a7f-9c40-413a310539c6 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Jan 29 12:23:03 np0005601226 nova_compute[239456]: 2026-01-29 17:23:03.006 239460 DEBUG nova.virt.libvirt.guest [None req-799b17c7-bf7a-4b53-8947-b619d32da8e5 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] detach device xml: <disk type="network" device="disk">
Jan 29 12:23:03 np0005601226 nova_compute[239456]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 29 12:23:03 np0005601226 nova_compute[239456]:  <source protocol="rbd" name="volumes/volume-85b108e8-43ec-4be3-9edb-af488c14f2f7">
Jan 29 12:23:03 np0005601226 nova_compute[239456]:    <host name="192.168.122.100" port="6789"/>
Jan 29 12:23:03 np0005601226 nova_compute[239456]:  </source>
Jan 29 12:23:03 np0005601226 nova_compute[239456]:  <target dev="vdb" bus="virtio"/>
Jan 29 12:23:03 np0005601226 nova_compute[239456]:  <serial>85b108e8-43ec-4be3-9edb-af488c14f2f7</serial>
Jan 29 12:23:03 np0005601226 nova_compute[239456]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 29 12:23:03 np0005601226 nova_compute[239456]: </disk>
Jan 29 12:23:03 np0005601226 nova_compute[239456]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 29 12:23:03 np0005601226 nova_compute[239456]: 2026-01-29 17:23:03.239 239460 INFO nova.virt.libvirt.driver [None req-799b17c7-bf7a-4b53-8947-b619d32da8e5 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Successfully detached device vdb from instance ec6929dc-4a2e-4a7f-9c40-413a310539c6 from the persistent domain config.#033[00m
Jan 29 12:23:03 np0005601226 nova_compute[239456]: 2026-01-29 17:23:03.240 239460 DEBUG nova.virt.libvirt.driver [None req-799b17c7-bf7a-4b53-8947-b619d32da8e5 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance ec6929dc-4a2e-4a7f-9c40-413a310539c6 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Jan 29 12:23:03 np0005601226 nova_compute[239456]: 2026-01-29 17:23:03.240 239460 DEBUG nova.virt.libvirt.guest [None req-799b17c7-bf7a-4b53-8947-b619d32da8e5 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] detach device xml: <disk type="network" device="disk">
Jan 29 12:23:03 np0005601226 nova_compute[239456]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 29 12:23:03 np0005601226 nova_compute[239456]:  <source protocol="rbd" name="volumes/volume-85b108e8-43ec-4be3-9edb-af488c14f2f7">
Jan 29 12:23:03 np0005601226 nova_compute[239456]:    <host name="192.168.122.100" port="6789"/>
Jan 29 12:23:03 np0005601226 nova_compute[239456]:  </source>
Jan 29 12:23:03 np0005601226 nova_compute[239456]:  <target dev="vdb" bus="virtio"/>
Jan 29 12:23:03 np0005601226 nova_compute[239456]:  <serial>85b108e8-43ec-4be3-9edb-af488c14f2f7</serial>
Jan 29 12:23:03 np0005601226 nova_compute[239456]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 29 12:23:03 np0005601226 nova_compute[239456]: </disk>
Jan 29 12:23:03 np0005601226 nova_compute[239456]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 29 12:23:03 np0005601226 nova_compute[239456]: 2026-01-29 17:23:03.611 239460 DEBUG nova.virt.libvirt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Received event <DeviceRemovedEvent: 1769707383.6109383, ec6929dc-4a2e-4a7f-9c40-413a310539c6 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Jan 29 12:23:03 np0005601226 nova_compute[239456]: 2026-01-29 17:23:03.613 239460 DEBUG nova.virt.libvirt.driver [None req-799b17c7-bf7a-4b53-8947-b619d32da8e5 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance ec6929dc-4a2e-4a7f-9c40-413a310539c6 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Jan 29 12:23:03 np0005601226 nova_compute[239456]: 2026-01-29 17:23:03.615 239460 INFO nova.virt.libvirt.driver [None req-799b17c7-bf7a-4b53-8947-b619d32da8e5 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Successfully detached device vdb from instance ec6929dc-4a2e-4a7f-9c40-413a310539c6 from the live domain config.#033[00m
Jan 29 12:23:03 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1263: 305 pgs: 305 active+clean; 167 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 2.7 MiB/s wr, 60 op/s
Jan 29 12:23:03 np0005601226 nova_compute[239456]: 2026-01-29 17:23:03.890 239460 DEBUG oslo_concurrency.processutils [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/53e39297-e2d7-48cf-9623-7be3b0d6b2f3/disk.config 53e39297-e2d7-48cf-9623-7be3b0d6b2f3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.859s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:23:03 np0005601226 nova_compute[239456]: 2026-01-29 17:23:03.891 239460 INFO nova.virt.libvirt.driver [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Deleting local config drive /var/lib/nova/instances/53e39297-e2d7-48cf-9623-7be3b0d6b2f3/disk.config because it was imported into RBD.#033[00m
Jan 29 12:23:03 np0005601226 kernel: tap68c13a19-1a: entered promiscuous mode
Jan 29 12:23:03 np0005601226 ovn_controller[145556]: 2026-01-29T17:23:03Z|00096|binding|INFO|Claiming lport 68c13a19-1abc-4771-a498-863d2d0a28b1 for this chassis.
Jan 29 12:23:03 np0005601226 NetworkManager[49020]: <info>  [1769707383.9273] manager: (tap68c13a19-1a): new Tun device (/org/freedesktop/NetworkManager/Devices/60)
Jan 29 12:23:03 np0005601226 ovn_controller[145556]: 2026-01-29T17:23:03Z|00097|binding|INFO|68c13a19-1abc-4771-a498-863d2d0a28b1: Claiming fa:16:3e:41:16:71 10.100.0.6
Jan 29 12:23:03 np0005601226 nova_compute[239456]: 2026-01-29 17:23:03.925 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:23:03 np0005601226 ovn_controller[145556]: 2026-01-29T17:23:03Z|00098|binding|INFO|Setting lport 68c13a19-1abc-4771-a498-863d2d0a28b1 ovn-installed in OVS
Jan 29 12:23:03 np0005601226 nova_compute[239456]: 2026-01-29 17:23:03.935 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:23:03 np0005601226 ovn_controller[145556]: 2026-01-29T17:23:03Z|00099|binding|INFO|Setting lport 68c13a19-1abc-4771-a498-863d2d0a28b1 up in Southbound
Jan 29 12:23:03 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:03.937 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:41:16:71 10.100.0.6'], port_security=['fa:16:3e:41:16:71 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '53e39297-e2d7-48cf-9623-7be3b0d6b2f3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d83e49d6-7d57-4ee1-97b8-d6cdd3bd57ed', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39d4847e7fda4ce1b3f82fb1983ae222', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f1c7e2d0-096f-4267-9955-f5e2a5e57200', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2cc3333a-e4ca-4591-8e77-46aeb7e0328b, chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], logical_port=68c13a19-1abc-4771-a498-863d2d0a28b1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:23:03 np0005601226 nova_compute[239456]: 2026-01-29 17:23:03.939 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:23:03 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:03.939 155625 INFO neutron.agent.ovn.metadata.agent [-] Port 68c13a19-1abc-4771-a498-863d2d0a28b1 in datapath d83e49d6-7d57-4ee1-97b8-d6cdd3bd57ed bound to our chassis#033[00m
Jan 29 12:23:03 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:03.941 155625 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d83e49d6-7d57-4ee1-97b8-d6cdd3bd57ed#033[00m
Jan 29 12:23:03 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:03.950 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[319f5797-4801-4d8f-a912-65786aa46925]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:23:03 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:03.950 155625 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd83e49d6-71 in ovnmeta-d83e49d6-7d57-4ee1-97b8-d6cdd3bd57ed namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 29 12:23:03 np0005601226 systemd-udevd[256480]: Network interface NamePolicy= disabled on kernel command line.
Jan 29 12:23:03 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:03.952 246354 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd83e49d6-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 29 12:23:03 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:03.952 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[3107fa1e-794d-408e-ac81-8a40316593ae]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:23:03 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:03.953 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[098126b4-27c8-402d-a14c-2b867758370a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:23:03 np0005601226 systemd-machined[207561]: New machine qemu-9-instance-00000009.
Jan 29 12:23:03 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:03.961 156164 DEBUG oslo.privsep.daemon [-] privsep: reply[badb1010-a3da-4a80-9f57-dee4d38783ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:23:03 np0005601226 NetworkManager[49020]: <info>  [1769707383.9632] device (tap68c13a19-1a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 29 12:23:03 np0005601226 NetworkManager[49020]: <info>  [1769707383.9640] device (tap68c13a19-1a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 29 12:23:03 np0005601226 systemd[1]: Started Virtual Machine qemu-9-instance-00000009.
Jan 29 12:23:03 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:03.972 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[8fa35190-173d-446b-b033-33dc9cc7a8cb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:23:03 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:03.994 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[85c32306-d6d4-4fd4-8e28-8aef6d4221af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:23:03 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:03.998 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[35579ade-1fb5-4a6a-be01-6d065c0d6a65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:23:03 np0005601226 NetworkManager[49020]: <info>  [1769707383.9995] manager: (tapd83e49d6-70): new Veth device (/org/freedesktop/NetworkManager/Devices/61)
Jan 29 12:23:03 np0005601226 systemd-udevd[256484]: Network interface NamePolicy= disabled on kernel command line.
Jan 29 12:23:04 np0005601226 nova_compute[239456]: 2026-01-29 17:23:04.000 239460 DEBUG nova.objects.instance [None req-799b17c7-bf7a-4b53-8947-b619d32da8e5 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Lazy-loading 'flavor' on Instance uuid ec6929dc-4a2e-4a7f-9c40-413a310539c6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:23:04 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:04.019 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[8a2f8d1d-d154-463f-9c59-885e2f509505]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:23:04 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:04.021 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[b553ceb5-6e76-488a-a6ef-e0d913bb5dbd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:23:04 np0005601226 nova_compute[239456]: 2026-01-29 17:23:04.031 239460 DEBUG oslo_concurrency.lockutils [None req-799b17c7-bf7a-4b53-8947-b619d32da8e5 d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Lock "ec6929dc-4a2e-4a7f-9c40-413a310539c6" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.219s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:23:04 np0005601226 NetworkManager[49020]: <info>  [1769707384.0413] device (tapd83e49d6-70): carrier: link connected
Jan 29 12:23:04 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:04.045 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[56182d84-9b90-479c-85ec-8ee049f67d8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:23:04 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:04.058 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[563e3260-db89-4af3-8935-6ce8c757e700]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd83e49d6-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:91:48:b4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 473754, 'reachable_time': 36480, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256513, 'error': None, 'target': 'ovnmeta-d83e49d6-7d57-4ee1-97b8-d6cdd3bd57ed', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:23:04 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:04.067 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[06d4d1c2-bb4b-4ff9-a9c8-10ad12894983]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe91:48b4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 473754, 'tstamp': 473754}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 256514, 'error': None, 'target': 'ovnmeta-d83e49d6-7d57-4ee1-97b8-d6cdd3bd57ed', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:23:04 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:04.081 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[6a434ee8-60b4-4f28-a48f-a181b6276710]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd83e49d6-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:91:48:b4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 473754, 'reachable_time': 36480, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 256515, 'error': None, 'target': 'ovnmeta-d83e49d6-7d57-4ee1-97b8-d6cdd3bd57ed', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:23:04 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:04.104 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[9c185667-f69c-476c-85fb-1c32adb93d55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:23:04 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:04.146 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[05895a15-7ae3-4864-bbb0-9f7e5a34dab1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:23:04 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:04.148 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd83e49d6-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:23:04 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:04.148 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 29 12:23:04 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:04.148 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd83e49d6-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:23:04 np0005601226 nova_compute[239456]: 2026-01-29 17:23:04.150 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:23:04 np0005601226 NetworkManager[49020]: <info>  [1769707384.1509] manager: (tapd83e49d6-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/62)
Jan 29 12:23:04 np0005601226 kernel: tapd83e49d6-70: entered promiscuous mode
Jan 29 12:23:04 np0005601226 nova_compute[239456]: 2026-01-29 17:23:04.152 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:23:04 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:04.153 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd83e49d6-70, col_values=(('external_ids', {'iface-id': 'e5e51a19-78a0-418e-aee9-4f13a958b558'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:23:04 np0005601226 nova_compute[239456]: 2026-01-29 17:23:04.154 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:23:04 np0005601226 ovn_controller[145556]: 2026-01-29T17:23:04Z|00100|binding|INFO|Releasing lport e5e51a19-78a0-418e-aee9-4f13a958b558 from this chassis (sb_readonly=0)
Jan 29 12:23:04 np0005601226 nova_compute[239456]: 2026-01-29 17:23:04.159 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:23:04 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:04.160 155625 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d83e49d6-7d57-4ee1-97b8-d6cdd3bd57ed.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d83e49d6-7d57-4ee1-97b8-d6cdd3bd57ed.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 29 12:23:04 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:04.160 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[9ef9b3c7-5062-4bed-b144-e3b506992123]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:23:04 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:04.161 155625 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 29 12:23:04 np0005601226 ovn_metadata_agent[155620]: global
Jan 29 12:23:04 np0005601226 ovn_metadata_agent[155620]:    log         /dev/log local0 debug
Jan 29 12:23:04 np0005601226 ovn_metadata_agent[155620]:    log-tag     haproxy-metadata-proxy-d83e49d6-7d57-4ee1-97b8-d6cdd3bd57ed
Jan 29 12:23:04 np0005601226 ovn_metadata_agent[155620]:    user        root
Jan 29 12:23:04 np0005601226 ovn_metadata_agent[155620]:    group       root
Jan 29 12:23:04 np0005601226 ovn_metadata_agent[155620]:    maxconn     1024
Jan 29 12:23:04 np0005601226 ovn_metadata_agent[155620]:    pidfile     /var/lib/neutron/external/pids/d83e49d6-7d57-4ee1-97b8-d6cdd3bd57ed.pid.haproxy
Jan 29 12:23:04 np0005601226 ovn_metadata_agent[155620]:    daemon
Jan 29 12:23:04 np0005601226 ovn_metadata_agent[155620]: 
Jan 29 12:23:04 np0005601226 ovn_metadata_agent[155620]: defaults
Jan 29 12:23:04 np0005601226 ovn_metadata_agent[155620]:    log global
Jan 29 12:23:04 np0005601226 ovn_metadata_agent[155620]:    mode http
Jan 29 12:23:04 np0005601226 ovn_metadata_agent[155620]:    option httplog
Jan 29 12:23:04 np0005601226 ovn_metadata_agent[155620]:    option dontlognull
Jan 29 12:23:04 np0005601226 ovn_metadata_agent[155620]:    option http-server-close
Jan 29 12:23:04 np0005601226 ovn_metadata_agent[155620]:    option forwardfor
Jan 29 12:23:04 np0005601226 ovn_metadata_agent[155620]:    retries                 3
Jan 29 12:23:04 np0005601226 ovn_metadata_agent[155620]:    timeout http-request    30s
Jan 29 12:23:04 np0005601226 ovn_metadata_agent[155620]:    timeout connect         30s
Jan 29 12:23:04 np0005601226 ovn_metadata_agent[155620]:    timeout client          32s
Jan 29 12:23:04 np0005601226 ovn_metadata_agent[155620]:    timeout server          32s
Jan 29 12:23:04 np0005601226 ovn_metadata_agent[155620]:    timeout http-keep-alive 30s
Jan 29 12:23:04 np0005601226 ovn_metadata_agent[155620]: 
Jan 29 12:23:04 np0005601226 ovn_metadata_agent[155620]: 
Jan 29 12:23:04 np0005601226 ovn_metadata_agent[155620]: listen listener
Jan 29 12:23:04 np0005601226 ovn_metadata_agent[155620]:    bind 169.254.169.254:80
Jan 29 12:23:04 np0005601226 ovn_metadata_agent[155620]:    server metadata /var/lib/neutron/metadata_proxy
Jan 29 12:23:04 np0005601226 ovn_metadata_agent[155620]:    http-request add-header X-OVN-Network-ID d83e49d6-7d57-4ee1-97b8-d6cdd3bd57ed
Jan 29 12:23:04 np0005601226 ovn_metadata_agent[155620]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 29 12:23:04 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:04.162 155625 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d83e49d6-7d57-4ee1-97b8-d6cdd3bd57ed', 'env', 'PROCESS_TAG=haproxy-d83e49d6-7d57-4ee1-97b8-d6cdd3bd57ed', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d83e49d6-7d57-4ee1-97b8-d6cdd3bd57ed.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 29 12:23:04 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:23:04 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e270 do_prune osdmap full prune enabled
Jan 29 12:23:04 np0005601226 podman[256581]: 2026-01-29 17:23:04.481698752 +0000 UTC m=+0.020989520 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 29 12:23:04 np0005601226 nova_compute[239456]: 2026-01-29 17:23:04.683 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769707384.6833766, 53e39297-e2d7-48cf-9623-7be3b0d6b2f3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:23:04 np0005601226 nova_compute[239456]: 2026-01-29 17:23:04.684 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] VM Started (Lifecycle Event)#033[00m
Jan 29 12:23:04 np0005601226 nova_compute[239456]: 2026-01-29 17:23:04.707 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:23:04 np0005601226 nova_compute[239456]: 2026-01-29 17:23:04.710 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769707384.6835709, 53e39297-e2d7-48cf-9623-7be3b0d6b2f3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:23:04 np0005601226 nova_compute[239456]: 2026-01-29 17:23:04.710 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] VM Paused (Lifecycle Event)#033[00m
Jan 29 12:23:04 np0005601226 nova_compute[239456]: 2026-01-29 17:23:04.734 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:23:04 np0005601226 nova_compute[239456]: 2026-01-29 17:23:04.736 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 29 12:23:04 np0005601226 nova_compute[239456]: 2026-01-29 17:23:04.757 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 29 12:23:05 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e271 e271: 3 total, 3 up, 3 in
Jan 29 12:23:05 np0005601226 nova_compute[239456]: 2026-01-29 17:23:05.235 239460 DEBUG oslo_concurrency.lockutils [None req-dd1e2dbc-29a9-410b-92ad-0edd2bef1f2e d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Acquiring lock "ec6929dc-4a2e-4a7f-9c40-413a310539c6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:23:05 np0005601226 nova_compute[239456]: 2026-01-29 17:23:05.235 239460 DEBUG oslo_concurrency.lockutils [None req-dd1e2dbc-29a9-410b-92ad-0edd2bef1f2e d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Lock "ec6929dc-4a2e-4a7f-9c40-413a310539c6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:23:05 np0005601226 nova_compute[239456]: 2026-01-29 17:23:05.236 239460 DEBUG oslo_concurrency.lockutils [None req-dd1e2dbc-29a9-410b-92ad-0edd2bef1f2e d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Acquiring lock "ec6929dc-4a2e-4a7f-9c40-413a310539c6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:23:05 np0005601226 nova_compute[239456]: 2026-01-29 17:23:05.236 239460 DEBUG oslo_concurrency.lockutils [None req-dd1e2dbc-29a9-410b-92ad-0edd2bef1f2e d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Lock "ec6929dc-4a2e-4a7f-9c40-413a310539c6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:23:05 np0005601226 nova_compute[239456]: 2026-01-29 17:23:05.236 239460 DEBUG oslo_concurrency.lockutils [None req-dd1e2dbc-29a9-410b-92ad-0edd2bef1f2e d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Lock "ec6929dc-4a2e-4a7f-9c40-413a310539c6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:23:05 np0005601226 nova_compute[239456]: 2026-01-29 17:23:05.237 239460 INFO nova.compute.manager [None req-dd1e2dbc-29a9-410b-92ad-0edd2bef1f2e d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] Terminating instance#033[00m
Jan 29 12:23:05 np0005601226 nova_compute[239456]: 2026-01-29 17:23:05.238 239460 DEBUG nova.compute.manager [None req-dd1e2dbc-29a9-410b-92ad-0edd2bef1f2e d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 29 12:23:05 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e271: 3 total, 3 up, 3 in
Jan 29 12:23:05 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1265: 305 pgs: 305 active+clean; 167 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 2.7 MiB/s wr, 89 op/s
Jan 29 12:23:05 np0005601226 podman[256581]: 2026-01-29 17:23:05.754164824 +0000 UTC m=+1.293455562 container create 62b03d809e250806656608c0e22c0604c976b45ca81fe6bc9aff189d19b4420c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d83e49d6-7d57-4ee1-97b8-d6cdd3bd57ed, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 29 12:23:05 np0005601226 systemd[1]: Started libpod-conmon-62b03d809e250806656608c0e22c0604c976b45ca81fe6bc9aff189d19b4420c.scope.
Jan 29 12:23:05 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:23:05 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b1884ecd61cb8c2276768d945ca65e9a0b2b6bf285ea01cff9049a143e6d636/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 29 12:23:06 np0005601226 kernel: tapbf0c91eb-51 (unregistering): left promiscuous mode
Jan 29 12:23:06 np0005601226 NetworkManager[49020]: <info>  [1769707386.0279] device (tapbf0c91eb-51): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 29 12:23:06 np0005601226 ovn_controller[145556]: 2026-01-29T17:23:06Z|00101|binding|INFO|Releasing lport bf0c91eb-51aa-4985-9952-a05bb97d14ab from this chassis (sb_readonly=0)
Jan 29 12:23:06 np0005601226 ovn_controller[145556]: 2026-01-29T17:23:06Z|00102|binding|INFO|Setting lport bf0c91eb-51aa-4985-9952-a05bb97d14ab down in Southbound
Jan 29 12:23:06 np0005601226 ovn_controller[145556]: 2026-01-29T17:23:06Z|00103|binding|INFO|Removing iface tapbf0c91eb-51 ovn-installed in OVS
Jan 29 12:23:06 np0005601226 nova_compute[239456]: 2026-01-29 17:23:06.032 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:23:06 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:06.037 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0f:2f:be 10.100.0.7'], port_security=['fa:16:3e:0f:2f:be 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'ec6929dc-4a2e-4a7f-9c40-413a310539c6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-765ab7c4-f6eb-4a45-8c1b-00dc61ad3441', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '815af3cf993b45cc8f2cdf73bf1d552c', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cc045e1c-80f9-47d6-8732-e3ba625b91d8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.219'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3ddf8c3b-2084-4923-8e76-31ca07b64cbd, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], logical_port=bf0c91eb-51aa-4985-9952-a05bb97d14ab) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:23:06 np0005601226 nova_compute[239456]: 2026-01-29 17:23:06.043 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:23:06 np0005601226 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Deactivated successfully.
Jan 29 12:23:06 np0005601226 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Consumed 13.911s CPU time.
Jan 29 12:23:06 np0005601226 systemd-machined[207561]: Machine qemu-8-instance-00000008 terminated.
Jan 29 12:23:06 np0005601226 nova_compute[239456]: 2026-01-29 17:23:06.085 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:23:06 np0005601226 podman[256581]: 2026-01-29 17:23:06.090316621 +0000 UTC m=+1.629607369 container init 62b03d809e250806656608c0e22c0604c976b45ca81fe6bc9aff189d19b4420c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d83e49d6-7d57-4ee1-97b8-d6cdd3bd57ed, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 29 12:23:06 np0005601226 podman[256581]: 2026-01-29 17:23:06.095622445 +0000 UTC m=+1.634913173 container start 62b03d809e250806656608c0e22c0604c976b45ca81fe6bc9aff189d19b4420c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d83e49d6-7d57-4ee1-97b8-d6cdd3bd57ed, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Jan 29 12:23:06 np0005601226 neutron-haproxy-ovnmeta-d83e49d6-7d57-4ee1-97b8-d6cdd3bd57ed[256602]: [NOTICE]   (256611) : New worker (256613) forked
Jan 29 12:23:06 np0005601226 neutron-haproxy-ovnmeta-d83e49d6-7d57-4ee1-97b8-d6cdd3bd57ed[256602]: [NOTICE]   (256611) : Loading success.
Jan 29 12:23:06 np0005601226 nova_compute[239456]: 2026-01-29 17:23:06.273 239460 INFO nova.virt.libvirt.driver [-] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] Instance destroyed successfully.#033[00m
Jan 29 12:23:06 np0005601226 nova_compute[239456]: 2026-01-29 17:23:06.274 239460 DEBUG nova.objects.instance [None req-dd1e2dbc-29a9-410b-92ad-0edd2bef1f2e d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Lazy-loading 'resources' on Instance uuid ec6929dc-4a2e-4a7f-9c40-413a310539c6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:23:06 np0005601226 nova_compute[239456]: 2026-01-29 17:23:06.289 239460 DEBUG nova.virt.libvirt.vif [None req-dd1e2dbc-29a9-410b-92ad-0edd2bef1f2e d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-29T17:22:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-1982204810',display_name='tempest-VolumesBackupsTest-instance-1982204810',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-1982204810',id=8,image_ref='71879218-5462-43bb-aba6-6319695b24fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFig0RQ/1n7VfpfiFNE7GVFgBviP+LoGnNX+IrMAXhnF2bhHyaJz7sbYMOMXONeJP7S+Y3ZggjQCfeRI5OI3KuMILvXKYWprzYr93gmRI1/mhd/h9dDbo0WiH0640Qup6w==',key_name='tempest-keypair-528150633',keypairs=<?>,launch_index=0,launched_at=2026-01-29T17:22:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='815af3cf993b45cc8f2cdf73bf1d552c',ramdisk_id='',reservation_id='r-xbbmdpvu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='71879218-5462-43bb-aba6-6319695b24fd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesBackupsTest-2142983406',owner_user_name='tempest-VolumesBackupsTest-2142983406-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-29T17:22:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d3463a84af564b968e67b687bc895548',uuid=ec6929dc-4a2e-4a7f-9c40-413a310539c6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bf0c91eb-51aa-4985-9952-a05bb97d14ab", "address": "fa:16:3e:0f:2f:be", "network": {"id": "765ab7c4-f6eb-4a45-8c1b-00dc61ad3441", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-718755595-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "815af3cf993b45cc8f2cdf73bf1d552c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0c91eb-51", "ovs_interfaceid": "bf0c91eb-51aa-4985-9952-a05bb97d14ab", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 29 12:23:06 np0005601226 nova_compute[239456]: 2026-01-29 17:23:06.290 239460 DEBUG nova.network.os_vif_util [None req-dd1e2dbc-29a9-410b-92ad-0edd2bef1f2e d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Converting VIF {"id": "bf0c91eb-51aa-4985-9952-a05bb97d14ab", "address": "fa:16:3e:0f:2f:be", "network": {"id": "765ab7c4-f6eb-4a45-8c1b-00dc61ad3441", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-718755595-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "815af3cf993b45cc8f2cdf73bf1d552c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf0c91eb-51", "ovs_interfaceid": "bf0c91eb-51aa-4985-9952-a05bb97d14ab", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 29 12:23:06 np0005601226 nova_compute[239456]: 2026-01-29 17:23:06.290 239460 DEBUG nova.network.os_vif_util [None req-dd1e2dbc-29a9-410b-92ad-0edd2bef1f2e d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0f:2f:be,bridge_name='br-int',has_traffic_filtering=True,id=bf0c91eb-51aa-4985-9952-a05bb97d14ab,network=Network(765ab7c4-f6eb-4a45-8c1b-00dc61ad3441),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf0c91eb-51') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 29 12:23:06 np0005601226 nova_compute[239456]: 2026-01-29 17:23:06.291 239460 DEBUG os_vif [None req-dd1e2dbc-29a9-410b-92ad-0edd2bef1f2e d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0f:2f:be,bridge_name='br-int',has_traffic_filtering=True,id=bf0c91eb-51aa-4985-9952-a05bb97d14ab,network=Network(765ab7c4-f6eb-4a45-8c1b-00dc61ad3441),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf0c91eb-51') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 29 12:23:06 np0005601226 nova_compute[239456]: 2026-01-29 17:23:06.292 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:23:06 np0005601226 nova_compute[239456]: 2026-01-29 17:23:06.292 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf0c91eb-51, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:23:06 np0005601226 nova_compute[239456]: 2026-01-29 17:23:06.294 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:23:06 np0005601226 nova_compute[239456]: 2026-01-29 17:23:06.295 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 29 12:23:06 np0005601226 nova_compute[239456]: 2026-01-29 17:23:06.296 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:23:06 np0005601226 nova_compute[239456]: 2026-01-29 17:23:06.298 239460 INFO os_vif [None req-dd1e2dbc-29a9-410b-92ad-0edd2bef1f2e d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0f:2f:be,bridge_name='br-int',has_traffic_filtering=True,id=bf0c91eb-51aa-4985-9952-a05bb97d14ab,network=Network(765ab7c4-f6eb-4a45-8c1b-00dc61ad3441),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf0c91eb-51')#033[00m
Jan 29 12:23:06 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:06.340 155625 INFO neutron.agent.ovn.metadata.agent [-] Port bf0c91eb-51aa-4985-9952-a05bb97d14ab in datapath 765ab7c4-f6eb-4a45-8c1b-00dc61ad3441 unbound from our chassis#033[00m
Jan 29 12:23:06 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:06.342 155625 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 765ab7c4-f6eb-4a45-8c1b-00dc61ad3441, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 29 12:23:06 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:06.343 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[5f8d41ea-47f4-4c0e-a928-48f66a9e4e90]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:23:06 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:06.344 155625 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-765ab7c4-f6eb-4a45-8c1b-00dc61ad3441 namespace which is not needed anymore#033[00m
Jan 29 12:23:06 np0005601226 neutron-haproxy-ovnmeta-765ab7c4-f6eb-4a45-8c1b-00dc61ad3441[255868]: [NOTICE]   (255872) : haproxy version is 2.8.14-c23fe91
Jan 29 12:23:06 np0005601226 neutron-haproxy-ovnmeta-765ab7c4-f6eb-4a45-8c1b-00dc61ad3441[255868]: [NOTICE]   (255872) : path to executable is /usr/sbin/haproxy
Jan 29 12:23:06 np0005601226 neutron-haproxy-ovnmeta-765ab7c4-f6eb-4a45-8c1b-00dc61ad3441[255868]: [WARNING]  (255872) : Exiting Master process...
Jan 29 12:23:06 np0005601226 neutron-haproxy-ovnmeta-765ab7c4-f6eb-4a45-8c1b-00dc61ad3441[255868]: [ALERT]    (255872) : Current worker (255874) exited with code 143 (Terminated)
Jan 29 12:23:06 np0005601226 neutron-haproxy-ovnmeta-765ab7c4-f6eb-4a45-8c1b-00dc61ad3441[255868]: [WARNING]  (255872) : All workers exited. Exiting... (0)
Jan 29 12:23:06 np0005601226 systemd[1]: libpod-a133d327b21b6f68a45b27b2e35d1159fb3aaa908a1883e56b39aede09a59114.scope: Deactivated successfully.
Jan 29 12:23:06 np0005601226 podman[256669]: 2026-01-29 17:23:06.708897419 +0000 UTC m=+0.291619440 container died a133d327b21b6f68a45b27b2e35d1159fb3aaa908a1883e56b39aede09a59114 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-765ab7c4-f6eb-4a45-8c1b-00dc61ad3441, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 29 12:23:07 np0005601226 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a133d327b21b6f68a45b27b2e35d1159fb3aaa908a1883e56b39aede09a59114-userdata-shm.mount: Deactivated successfully.
Jan 29 12:23:07 np0005601226 systemd[1]: var-lib-containers-storage-overlay-a2f2106ffd3e8d1e83738348846eb8c9e7a753af315209680f2d4e80cc748a42-merged.mount: Deactivated successfully.
Jan 29 12:23:07 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1266: 305 pgs: 305 active+clean; 167 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 527 KiB/s wr, 36 op/s
Jan 29 12:23:07 np0005601226 nova_compute[239456]: 2026-01-29 17:23:07.931 239460 DEBUG nova.compute.manager [req-d7014576-0a04-4ee9-8882-2256c8e0e14c req-58cbfd74-9e3c-42fc-a256-738f439f033b 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] Received event network-vif-unplugged-bf0c91eb-51aa-4985-9952-a05bb97d14ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:23:07 np0005601226 nova_compute[239456]: 2026-01-29 17:23:07.931 239460 DEBUG oslo_concurrency.lockutils [req-d7014576-0a04-4ee9-8882-2256c8e0e14c req-58cbfd74-9e3c-42fc-a256-738f439f033b 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "ec6929dc-4a2e-4a7f-9c40-413a310539c6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:23:07 np0005601226 nova_compute[239456]: 2026-01-29 17:23:07.932 239460 DEBUG oslo_concurrency.lockutils [req-d7014576-0a04-4ee9-8882-2256c8e0e14c req-58cbfd74-9e3c-42fc-a256-738f439f033b 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "ec6929dc-4a2e-4a7f-9c40-413a310539c6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:23:07 np0005601226 nova_compute[239456]: 2026-01-29 17:23:07.932 239460 DEBUG oslo_concurrency.lockutils [req-d7014576-0a04-4ee9-8882-2256c8e0e14c req-58cbfd74-9e3c-42fc-a256-738f439f033b 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "ec6929dc-4a2e-4a7f-9c40-413a310539c6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:23:07 np0005601226 nova_compute[239456]: 2026-01-29 17:23:07.932 239460 DEBUG nova.compute.manager [req-d7014576-0a04-4ee9-8882-2256c8e0e14c req-58cbfd74-9e3c-42fc-a256-738f439f033b 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] No waiting events found dispatching network-vif-unplugged-bf0c91eb-51aa-4985-9952-a05bb97d14ab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 29 12:23:07 np0005601226 nova_compute[239456]: 2026-01-29 17:23:07.933 239460 DEBUG nova.compute.manager [req-d7014576-0a04-4ee9-8882-2256c8e0e14c req-58cbfd74-9e3c-42fc-a256-738f439f033b 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] Received event network-vif-unplugged-bf0c91eb-51aa-4985-9952-a05bb97d14ab for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 29 12:23:08 np0005601226 podman[256669]: 2026-01-29 17:23:08.187679527 +0000 UTC m=+1.770401548 container cleanup a133d327b21b6f68a45b27b2e35d1159fb3aaa908a1883e56b39aede09a59114 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-765ab7c4-f6eb-4a45-8c1b-00dc61ad3441, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 29 12:23:08 np0005601226 systemd[1]: libpod-conmon-a133d327b21b6f68a45b27b2e35d1159fb3aaa908a1883e56b39aede09a59114.scope: Deactivated successfully.
Jan 29 12:23:08 np0005601226 nova_compute[239456]: 2026-01-29 17:23:08.819 239460 DEBUG nova.compute.manager [req-c40aa8ee-76cf-4a26-8795-a5259dbbefc1 req-ff51dba9-3f42-4633-ac1d-149ae61387c2 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Received event network-vif-plugged-68c13a19-1abc-4771-a498-863d2d0a28b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:23:08 np0005601226 nova_compute[239456]: 2026-01-29 17:23:08.819 239460 DEBUG oslo_concurrency.lockutils [req-c40aa8ee-76cf-4a26-8795-a5259dbbefc1 req-ff51dba9-3f42-4633-ac1d-149ae61387c2 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "53e39297-e2d7-48cf-9623-7be3b0d6b2f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:23:08 np0005601226 nova_compute[239456]: 2026-01-29 17:23:08.819 239460 DEBUG oslo_concurrency.lockutils [req-c40aa8ee-76cf-4a26-8795-a5259dbbefc1 req-ff51dba9-3f42-4633-ac1d-149ae61387c2 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "53e39297-e2d7-48cf-9623-7be3b0d6b2f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:23:08 np0005601226 nova_compute[239456]: 2026-01-29 17:23:08.819 239460 DEBUG oslo_concurrency.lockutils [req-c40aa8ee-76cf-4a26-8795-a5259dbbefc1 req-ff51dba9-3f42-4633-ac1d-149ae61387c2 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "53e39297-e2d7-48cf-9623-7be3b0d6b2f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:23:08 np0005601226 nova_compute[239456]: 2026-01-29 17:23:08.819 239460 DEBUG nova.compute.manager [req-c40aa8ee-76cf-4a26-8795-a5259dbbefc1 req-ff51dba9-3f42-4633-ac1d-149ae61387c2 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Processing event network-vif-plugged-68c13a19-1abc-4771-a498-863d2d0a28b1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 29 12:23:08 np0005601226 nova_compute[239456]: 2026-01-29 17:23:08.820 239460 DEBUG nova.compute.manager [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 29 12:23:08 np0005601226 nova_compute[239456]: 2026-01-29 17:23:08.823 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769707388.8233054, 53e39297-e2d7-48cf-9623-7be3b0d6b2f3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:23:08 np0005601226 nova_compute[239456]: 2026-01-29 17:23:08.823 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] VM Resumed (Lifecycle Event)#033[00m
Jan 29 12:23:08 np0005601226 nova_compute[239456]: 2026-01-29 17:23:08.825 239460 DEBUG nova.virt.libvirt.driver [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 29 12:23:08 np0005601226 nova_compute[239456]: 2026-01-29 17:23:08.827 239460 INFO nova.virt.libvirt.driver [-] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Instance spawned successfully.#033[00m
Jan 29 12:23:08 np0005601226 nova_compute[239456]: 2026-01-29 17:23:08.827 239460 DEBUG nova.virt.libvirt.driver [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 29 12:23:08 np0005601226 nova_compute[239456]: 2026-01-29 17:23:08.843 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:23:08 np0005601226 nova_compute[239456]: 2026-01-29 17:23:08.849 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 29 12:23:08 np0005601226 nova_compute[239456]: 2026-01-29 17:23:08.852 239460 DEBUG nova.virt.libvirt.driver [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:23:08 np0005601226 nova_compute[239456]: 2026-01-29 17:23:08.853 239460 DEBUG nova.virt.libvirt.driver [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:23:08 np0005601226 nova_compute[239456]: 2026-01-29 17:23:08.853 239460 DEBUG nova.virt.libvirt.driver [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:23:08 np0005601226 nova_compute[239456]: 2026-01-29 17:23:08.854 239460 DEBUG nova.virt.libvirt.driver [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:23:08 np0005601226 nova_compute[239456]: 2026-01-29 17:23:08.854 239460 DEBUG nova.virt.libvirt.driver [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:23:08 np0005601226 nova_compute[239456]: 2026-01-29 17:23:08.854 239460 DEBUG nova.virt.libvirt.driver [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:23:08 np0005601226 nova_compute[239456]: 2026-01-29 17:23:08.886 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 29 12:23:08 np0005601226 nova_compute[239456]: 2026-01-29 17:23:08.919 239460 INFO nova.compute.manager [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Took 12.60 seconds to spawn the instance on the hypervisor.#033[00m
Jan 29 12:23:08 np0005601226 nova_compute[239456]: 2026-01-29 17:23:08.919 239460 DEBUG nova.compute.manager [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:23:08 np0005601226 nova_compute[239456]: 2026-01-29 17:23:08.948 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:23:08 np0005601226 nova_compute[239456]: 2026-01-29 17:23:08.982 239460 INFO nova.compute.manager [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Took 13.79 seconds to build instance.#033[00m
Jan 29 12:23:08 np0005601226 nova_compute[239456]: 2026-01-29 17:23:08.999 239460 DEBUG oslo_concurrency.lockutils [None req-84fe1373-9a7d-4006-bee5-b46a63abb29f e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Lock "53e39297-e2d7-48cf-9623-7be3b0d6b2f3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.899s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:23:09 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1267: 305 pgs: 305 active+clean; 167 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 428 KiB/s wr, 47 op/s
Jan 29 12:23:09 np0005601226 podman[256700]: 2026-01-29 17:23:09.907265466 +0000 UTC m=+1.704539011 container remove a133d327b21b6f68a45b27b2e35d1159fb3aaa908a1883e56b39aede09a59114 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-765ab7c4-f6eb-4a45-8c1b-00dc61ad3441, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Jan 29 12:23:09 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:09.912 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[9d98d4a2-3251-49e4-97de-d02044cdba81]: (4, ('Thu Jan 29 05:23:06 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-765ab7c4-f6eb-4a45-8c1b-00dc61ad3441 (a133d327b21b6f68a45b27b2e35d1159fb3aaa908a1883e56b39aede09a59114)\na133d327b21b6f68a45b27b2e35d1159fb3aaa908a1883e56b39aede09a59114\nThu Jan 29 05:23:08 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-765ab7c4-f6eb-4a45-8c1b-00dc61ad3441 (a133d327b21b6f68a45b27b2e35d1159fb3aaa908a1883e56b39aede09a59114)\na133d327b21b6f68a45b27b2e35d1159fb3aaa908a1883e56b39aede09a59114\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:23:09 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:09.913 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[d3e70882-cef0-4029-9a31-cfeb01fa4a45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:23:09 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:09.914 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap765ab7c4-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:23:09 np0005601226 kernel: tap765ab7c4-f0: left promiscuous mode
Jan 29 12:23:09 np0005601226 nova_compute[239456]: 2026-01-29 17:23:09.915 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:23:09 np0005601226 nova_compute[239456]: 2026-01-29 17:23:09.924 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:23:09 np0005601226 nova_compute[239456]: 2026-01-29 17:23:09.925 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:23:09 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:09.926 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[b627d2b7-6e28-433c-8c24-f8df2fbab71b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:23:09 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:09.940 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[7bbe54ff-7b42-4b11-9930-0aef75e560f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:23:09 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:09.941 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[5898b270-d019-4b7a-a9e7-e3987d0e0c3d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:23:09 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:09.953 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[6ee434cf-44e5-4a1d-9c6e-b4a7f0637425]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 468636, 'reachable_time': 24616, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256712, 'error': None, 'target': 'ovnmeta-765ab7c4-f6eb-4a45-8c1b-00dc61ad3441', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:23:09 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:09.956 156164 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-765ab7c4-f6eb-4a45-8c1b-00dc61ad3441 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 29 12:23:09 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:09.956 156164 DEBUG oslo.privsep.daemon [-] privsep: reply[3eb3f8da-c492-4062-a24e-de1e8bf36db9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:23:09 np0005601226 systemd[1]: run-netns-ovnmeta\x2d765ab7c4\x2df6eb\x2d4a45\x2d8c1b\x2d00dc61ad3441.mount: Deactivated successfully.
Jan 29 12:23:10 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:23:10 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e271 do_prune osdmap full prune enabled
Jan 29 12:23:10 np0005601226 nova_compute[239456]: 2026-01-29 17:23:10.176 239460 DEBUG nova.compute.manager [req-6892a8a0-4848-41e5-bc7e-eaca9a62bac8 req-29671ef6-5ef8-4325-ae5d-0fd8fae37e6e 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] Received event network-vif-plugged-bf0c91eb-51aa-4985-9952-a05bb97d14ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:23:10 np0005601226 nova_compute[239456]: 2026-01-29 17:23:10.176 239460 DEBUG oslo_concurrency.lockutils [req-6892a8a0-4848-41e5-bc7e-eaca9a62bac8 req-29671ef6-5ef8-4325-ae5d-0fd8fae37e6e 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "ec6929dc-4a2e-4a7f-9c40-413a310539c6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:23:10 np0005601226 nova_compute[239456]: 2026-01-29 17:23:10.177 239460 DEBUG oslo_concurrency.lockutils [req-6892a8a0-4848-41e5-bc7e-eaca9a62bac8 req-29671ef6-5ef8-4325-ae5d-0fd8fae37e6e 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "ec6929dc-4a2e-4a7f-9c40-413a310539c6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:23:10 np0005601226 nova_compute[239456]: 2026-01-29 17:23:10.177 239460 DEBUG oslo_concurrency.lockutils [req-6892a8a0-4848-41e5-bc7e-eaca9a62bac8 req-29671ef6-5ef8-4325-ae5d-0fd8fae37e6e 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "ec6929dc-4a2e-4a7f-9c40-413a310539c6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:23:10 np0005601226 nova_compute[239456]: 2026-01-29 17:23:10.177 239460 DEBUG nova.compute.manager [req-6892a8a0-4848-41e5-bc7e-eaca9a62bac8 req-29671ef6-5ef8-4325-ae5d-0fd8fae37e6e 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] No waiting events found dispatching network-vif-plugged-bf0c91eb-51aa-4985-9952-a05bb97d14ab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 29 12:23:10 np0005601226 nova_compute[239456]: 2026-01-29 17:23:10.177 239460 WARNING nova.compute.manager [req-6892a8a0-4848-41e5-bc7e-eaca9a62bac8 req-29671ef6-5ef8-4325-ae5d-0fd8fae37e6e 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] Received unexpected event network-vif-plugged-bf0c91eb-51aa-4985-9952-a05bb97d14ab for instance with vm_state active and task_state deleting.#033[00m
Jan 29 12:23:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:23:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:23:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:23:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:23:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:23:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:23:10 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e272 e272: 3 total, 3 up, 3 in
Jan 29 12:23:10 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e272: 3 total, 3 up, 3 in
Jan 29 12:23:10 np0005601226 nova_compute[239456]: 2026-01-29 17:23:10.900 239460 DEBUG nova.compute.manager [req-93d8e161-6e19-422f-b57d-6065fdc97ef4 req-4f98fb36-53b0-495a-a60a-1aed4dddcce1 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Received event network-vif-plugged-68c13a19-1abc-4771-a498-863d2d0a28b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:23:10 np0005601226 nova_compute[239456]: 2026-01-29 17:23:10.900 239460 DEBUG oslo_concurrency.lockutils [req-93d8e161-6e19-422f-b57d-6065fdc97ef4 req-4f98fb36-53b0-495a-a60a-1aed4dddcce1 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "53e39297-e2d7-48cf-9623-7be3b0d6b2f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:23:10 np0005601226 nova_compute[239456]: 2026-01-29 17:23:10.901 239460 DEBUG oslo_concurrency.lockutils [req-93d8e161-6e19-422f-b57d-6065fdc97ef4 req-4f98fb36-53b0-495a-a60a-1aed4dddcce1 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "53e39297-e2d7-48cf-9623-7be3b0d6b2f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:23:10 np0005601226 nova_compute[239456]: 2026-01-29 17:23:10.901 239460 DEBUG oslo_concurrency.lockutils [req-93d8e161-6e19-422f-b57d-6065fdc97ef4 req-4f98fb36-53b0-495a-a60a-1aed4dddcce1 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "53e39297-e2d7-48cf-9623-7be3b0d6b2f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:23:10 np0005601226 nova_compute[239456]: 2026-01-29 17:23:10.901 239460 DEBUG nova.compute.manager [req-93d8e161-6e19-422f-b57d-6065fdc97ef4 req-4f98fb36-53b0-495a-a60a-1aed4dddcce1 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] No waiting events found dispatching network-vif-plugged-68c13a19-1abc-4771-a498-863d2d0a28b1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 29 12:23:10 np0005601226 nova_compute[239456]: 2026-01-29 17:23:10.902 239460 WARNING nova.compute.manager [req-93d8e161-6e19-422f-b57d-6065fdc97ef4 req-4f98fb36-53b0-495a-a60a-1aed4dddcce1 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Received unexpected event network-vif-plugged-68c13a19-1abc-4771-a498-863d2d0a28b1 for instance with vm_state active and task_state None.#033[00m
Jan 29 12:23:11 np0005601226 nova_compute[239456]: 2026-01-29 17:23:11.079 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:23:11 np0005601226 nova_compute[239456]: 2026-01-29 17:23:11.294 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:23:11 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1269: 305 pgs: 305 active+clean; 167 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 23 KiB/s wr, 50 op/s
Jan 29 12:23:12 np0005601226 podman[256811]: 2026-01-29 17:23:12.780070193 +0000 UTC m=+0.889451895 container exec 79fb58d438a044b744a5a22031968fb38c458a504a88d1d9ce361ec7f4a99a0b (image=quay.io/ceph/ceph:v20, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-mon-compute-0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 29 12:23:13 np0005601226 podman[256831]: 2026-01-29 17:23:13.441367628 +0000 UTC m=+0.482561268 container exec_died 79fb58d438a044b744a5a22031968fb38c458a504a88d1d9ce361ec7f4a99a0b (image=quay.io/ceph/ceph:v20, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-mon-compute-0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 29 12:23:13 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1270: 305 pgs: 305 active+clean; 146 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 803 KiB/s rd, 242 B/s wr, 49 op/s
Jan 29 12:23:13 np0005601226 podman[256811]: 2026-01-29 17:23:13.782172662 +0000 UTC m=+1.891554354 container exec_died 79fb58d438a044b744a5a22031968fb38c458a504a88d1d9ce361ec7f4a99a0b (image=quay.io/ceph/ceph:v20, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 29 12:23:14 np0005601226 podman[256846]: 2026-01-29 17:23:14.224396456 +0000 UTC m=+0.407220876 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 
9 Base Image, org.label-schema.schema-version=1.0)
Jan 29 12:23:14 np0005601226 podman[256845]: 2026-01-29 17:23:14.232368302 +0000 UTC m=+0.414494843 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 29 12:23:14 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 12:23:15 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:23:15 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1271: 305 pgs: 305 active+clean; 88 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 716 B/s wr, 107 op/s
Jan 29 12:23:15 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:23:15 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 12:23:16 np0005601226 nova_compute[239456]: 2026-01-29 17:23:16.082 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:23:16 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:23:16 np0005601226 nova_compute[239456]: 2026-01-29 17:23:16.295 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:23:16 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 12:23:16 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 12:23:16 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 12:23:16 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 12:23:16 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 12:23:16 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:23:17 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:23:17 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 12:23:17 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 12:23:17 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 12:23:17 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 12:23:17 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 12:23:17 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 12:23:17 np0005601226 podman[257184]: 2026-01-29 17:23:17.400321454 +0000 UTC m=+0.017185847 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:23:17 np0005601226 podman[257184]: 2026-01-29 17:23:17.513063881 +0000 UTC m=+0.129928294 container create 89e79fc0d2bfc544e3a7b32532747009a6347a97213fdebaac19340c9acbd974 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 12:23:17 np0005601226 nova_compute[239456]: 2026-01-29 17:23:17.529 239460 INFO nova.virt.libvirt.driver [None req-dd1e2dbc-29a9-410b-92ad-0edd2bef1f2e d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] Deleting instance files /var/lib/nova/instances/ec6929dc-4a2e-4a7f-9c40-413a310539c6_del#033[00m
Jan 29 12:23:17 np0005601226 nova_compute[239456]: 2026-01-29 17:23:17.529 239460 INFO nova.virt.libvirt.driver [None req-dd1e2dbc-29a9-410b-92ad-0edd2bef1f2e d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] Deletion of /var/lib/nova/instances/ec6929dc-4a2e-4a7f-9c40-413a310539c6_del complete#033[00m
Jan 29 12:23:17 np0005601226 systemd[1]: Started libpod-conmon-89e79fc0d2bfc544e3a7b32532747009a6347a97213fdebaac19340c9acbd974.scope.
Jan 29 12:23:17 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1272: 305 pgs: 305 active+clean; 88 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 716 B/s wr, 107 op/s
Jan 29 12:23:17 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:23:18 np0005601226 podman[257184]: 2026-01-29 17:23:18.002145777 +0000 UTC m=+0.619010170 container init 89e79fc0d2bfc544e3a7b32532747009a6347a97213fdebaac19340c9acbd974 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_dhawan, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 29 12:23:18 np0005601226 podman[257184]: 2026-01-29 17:23:18.007652856 +0000 UTC m=+0.624517229 container start 89e79fc0d2bfc544e3a7b32532747009a6347a97213fdebaac19340c9acbd974 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_dhawan, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:23:18 np0005601226 sharp_dhawan[257200]: 167 167
Jan 29 12:23:18 np0005601226 systemd[1]: libpod-89e79fc0d2bfc544e3a7b32532747009a6347a97213fdebaac19340c9acbd974.scope: Deactivated successfully.
Jan 29 12:23:18 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:23:18 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 12:23:18 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:23:18 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 12:23:18 np0005601226 podman[257184]: 2026-01-29 17:23:18.685520941 +0000 UTC m=+1.302385334 container attach 89e79fc0d2bfc544e3a7b32532747009a6347a97213fdebaac19340c9acbd974 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_dhawan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 29 12:23:18 np0005601226 podman[257184]: 2026-01-29 17:23:18.686006504 +0000 UTC m=+1.302870877 container died 89e79fc0d2bfc544e3a7b32532747009a6347a97213fdebaac19340c9acbd974 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_dhawan, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:23:19 np0005601226 nova_compute[239456]: 2026-01-29 17:23:19.337 239460 DEBUG nova.compute.manager [req-a1ed3ceb-3607-4038-ba52-0514cf99ea2e req-b444893e-8967-45f4-9c43-b1ab7e58c57d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Received event network-changed-68c13a19-1abc-4771-a498-863d2d0a28b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:23:19 np0005601226 nova_compute[239456]: 2026-01-29 17:23:19.337 239460 DEBUG nova.compute.manager [req-a1ed3ceb-3607-4038-ba52-0514cf99ea2e req-b444893e-8967-45f4-9c43-b1ab7e58c57d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Refreshing instance network info cache due to event network-changed-68c13a19-1abc-4771-a498-863d2d0a28b1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 29 12:23:19 np0005601226 nova_compute[239456]: 2026-01-29 17:23:19.337 239460 DEBUG oslo_concurrency.lockutils [req-a1ed3ceb-3607-4038-ba52-0514cf99ea2e req-b444893e-8967-45f4-9c43-b1ab7e58c57d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "refresh_cache-53e39297-e2d7-48cf-9623-7be3b0d6b2f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:23:19 np0005601226 nova_compute[239456]: 2026-01-29 17:23:19.338 239460 DEBUG oslo_concurrency.lockutils [req-a1ed3ceb-3607-4038-ba52-0514cf99ea2e req-b444893e-8967-45f4-9c43-b1ab7e58c57d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquired lock "refresh_cache-53e39297-e2d7-48cf-9623-7be3b0d6b2f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:23:19 np0005601226 nova_compute[239456]: 2026-01-29 17:23:19.338 239460 DEBUG nova.network.neutron [req-a1ed3ceb-3607-4038-ba52-0514cf99ea2e req-b444893e-8967-45f4-9c43-b1ab7e58c57d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Refreshing network info cache for port 68c13a19-1abc-4771-a498-863d2d0a28b1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 29 12:23:19 np0005601226 nova_compute[239456]: 2026-01-29 17:23:19.451 239460 INFO nova.compute.manager [None req-dd1e2dbc-29a9-410b-92ad-0edd2bef1f2e d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] Took 14.21 seconds to destroy the instance on the hypervisor.#033[00m
Jan 29 12:23:19 np0005601226 nova_compute[239456]: 2026-01-29 17:23:19.452 239460 DEBUG oslo.service.loopingcall [None req-dd1e2dbc-29a9-410b-92ad-0edd2bef1f2e d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 29 12:23:19 np0005601226 nova_compute[239456]: 2026-01-29 17:23:19.452 239460 DEBUG nova.compute.manager [-] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 29 12:23:19 np0005601226 nova_compute[239456]: 2026-01-29 17:23:19.452 239460 DEBUG nova.network.neutron [-] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 29 12:23:19 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1273: 305 pgs: 305 active+clean; 88 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.2 KiB/s wr, 103 op/s
Jan 29 12:23:19 np0005601226 systemd[1]: var-lib-containers-storage-overlay-314322cb51a0a26fb0156915ce42e44b493bc80514187d18363e0d94fb9f5fb8-merged.mount: Deactivated successfully.
Jan 29 12:23:20 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:23:20 np0005601226 nova_compute[239456]: 2026-01-29 17:23:20.585 239460 DEBUG nova.network.neutron [-] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:23:20 np0005601226 nova_compute[239456]: 2026-01-29 17:23:20.611 239460 INFO nova.compute.manager [-] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] Took 1.16 seconds to deallocate network for instance.#033[00m
Jan 29 12:23:20 np0005601226 nova_compute[239456]: 2026-01-29 17:23:20.668 239460 DEBUG oslo_concurrency.lockutils [None req-dd1e2dbc-29a9-410b-92ad-0edd2bef1f2e d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:23:20 np0005601226 nova_compute[239456]: 2026-01-29 17:23:20.669 239460 DEBUG oslo_concurrency.lockutils [None req-dd1e2dbc-29a9-410b-92ad-0edd2bef1f2e d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:23:20 np0005601226 nova_compute[239456]: 2026-01-29 17:23:20.699 239460 DEBUG nova.compute.manager [req-91296910-c635-4ae0-93ba-9cc4a9943e9b req-7ae58bc9-a3f3-4cdc-92f7-2c90bb02194b 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] Received event network-vif-deleted-bf0c91eb-51aa-4985-9952-a05bb97d14ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:23:20 np0005601226 nova_compute[239456]: 2026-01-29 17:23:20.759 239460 DEBUG oslo_concurrency.processutils [None req-dd1e2dbc-29a9-410b-92ad-0edd2bef1f2e d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:23:21 np0005601226 nova_compute[239456]: 2026-01-29 17:23:21.083 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:23:21 np0005601226 nova_compute[239456]: 2026-01-29 17:23:21.273 239460 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769707386.2724385, ec6929dc-4a2e-4a7f-9c40-413a310539c6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:23:21 np0005601226 nova_compute[239456]: 2026-01-29 17:23:21.274 239460 INFO nova.compute.manager [-] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] VM Stopped (Lifecycle Event)#033[00m
Jan 29 12:23:21 np0005601226 nova_compute[239456]: 2026-01-29 17:23:21.297 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:23:21 np0005601226 nova_compute[239456]: 2026-01-29 17:23:21.306 239460 DEBUG nova.compute.manager [None req-bee8abb9-521b-4f8f-951a-5ae22fe4ced9 - - - - - -] [instance: ec6929dc-4a2e-4a7f-9c40-413a310539c6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:23:21 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:23:21 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2705958209' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:23:21 np0005601226 nova_compute[239456]: 2026-01-29 17:23:21.565 239460 DEBUG oslo_concurrency.processutils [None req-dd1e2dbc-29a9-410b-92ad-0edd2bef1f2e d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.806s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:23:21 np0005601226 nova_compute[239456]: 2026-01-29 17:23:21.571 239460 DEBUG nova.compute.provider_tree [None req-dd1e2dbc-29a9-410b-92ad-0edd2bef1f2e d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:23:21 np0005601226 nova_compute[239456]: 2026-01-29 17:23:21.592 239460 DEBUG nova.scheduler.client.report [None req-dd1e2dbc-29a9-410b-92ad-0edd2bef1f2e d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:23:21 np0005601226 nova_compute[239456]: 2026-01-29 17:23:21.607 239460 DEBUG nova.network.neutron [req-a1ed3ceb-3607-4038-ba52-0514cf99ea2e req-b444893e-8967-45f4-9c43-b1ab7e58c57d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Updated VIF entry in instance network info cache for port 68c13a19-1abc-4771-a498-863d2d0a28b1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 29 12:23:21 np0005601226 nova_compute[239456]: 2026-01-29 17:23:21.607 239460 DEBUG nova.network.neutron [req-a1ed3ceb-3607-4038-ba52-0514cf99ea2e req-b444893e-8967-45f4-9c43-b1ab7e58c57d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Updating instance_info_cache with network_info: [{"id": "68c13a19-1abc-4771-a498-863d2d0a28b1", "address": "fa:16:3e:41:16:71", "network": {"id": "d83e49d6-7d57-4ee1-97b8-d6cdd3bd57ed", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-566903606-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d4847e7fda4ce1b3f82fb1983ae222", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68c13a19-1a", "ovs_interfaceid": "68c13a19-1abc-4771-a498-863d2d0a28b1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:23:21 np0005601226 nova_compute[239456]: 2026-01-29 17:23:21.616 239460 DEBUG oslo_concurrency.lockutils [None req-dd1e2dbc-29a9-410b-92ad-0edd2bef1f2e d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.947s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:23:21 np0005601226 nova_compute[239456]: 2026-01-29 17:23:21.638 239460 DEBUG oslo_concurrency.lockutils [req-a1ed3ceb-3607-4038-ba52-0514cf99ea2e req-b444893e-8967-45f4-9c43-b1ab7e58c57d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Releasing lock "refresh_cache-53e39297-e2d7-48cf-9623-7be3b0d6b2f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:23:21 np0005601226 nova_compute[239456]: 2026-01-29 17:23:21.644 239460 INFO nova.scheduler.client.report [None req-dd1e2dbc-29a9-410b-92ad-0edd2bef1f2e d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Deleted allocations for instance ec6929dc-4a2e-4a7f-9c40-413a310539c6#033[00m
Jan 29 12:23:21 np0005601226 nova_compute[239456]: 2026-01-29 17:23:21.713 239460 DEBUG oslo_concurrency.lockutils [None req-dd1e2dbc-29a9-410b-92ad-0edd2bef1f2e d3463a84af564b968e67b687bc895548 815af3cf993b45cc8f2cdf73bf1d552c - - default default] Lock "ec6929dc-4a2e-4a7f-9c40-413a310539c6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 16.477s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:23:21 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1274: 305 pgs: 305 active+clean; 88 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.1 KiB/s wr, 95 op/s
Jan 29 12:23:21 np0005601226 ceph-mon[75233]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Jan 29 12:23:21 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:23:21.833536) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 29 12:23:21 np0005601226 ceph-mon[75233]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Jan 29 12:23:21 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769707401833637, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 1790, "num_deletes": 267, "total_data_size": 2595368, "memory_usage": 2634368, "flush_reason": "Manual Compaction"}
Jan 29 12:23:21 np0005601226 ceph-mon[75233]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Jan 29 12:23:21 np0005601226 podman[257184]: 2026-01-29 17:23:21.836242136 +0000 UTC m=+4.453106529 container remove 89e79fc0d2bfc544e3a7b32532747009a6347a97213fdebaac19340c9acbd974 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_dhawan, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Jan 29 12:23:21 np0005601226 nova_compute[239456]: 2026-01-29 17:23:21.867 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:23:21 np0005601226 systemd[1]: libpod-conmon-89e79fc0d2bfc544e3a7b32532747009a6347a97213fdebaac19340c9acbd974.scope: Deactivated successfully.
Jan 29 12:23:22 np0005601226 podman[257246]: 2026-01-29 17:23:21.953123727 +0000 UTC m=+0.017767443 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:23:22 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769707402054345, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 2543271, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23754, "largest_seqno": 25543, "table_properties": {"data_size": 2534588, "index_size": 5369, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 18200, "raw_average_key_size": 20, "raw_value_size": 2517102, "raw_average_value_size": 2853, "num_data_blocks": 236, "num_entries": 882, "num_filter_entries": 882, "num_deletions": 267, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769707279, "oldest_key_time": 1769707279, "file_creation_time": 1769707401, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "affa2982-d59d-4189-b5dd-817a80fada55", "db_session_id": "3LVBT2JQJ5HZ0LRVKGW6", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Jan 29 12:23:22 np0005601226 ceph-mon[75233]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 220851 microseconds, and 5806 cpu microseconds.
Jan 29 12:23:22 np0005601226 ceph-mon[75233]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 29 12:23:22 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:23:22.054394) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 2543271 bytes OK
Jan 29 12:23:22 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:23:22.054413) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Jan 29 12:23:22 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:23:22.146989) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Jan 29 12:23:22 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:23:22.147043) EVENT_LOG_v1 {"time_micros": 1769707402147034, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 29 12:23:22 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:23:22.147070) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 29 12:23:22 np0005601226 ceph-mon[75233]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 2587311, prev total WAL file size 2616051, number of live WAL files 2.
Jan 29 12:23:22 np0005601226 ceph-mon[75233]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 12:23:22 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:23:22.147789) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353032' seq:72057594037927935, type:22 .. '6C6F676D00373534' seq:0, type:0; will stop at (end)
Jan 29 12:23:22 np0005601226 ceph-mon[75233]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 29 12:23:22 np0005601226 ceph-mon[75233]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(2483KB)], [53(9584KB)]
Jan 29 12:23:22 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769707402147829, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 12357783, "oldest_snapshot_seqno": -1}
Jan 29 12:23:22 np0005601226 podman[257246]: 2026-01-29 17:23:22.505543249 +0000 UTC m=+0.570186935 container create b1181b2d0d2a83a7967b4bd2afd95d52e1a9fdd0de4a7b73c86e245721f12912 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_wu, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:23:22 np0005601226 ceph-mon[75233]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 5481 keys, 12260640 bytes, temperature: kUnknown
Jan 29 12:23:22 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769707402553336, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 12260640, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12216457, "index_size": 29353, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13765, "raw_key_size": 135490, "raw_average_key_size": 24, "raw_value_size": 12110384, "raw_average_value_size": 2209, "num_data_blocks": 1219, "num_entries": 5481, "num_filter_entries": 5481, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769705351, "oldest_key_time": 0, "file_creation_time": 1769707402, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "affa2982-d59d-4189-b5dd-817a80fada55", "db_session_id": "3LVBT2JQJ5HZ0LRVKGW6", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Jan 29 12:23:22 np0005601226 ceph-mon[75233]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 29 12:23:22 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:23:22.553612) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 12260640 bytes
Jan 29 12:23:22 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:23:22.576284) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 30.5 rd, 30.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 9.4 +0.0 blob) out(11.7 +0.0 blob), read-write-amplify(9.7) write-amplify(4.8) OK, records in: 6023, records dropped: 542 output_compression: NoCompression
Jan 29 12:23:22 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:23:22.576318) EVENT_LOG_v1 {"time_micros": 1769707402576303, "job": 28, "event": "compaction_finished", "compaction_time_micros": 405638, "compaction_time_cpu_micros": 19463, "output_level": 6, "num_output_files": 1, "total_output_size": 12260640, "num_input_records": 6023, "num_output_records": 5481, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 29 12:23:22 np0005601226 ceph-mon[75233]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 12:23:22 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769707402576605, "job": 28, "event": "table_file_deletion", "file_number": 55}
Jan 29 12:23:22 np0005601226 ceph-mon[75233]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 12:23:22 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769707402577222, "job": 28, "event": "table_file_deletion", "file_number": 53}
Jan 29 12:23:22 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:23:22.147716) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:23:22 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:23:22.577333) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:23:22 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:23:22.577341) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:23:22 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:23:22.577345) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:23:22 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:23:22.577348) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:23:22 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:23:22.577351) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:23:22 np0005601226 systemd[1]: Started libpod-conmon-b1181b2d0d2a83a7967b4bd2afd95d52e1a9fdd0de4a7b73c86e245721f12912.scope.
Jan 29 12:23:22 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:23:22 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/759ed0740e24bd49256c66e9f643ec797b8e7a6275799683c53733ee0999a0d9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:23:22 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/759ed0740e24bd49256c66e9f643ec797b8e7a6275799683c53733ee0999a0d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:23:22 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/759ed0740e24bd49256c66e9f643ec797b8e7a6275799683c53733ee0999a0d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:23:22 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/759ed0740e24bd49256c66e9f643ec797b8e7a6275799683c53733ee0999a0d9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:23:22 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/759ed0740e24bd49256c66e9f643ec797b8e7a6275799683c53733ee0999a0d9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 12:23:22 np0005601226 podman[257246]: 2026-01-29 17:23:22.845625283 +0000 UTC m=+0.910268979 container init b1181b2d0d2a83a7967b4bd2afd95d52e1a9fdd0de4a7b73c86e245721f12912 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_wu, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 29 12:23:22 np0005601226 podman[257246]: 2026-01-29 17:23:22.851702887 +0000 UTC m=+0.916346583 container start b1181b2d0d2a83a7967b4bd2afd95d52e1a9fdd0de4a7b73c86e245721f12912 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 29 12:23:23 np0005601226 podman[257246]: 2026-01-29 17:23:23.014317119 +0000 UTC m=+1.078960835 container attach b1181b2d0d2a83a7967b4bd2afd95d52e1a9fdd0de4a7b73c86e245721f12912 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_wu, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:23:23 np0005601226 dazzling_wu[257262]: --> passed data devices: 0 physical, 3 LVM
Jan 29 12:23:23 np0005601226 dazzling_wu[257262]: --> All data devices are unavailable
Jan 29 12:23:23 np0005601226 systemd[1]: libpod-b1181b2d0d2a83a7967b4bd2afd95d52e1a9fdd0de4a7b73c86e245721f12912.scope: Deactivated successfully.
Jan 29 12:23:23 np0005601226 podman[257246]: 2026-01-29 17:23:23.264911664 +0000 UTC m=+1.329555360 container died b1181b2d0d2a83a7967b4bd2afd95d52e1a9fdd0de4a7b73c86e245721f12912 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_wu, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 29 12:23:23 np0005601226 nova_compute[239456]: 2026-01-29 17:23:23.600 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:23:23 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1275: 305 pgs: 305 active+clean; 89 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 343 KiB/s wr, 90 op/s
Jan 29 12:23:23 np0005601226 systemd[1]: var-lib-containers-storage-overlay-759ed0740e24bd49256c66e9f643ec797b8e7a6275799683c53733ee0999a0d9-merged.mount: Deactivated successfully.
Jan 29 12:23:24 np0005601226 podman[257246]: 2026-01-29 17:23:24.784741316 +0000 UTC m=+2.849385012 container remove b1181b2d0d2a83a7967b4bd2afd95d52e1a9fdd0de4a7b73c86e245721f12912 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_wu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:23:24 np0005601226 systemd[1]: libpod-conmon-b1181b2d0d2a83a7967b4bd2afd95d52e1a9fdd0de4a7b73c86e245721f12912.scope: Deactivated successfully.
Jan 29 12:23:25 np0005601226 podman[257360]: 2026-01-29 17:23:25.14843762 +0000 UTC m=+0.022107671 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:23:25 np0005601226 podman[257360]: 2026-01-29 17:23:25.285136587 +0000 UTC m=+0.158806628 container create 69b3a91f3b53894fa4c689adf1c0b77077bb9de58531617248345c4719ad497b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_meninsky, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:23:25 np0005601226 systemd[1]: Started libpod-conmon-69b3a91f3b53894fa4c689adf1c0b77077bb9de58531617248345c4719ad497b.scope.
Jan 29 12:23:25 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:23:25 np0005601226 podman[257360]: 2026-01-29 17:23:25.593905162 +0000 UTC m=+0.467575193 container init 69b3a91f3b53894fa4c689adf1c0b77077bb9de58531617248345c4719ad497b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_meninsky, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 29 12:23:25 np0005601226 podman[257360]: 2026-01-29 17:23:25.599488693 +0000 UTC m=+0.473158694 container start 69b3a91f3b53894fa4c689adf1c0b77077bb9de58531617248345c4719ad497b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 29 12:23:25 np0005601226 objective_meninsky[257377]: 167 167
Jan 29 12:23:25 np0005601226 systemd[1]: libpod-69b3a91f3b53894fa4c689adf1c0b77077bb9de58531617248345c4719ad497b.scope: Deactivated successfully.
Jan 29 12:23:25 np0005601226 podman[257360]: 2026-01-29 17:23:25.66133553 +0000 UTC m=+0.535005561 container attach 69b3a91f3b53894fa4c689adf1c0b77077bb9de58531617248345c4719ad497b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:23:25 np0005601226 podman[257360]: 2026-01-29 17:23:25.66168746 +0000 UTC m=+0.535357461 container died 69b3a91f3b53894fa4c689adf1c0b77077bb9de58531617248345c4719ad497b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:23:25 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1276: 305 pgs: 305 active+clean; 101 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.5 MiB/s wr, 83 op/s
Jan 29 12:23:26 np0005601226 nova_compute[239456]: 2026-01-29 17:23:26.084 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:23:26 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:23:26 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e272 do_prune osdmap full prune enabled
Jan 29 12:23:26 np0005601226 systemd[1]: var-lib-containers-storage-overlay-7a6d8816a0d033f1c40fab4383972923cc38091043a02e1cbb23ff0455325868-merged.mount: Deactivated successfully.
Jan 29 12:23:26 np0005601226 ovn_controller[145556]: 2026-01-29T17:23:26Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:41:16:71 10.100.0.6
Jan 29 12:23:26 np0005601226 ovn_controller[145556]: 2026-01-29T17:23:26Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:41:16:71 10.100.0.6
Jan 29 12:23:26 np0005601226 nova_compute[239456]: 2026-01-29 17:23:26.299 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:23:26 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e273 e273: 3 total, 3 up, 3 in
Jan 29 12:23:26 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e273: 3 total, 3 up, 3 in
Jan 29 12:23:26 np0005601226 podman[257360]: 2026-01-29 17:23:26.909576987 +0000 UTC m=+1.783247018 container remove 69b3a91f3b53894fa4c689adf1c0b77077bb9de58531617248345c4719ad497b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_meninsky, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:23:26 np0005601226 systemd[1]: libpod-conmon-69b3a91f3b53894fa4c689adf1c0b77077bb9de58531617248345c4719ad497b.scope: Deactivated successfully.
Jan 29 12:23:27 np0005601226 podman[257402]: 2026-01-29 17:23:27.093550526 +0000 UTC m=+0.074568263 container create ddb67ae3bddf92ff260f7aaa225690ca261bab6a014529cdda6de1167f63dd9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_stonebraker, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 29 12:23:27 np0005601226 podman[257402]: 2026-01-29 17:23:27.037379813 +0000 UTC m=+0.018397570 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:23:27 np0005601226 systemd[1]: Started libpod-conmon-ddb67ae3bddf92ff260f7aaa225690ca261bab6a014529cdda6de1167f63dd9b.scope.
Jan 29 12:23:27 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:23:27 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/056f9e7089497a3bf17a4df9ca64bbaf17cc7a2fa3f944fb64f3443d8d07b412/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:23:27 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/056f9e7089497a3bf17a4df9ca64bbaf17cc7a2fa3f944fb64f3443d8d07b412/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:23:27 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/056f9e7089497a3bf17a4df9ca64bbaf17cc7a2fa3f944fb64f3443d8d07b412/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:23:27 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/056f9e7089497a3bf17a4df9ca64bbaf17cc7a2fa3f944fb64f3443d8d07b412/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:23:27 np0005601226 podman[257402]: 2026-01-29 17:23:27.291496704 +0000 UTC m=+0.272514461 container init ddb67ae3bddf92ff260f7aaa225690ca261bab6a014529cdda6de1167f63dd9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 29 12:23:27 np0005601226 podman[257402]: 2026-01-29 17:23:27.299341017 +0000 UTC m=+0.280358754 container start ddb67ae3bddf92ff260f7aaa225690ca261bab6a014529cdda6de1167f63dd9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_stonebraker, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:23:27 np0005601226 nova_compute[239456]: 2026-01-29 17:23:27.311 239460 DEBUG oslo_concurrency.lockutils [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Acquiring lock "54ae1aee-2aec-49fb-981c-904cceb59a9d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:23:27 np0005601226 nova_compute[239456]: 2026-01-29 17:23:27.313 239460 DEBUG oslo_concurrency.lockutils [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Lock "54ae1aee-2aec-49fb-981c-904cceb59a9d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:23:27 np0005601226 nova_compute[239456]: 2026-01-29 17:23:27.336 239460 DEBUG nova.compute.manager [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 29 12:23:27 np0005601226 podman[257402]: 2026-01-29 17:23:27.411930751 +0000 UTC m=+0.392948488 container attach ddb67ae3bddf92ff260f7aaa225690ca261bab6a014529cdda6de1167f63dd9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 29 12:23:27 np0005601226 nova_compute[239456]: 2026-01-29 17:23:27.514 239460 DEBUG oslo_concurrency.lockutils [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:23:27 np0005601226 nova_compute[239456]: 2026-01-29 17:23:27.515 239460 DEBUG oslo_concurrency.lockutils [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:23:27 np0005601226 nova_compute[239456]: 2026-01-29 17:23:27.522 239460 DEBUG nova.virt.hardware [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 29 12:23:27 np0005601226 nova_compute[239456]: 2026-01-29 17:23:27.522 239460 INFO nova.compute.claims [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]: {
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:    "0": [
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:        {
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:            "devices": [
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:                "/dev/loop3"
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:            ],
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:            "lv_name": "ceph_lv0",
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:            "lv_size": "21470642176",
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:            "lv_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:            "name": "ceph_lv0",
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:            "tags": {
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:                "ceph.block_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:                "ceph.cluster_name": "ceph",
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:                "ceph.crush_device_class": "",
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:                "ceph.encrypted": "0",
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:                "ceph.objectstore": "bluestore",
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:                "ceph.osd_fsid": "0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1",
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:                "ceph.osd_id": "0",
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:                "ceph.type": "block",
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:                "ceph.vdo": "0",
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:                "ceph.with_tpm": "0"
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:            },
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:            "type": "block",
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:            "vg_name": "ceph_vg0"
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:        }
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:    ],
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:    "1": [
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:        {
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:            "devices": [
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:                "/dev/loop4"
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:            ],
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:            "lv_name": "ceph_lv1",
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:            "lv_size": "21470642176",
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b59b9ee3-7bef-4274-a8bf-0f9cce011ae7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:            "lv_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:            "name": "ceph_lv1",
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:            "tags": {
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:                "ceph.block_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:                "ceph.cluster_name": "ceph",
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:                "ceph.crush_device_class": "",
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:                "ceph.encrypted": "0",
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:                "ceph.objectstore": "bluestore",
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:                "ceph.osd_fsid": "b59b9ee3-7bef-4274-a8bf-0f9cce011ae7",
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:                "ceph.osd_id": "1",
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:                "ceph.type": "block",
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:                "ceph.vdo": "0",
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:                "ceph.with_tpm": "0"
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:            },
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:            "type": "block",
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:            "vg_name": "ceph_vg1"
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:        }
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:    ],
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:    "2": [
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:        {
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:            "devices": [
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:                "/dev/loop5"
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:            ],
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:            "lv_name": "ceph_lv2",
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:            "lv_size": "21470642176",
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=791d3808-828d-4f85-a3de-28df49f6a6ef,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:            "lv_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:            "name": "ceph_lv2",
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:            "tags": {
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:                "ceph.block_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:                "ceph.cluster_name": "ceph",
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:                "ceph.crush_device_class": "",
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:                "ceph.encrypted": "0",
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:                "ceph.objectstore": "bluestore",
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:                "ceph.osd_fsid": "791d3808-828d-4f85-a3de-28df49f6a6ef",
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:                "ceph.osd_id": "2",
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:                "ceph.type": "block",
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:                "ceph.vdo": "0",
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:                "ceph.with_tpm": "0"
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:            },
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:            "type": "block",
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:            "vg_name": "ceph_vg2"
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:        }
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]:    ]
Jan 29 12:23:27 np0005601226 jolly_stonebraker[257418]: }
Jan 29 12:23:27 np0005601226 systemd[1]: libpod-ddb67ae3bddf92ff260f7aaa225690ca261bab6a014529cdda6de1167f63dd9b.scope: Deactivated successfully.
Jan 29 12:23:27 np0005601226 podman[257402]: 2026-01-29 17:23:27.600610538 +0000 UTC m=+0.581628275 container died ddb67ae3bddf92ff260f7aaa225690ca261bab6a014529cdda6de1167f63dd9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_stonebraker, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:23:27 np0005601226 nova_compute[239456]: 2026-01-29 17:23:27.603 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:23:27 np0005601226 nova_compute[239456]: 2026-01-29 17:23:27.604 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:23:27 np0005601226 nova_compute[239456]: 2026-01-29 17:23:27.625 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:23:27 np0005601226 nova_compute[239456]: 2026-01-29 17:23:27.712 239460 DEBUG oslo_concurrency.processutils [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:23:27 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1278: 305 pgs: 305 active+clean; 101 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 29 12:23:27 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e273 do_prune osdmap full prune enabled
Jan 29 12:23:28 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e274 e274: 3 total, 3 up, 3 in
Jan 29 12:23:28 np0005601226 systemd[1]: var-lib-containers-storage-overlay-056f9e7089497a3bf17a4df9ca64bbaf17cc7a2fa3f944fb64f3443d8d07b412-merged.mount: Deactivated successfully.
Jan 29 12:23:28 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e274: 3 total, 3 up, 3 in
Jan 29 12:23:28 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:23:28 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1315774691' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:23:28 np0005601226 nova_compute[239456]: 2026-01-29 17:23:28.480 239460 DEBUG oslo_concurrency.processutils [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.768s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:23:28 np0005601226 nova_compute[239456]: 2026-01-29 17:23:28.487 239460 DEBUG nova.compute.provider_tree [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:23:28 np0005601226 nova_compute[239456]: 2026-01-29 17:23:28.501 239460 DEBUG nova.scheduler.client.report [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:23:28 np0005601226 nova_compute[239456]: 2026-01-29 17:23:28.520 239460 DEBUG oslo_concurrency.lockutils [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:23:28 np0005601226 nova_compute[239456]: 2026-01-29 17:23:28.520 239460 DEBUG nova.compute.manager [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 29 12:23:28 np0005601226 nova_compute[239456]: 2026-01-29 17:23:28.522 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.898s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:23:28 np0005601226 nova_compute[239456]: 2026-01-29 17:23:28.522 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:23:28 np0005601226 nova_compute[239456]: 2026-01-29 17:23:28.523 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 29 12:23:28 np0005601226 nova_compute[239456]: 2026-01-29 17:23:28.523 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:23:28 np0005601226 nova_compute[239456]: 2026-01-29 17:23:28.580 239460 DEBUG nova.compute.manager [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 29 12:23:28 np0005601226 nova_compute[239456]: 2026-01-29 17:23:28.581 239460 DEBUG nova.network.neutron [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 29 12:23:28 np0005601226 nova_compute[239456]: 2026-01-29 17:23:28.598 239460 INFO nova.virt.libvirt.driver [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 29 12:23:28 np0005601226 nova_compute[239456]: 2026-01-29 17:23:28.616 239460 DEBUG nova.compute.manager [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 29 12:23:28 np0005601226 nova_compute[239456]: 2026-01-29 17:23:28.697 239460 DEBUG nova.compute.manager [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 29 12:23:28 np0005601226 nova_compute[239456]: 2026-01-29 17:23:28.699 239460 DEBUG nova.virt.libvirt.driver [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 29 12:23:28 np0005601226 nova_compute[239456]: 2026-01-29 17:23:28.699 239460 INFO nova.virt.libvirt.driver [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Creating image(s)#033[00m
Jan 29 12:23:28 np0005601226 nova_compute[239456]: 2026-01-29 17:23:28.757 239460 DEBUG nova.storage.rbd_utils [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] rbd image 54ae1aee-2aec-49fb-981c-904cceb59a9d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:23:28 np0005601226 nova_compute[239456]: 2026-01-29 17:23:28.779 239460 DEBUG nova.storage.rbd_utils [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] rbd image 54ae1aee-2aec-49fb-981c-904cceb59a9d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:23:28 np0005601226 nova_compute[239456]: 2026-01-29 17:23:28.801 239460 DEBUG nova.storage.rbd_utils [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] rbd image 54ae1aee-2aec-49fb-981c-904cceb59a9d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:23:28 np0005601226 nova_compute[239456]: 2026-01-29 17:23:28.806 239460 DEBUG oslo_concurrency.processutils [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/619359f3f53a439a222a6a2a89408201f4394e5d --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:23:28 np0005601226 nova_compute[239456]: 2026-01-29 17:23:28.827 239460 DEBUG nova.policy [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '66a034221acf4c559a731fcc84a54c53', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f2a1daea29d845c4b1c58f0e6610e767', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 29 12:23:28 np0005601226 nova_compute[239456]: 2026-01-29 17:23:28.860 239460 DEBUG oslo_concurrency.processutils [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/619359f3f53a439a222a6a2a89408201f4394e5d --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:23:28 np0005601226 nova_compute[239456]: 2026-01-29 17:23:28.861 239460 DEBUG oslo_concurrency.lockutils [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Acquiring lock "619359f3f53a439a222a6a2a89408201f4394e5d" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:23:28 np0005601226 nova_compute[239456]: 2026-01-29 17:23:28.861 239460 DEBUG oslo_concurrency.lockutils [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Lock "619359f3f53a439a222a6a2a89408201f4394e5d" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:23:28 np0005601226 nova_compute[239456]: 2026-01-29 17:23:28.862 239460 DEBUG oslo_concurrency.lockutils [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Lock "619359f3f53a439a222a6a2a89408201f4394e5d" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:23:28 np0005601226 nova_compute[239456]: 2026-01-29 17:23:28.880 239460 DEBUG nova.storage.rbd_utils [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] rbd image 54ae1aee-2aec-49fb-981c-904cceb59a9d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:23:28 np0005601226 nova_compute[239456]: 2026-01-29 17:23:28.882 239460 DEBUG oslo_concurrency.processutils [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/619359f3f53a439a222a6a2a89408201f4394e5d 54ae1aee-2aec-49fb-981c-904cceb59a9d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:23:29 np0005601226 podman[257402]: 2026-01-29 17:23:29.064619896 +0000 UTC m=+2.045637633 container remove ddb67ae3bddf92ff260f7aaa225690ca261bab6a014529cdda6de1167f63dd9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:23:29 np0005601226 systemd[1]: libpod-conmon-ddb67ae3bddf92ff260f7aaa225690ca261bab6a014529cdda6de1167f63dd9b.scope: Deactivated successfully.
Jan 29 12:23:29 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:23:29 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3806682775' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:23:29 np0005601226 nova_compute[239456]: 2026-01-29 17:23:29.174 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.651s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:23:29 np0005601226 nova_compute[239456]: 2026-01-29 17:23:29.244 239460 DEBUG nova.virt.libvirt.driver [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] skipping disk for instance-00000009 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 29 12:23:29 np0005601226 nova_compute[239456]: 2026-01-29 17:23:29.244 239460 DEBUG nova.virt.libvirt.driver [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] skipping disk for instance-00000009 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 29 12:23:29 np0005601226 nova_compute[239456]: 2026-01-29 17:23:29.361 239460 WARNING nova.virt.libvirt.driver [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 29 12:23:29 np0005601226 nova_compute[239456]: 2026-01-29 17:23:29.362 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4377MB free_disk=59.94694356061518GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 29 12:23:29 np0005601226 nova_compute[239456]: 2026-01-29 17:23:29.363 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:23:29 np0005601226 nova_compute[239456]: 2026-01-29 17:23:29.363 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:23:29 np0005601226 nova_compute[239456]: 2026-01-29 17:23:29.424 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Instance 53e39297-e2d7-48cf-9623-7be3b0d6b2f3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 29 12:23:29 np0005601226 nova_compute[239456]: 2026-01-29 17:23:29.424 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Instance 54ae1aee-2aec-49fb-981c-904cceb59a9d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 29 12:23:29 np0005601226 nova_compute[239456]: 2026-01-29 17:23:29.425 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 29 12:23:29 np0005601226 nova_compute[239456]: 2026-01-29 17:23:29.425 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 29 12:23:29 np0005601226 nova_compute[239456]: 2026-01-29 17:23:29.472 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:23:29 np0005601226 podman[257639]: 2026-01-29 17:23:29.461619463 +0000 UTC m=+0.017503415 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:23:29 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1280: 305 pgs: 305 active+clean; 114 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 428 KiB/s rd, 3.1 MiB/s wr, 90 op/s
Jan 29 12:23:29 np0005601226 podman[257639]: 2026-01-29 17:23:29.954735297 +0000 UTC m=+0.510619219 container create 2ee2ede44f5a6800b3a4fbe8c68a32709b36dc078bf9ca37e72536c5622d79f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_dewdney, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:23:29 np0005601226 nova_compute[239456]: 2026-01-29 17:23:29.969 239460 DEBUG nova.network.neutron [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Successfully created port: dd0e38fb-6c55-46b2-944f-3b2cf8f87929 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 29 12:23:29 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e274 do_prune osdmap full prune enabled
Jan 29 12:23:30 np0005601226 systemd[1]: Started libpod-conmon-2ee2ede44f5a6800b3a4fbe8c68a32709b36dc078bf9ca37e72536c5622d79f5.scope.
Jan 29 12:23:30 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:23:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e275 e275: 3 total, 3 up, 3 in
Jan 29 12:23:30 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e275: 3 total, 3 up, 3 in
Jan 29 12:23:30 np0005601226 podman[257639]: 2026-01-29 17:23:30.169474772 +0000 UTC m=+0.725358714 container init 2ee2ede44f5a6800b3a4fbe8c68a32709b36dc078bf9ca37e72536c5622d79f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:23:30 np0005601226 podman[257639]: 2026-01-29 17:23:30.176441011 +0000 UTC m=+0.732324923 container start 2ee2ede44f5a6800b3a4fbe8c68a32709b36dc078bf9ca37e72536c5622d79f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 29 12:23:30 np0005601226 strange_dewdney[257679]: 167 167
Jan 29 12:23:30 np0005601226 systemd[1]: libpod-2ee2ede44f5a6800b3a4fbe8c68a32709b36dc078bf9ca37e72536c5622d79f5.scope: Deactivated successfully.
Jan 29 12:23:30 np0005601226 podman[257639]: 2026-01-29 17:23:30.332964916 +0000 UTC m=+0.888848858 container attach 2ee2ede44f5a6800b3a4fbe8c68a32709b36dc078bf9ca37e72536c5622d79f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_dewdney, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3)
Jan 29 12:23:30 np0005601226 podman[257639]: 2026-01-29 17:23:30.335003041 +0000 UTC m=+0.890886963 container died 2ee2ede44f5a6800b3a4fbe8c68a32709b36dc078bf9ca37e72536c5622d79f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 29 12:23:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:23:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/191287263' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:23:30 np0005601226 nova_compute[239456]: 2026-01-29 17:23:30.474 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.002s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:23:30 np0005601226 nova_compute[239456]: 2026-01-29 17:23:30.480 239460 DEBUG nova.compute.provider_tree [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:23:30 np0005601226 nova_compute[239456]: 2026-01-29 17:23:30.496 239460 DEBUG nova.scheduler.client.report [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:23:30 np0005601226 nova_compute[239456]: 2026-01-29 17:23:30.514 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 29 12:23:30 np0005601226 nova_compute[239456]: 2026-01-29 17:23:30.515 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.152s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:23:31 np0005601226 nova_compute[239456]: 2026-01-29 17:23:31.087 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:23:31 np0005601226 nova_compute[239456]: 2026-01-29 17:23:31.139 239460 DEBUG nova.network.neutron [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Successfully updated port: dd0e38fb-6c55-46b2-944f-3b2cf8f87929 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 29 12:23:31 np0005601226 nova_compute[239456]: 2026-01-29 17:23:31.156 239460 DEBUG oslo_concurrency.lockutils [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Acquiring lock "refresh_cache-54ae1aee-2aec-49fb-981c-904cceb59a9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:23:31 np0005601226 nova_compute[239456]: 2026-01-29 17:23:31.157 239460 DEBUG oslo_concurrency.lockutils [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Acquired lock "refresh_cache-54ae1aee-2aec-49fb-981c-904cceb59a9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:23:31 np0005601226 nova_compute[239456]: 2026-01-29 17:23:31.157 239460 DEBUG nova.network.neutron [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 29 12:23:31 np0005601226 nova_compute[239456]: 2026-01-29 17:23:31.236 239460 DEBUG nova.compute.manager [req-95e12c3b-e563-4ea2-8a7c-fb6773cac603 req-0d08abfa-1ae3-488a-ba45-07fb676e9610 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Received event network-changed-dd0e38fb-6c55-46b2-944f-3b2cf8f87929 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:23:31 np0005601226 nova_compute[239456]: 2026-01-29 17:23:31.236 239460 DEBUG nova.compute.manager [req-95e12c3b-e563-4ea2-8a7c-fb6773cac603 req-0d08abfa-1ae3-488a-ba45-07fb676e9610 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Refreshing instance network info cache due to event network-changed-dd0e38fb-6c55-46b2-944f-3b2cf8f87929. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 29 12:23:31 np0005601226 nova_compute[239456]: 2026-01-29 17:23:31.237 239460 DEBUG oslo_concurrency.lockutils [req-95e12c3b-e563-4ea2-8a7c-fb6773cac603 req-0d08abfa-1ae3-488a-ba45-07fb676e9610 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "refresh_cache-54ae1aee-2aec-49fb-981c-904cceb59a9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:23:31 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:23:31 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:23:31 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3667442246' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:23:31 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:23:31 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3667442246' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:23:31 np0005601226 nova_compute[239456]: 2026-01-29 17:23:31.300 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:23:31 np0005601226 nova_compute[239456]: 2026-01-29 17:23:31.310 239460 DEBUG nova.network.neutron [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 29 12:23:31 np0005601226 nova_compute[239456]: 2026-01-29 17:23:31.516 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:23:31 np0005601226 nova_compute[239456]: 2026-01-29 17:23:31.517 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:23:31 np0005601226 nova_compute[239456]: 2026-01-29 17:23:31.517 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 29 12:23:31 np0005601226 nova_compute[239456]: 2026-01-29 17:23:31.560 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 29 12:23:31 np0005601226 nova_compute[239456]: 2026-01-29 17:23:31.560 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:23:31 np0005601226 nova_compute[239456]: 2026-01-29 17:23:31.560 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:23:31 np0005601226 nova_compute[239456]: 2026-01-29 17:23:31.561 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:23:31 np0005601226 nova_compute[239456]: 2026-01-29 17:23:31.561 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 29 12:23:31 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1282: 305 pgs: 305 active+clean; 114 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 543 KiB/s rd, 1.1 MiB/s wr, 86 op/s
Jan 29 12:23:31 np0005601226 nova_compute[239456]: 2026-01-29 17:23:31.893 239460 DEBUG nova.network.neutron [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Updating instance_info_cache with network_info: [{"id": "dd0e38fb-6c55-46b2-944f-3b2cf8f87929", "address": "fa:16:3e:c3:48:d2", "network": {"id": "3c884cc1-e1d2-418b-8bb8-bae78dab7018", "bridge": "br-int", "label": "tempest-TestStampPattern-2089746871-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f2a1daea29d845c4b1c58f0e6610e767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd0e38fb-6c", "ovs_interfaceid": "dd0e38fb-6c55-46b2-944f-3b2cf8f87929", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:23:31 np0005601226 nova_compute[239456]: 2026-01-29 17:23:31.912 239460 DEBUG oslo_concurrency.lockutils [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Releasing lock "refresh_cache-54ae1aee-2aec-49fb-981c-904cceb59a9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:23:31 np0005601226 nova_compute[239456]: 2026-01-29 17:23:31.913 239460 DEBUG nova.compute.manager [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Instance network_info: |[{"id": "dd0e38fb-6c55-46b2-944f-3b2cf8f87929", "address": "fa:16:3e:c3:48:d2", "network": {"id": "3c884cc1-e1d2-418b-8bb8-bae78dab7018", "bridge": "br-int", "label": "tempest-TestStampPattern-2089746871-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f2a1daea29d845c4b1c58f0e6610e767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd0e38fb-6c", "ovs_interfaceid": "dd0e38fb-6c55-46b2-944f-3b2cf8f87929", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 29 12:23:31 np0005601226 nova_compute[239456]: 2026-01-29 17:23:31.913 239460 DEBUG oslo_concurrency.lockutils [req-95e12c3b-e563-4ea2-8a7c-fb6773cac603 req-0d08abfa-1ae3-488a-ba45-07fb676e9610 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquired lock "refresh_cache-54ae1aee-2aec-49fb-981c-904cceb59a9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:23:31 np0005601226 nova_compute[239456]: 2026-01-29 17:23:31.914 239460 DEBUG nova.network.neutron [req-95e12c3b-e563-4ea2-8a7c-fb6773cac603 req-0d08abfa-1ae3-488a-ba45-07fb676e9610 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Refreshing network info cache for port dd0e38fb-6c55-46b2-944f-3b2cf8f87929 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 29 12:23:31 np0005601226 systemd[1]: var-lib-containers-storage-overlay-15f7eb406f1347a4d6f33bcbbfd6e951d147b4beaf92dd2f1e73b8243cf48419-merged.mount: Deactivated successfully.
Jan 29 12:23:32 np0005601226 nova_compute[239456]: 2026-01-29 17:23:32.179 239460 DEBUG oslo_concurrency.processutils [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/619359f3f53a439a222a6a2a89408201f4394e5d 54ae1aee-2aec-49fb-981c-904cceb59a9d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.297s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:23:32 np0005601226 podman[257639]: 2026-01-29 17:23:32.23482638 +0000 UTC m=+2.790710302 container remove 2ee2ede44f5a6800b3a4fbe8c68a32709b36dc078bf9ca37e72536c5622d79f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_dewdney, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 29 12:23:32 np0005601226 nova_compute[239456]: 2026-01-29 17:23:32.247 239460 DEBUG nova.storage.rbd_utils [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] resizing rbd image 54ae1aee-2aec-49fb-981c-904cceb59a9d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 29 12:23:32 np0005601226 systemd[1]: libpod-conmon-2ee2ede44f5a6800b3a4fbe8c68a32709b36dc078bf9ca37e72536c5622d79f5.scope: Deactivated successfully.
Jan 29 12:23:32 np0005601226 podman[257758]: 2026-01-29 17:23:32.442627235 +0000 UTC m=+0.116557102 container create c6d7289f904d5104f7734a84b237b36134f72809f8bef33af40ceda0bfdaa6c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_shamir, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3)
Jan 29 12:23:32 np0005601226 podman[257758]: 2026-01-29 17:23:32.349822929 +0000 UTC m=+0.023752816 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:23:32 np0005601226 systemd[1]: Started libpod-conmon-c6d7289f904d5104f7734a84b237b36134f72809f8bef33af40ceda0bfdaa6c1.scope.
Jan 29 12:23:32 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:23:32 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1993797eb2242dbdf0aabbcde0e975ff92864def5448b8e0f90a62628e00be0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:23:32 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1993797eb2242dbdf0aabbcde0e975ff92864def5448b8e0f90a62628e00be0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:23:32 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1993797eb2242dbdf0aabbcde0e975ff92864def5448b8e0f90a62628e00be0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:23:32 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1993797eb2242dbdf0aabbcde0e975ff92864def5448b8e0f90a62628e00be0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:23:32 np0005601226 podman[257758]: 2026-01-29 17:23:32.80263991 +0000 UTC m=+0.476569847 container init c6d7289f904d5104f7734a84b237b36134f72809f8bef33af40ceda0bfdaa6c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:23:32 np0005601226 podman[257758]: 2026-01-29 17:23:32.809652721 +0000 UTC m=+0.483582588 container start c6d7289f904d5104f7734a84b237b36134f72809f8bef33af40ceda0bfdaa6c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_shamir, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:23:32 np0005601226 nova_compute[239456]: 2026-01-29 17:23:32.826 239460 DEBUG nova.objects.instance [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Lazy-loading 'migration_context' on Instance uuid 54ae1aee-2aec-49fb-981c-904cceb59a9d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:23:32 np0005601226 nova_compute[239456]: 2026-01-29 17:23:32.842 239460 DEBUG nova.virt.libvirt.driver [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 29 12:23:32 np0005601226 nova_compute[239456]: 2026-01-29 17:23:32.843 239460 DEBUG nova.virt.libvirt.driver [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Ensure instance console log exists: /var/lib/nova/instances/54ae1aee-2aec-49fb-981c-904cceb59a9d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 29 12:23:32 np0005601226 nova_compute[239456]: 2026-01-29 17:23:32.843 239460 DEBUG oslo_concurrency.lockutils [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:23:32 np0005601226 nova_compute[239456]: 2026-01-29 17:23:32.843 239460 DEBUG oslo_concurrency.lockutils [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:23:32 np0005601226 nova_compute[239456]: 2026-01-29 17:23:32.844 239460 DEBUG oslo_concurrency.lockutils [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:23:32 np0005601226 nova_compute[239456]: 2026-01-29 17:23:32.846 239460 DEBUG nova.virt.libvirt.driver [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Start _get_guest_xml network_info=[{"id": "dd0e38fb-6c55-46b2-944f-3b2cf8f87929", "address": "fa:16:3e:c3:48:d2", "network": {"id": "3c884cc1-e1d2-418b-8bb8-bae78dab7018", "bridge": "br-int", "label": "tempest-TestStampPattern-2089746871-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f2a1daea29d845c4b1c58f0e6610e767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd0e38fb-6c", "ovs_interfaceid": "dd0e38fb-6c55-46b2-944f-3b2cf8f87929", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-29T17:13:26Z,direct_url=<?>,disk_format='qcow2',id=71879218-5462-43bb-aba6-6319695b24fd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='218d87653c0f4776a3f1900d36945229',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-29T17:13:29Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'boot_index': 0, 'encrypted': False, 'image_id': '71879218-5462-43bb-aba6-6319695b24fd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 29 12:23:32 np0005601226 nova_compute[239456]: 2026-01-29 17:23:32.851 239460 WARNING nova.virt.libvirt.driver [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 29 12:23:32 np0005601226 nova_compute[239456]: 2026-01-29 17:23:32.855 239460 DEBUG nova.virt.libvirt.host [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 29 12:23:32 np0005601226 nova_compute[239456]: 2026-01-29 17:23:32.856 239460 DEBUG nova.virt.libvirt.host [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 29 12:23:32 np0005601226 nova_compute[239456]: 2026-01-29 17:23:32.862 239460 DEBUG nova.virt.libvirt.host [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 29 12:23:32 np0005601226 nova_compute[239456]: 2026-01-29 17:23:32.863 239460 DEBUG nova.virt.libvirt.host [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 29 12:23:32 np0005601226 nova_compute[239456]: 2026-01-29 17:23:32.863 239460 DEBUG nova.virt.libvirt.driver [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 29 12:23:32 np0005601226 nova_compute[239456]: 2026-01-29 17:23:32.863 239460 DEBUG nova.virt.hardware [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-29T17:13:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='e367d44e-e23e-4b8e-90d1-56d09c8403b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-29T17:13:26Z,direct_url=<?>,disk_format='qcow2',id=71879218-5462-43bb-aba6-6319695b24fd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='218d87653c0f4776a3f1900d36945229',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-29T17:13:29Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 29 12:23:32 np0005601226 nova_compute[239456]: 2026-01-29 17:23:32.864 239460 DEBUG nova.virt.hardware [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 29 12:23:32 np0005601226 nova_compute[239456]: 2026-01-29 17:23:32.864 239460 DEBUG nova.virt.hardware [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 29 12:23:32 np0005601226 nova_compute[239456]: 2026-01-29 17:23:32.865 239460 DEBUG nova.virt.hardware [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 29 12:23:32 np0005601226 nova_compute[239456]: 2026-01-29 17:23:32.865 239460 DEBUG nova.virt.hardware [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 29 12:23:32 np0005601226 nova_compute[239456]: 2026-01-29 17:23:32.865 239460 DEBUG nova.virt.hardware [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 29 12:23:32 np0005601226 nova_compute[239456]: 2026-01-29 17:23:32.865 239460 DEBUG nova.virt.hardware [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 29 12:23:32 np0005601226 nova_compute[239456]: 2026-01-29 17:23:32.866 239460 DEBUG nova.virt.hardware [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 29 12:23:32 np0005601226 nova_compute[239456]: 2026-01-29 17:23:32.866 239460 DEBUG nova.virt.hardware [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 29 12:23:32 np0005601226 nova_compute[239456]: 2026-01-29 17:23:32.866 239460 DEBUG nova.virt.hardware [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 29 12:23:32 np0005601226 nova_compute[239456]: 2026-01-29 17:23:32.866 239460 DEBUG nova.virt.hardware [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 29 12:23:32 np0005601226 nova_compute[239456]: 2026-01-29 17:23:32.870 239460 DEBUG oslo_concurrency.processutils [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:23:32 np0005601226 nova_compute[239456]: 2026-01-29 17:23:32.989 239460 DEBUG nova.network.neutron [req-95e12c3b-e563-4ea2-8a7c-fb6773cac603 req-0d08abfa-1ae3-488a-ba45-07fb676e9610 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Updated VIF entry in instance network info cache for port dd0e38fb-6c55-46b2-944f-3b2cf8f87929. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 29 12:23:32 np0005601226 nova_compute[239456]: 2026-01-29 17:23:32.992 239460 DEBUG nova.network.neutron [req-95e12c3b-e563-4ea2-8a7c-fb6773cac603 req-0d08abfa-1ae3-488a-ba45-07fb676e9610 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Updating instance_info_cache with network_info: [{"id": "dd0e38fb-6c55-46b2-944f-3b2cf8f87929", "address": "fa:16:3e:c3:48:d2", "network": {"id": "3c884cc1-e1d2-418b-8bb8-bae78dab7018", "bridge": "br-int", "label": "tempest-TestStampPattern-2089746871-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f2a1daea29d845c4b1c58f0e6610e767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd0e38fb-6c", "ovs_interfaceid": "dd0e38fb-6c55-46b2-944f-3b2cf8f87929", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:23:33 np0005601226 nova_compute[239456]: 2026-01-29 17:23:33.027 239460 DEBUG oslo_concurrency.lockutils [req-95e12c3b-e563-4ea2-8a7c-fb6773cac603 req-0d08abfa-1ae3-488a-ba45-07fb676e9610 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Releasing lock "refresh_cache-54ae1aee-2aec-49fb-981c-904cceb59a9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:23:33 np0005601226 podman[257758]: 2026-01-29 17:23:33.063796644 +0000 UTC m=+0.737726531 container attach c6d7289f904d5104f7734a84b237b36134f72809f8bef33af40ceda0bfdaa6c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_shamir, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 29 12:23:33 np0005601226 lvm[257891]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 12:23:33 np0005601226 lvm[257890]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 12:23:33 np0005601226 lvm[257890]: VG ceph_vg0 finished
Jan 29 12:23:33 np0005601226 lvm[257891]: VG ceph_vg1 finished
Jan 29 12:23:33 np0005601226 lvm[257893]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 12:23:33 np0005601226 lvm[257893]: VG ceph_vg2 finished
Jan 29 12:23:33 np0005601226 friendly_shamir[257774]: {}
Jan 29 12:23:33 np0005601226 podman[257758]: 2026-01-29 17:23:33.488575415 +0000 UTC m=+1.162505272 container died c6d7289f904d5104f7734a84b237b36134f72809f8bef33af40ceda0bfdaa6c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_shamir, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:23:33 np0005601226 systemd[1]: libpod-c6d7289f904d5104f7734a84b237b36134f72809f8bef33af40ceda0bfdaa6c1.scope: Deactivated successfully.
Jan 29 12:23:33 np0005601226 systemd[1]: libpod-c6d7289f904d5104f7734a84b237b36134f72809f8bef33af40ceda0bfdaa6c1.scope: Consumed 1.015s CPU time.
Jan 29 12:23:33 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:23:33 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2750580884' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:23:33 np0005601226 nova_compute[239456]: 2026-01-29 17:23:33.564 239460 DEBUG oslo_concurrency.processutils [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.695s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:23:33 np0005601226 nova_compute[239456]: 2026-01-29 17:23:33.595 239460 DEBUG nova.storage.rbd_utils [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] rbd image 54ae1aee-2aec-49fb-981c-904cceb59a9d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:23:33 np0005601226 nova_compute[239456]: 2026-01-29 17:23:33.599 239460 DEBUG oslo_concurrency.processutils [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:23:33 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1283: 305 pgs: 305 active+clean; 128 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 463 KiB/s rd, 2.0 MiB/s wr, 76 op/s
Jan 29 12:23:34 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:23:34 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2419064355' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:23:34 np0005601226 nova_compute[239456]: 2026-01-29 17:23:34.099 239460 DEBUG oslo_concurrency.processutils [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:23:34 np0005601226 nova_compute[239456]: 2026-01-29 17:23:34.100 239460 DEBUG nova.virt.libvirt.vif [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-29T17:23:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-1377494089',display_name='tempest-TestStampPattern-server-1377494089',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-1377494089',id=10,image_ref='71879218-5462-43bb-aba6-6319695b24fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCntQFYGg1tN9Lkltvq06uP6PbTSdiUSw2rpV4DVMQfDXGCpCCbqNspsVT5fc2Gf5/3l4zc3WW9mGuuTy6awOxbpJd54hg8vvKJT9WsymmM3odJoG0L/624VsKwRCgcRrg==',key_name='tempest-TestStampPattern-1877159583',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f2a1daea29d845c4b1c58f0e6610e767',ramdisk_id='',reservation_id='r-l08m7xok',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='71879218-5462-43bb-aba6-6319695b24fd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestStampPattern-907219493',owner_user_name='tempest-TestStampPattern-907219493-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-29T17:23:28Z,user_data=None,user_id='66a034221acf4c559a731fcc84a54c53',uuid=54ae1aee-2aec-49fb-981c-904cceb59a9d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dd0e38fb-6c55-46b2-944f-3b2cf8f87929", "address": "fa:16:3e:c3:48:d2", "network": {"id": "3c884cc1-e1d2-418b-8bb8-bae78dab7018", "bridge": "br-int", "label": "tempest-TestStampPattern-2089746871-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "f2a1daea29d845c4b1c58f0e6610e767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd0e38fb-6c", "ovs_interfaceid": "dd0e38fb-6c55-46b2-944f-3b2cf8f87929", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 29 12:23:34 np0005601226 nova_compute[239456]: 2026-01-29 17:23:34.100 239460 DEBUG nova.network.os_vif_util [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Converting VIF {"id": "dd0e38fb-6c55-46b2-944f-3b2cf8f87929", "address": "fa:16:3e:c3:48:d2", "network": {"id": "3c884cc1-e1d2-418b-8bb8-bae78dab7018", "bridge": "br-int", "label": "tempest-TestStampPattern-2089746871-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f2a1daea29d845c4b1c58f0e6610e767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd0e38fb-6c", "ovs_interfaceid": "dd0e38fb-6c55-46b2-944f-3b2cf8f87929", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 29 12:23:34 np0005601226 nova_compute[239456]: 2026-01-29 17:23:34.101 239460 DEBUG nova.network.os_vif_util [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c3:48:d2,bridge_name='br-int',has_traffic_filtering=True,id=dd0e38fb-6c55-46b2-944f-3b2cf8f87929,network=Network(3c884cc1-e1d2-418b-8bb8-bae78dab7018),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdd0e38fb-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 29 12:23:34 np0005601226 nova_compute[239456]: 2026-01-29 17:23:34.102 239460 DEBUG nova.objects.instance [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Lazy-loading 'pci_devices' on Instance uuid 54ae1aee-2aec-49fb-981c-904cceb59a9d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:23:34 np0005601226 nova_compute[239456]: 2026-01-29 17:23:34.115 239460 DEBUG nova.virt.libvirt.driver [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] End _get_guest_xml xml=<domain type="kvm">
Jan 29 12:23:34 np0005601226 nova_compute[239456]:  <uuid>54ae1aee-2aec-49fb-981c-904cceb59a9d</uuid>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:  <name>instance-0000000a</name>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:  <memory>131072</memory>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:  <vcpu>1</vcpu>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:  <metadata>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 29 12:23:34 np0005601226 nova_compute[239456]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:      <nova:name>tempest-TestStampPattern-server-1377494089</nova:name>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:      <nova:creationTime>2026-01-29 17:23:32</nova:creationTime>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:      <nova:flavor name="m1.nano">
Jan 29 12:23:34 np0005601226 nova_compute[239456]:        <nova:memory>128</nova:memory>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:        <nova:disk>1</nova:disk>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:        <nova:swap>0</nova:swap>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:        <nova:ephemeral>0</nova:ephemeral>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:        <nova:vcpus>1</nova:vcpus>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:      </nova:flavor>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:      <nova:owner>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:        <nova:user uuid="66a034221acf4c559a731fcc84a54c53">tempest-TestStampPattern-907219493-project-member</nova:user>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:        <nova:project uuid="f2a1daea29d845c4b1c58f0e6610e767">tempest-TestStampPattern-907219493</nova:project>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:      </nova:owner>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:      <nova:root type="image" uuid="71879218-5462-43bb-aba6-6319695b24fd"/>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:      <nova:ports>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:        <nova:port uuid="dd0e38fb-6c55-46b2-944f-3b2cf8f87929">
Jan 29 12:23:34 np0005601226 nova_compute[239456]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:        </nova:port>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:      </nova:ports>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:    </nova:instance>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:  </metadata>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:  <sysinfo type="smbios">
Jan 29 12:23:34 np0005601226 nova_compute[239456]:    <system>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:      <entry name="manufacturer">RDO</entry>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:      <entry name="product">OpenStack Compute</entry>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:      <entry name="serial">54ae1aee-2aec-49fb-981c-904cceb59a9d</entry>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:      <entry name="uuid">54ae1aee-2aec-49fb-981c-904cceb59a9d</entry>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:      <entry name="family">Virtual Machine</entry>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:    </system>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:  </sysinfo>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:  <os>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:    <boot dev="hd"/>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:    <smbios mode="sysinfo"/>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:  </os>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:  <features>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:    <acpi/>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:    <apic/>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:    <vmcoreinfo/>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:  </features>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:  <clock offset="utc">
Jan 29 12:23:34 np0005601226 nova_compute[239456]:    <timer name="pit" tickpolicy="delay"/>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:    <timer name="hpet" present="no"/>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:  </clock>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:  <cpu mode="host-model" match="exact">
Jan 29 12:23:34 np0005601226 nova_compute[239456]:    <topology sockets="1" cores="1" threads="1"/>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:  </cpu>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:  <devices>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:    <disk type="network" device="disk">
Jan 29 12:23:34 np0005601226 nova_compute[239456]:      <driver type="raw" cache="none"/>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:      <source protocol="rbd" name="vms/54ae1aee-2aec-49fb-981c-904cceb59a9d_disk">
Jan 29 12:23:34 np0005601226 nova_compute[239456]:        <host name="192.168.122.100" port="6789"/>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:      </source>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:      <auth username="openstack">
Jan 29 12:23:34 np0005601226 nova_compute[239456]:        <secret type="ceph" uuid="cc5c72e3-31e0-58b9-8731-456117d38f4a"/>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:      </auth>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:      <target dev="vda" bus="virtio"/>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:    </disk>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:    <disk type="network" device="cdrom">
Jan 29 12:23:34 np0005601226 nova_compute[239456]:      <driver type="raw" cache="none"/>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:      <source protocol="rbd" name="vms/54ae1aee-2aec-49fb-981c-904cceb59a9d_disk.config">
Jan 29 12:23:34 np0005601226 nova_compute[239456]:        <host name="192.168.122.100" port="6789"/>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:      </source>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:      <auth username="openstack">
Jan 29 12:23:34 np0005601226 nova_compute[239456]:        <secret type="ceph" uuid="cc5c72e3-31e0-58b9-8731-456117d38f4a"/>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:      </auth>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:      <target dev="sda" bus="sata"/>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:    </disk>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:    <interface type="ethernet">
Jan 29 12:23:34 np0005601226 nova_compute[239456]:      <mac address="fa:16:3e:c3:48:d2"/>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:      <model type="virtio"/>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:      <driver name="vhost" rx_queue_size="512"/>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:      <mtu size="1442"/>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:      <target dev="tapdd0e38fb-6c"/>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:    </interface>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:    <serial type="pty">
Jan 29 12:23:34 np0005601226 nova_compute[239456]:      <log file="/var/lib/nova/instances/54ae1aee-2aec-49fb-981c-904cceb59a9d/console.log" append="off"/>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:    </serial>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:    <video>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:      <model type="virtio"/>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:    </video>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:    <input type="tablet" bus="usb"/>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:    <rng model="virtio">
Jan 29 12:23:34 np0005601226 nova_compute[239456]:      <backend model="random">/dev/urandom</backend>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:    </rng>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root"/>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:    <controller type="usb" index="0"/>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:    <memballoon model="virtio">
Jan 29 12:23:34 np0005601226 nova_compute[239456]:      <stats period="10"/>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:    </memballoon>
Jan 29 12:23:34 np0005601226 nova_compute[239456]:  </devices>
Jan 29 12:23:34 np0005601226 nova_compute[239456]: </domain>
Jan 29 12:23:34 np0005601226 nova_compute[239456]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 29 12:23:34 np0005601226 nova_compute[239456]: 2026-01-29 17:23:34.116 239460 DEBUG nova.compute.manager [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Preparing to wait for external event network-vif-plugged-dd0e38fb-6c55-46b2-944f-3b2cf8f87929 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 29 12:23:34 np0005601226 nova_compute[239456]: 2026-01-29 17:23:34.116 239460 DEBUG oslo_concurrency.lockutils [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Acquiring lock "54ae1aee-2aec-49fb-981c-904cceb59a9d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:23:34 np0005601226 nova_compute[239456]: 2026-01-29 17:23:34.116 239460 DEBUG oslo_concurrency.lockutils [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Lock "54ae1aee-2aec-49fb-981c-904cceb59a9d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:23:34 np0005601226 nova_compute[239456]: 2026-01-29 17:23:34.116 239460 DEBUG oslo_concurrency.lockutils [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Lock "54ae1aee-2aec-49fb-981c-904cceb59a9d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:23:34 np0005601226 nova_compute[239456]: 2026-01-29 17:23:34.117 239460 DEBUG nova.virt.libvirt.vif [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-29T17:23:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-1377494089',display_name='tempest-TestStampPattern-server-1377494089',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-1377494089',id=10,image_ref='71879218-5462-43bb-aba6-6319695b24fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCntQFYGg1tN9Lkltvq06uP6PbTSdiUSw2rpV4DVMQfDXGCpCCbqNspsVT5fc2Gf5/3l4zc3WW9mGuuTy6awOxbpJd54hg8vvKJT9WsymmM3odJoG0L/624VsKwRCgcRrg==',key_name='tempest-TestStampPattern-1877159583',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f2a1daea29d845c4b1c58f0e6610e767',ramdisk_id='',reservation_id='r-l08m7xok',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='71879218-5462-43bb-aba6-6319695b24fd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestStampPattern-907219493',owner_user_name='tempest-TestStampPattern-907219493-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-29T17:23:28Z,user_data=None,user_id='66a034221acf4c559a731fcc84a54c53',uuid=54ae1aee-2aec-49fb-981c-904cceb59a9d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dd0e38fb-6c55-46b2-944f-3b2cf8f87929", "address": "fa:16:3e:c3:48:d2", "network": {"id": "3c884cc1-e1d2-418b-8bb8-bae78dab7018", "bridge": "br-int", "label": "tempest-TestStampPattern-2089746871-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "f2a1daea29d845c4b1c58f0e6610e767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd0e38fb-6c", "ovs_interfaceid": "dd0e38fb-6c55-46b2-944f-3b2cf8f87929", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 29 12:23:34 np0005601226 nova_compute[239456]: 2026-01-29 17:23:34.117 239460 DEBUG nova.network.os_vif_util [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Converting VIF {"id": "dd0e38fb-6c55-46b2-944f-3b2cf8f87929", "address": "fa:16:3e:c3:48:d2", "network": {"id": "3c884cc1-e1d2-418b-8bb8-bae78dab7018", "bridge": "br-int", "label": "tempest-TestStampPattern-2089746871-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f2a1daea29d845c4b1c58f0e6610e767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd0e38fb-6c", "ovs_interfaceid": "dd0e38fb-6c55-46b2-944f-3b2cf8f87929", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 29 12:23:34 np0005601226 nova_compute[239456]: 2026-01-29 17:23:34.118 239460 DEBUG nova.network.os_vif_util [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c3:48:d2,bridge_name='br-int',has_traffic_filtering=True,id=dd0e38fb-6c55-46b2-944f-3b2cf8f87929,network=Network(3c884cc1-e1d2-418b-8bb8-bae78dab7018),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdd0e38fb-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 29 12:23:34 np0005601226 nova_compute[239456]: 2026-01-29 17:23:34.118 239460 DEBUG os_vif [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c3:48:d2,bridge_name='br-int',has_traffic_filtering=True,id=dd0e38fb-6c55-46b2-944f-3b2cf8f87929,network=Network(3c884cc1-e1d2-418b-8bb8-bae78dab7018),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdd0e38fb-6c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 29 12:23:34 np0005601226 nova_compute[239456]: 2026-01-29 17:23:34.118 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:23:34 np0005601226 nova_compute[239456]: 2026-01-29 17:23:34.119 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:23:34 np0005601226 nova_compute[239456]: 2026-01-29 17:23:34.119 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 29 12:23:34 np0005601226 nova_compute[239456]: 2026-01-29 17:23:34.121 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:23:34 np0005601226 nova_compute[239456]: 2026-01-29 17:23:34.121 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdd0e38fb-6c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:23:34 np0005601226 nova_compute[239456]: 2026-01-29 17:23:34.122 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdd0e38fb-6c, col_values=(('external_ids', {'iface-id': 'dd0e38fb-6c55-46b2-944f-3b2cf8f87929', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c3:48:d2', 'vm-uuid': '54ae1aee-2aec-49fb-981c-904cceb59a9d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:23:34 np0005601226 nova_compute[239456]: 2026-01-29 17:23:34.123 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:23:34 np0005601226 NetworkManager[49020]: <info>  [1769707414.1244] manager: (tapdd0e38fb-6c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/63)
Jan 29 12:23:34 np0005601226 nova_compute[239456]: 2026-01-29 17:23:34.125 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 29 12:23:34 np0005601226 nova_compute[239456]: 2026-01-29 17:23:34.128 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:23:34 np0005601226 nova_compute[239456]: 2026-01-29 17:23:34.129 239460 INFO os_vif [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c3:48:d2,bridge_name='br-int',has_traffic_filtering=True,id=dd0e38fb-6c55-46b2-944f-3b2cf8f87929,network=Network(3c884cc1-e1d2-418b-8bb8-bae78dab7018),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdd0e38fb-6c')#033[00m
Jan 29 12:23:34 np0005601226 systemd[1]: var-lib-containers-storage-overlay-a1993797eb2242dbdf0aabbcde0e975ff92864def5448b8e0f90a62628e00be0-merged.mount: Deactivated successfully.
Jan 29 12:23:34 np0005601226 nova_compute[239456]: 2026-01-29 17:23:34.596 239460 DEBUG nova.virt.libvirt.driver [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:23:34 np0005601226 nova_compute[239456]: 2026-01-29 17:23:34.596 239460 DEBUG nova.virt.libvirt.driver [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:23:34 np0005601226 nova_compute[239456]: 2026-01-29 17:23:34.597 239460 DEBUG nova.virt.libvirt.driver [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] No VIF found with MAC fa:16:3e:c3:48:d2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 29 12:23:34 np0005601226 nova_compute[239456]: 2026-01-29 17:23:34.597 239460 INFO nova.virt.libvirt.driver [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Using config drive#033[00m
Jan 29 12:23:34 np0005601226 nova_compute[239456]: 2026-01-29 17:23:34.666 239460 DEBUG nova.storage.rbd_utils [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] rbd image 54ae1aee-2aec-49fb-981c-904cceb59a9d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:23:34 np0005601226 nova_compute[239456]: 2026-01-29 17:23:34.671 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:23:34 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:23:34 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/593864570' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:23:34 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:23:34 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/593864570' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:23:34 np0005601226 nova_compute[239456]: 2026-01-29 17:23:34.967 239460 INFO nova.virt.libvirt.driver [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Creating config drive at /var/lib/nova/instances/54ae1aee-2aec-49fb-981c-904cceb59a9d/disk.config#033[00m
Jan 29 12:23:34 np0005601226 nova_compute[239456]: 2026-01-29 17:23:34.970 239460 DEBUG oslo_concurrency.processutils [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/54ae1aee-2aec-49fb-981c-904cceb59a9d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0zl38o_0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:23:35 np0005601226 nova_compute[239456]: 2026-01-29 17:23:35.088 239460 DEBUG oslo_concurrency.processutils [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/54ae1aee-2aec-49fb-981c-904cceb59a9d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0zl38o_0" returned: 0 in 0.118s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:23:35 np0005601226 nova_compute[239456]: 2026-01-29 17:23:35.201 239460 DEBUG nova.storage.rbd_utils [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] rbd image 54ae1aee-2aec-49fb-981c-904cceb59a9d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:23:35 np0005601226 nova_compute[239456]: 2026-01-29 17:23:35.205 239460 DEBUG oslo_concurrency.processutils [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/54ae1aee-2aec-49fb-981c-904cceb59a9d/disk.config 54ae1aee-2aec-49fb-981c-904cceb59a9d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:23:35 np0005601226 podman[257758]: 2026-01-29 17:23:35.230652954 +0000 UTC m=+2.904582821 container remove c6d7289f904d5104f7734a84b237b36134f72809f8bef33af40ceda0bfdaa6c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_shamir, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:23:35 np0005601226 systemd[1]: libpod-conmon-c6d7289f904d5104f7734a84b237b36134f72809f8bef33af40ceda0bfdaa6c1.scope: Deactivated successfully.
Jan 29 12:23:35 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 12:23:35 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:23:35 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 12:23:35 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1284: 305 pgs: 305 active+clean; 167 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 511 KiB/s rd, 3.5 MiB/s wr, 148 op/s
Jan 29 12:23:35 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:23:36 np0005601226 nova_compute[239456]: 2026-01-29 17:23:36.089 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:23:36 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:23:36 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e275 do_prune osdmap full prune enabled
Jan 29 12:23:36 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:23:36 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:23:36 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e276 e276: 3 total, 3 up, 3 in
Jan 29 12:23:36 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e276: 3 total, 3 up, 3 in
Jan 29 12:23:36 np0005601226 nova_compute[239456]: 2026-01-29 17:23:36.900 239460 DEBUG oslo_concurrency.processutils [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/54ae1aee-2aec-49fb-981c-904cceb59a9d/disk.config 54ae1aee-2aec-49fb-981c-904cceb59a9d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.695s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:23:36 np0005601226 nova_compute[239456]: 2026-01-29 17:23:36.902 239460 INFO nova.virt.libvirt.driver [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Deleting local config drive /var/lib/nova/instances/54ae1aee-2aec-49fb-981c-904cceb59a9d/disk.config because it was imported into RBD.#033[00m
Jan 29 12:23:36 np0005601226 kernel: tapdd0e38fb-6c: entered promiscuous mode
Jan 29 12:23:36 np0005601226 NetworkManager[49020]: <info>  [1769707416.9513] manager: (tapdd0e38fb-6c): new Tun device (/org/freedesktop/NetworkManager/Devices/64)
Jan 29 12:23:36 np0005601226 ovn_controller[145556]: 2026-01-29T17:23:36Z|00104|binding|INFO|Claiming lport dd0e38fb-6c55-46b2-944f-3b2cf8f87929 for this chassis.
Jan 29 12:23:36 np0005601226 ovn_controller[145556]: 2026-01-29T17:23:36Z|00105|binding|INFO|dd0e38fb-6c55-46b2-944f-3b2cf8f87929: Claiming fa:16:3e:c3:48:d2 10.100.0.12
Jan 29 12:23:36 np0005601226 nova_compute[239456]: 2026-01-29 17:23:36.951 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:23:36 np0005601226 nova_compute[239456]: 2026-01-29 17:23:36.958 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:23:36 np0005601226 ovn_controller[145556]: 2026-01-29T17:23:36Z|00106|binding|INFO|Setting lport dd0e38fb-6c55-46b2-944f-3b2cf8f87929 ovn-installed in OVS
Jan 29 12:23:36 np0005601226 ovn_controller[145556]: 2026-01-29T17:23:36Z|00107|binding|INFO|Setting lport dd0e38fb-6c55-46b2-944f-3b2cf8f87929 up in Southbound
Jan 29 12:23:36 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:36.958 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c3:48:d2 10.100.0.12'], port_security=['fa:16:3e:c3:48:d2 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '54ae1aee-2aec-49fb-981c-904cceb59a9d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3c884cc1-e1d2-418b-8bb8-bae78dab7018', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f2a1daea29d845c4b1c58f0e6610e767', 'neutron:revision_number': '2', 'neutron:security_group_ids': '58fc09dd-a146-490e-a131-265322bed80e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=627df87c-0fcf-4d89-b573-9b0d1cecf486, chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], logical_port=dd0e38fb-6c55-46b2-944f-3b2cf8f87929) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:23:36 np0005601226 nova_compute[239456]: 2026-01-29 17:23:36.960 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:23:36 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:36.961 155625 INFO neutron.agent.ovn.metadata.agent [-] Port dd0e38fb-6c55-46b2-944f-3b2cf8f87929 in datapath 3c884cc1-e1d2-418b-8bb8-bae78dab7018 bound to our chassis#033[00m
Jan 29 12:23:36 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:36.963 155625 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3c884cc1-e1d2-418b-8bb8-bae78dab7018#033[00m
Jan 29 12:23:36 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:36.976 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[890bdb55-213c-484a-bd81-09c55705e554]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:23:36 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:36.977 155625 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3c884cc1-e1 in ovnmeta-3c884cc1-e1d2-418b-8bb8-bae78dab7018 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 29 12:23:36 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:36.978 246354 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3c884cc1-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 29 12:23:36 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:36.978 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[f7250c51-1bc1-4ca0-9d2e-1e53bcc61fa7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:23:36 np0005601226 systemd-machined[207561]: New machine qemu-10-instance-0000000a.
Jan 29 12:23:36 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:36.980 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[ce2c1565-493d-4976-a4c1-c38874a3681a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:23:36 np0005601226 systemd-udevd[258051]: Network interface NamePolicy= disabled on kernel command line.
Jan 29 12:23:36 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:36.989 156164 DEBUG oslo.privsep.daemon [-] privsep: reply[5242e10b-cbcb-462a-b258-db44e524a141]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:23:36 np0005601226 systemd[1]: Started Virtual Machine qemu-10-instance-0000000a.
Jan 29 12:23:36 np0005601226 NetworkManager[49020]: <info>  [1769707416.9947] device (tapdd0e38fb-6c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 29 12:23:36 np0005601226 NetworkManager[49020]: <info>  [1769707416.9954] device (tapdd0e38fb-6c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 29 12:23:37 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:37.002 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[f008b721-c3f8-4af0-8ad5-4b40f878f783]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:23:37 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:37.032 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[2f4e2d6e-8780-40be-80b5-ffa761f5edea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:23:37 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:37.036 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[4e8aa7a3-5719-4171-a355-8f86bb33678d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:23:37 np0005601226 NetworkManager[49020]: <info>  [1769707417.0378] manager: (tap3c884cc1-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/65)
Jan 29 12:23:37 np0005601226 systemd-udevd[258054]: Network interface NamePolicy= disabled on kernel command line.
Jan 29 12:23:37 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:37.066 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[a7d1277a-d209-4775-9ba7-d9bdeddeacff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:23:37 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:37.069 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[76736c26-fc5d-4303-9f86-ebaa514f2d15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:23:37 np0005601226 NetworkManager[49020]: <info>  [1769707417.0862] device (tap3c884cc1-e0): carrier: link connected
Jan 29 12:23:37 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:37.089 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[3ab796e5-847f-4024-838e-55600737e654]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:23:37 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:37.104 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[6b8cee57-ad7a-4849-afc8-c7e542b06b72]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3c884cc1-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:2e:b9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 39], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 477058, 'reachable_time': 17483, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258083, 'error': None, 'target': 'ovnmeta-3c884cc1-e1d2-418b-8bb8-bae78dab7018', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:23:37 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:37.113 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[7630379e-5e4a-403c-bef9-dc99df31184a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe05:2eb9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 477058, 'tstamp': 477058}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258084, 'error': None, 'target': 'ovnmeta-3c884cc1-e1d2-418b-8bb8-bae78dab7018', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:23:37 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:37.125 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[36eed4ae-a534-487f-b880-b5890c4ee2b6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3c884cc1-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:2e:b9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 39], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 477058, 'reachable_time': 17483, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 258085, 'error': None, 'target': 'ovnmeta-3c884cc1-e1d2-418b-8bb8-bae78dab7018', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:23:37 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:37.140 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[23419a68-5445-4ed0-9207-434852ac6644]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:23:37 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:37.170 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[c2aaa0e0-229d-4848-bdbb-8a73601337a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:23:37 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:37.171 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c884cc1-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:23:37 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:37.172 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 29 12:23:37 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:37.172 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3c884cc1-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:23:37 np0005601226 nova_compute[239456]: 2026-01-29 17:23:37.174 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:23:37 np0005601226 NetworkManager[49020]: <info>  [1769707417.1745] manager: (tap3c884cc1-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/66)
Jan 29 12:23:37 np0005601226 kernel: tap3c884cc1-e0: entered promiscuous mode
Jan 29 12:23:37 np0005601226 nova_compute[239456]: 2026-01-29 17:23:37.176 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:23:37 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:37.181 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3c884cc1-e0, col_values=(('external_ids', {'iface-id': '0442c862-051a-4100-a371-ef7e19ea6eba'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:23:37 np0005601226 nova_compute[239456]: 2026-01-29 17:23:37.212 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:23:37 np0005601226 ovn_controller[145556]: 2026-01-29T17:23:37Z|00108|binding|INFO|Releasing lport 0442c862-051a-4100-a371-ef7e19ea6eba from this chassis (sb_readonly=0)
Jan 29 12:23:37 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:37.215 155625 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3c884cc1-e1d2-418b-8bb8-bae78dab7018.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3c884cc1-e1d2-418b-8bb8-bae78dab7018.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 29 12:23:37 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:37.217 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[294e0d1d-15e8-4df5-9405-722c95198e60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:23:37 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:37.218 155625 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 29 12:23:37 np0005601226 ovn_metadata_agent[155620]: global
Jan 29 12:23:37 np0005601226 ovn_metadata_agent[155620]:    log         /dev/log local0 debug
Jan 29 12:23:37 np0005601226 ovn_metadata_agent[155620]:    log-tag     haproxy-metadata-proxy-3c884cc1-e1d2-418b-8bb8-bae78dab7018
Jan 29 12:23:37 np0005601226 ovn_metadata_agent[155620]:    user        root
Jan 29 12:23:37 np0005601226 ovn_metadata_agent[155620]:    group       root
Jan 29 12:23:37 np0005601226 ovn_metadata_agent[155620]:    maxconn     1024
Jan 29 12:23:37 np0005601226 ovn_metadata_agent[155620]:    pidfile     /var/lib/neutron/external/pids/3c884cc1-e1d2-418b-8bb8-bae78dab7018.pid.haproxy
Jan 29 12:23:37 np0005601226 ovn_metadata_agent[155620]:    daemon
Jan 29 12:23:37 np0005601226 ovn_metadata_agent[155620]: 
Jan 29 12:23:37 np0005601226 ovn_metadata_agent[155620]: defaults
Jan 29 12:23:37 np0005601226 ovn_metadata_agent[155620]:    log global
Jan 29 12:23:37 np0005601226 ovn_metadata_agent[155620]:    mode http
Jan 29 12:23:37 np0005601226 ovn_metadata_agent[155620]:    option httplog
Jan 29 12:23:37 np0005601226 ovn_metadata_agent[155620]:    option dontlognull
Jan 29 12:23:37 np0005601226 ovn_metadata_agent[155620]:    option http-server-close
Jan 29 12:23:37 np0005601226 ovn_metadata_agent[155620]:    option forwardfor
Jan 29 12:23:37 np0005601226 ovn_metadata_agent[155620]:    retries                 3
Jan 29 12:23:37 np0005601226 ovn_metadata_agent[155620]:    timeout http-request    30s
Jan 29 12:23:37 np0005601226 ovn_metadata_agent[155620]:    timeout connect         30s
Jan 29 12:23:37 np0005601226 ovn_metadata_agent[155620]:    timeout client          32s
Jan 29 12:23:37 np0005601226 ovn_metadata_agent[155620]:    timeout server          32s
Jan 29 12:23:37 np0005601226 ovn_metadata_agent[155620]:    timeout http-keep-alive 30s
Jan 29 12:23:37 np0005601226 ovn_metadata_agent[155620]: 
Jan 29 12:23:37 np0005601226 ovn_metadata_agent[155620]: 
Jan 29 12:23:37 np0005601226 ovn_metadata_agent[155620]: listen listener
Jan 29 12:23:37 np0005601226 ovn_metadata_agent[155620]:    bind 169.254.169.254:80
Jan 29 12:23:37 np0005601226 ovn_metadata_agent[155620]:    server metadata /var/lib/neutron/metadata_proxy
Jan 29 12:23:37 np0005601226 ovn_metadata_agent[155620]:    http-request add-header X-OVN-Network-ID 3c884cc1-e1d2-418b-8bb8-bae78dab7018
Jan 29 12:23:37 np0005601226 ovn_metadata_agent[155620]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 29 12:23:37 np0005601226 nova_compute[239456]: 2026-01-29 17:23:37.219 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:23:37 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:37.221 155625 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3c884cc1-e1d2-418b-8bb8-bae78dab7018', 'env', 'PROCESS_TAG=haproxy-3c884cc1-e1d2-418b-8bb8-bae78dab7018', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3c884cc1-e1d2-418b-8bb8-bae78dab7018.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 29 12:23:37 np0005601226 nova_compute[239456]: 2026-01-29 17:23:37.445 239460 DEBUG nova.compute.manager [req-fab437f8-76a6-47cb-aa2b-8d7cf4d3a277 req-1f1c67c3-3000-47a6-bd82-39ffff7ba541 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Received event network-vif-plugged-dd0e38fb-6c55-46b2-944f-3b2cf8f87929 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:23:37 np0005601226 nova_compute[239456]: 2026-01-29 17:23:37.445 239460 DEBUG oslo_concurrency.lockutils [req-fab437f8-76a6-47cb-aa2b-8d7cf4d3a277 req-1f1c67c3-3000-47a6-bd82-39ffff7ba541 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "54ae1aee-2aec-49fb-981c-904cceb59a9d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:23:37 np0005601226 nova_compute[239456]: 2026-01-29 17:23:37.446 239460 DEBUG oslo_concurrency.lockutils [req-fab437f8-76a6-47cb-aa2b-8d7cf4d3a277 req-1f1c67c3-3000-47a6-bd82-39ffff7ba541 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "54ae1aee-2aec-49fb-981c-904cceb59a9d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:23:37 np0005601226 nova_compute[239456]: 2026-01-29 17:23:37.446 239460 DEBUG oslo_concurrency.lockutils [req-fab437f8-76a6-47cb-aa2b-8d7cf4d3a277 req-1f1c67c3-3000-47a6-bd82-39ffff7ba541 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "54ae1aee-2aec-49fb-981c-904cceb59a9d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:23:37 np0005601226 nova_compute[239456]: 2026-01-29 17:23:37.446 239460 DEBUG nova.compute.manager [req-fab437f8-76a6-47cb-aa2b-8d7cf4d3a277 req-1f1c67c3-3000-47a6-bd82-39ffff7ba541 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Processing event network-vif-plugged-dd0e38fb-6c55-46b2-944f-3b2cf8f87929 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 29 12:23:37 np0005601226 podman[258118]: 2026-01-29 17:23:37.549100615 +0000 UTC m=+0.031466404 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 29 12:23:37 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1286: 305 pgs: 305 active+clean; 167 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 104 KiB/s rd, 2.7 MiB/s wr, 83 op/s
Jan 29 12:23:38 np0005601226 nova_compute[239456]: 2026-01-29 17:23:38.101 239460 DEBUG nova.compute.manager [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 29 12:23:38 np0005601226 nova_compute[239456]: 2026-01-29 17:23:38.102 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769707418.1011229, 54ae1aee-2aec-49fb-981c-904cceb59a9d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:23:38 np0005601226 nova_compute[239456]: 2026-01-29 17:23:38.102 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] VM Started (Lifecycle Event)#033[00m
Jan 29 12:23:38 np0005601226 nova_compute[239456]: 2026-01-29 17:23:38.106 239460 DEBUG nova.virt.libvirt.driver [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 29 12:23:38 np0005601226 nova_compute[239456]: 2026-01-29 17:23:38.109 239460 INFO nova.virt.libvirt.driver [-] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Instance spawned successfully.#033[00m
Jan 29 12:23:38 np0005601226 nova_compute[239456]: 2026-01-29 17:23:38.109 239460 DEBUG nova.virt.libvirt.driver [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 29 12:23:38 np0005601226 nova_compute[239456]: 2026-01-29 17:23:38.127 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:23:38 np0005601226 nova_compute[239456]: 2026-01-29 17:23:38.132 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 29 12:23:38 np0005601226 nova_compute[239456]: 2026-01-29 17:23:38.135 239460 DEBUG nova.virt.libvirt.driver [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:23:38 np0005601226 nova_compute[239456]: 2026-01-29 17:23:38.135 239460 DEBUG nova.virt.libvirt.driver [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:23:38 np0005601226 nova_compute[239456]: 2026-01-29 17:23:38.135 239460 DEBUG nova.virt.libvirt.driver [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:23:38 np0005601226 nova_compute[239456]: 2026-01-29 17:23:38.136 239460 DEBUG nova.virt.libvirt.driver [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:23:38 np0005601226 nova_compute[239456]: 2026-01-29 17:23:38.136 239460 DEBUG nova.virt.libvirt.driver [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:23:38 np0005601226 nova_compute[239456]: 2026-01-29 17:23:38.137 239460 DEBUG nova.virt.libvirt.driver [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:23:38 np0005601226 nova_compute[239456]: 2026-01-29 17:23:38.165 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 29 12:23:38 np0005601226 nova_compute[239456]: 2026-01-29 17:23:38.165 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769707418.1012852, 54ae1aee-2aec-49fb-981c-904cceb59a9d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:23:38 np0005601226 nova_compute[239456]: 2026-01-29 17:23:38.165 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] VM Paused (Lifecycle Event)#033[00m
Jan 29 12:23:38 np0005601226 nova_compute[239456]: 2026-01-29 17:23:38.194 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:23:38 np0005601226 nova_compute[239456]: 2026-01-29 17:23:38.197 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769707418.104522, 54ae1aee-2aec-49fb-981c-904cceb59a9d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:23:38 np0005601226 nova_compute[239456]: 2026-01-29 17:23:38.198 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] VM Resumed (Lifecycle Event)#033[00m
Jan 29 12:23:38 np0005601226 nova_compute[239456]: 2026-01-29 17:23:38.204 239460 INFO nova.compute.manager [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Took 9.51 seconds to spawn the instance on the hypervisor.#033[00m
Jan 29 12:23:38 np0005601226 nova_compute[239456]: 2026-01-29 17:23:38.205 239460 DEBUG nova.compute.manager [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:23:38 np0005601226 podman[258118]: 2026-01-29 17:23:38.232223122 +0000 UTC m=+0.714588901 container create 97acf73e463552784eb7baeb7f0ecdf92bfa78f767192a16d8a8d15e05fa7fb4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c884cc1-e1d2-418b-8bb8-bae78dab7018, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 29 12:23:38 np0005601226 nova_compute[239456]: 2026-01-29 17:23:38.237 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:23:38 np0005601226 nova_compute[239456]: 2026-01-29 17:23:38.244 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 29 12:23:38 np0005601226 nova_compute[239456]: 2026-01-29 17:23:38.265 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 29 12:23:38 np0005601226 nova_compute[239456]: 2026-01-29 17:23:38.274 239460 INFO nova.compute.manager [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Took 10.90 seconds to build instance.#033[00m
Jan 29 12:23:38 np0005601226 nova_compute[239456]: 2026-01-29 17:23:38.288 239460 DEBUG oslo_concurrency.lockutils [None req-6afdb108-2740-468c-8ca0-2201c40bd546 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Lock "54ae1aee-2aec-49fb-981c-904cceb59a9d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.975s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:23:38 np0005601226 ovn_controller[145556]: 2026-01-29T17:23:38Z|00109|binding|INFO|Releasing lport 0442c862-051a-4100-a371-ef7e19ea6eba from this chassis (sb_readonly=0)
Jan 29 12:23:38 np0005601226 ovn_controller[145556]: 2026-01-29T17:23:38Z|00110|binding|INFO|Releasing lport e5e51a19-78a0-418e-aee9-4f13a958b558 from this chassis (sb_readonly=0)
Jan 29 12:23:38 np0005601226 systemd[1]: Started libpod-conmon-97acf73e463552784eb7baeb7f0ecdf92bfa78f767192a16d8a8d15e05fa7fb4.scope.
Jan 29 12:23:38 np0005601226 nova_compute[239456]: 2026-01-29 17:23:38.491 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:23:38 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:23:38 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b1dfeb348e53290b247eb3503bebcb768db27b328c79781e9d5eabbffaae789/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 29 12:23:38 np0005601226 podman[258118]: 2026-01-29 17:23:38.611780977 +0000 UTC m=+1.094146776 container init 97acf73e463552784eb7baeb7f0ecdf92bfa78f767192a16d8a8d15e05fa7fb4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c884cc1-e1d2-418b-8bb8-bae78dab7018, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 29 12:23:38 np0005601226 podman[258118]: 2026-01-29 17:23:38.615847117 +0000 UTC m=+1.098212886 container start 97acf73e463552784eb7baeb7f0ecdf92bfa78f767192a16d8a8d15e05fa7fb4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c884cc1-e1d2-418b-8bb8-bae78dab7018, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:23:38 np0005601226 neutron-haproxy-ovnmeta-3c884cc1-e1d2-418b-8bb8-bae78dab7018[258174]: [NOTICE]   (258180) : New worker (258182) forked
Jan 29 12:23:38 np0005601226 neutron-haproxy-ovnmeta-3c884cc1-e1d2-418b-8bb8-bae78dab7018[258174]: [NOTICE]   (258180) : Loading success.
Jan 29 12:23:38 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:38.839 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '7a:bb:ca', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ee:2a:91:86:08:da'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:23:38 np0005601226 nova_compute[239456]: 2026-01-29 17:23:38.839 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:23:38 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:38.841 155625 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 29 12:23:39 np0005601226 nova_compute[239456]: 2026-01-29 17:23:39.124 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:23:39 np0005601226 nova_compute[239456]: 2026-01-29 17:23:39.580 239460 DEBUG nova.compute.manager [req-796775f0-d16d-485e-b11d-2e5ee499699e req-01a308a3-9cca-4c95-a7e2-8f8c157c5ab5 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Received event network-vif-plugged-dd0e38fb-6c55-46b2-944f-3b2cf8f87929 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:23:39 np0005601226 nova_compute[239456]: 2026-01-29 17:23:39.580 239460 DEBUG oslo_concurrency.lockutils [req-796775f0-d16d-485e-b11d-2e5ee499699e req-01a308a3-9cca-4c95-a7e2-8f8c157c5ab5 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "54ae1aee-2aec-49fb-981c-904cceb59a9d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:23:39 np0005601226 nova_compute[239456]: 2026-01-29 17:23:39.581 239460 DEBUG oslo_concurrency.lockutils [req-796775f0-d16d-485e-b11d-2e5ee499699e req-01a308a3-9cca-4c95-a7e2-8f8c157c5ab5 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "54ae1aee-2aec-49fb-981c-904cceb59a9d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:23:39 np0005601226 nova_compute[239456]: 2026-01-29 17:23:39.582 239460 DEBUG oslo_concurrency.lockutils [req-796775f0-d16d-485e-b11d-2e5ee499699e req-01a308a3-9cca-4c95-a7e2-8f8c157c5ab5 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "54ae1aee-2aec-49fb-981c-904cceb59a9d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:23:39 np0005601226 nova_compute[239456]: 2026-01-29 17:23:39.582 239460 DEBUG nova.compute.manager [req-796775f0-d16d-485e-b11d-2e5ee499699e req-01a308a3-9cca-4c95-a7e2-8f8c157c5ab5 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] No waiting events found dispatching network-vif-plugged-dd0e38fb-6c55-46b2-944f-3b2cf8f87929 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 29 12:23:39 np0005601226 nova_compute[239456]: 2026-01-29 17:23:39.582 239460 WARNING nova.compute.manager [req-796775f0-d16d-485e-b11d-2e5ee499699e req-01a308a3-9cca-4c95-a7e2-8f8c157c5ab5 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Received unexpected event network-vif-plugged-dd0e38fb-6c55-46b2-944f-3b2cf8f87929 for instance with vm_state active and task_state None.#033[00m
Jan 29 12:23:39 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1287: 305 pgs: 305 active+clean; 167 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 759 KiB/s rd, 2.3 MiB/s wr, 106 op/s
Jan 29 12:23:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:40.285 155625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:23:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:40.285 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:23:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:40.286 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:23:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:23:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:23:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:23:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:23:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:23:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:23:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2026-01-29_17:23:40
Jan 29 12:23:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 12:23:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] do_upmap
Jan 29 12:23:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] pools ['default.rgw.control', '.mgr', 'backups', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.log', 'volumes', 'vms', 'images', 'cephfs.cephfs.meta', 'default.rgw.meta']
Jan 29 12:23:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 12:23:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 12:23:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 12:23:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:23:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:23:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:23:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:23:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:23:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:23:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:23:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:23:41 np0005601226 nova_compute[239456]: 2026-01-29 17:23:41.093 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:23:41 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:23:41 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e276 do_prune osdmap full prune enabled
Jan 29 12:23:41 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e277 e277: 3 total, 3 up, 3 in
Jan 29 12:23:41 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e277: 3 total, 3 up, 3 in
Jan 29 12:23:41 np0005601226 nova_compute[239456]: 2026-01-29 17:23:41.465 239460 DEBUG nova.compute.manager [req-c7af5b99-a153-4f9b-9d16-f3052bd1c32f req-df89fd53-ca89-455a-b4d3-7e402cdac0d7 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Received event network-changed-dd0e38fb-6c55-46b2-944f-3b2cf8f87929 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:23:41 np0005601226 nova_compute[239456]: 2026-01-29 17:23:41.466 239460 DEBUG nova.compute.manager [req-c7af5b99-a153-4f9b-9d16-f3052bd1c32f req-df89fd53-ca89-455a-b4d3-7e402cdac0d7 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Refreshing instance network info cache due to event network-changed-dd0e38fb-6c55-46b2-944f-3b2cf8f87929. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 29 12:23:41 np0005601226 nova_compute[239456]: 2026-01-29 17:23:41.466 239460 DEBUG oslo_concurrency.lockutils [req-c7af5b99-a153-4f9b-9d16-f3052bd1c32f req-df89fd53-ca89-455a-b4d3-7e402cdac0d7 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "refresh_cache-54ae1aee-2aec-49fb-981c-904cceb59a9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:23:41 np0005601226 nova_compute[239456]: 2026-01-29 17:23:41.466 239460 DEBUG oslo_concurrency.lockutils [req-c7af5b99-a153-4f9b-9d16-f3052bd1c32f req-df89fd53-ca89-455a-b4d3-7e402cdac0d7 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquired lock "refresh_cache-54ae1aee-2aec-49fb-981c-904cceb59a9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:23:41 np0005601226 nova_compute[239456]: 2026-01-29 17:23:41.466 239460 DEBUG nova.network.neutron [req-c7af5b99-a153-4f9b-9d16-f3052bd1c32f req-df89fd53-ca89-455a-b4d3-7e402cdac0d7 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Refreshing network info cache for port dd0e38fb-6c55-46b2-944f-3b2cf8f87929 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 29 12:23:41 np0005601226 nova_compute[239456]: 2026-01-29 17:23:41.604 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:23:41 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1289: 305 pgs: 305 active+clean; 167 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 897 KiB/s rd, 1.8 MiB/s wr, 122 op/s
Jan 29 12:23:43 np0005601226 nova_compute[239456]: 2026-01-29 17:23:43.593 239460 DEBUG nova.network.neutron [req-c7af5b99-a153-4f9b-9d16-f3052bd1c32f req-df89fd53-ca89-455a-b4d3-7e402cdac0d7 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Updated VIF entry in instance network info cache for port dd0e38fb-6c55-46b2-944f-3b2cf8f87929. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 29 12:23:43 np0005601226 nova_compute[239456]: 2026-01-29 17:23:43.594 239460 DEBUG nova.network.neutron [req-c7af5b99-a153-4f9b-9d16-f3052bd1c32f req-df89fd53-ca89-455a-b4d3-7e402cdac0d7 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Updating instance_info_cache with network_info: [{"id": "dd0e38fb-6c55-46b2-944f-3b2cf8f87929", "address": "fa:16:3e:c3:48:d2", "network": {"id": "3c884cc1-e1d2-418b-8bb8-bae78dab7018", "bridge": "br-int", "label": "tempest-TestStampPattern-2089746871-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f2a1daea29d845c4b1c58f0e6610e767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd0e38fb-6c", "ovs_interfaceid": "dd0e38fb-6c55-46b2-944f-3b2cf8f87929", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:23:43 np0005601226 nova_compute[239456]: 2026-01-29 17:23:43.612 239460 DEBUG oslo_concurrency.lockutils [req-c7af5b99-a153-4f9b-9d16-f3052bd1c32f req-df89fd53-ca89-455a-b4d3-7e402cdac0d7 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Releasing lock "refresh_cache-54ae1aee-2aec-49fb-981c-904cceb59a9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:23:43 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1290: 305 pgs: 305 active+clean; 167 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 38 KiB/s wr, 76 op/s
Jan 29 12:23:44 np0005601226 nova_compute[239456]: 2026-01-29 17:23:44.126 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:23:44 np0005601226 nova_compute[239456]: 2026-01-29 17:23:44.838 239460 DEBUG oslo_concurrency.lockutils [None req-bea42e83-0ce2-4c2b-be6d-5e85df719dd7 e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Acquiring lock "53e39297-e2d7-48cf-9623-7be3b0d6b2f3" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:23:44 np0005601226 nova_compute[239456]: 2026-01-29 17:23:44.839 239460 DEBUG oslo_concurrency.lockutils [None req-bea42e83-0ce2-4c2b-be6d-5e85df719dd7 e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Lock "53e39297-e2d7-48cf-9623-7be3b0d6b2f3" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:23:44 np0005601226 nova_compute[239456]: 2026-01-29 17:23:44.860 239460 DEBUG nova.objects.instance [None req-bea42e83-0ce2-4c2b-be6d-5e85df719dd7 e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Lazy-loading 'flavor' on Instance uuid 53e39297-e2d7-48cf-9623-7be3b0d6b2f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:23:44 np0005601226 nova_compute[239456]: 2026-01-29 17:23:44.886 239460 INFO nova.virt.libvirt.driver [None req-bea42e83-0ce2-4c2b-be6d-5e85df719dd7 e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Ignoring supplied device name: /dev/vdb#033[00m
Jan 29 12:23:44 np0005601226 podman[258191]: 2026-01-29 17:23:44.896879282 +0000 UTC m=+0.069227658 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202)
Jan 29 12:23:44 np0005601226 nova_compute[239456]: 2026-01-29 17:23:44.905 239460 DEBUG oslo_concurrency.lockutils [None req-bea42e83-0ce2-4c2b-be6d-5e85df719dd7 e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Lock "53e39297-e2d7-48cf-9623-7be3b0d6b2f3" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.066s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:23:44 np0005601226 podman[258192]: 2026-01-29 17:23:44.908120176 +0000 UTC m=+0.080758500 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
config_id=ovn_controller, managed_by=edpm_ansible)
Jan 29 12:23:45 np0005601226 nova_compute[239456]: 2026-01-29 17:23:45.127 239460 DEBUG oslo_concurrency.lockutils [None req-bea42e83-0ce2-4c2b-be6d-5e85df719dd7 e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Acquiring lock "53e39297-e2d7-48cf-9623-7be3b0d6b2f3" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:23:45 np0005601226 nova_compute[239456]: 2026-01-29 17:23:45.128 239460 DEBUG oslo_concurrency.lockutils [None req-bea42e83-0ce2-4c2b-be6d-5e85df719dd7 e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Lock "53e39297-e2d7-48cf-9623-7be3b0d6b2f3" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:23:45 np0005601226 nova_compute[239456]: 2026-01-29 17:23:45.128 239460 INFO nova.compute.manager [None req-bea42e83-0ce2-4c2b-be6d-5e85df719dd7 e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Attaching volume af0c40c8-1902-458d-86b4-eea35f573e4f to /dev/vdb#033[00m
Jan 29 12:23:45 np0005601226 nova_compute[239456]: 2026-01-29 17:23:45.292 239460 DEBUG os_brick.utils [None req-bea42e83-0ce2-4c2b-be6d-5e85df719dd7 e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 29 12:23:45 np0005601226 nova_compute[239456]: 2026-01-29 17:23:45.294 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:23:45 np0005601226 nova_compute[239456]: 2026-01-29 17:23:45.301 249612 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:23:45 np0005601226 nova_compute[239456]: 2026-01-29 17:23:45.302 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[72f82ede-8efa-43a2-b974-6e98b3762a9e]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:23:45 np0005601226 nova_compute[239456]: 2026-01-29 17:23:45.304 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:23:45 np0005601226 nova_compute[239456]: 2026-01-29 17:23:45.311 249612 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:23:45 np0005601226 nova_compute[239456]: 2026-01-29 17:23:45.312 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[5b69277f-3d14-4bd7-8175-84d74a821bd9]: (4, ('InitiatorName=iqn.1994-05.com.redhat:29fa340538f', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:23:45 np0005601226 nova_compute[239456]: 2026-01-29 17:23:45.313 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:23:45 np0005601226 nova_compute[239456]: 2026-01-29 17:23:45.317 249612 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:23:45 np0005601226 nova_compute[239456]: 2026-01-29 17:23:45.318 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[ca6b1ec7-253b-4c23-8f39-fc1266d4c88b]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:23:45 np0005601226 nova_compute[239456]: 2026-01-29 17:23:45.319 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[ba5eb052-72fc-453f-9615-525b7532bc10]: (4, '3d58286e-1b14-486e-8cad-0bdb2d2969c4') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:23:45 np0005601226 nova_compute[239456]: 2026-01-29 17:23:45.320 239460 DEBUG oslo_concurrency.processutils [None req-bea42e83-0ce2-4c2b-be6d-5e85df719dd7 e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:23:45 np0005601226 nova_compute[239456]: 2026-01-29 17:23:45.341 239460 DEBUG oslo_concurrency.processutils [None req-bea42e83-0ce2-4c2b-be6d-5e85df719dd7 e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] CMD "nvme version" returned: 0 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:23:45 np0005601226 nova_compute[239456]: 2026-01-29 17:23:45.344 239460 DEBUG os_brick.initiator.connectors.lightos [None req-bea42e83-0ce2-4c2b-be6d-5e85df719dd7 e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 29 12:23:45 np0005601226 nova_compute[239456]: 2026-01-29 17:23:45.344 239460 DEBUG os_brick.initiator.connectors.lightos [None req-bea42e83-0ce2-4c2b-be6d-5e85df719dd7 e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 29 12:23:45 np0005601226 nova_compute[239456]: 2026-01-29 17:23:45.345 239460 DEBUG os_brick.initiator.connectors.lightos [None req-bea42e83-0ce2-4c2b-be6d-5e85df719dd7 e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 29 12:23:45 np0005601226 nova_compute[239456]: 2026-01-29 17:23:45.346 239460 DEBUG os_brick.utils [None req-bea42e83-0ce2-4c2b-be6d-5e85df719dd7 e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] <== get_connector_properties: return (52ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:29fa340538f', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '3d58286e-1b14-486e-8cad-0bdb2d2969c4', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 29 12:23:45 np0005601226 nova_compute[239456]: 2026-01-29 17:23:45.346 239460 DEBUG nova.virt.block_device [None req-bea42e83-0ce2-4c2b-be6d-5e85df719dd7 e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Updating existing volume attachment record: f95ac685-c23e-40a1-bc02-9b774eaff037 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 29 12:23:45 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1291: 305 pgs: 305 active+clean; 167 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 33 KiB/s wr, 99 op/s
Jan 29 12:23:45 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:45.843 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ea6bcc65-2563-4fe6-9039-bca7261f4cf7, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:23:46 np0005601226 nova_compute[239456]: 2026-01-29 17:23:46.093 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:23:46 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:23:46 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3968675402' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:23:46 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:23:46 np0005601226 nova_compute[239456]: 2026-01-29 17:23:46.319 239460 DEBUG nova.objects.instance [None req-bea42e83-0ce2-4c2b-be6d-5e85df719dd7 e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Lazy-loading 'flavor' on Instance uuid 53e39297-e2d7-48cf-9623-7be3b0d6b2f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:23:46 np0005601226 nova_compute[239456]: 2026-01-29 17:23:46.346 239460 DEBUG nova.virt.libvirt.driver [None req-bea42e83-0ce2-4c2b-be6d-5e85df719dd7 e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Attempting to attach volume af0c40c8-1902-458d-86b4-eea35f573e4f with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Jan 29 12:23:46 np0005601226 nova_compute[239456]: 2026-01-29 17:23:46.348 239460 DEBUG nova.virt.libvirt.guest [None req-bea42e83-0ce2-4c2b-be6d-5e85df719dd7 e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] attach device xml: <disk type="network" device="disk">
Jan 29 12:23:46 np0005601226 nova_compute[239456]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 29 12:23:46 np0005601226 nova_compute[239456]:  <source protocol="rbd" name="volumes/volume-af0c40c8-1902-458d-86b4-eea35f573e4f">
Jan 29 12:23:46 np0005601226 nova_compute[239456]:    <host name="192.168.122.100" port="6789"/>
Jan 29 12:23:46 np0005601226 nova_compute[239456]:  </source>
Jan 29 12:23:46 np0005601226 nova_compute[239456]:  <auth username="openstack">
Jan 29 12:23:46 np0005601226 nova_compute[239456]:    <secret type="ceph" uuid="cc5c72e3-31e0-58b9-8731-456117d38f4a"/>
Jan 29 12:23:46 np0005601226 nova_compute[239456]:  </auth>
Jan 29 12:23:46 np0005601226 nova_compute[239456]:  <target dev="vdb" bus="virtio"/>
Jan 29 12:23:46 np0005601226 nova_compute[239456]:  <serial>af0c40c8-1902-458d-86b4-eea35f573e4f</serial>
Jan 29 12:23:46 np0005601226 nova_compute[239456]: </disk>
Jan 29 12:23:46 np0005601226 nova_compute[239456]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Jan 29 12:23:46 np0005601226 nova_compute[239456]: 2026-01-29 17:23:46.474 239460 DEBUG nova.virt.libvirt.driver [None req-bea42e83-0ce2-4c2b-be6d-5e85df719dd7 e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:23:46 np0005601226 nova_compute[239456]: 2026-01-29 17:23:46.475 239460 DEBUG nova.virt.libvirt.driver [None req-bea42e83-0ce2-4c2b-be6d-5e85df719dd7 e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:23:46 np0005601226 nova_compute[239456]: 2026-01-29 17:23:46.475 239460 DEBUG nova.virt.libvirt.driver [None req-bea42e83-0ce2-4c2b-be6d-5e85df719dd7 e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:23:46 np0005601226 nova_compute[239456]: 2026-01-29 17:23:46.475 239460 DEBUG nova.virt.libvirt.driver [None req-bea42e83-0ce2-4c2b-be6d-5e85df719dd7 e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] No VIF found with MAC fa:16:3e:41:16:71, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 29 12:23:46 np0005601226 nova_compute[239456]: 2026-01-29 17:23:46.699 239460 DEBUG oslo_concurrency.lockutils [None req-bea42e83-0ce2-4c2b-be6d-5e85df719dd7 e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Lock "53e39297-e2d7-48cf-9623-7be3b0d6b2f3" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.571s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:23:47 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1292: 305 pgs: 305 active+clean; 167 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 30 KiB/s wr, 89 op/s
Jan 29 12:23:48 np0005601226 nova_compute[239456]: 2026-01-29 17:23:48.389 239460 DEBUG nova.compute.manager [req-0cc91de0-4af1-44ac-8065-8ae8fe68344d req-0458886d-9798-4f3a-b953-0153754f60b0 f7f84f7dfbf74cf187f5c0105813e958 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Received event volume-extended-af0c40c8-1902-458d-86b4-eea35f573e4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:23:48 np0005601226 nova_compute[239456]: 2026-01-29 17:23:48.410 239460 DEBUG nova.compute.manager [req-0cc91de0-4af1-44ac-8065-8ae8fe68344d req-0458886d-9798-4f3a-b953-0153754f60b0 f7f84f7dfbf74cf187f5c0105813e958 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Handling volume-extended event for volume af0c40c8-1902-458d-86b4-eea35f573e4f extend_volume /usr/lib/python3.9/site-packages/nova/compute/manager.py:10896#033[00m
Jan 29 12:23:48 np0005601226 nova_compute[239456]: 2026-01-29 17:23:48.426 239460 INFO nova.compute.manager [req-0cc91de0-4af1-44ac-8065-8ae8fe68344d req-0458886d-9798-4f3a-b953-0153754f60b0 f7f84f7dfbf74cf187f5c0105813e958 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Cinder extended volume af0c40c8-1902-458d-86b4-eea35f573e4f; extending it to detect new size#033[00m
Jan 29 12:23:48 np0005601226 nova_compute[239456]: 2026-01-29 17:23:48.559 239460 DEBUG nova.virt.libvirt.driver [req-0cc91de0-4af1-44ac-8065-8ae8fe68344d req-0458886d-9798-4f3a-b953-0153754f60b0 f7f84f7dfbf74cf187f5c0105813e958 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Resizing target device vdb to 2147483648 _resize_attached_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2756#033[00m
Jan 29 12:23:49 np0005601226 nova_compute[239456]: 2026-01-29 17:23:49.127 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:23:49 np0005601226 nova_compute[239456]: 2026-01-29 17:23:49.533 239460 DEBUG oslo_concurrency.lockutils [None req-2fd84c8f-6b14-4213-8431-0b4633b3f299 e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Acquiring lock "53e39297-e2d7-48cf-9623-7be3b0d6b2f3" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:23:49 np0005601226 nova_compute[239456]: 2026-01-29 17:23:49.533 239460 DEBUG oslo_concurrency.lockutils [None req-2fd84c8f-6b14-4213-8431-0b4633b3f299 e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Lock "53e39297-e2d7-48cf-9623-7be3b0d6b2f3" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:23:49 np0005601226 nova_compute[239456]: 2026-01-29 17:23:49.547 239460 INFO nova.compute.manager [None req-2fd84c8f-6b14-4213-8431-0b4633b3f299 e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Detaching volume af0c40c8-1902-458d-86b4-eea35f573e4f#033[00m
Jan 29 12:23:49 np0005601226 nova_compute[239456]: 2026-01-29 17:23:49.658 239460 INFO nova.virt.block_device [None req-2fd84c8f-6b14-4213-8431-0b4633b3f299 e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Attempting to driver detach volume af0c40c8-1902-458d-86b4-eea35f573e4f from mountpoint /dev/vdb#033[00m
Jan 29 12:23:49 np0005601226 nova_compute[239456]: 2026-01-29 17:23:49.665 239460 DEBUG nova.virt.libvirt.driver [None req-2fd84c8f-6b14-4213-8431-0b4633b3f299 e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Attempting to detach device vdb from instance 53e39297-e2d7-48cf-9623-7be3b0d6b2f3 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Jan 29 12:23:49 np0005601226 nova_compute[239456]: 2026-01-29 17:23:49.665 239460 DEBUG nova.virt.libvirt.guest [None req-2fd84c8f-6b14-4213-8431-0b4633b3f299 e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] detach device xml: <disk type="network" device="disk">
Jan 29 12:23:49 np0005601226 nova_compute[239456]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 29 12:23:49 np0005601226 nova_compute[239456]:  <source protocol="rbd" name="volumes/volume-af0c40c8-1902-458d-86b4-eea35f573e4f">
Jan 29 12:23:49 np0005601226 nova_compute[239456]:    <host name="192.168.122.100" port="6789"/>
Jan 29 12:23:49 np0005601226 nova_compute[239456]:  </source>
Jan 29 12:23:49 np0005601226 nova_compute[239456]:  <target dev="vdb" bus="virtio"/>
Jan 29 12:23:49 np0005601226 nova_compute[239456]:  <serial>af0c40c8-1902-458d-86b4-eea35f573e4f</serial>
Jan 29 12:23:49 np0005601226 nova_compute[239456]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 29 12:23:49 np0005601226 nova_compute[239456]: </disk>
Jan 29 12:23:49 np0005601226 nova_compute[239456]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 29 12:23:49 np0005601226 nova_compute[239456]: 2026-01-29 17:23:49.672 239460 INFO nova.virt.libvirt.driver [None req-2fd84c8f-6b14-4213-8431-0b4633b3f299 e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Successfully detached device vdb from instance 53e39297-e2d7-48cf-9623-7be3b0d6b2f3 from the persistent domain config.#033[00m
Jan 29 12:23:49 np0005601226 nova_compute[239456]: 2026-01-29 17:23:49.673 239460 DEBUG nova.virt.libvirt.driver [None req-2fd84c8f-6b14-4213-8431-0b4633b3f299 e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 53e39297-e2d7-48cf-9623-7be3b0d6b2f3 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Jan 29 12:23:49 np0005601226 nova_compute[239456]: 2026-01-29 17:23:49.673 239460 DEBUG nova.virt.libvirt.guest [None req-2fd84c8f-6b14-4213-8431-0b4633b3f299 e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] detach device xml: <disk type="network" device="disk">
Jan 29 12:23:49 np0005601226 nova_compute[239456]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 29 12:23:49 np0005601226 nova_compute[239456]:  <source protocol="rbd" name="volumes/volume-af0c40c8-1902-458d-86b4-eea35f573e4f">
Jan 29 12:23:49 np0005601226 nova_compute[239456]:    <host name="192.168.122.100" port="6789"/>
Jan 29 12:23:49 np0005601226 nova_compute[239456]:  </source>
Jan 29 12:23:49 np0005601226 nova_compute[239456]:  <target dev="vdb" bus="virtio"/>
Jan 29 12:23:49 np0005601226 nova_compute[239456]:  <serial>af0c40c8-1902-458d-86b4-eea35f573e4f</serial>
Jan 29 12:23:49 np0005601226 nova_compute[239456]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 29 12:23:49 np0005601226 nova_compute[239456]: </disk>
Jan 29 12:23:49 np0005601226 nova_compute[239456]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 29 12:23:49 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1293: 305 pgs: 305 active+clean; 205 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 2.5 MiB/s wr, 112 op/s
Jan 29 12:23:49 np0005601226 nova_compute[239456]: 2026-01-29 17:23:49.789 239460 DEBUG nova.virt.libvirt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Received event <DeviceRemovedEvent: 1769707429.7894847, 53e39297-e2d7-48cf-9623-7be3b0d6b2f3 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Jan 29 12:23:49 np0005601226 nova_compute[239456]: 2026-01-29 17:23:49.793 239460 DEBUG nova.virt.libvirt.driver [None req-2fd84c8f-6b14-4213-8431-0b4633b3f299 e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 53e39297-e2d7-48cf-9623-7be3b0d6b2f3 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Jan 29 12:23:49 np0005601226 nova_compute[239456]: 2026-01-29 17:23:49.795 239460 INFO nova.virt.libvirt.driver [None req-2fd84c8f-6b14-4213-8431-0b4633b3f299 e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Successfully detached device vdb from instance 53e39297-e2d7-48cf-9623-7be3b0d6b2f3 from the live domain config.#033[00m
Jan 29 12:23:49 np0005601226 nova_compute[239456]: 2026-01-29 17:23:49.939 239460 DEBUG nova.objects.instance [None req-2fd84c8f-6b14-4213-8431-0b4633b3f299 e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Lazy-loading 'flavor' on Instance uuid 53e39297-e2d7-48cf-9623-7be3b0d6b2f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:23:49 np0005601226 nova_compute[239456]: 2026-01-29 17:23:49.970 239460 DEBUG oslo_concurrency.lockutils [None req-2fd84c8f-6b14-4213-8431-0b4633b3f299 e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Lock "53e39297-e2d7-48cf-9623-7be3b0d6b2f3" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.437s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:23:50 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e277 do_prune osdmap full prune enabled
Jan 29 12:23:50 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e278 e278: 3 total, 3 up, 3 in
Jan 29 12:23:50 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e278: 3 total, 3 up, 3 in
Jan 29 12:23:50 np0005601226 nova_compute[239456]: 2026-01-29 17:23:50.817 239460 DEBUG oslo_concurrency.lockutils [None req-89f41f3c-a885-4844-ab3c-f2754c0e3dd7 e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Acquiring lock "53e39297-e2d7-48cf-9623-7be3b0d6b2f3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:23:50 np0005601226 nova_compute[239456]: 2026-01-29 17:23:50.817 239460 DEBUG oslo_concurrency.lockutils [None req-89f41f3c-a885-4844-ab3c-f2754c0e3dd7 e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Lock "53e39297-e2d7-48cf-9623-7be3b0d6b2f3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:23:50 np0005601226 nova_compute[239456]: 2026-01-29 17:23:50.817 239460 DEBUG oslo_concurrency.lockutils [None req-89f41f3c-a885-4844-ab3c-f2754c0e3dd7 e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Acquiring lock "53e39297-e2d7-48cf-9623-7be3b0d6b2f3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:23:50 np0005601226 nova_compute[239456]: 2026-01-29 17:23:50.817 239460 DEBUG oslo_concurrency.lockutils [None req-89f41f3c-a885-4844-ab3c-f2754c0e3dd7 e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Lock "53e39297-e2d7-48cf-9623-7be3b0d6b2f3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:23:50 np0005601226 nova_compute[239456]: 2026-01-29 17:23:50.818 239460 DEBUG oslo_concurrency.lockutils [None req-89f41f3c-a885-4844-ab3c-f2754c0e3dd7 e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Lock "53e39297-e2d7-48cf-9623-7be3b0d6b2f3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:23:50 np0005601226 nova_compute[239456]: 2026-01-29 17:23:50.818 239460 INFO nova.compute.manager [None req-89f41f3c-a885-4844-ab3c-f2754c0e3dd7 e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Terminating instance#033[00m
Jan 29 12:23:50 np0005601226 nova_compute[239456]: 2026-01-29 17:23:50.819 239460 DEBUG nova.compute.manager [None req-89f41f3c-a885-4844-ab3c-f2754c0e3dd7 e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 29 12:23:50 np0005601226 kernel: tap68c13a19-1a (unregistering): left promiscuous mode
Jan 29 12:23:50 np0005601226 NetworkManager[49020]: <info>  [1769707430.8693] device (tap68c13a19-1a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 29 12:23:50 np0005601226 ovn_controller[145556]: 2026-01-29T17:23:50Z|00111|binding|INFO|Releasing lport 68c13a19-1abc-4771-a498-863d2d0a28b1 from this chassis (sb_readonly=0)
Jan 29 12:23:50 np0005601226 ovn_controller[145556]: 2026-01-29T17:23:50Z|00112|binding|INFO|Setting lport 68c13a19-1abc-4771-a498-863d2d0a28b1 down in Southbound
Jan 29 12:23:50 np0005601226 nova_compute[239456]: 2026-01-29 17:23:50.875 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:23:50 np0005601226 ovn_controller[145556]: 2026-01-29T17:23:50Z|00113|binding|INFO|Removing iface tap68c13a19-1a ovn-installed in OVS
Jan 29 12:23:50 np0005601226 nova_compute[239456]: 2026-01-29 17:23:50.877 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:23:50 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:50.882 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:41:16:71 10.100.0.6'], port_security=['fa:16:3e:41:16:71 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '53e39297-e2d7-48cf-9623-7be3b0d6b2f3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d83e49d6-7d57-4ee1-97b8-d6cdd3bd57ed', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39d4847e7fda4ce1b3f82fb1983ae222', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f1c7e2d0-096f-4267-9955-f5e2a5e57200', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.247'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2cc3333a-e4ca-4591-8e77-46aeb7e0328b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], logical_port=68c13a19-1abc-4771-a498-863d2d0a28b1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:23:50 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:50.883 155625 INFO neutron.agent.ovn.metadata.agent [-] Port 68c13a19-1abc-4771-a498-863d2d0a28b1 in datapath d83e49d6-7d57-4ee1-97b8-d6cdd3bd57ed unbound from our chassis#033[00m
Jan 29 12:23:50 np0005601226 nova_compute[239456]: 2026-01-29 17:23:50.884 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:23:50 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:50.886 155625 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d83e49d6-7d57-4ee1-97b8-d6cdd3bd57ed, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 29 12:23:50 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:50.887 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[9513155b-62e8-4213-a4fa-999ddcc1aefe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:23:50 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:50.888 155625 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d83e49d6-7d57-4ee1-97b8-d6cdd3bd57ed namespace which is not needed anymore#033[00m
Jan 29 12:23:50 np0005601226 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Deactivated successfully.
Jan 29 12:23:50 np0005601226 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Consumed 13.117s CPU time.
Jan 29 12:23:50 np0005601226 systemd-machined[207561]: Machine qemu-9-instance-00000009 terminated.
Jan 29 12:23:50 np0005601226 neutron-haproxy-ovnmeta-d83e49d6-7d57-4ee1-97b8-d6cdd3bd57ed[256602]: [NOTICE]   (256611) : haproxy version is 2.8.14-c23fe91
Jan 29 12:23:50 np0005601226 neutron-haproxy-ovnmeta-d83e49d6-7d57-4ee1-97b8-d6cdd3bd57ed[256602]: [NOTICE]   (256611) : path to executable is /usr/sbin/haproxy
Jan 29 12:23:50 np0005601226 neutron-haproxy-ovnmeta-d83e49d6-7d57-4ee1-97b8-d6cdd3bd57ed[256602]: [WARNING]  (256611) : Exiting Master process...
Jan 29 12:23:50 np0005601226 neutron-haproxy-ovnmeta-d83e49d6-7d57-4ee1-97b8-d6cdd3bd57ed[256602]: [ALERT]    (256611) : Current worker (256613) exited with code 143 (Terminated)
Jan 29 12:23:50 np0005601226 neutron-haproxy-ovnmeta-d83e49d6-7d57-4ee1-97b8-d6cdd3bd57ed[256602]: [WARNING]  (256611) : All workers exited. Exiting... (0)
Jan 29 12:23:50 np0005601226 systemd[1]: libpod-62b03d809e250806656608c0e22c0604c976b45ca81fe6bc9aff189d19b4420c.scope: Deactivated successfully.
Jan 29 12:23:51 np0005601226 podman[258286]: 2026-01-29 17:23:51.00072539 +0000 UTC m=+0.042033471 container died 62b03d809e250806656608c0e22c0604c976b45ca81fe6bc9aff189d19b4420c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d83e49d6-7d57-4ee1-97b8-d6cdd3bd57ed, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0)
Jan 29 12:23:51 np0005601226 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-62b03d809e250806656608c0e22c0604c976b45ca81fe6bc9aff189d19b4420c-userdata-shm.mount: Deactivated successfully.
Jan 29 12:23:51 np0005601226 systemd[1]: var-lib-containers-storage-overlay-9b1884ecd61cb8c2276768d945ca65e9a0b2b6bf285ea01cff9049a143e6d636-merged.mount: Deactivated successfully.
Jan 29 12:23:51 np0005601226 nova_compute[239456]: 2026-01-29 17:23:51.034 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:23:51 np0005601226 nova_compute[239456]: 2026-01-29 17:23:51.038 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:23:51 np0005601226 podman[258286]: 2026-01-29 17:23:51.046506941 +0000 UTC m=+0.087815012 container cleanup 62b03d809e250806656608c0e22c0604c976b45ca81fe6bc9aff189d19b4420c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d83e49d6-7d57-4ee1-97b8-d6cdd3bd57ed, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 29 12:23:51 np0005601226 nova_compute[239456]: 2026-01-29 17:23:51.046 239460 INFO nova.virt.libvirt.driver [-] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Instance destroyed successfully.#033[00m
Jan 29 12:23:51 np0005601226 nova_compute[239456]: 2026-01-29 17:23:51.047 239460 DEBUG nova.objects.instance [None req-89f41f3c-a885-4844-ab3c-f2754c0e3dd7 e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Lazy-loading 'resources' on Instance uuid 53e39297-e2d7-48cf-9623-7be3b0d6b2f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:23:51 np0005601226 systemd[1]: libpod-conmon-62b03d809e250806656608c0e22c0604c976b45ca81fe6bc9aff189d19b4420c.scope: Deactivated successfully.
Jan 29 12:23:51 np0005601226 nova_compute[239456]: 2026-01-29 17:23:51.069 239460 DEBUG nova.virt.libvirt.vif [None req-89f41f3c-a885-4844-ab3c-f2754c0e3dd7 e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-29T17:22:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesExtendAttachedTest-instance-1473413740',display_name='tempest-VolumesExtendAttachedTest-instance-1473413740',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesextendattachedtest-instance-1473413740',id=9,image_ref='71879218-5462-43bb-aba6-6319695b24fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK4mEJSERpPIQVK3sAMeu17EWkufBq6o1JwD5SzDGHiO4Z/qUv1iUlgJH7z4vsuw0x6/IEDJafzxQjRMypF22CDgXJIieljJTYVV7/tjKuefzCG79wHpMe/YIqW+S8UZ6A==',key_name='tempest-keypair-802154011',keypairs=<?>,launch_index=0,launched_at=2026-01-29T17:23:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='39d4847e7fda4ce1b3f82fb1983ae222',ramdisk_id='',reservation_id='r-003wsn2j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='71879218-5462-43bb-aba6-6319695b24fd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesExtendAttachedTest-736874132',owner_user_name='tempest-VolumesExtendAttachedTest-736874132-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-29T17:23:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e94c4027707149bebaa91488b942641b',uuid=53e39297-e2d7-48cf-9623-7be3b0d6b2f3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "68c13a19-1abc-4771-a498-863d2d0a28b1", "address": "fa:16:3e:41:16:71", "network": {"id": "d83e49d6-7d57-4ee1-97b8-d6cdd3bd57ed", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-566903606-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d4847e7fda4ce1b3f82fb1983ae222", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68c13a19-1a", "ovs_interfaceid": "68c13a19-1abc-4771-a498-863d2d0a28b1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 29 12:23:51 np0005601226 nova_compute[239456]: 2026-01-29 17:23:51.070 239460 DEBUG nova.network.os_vif_util [None req-89f41f3c-a885-4844-ab3c-f2754c0e3dd7 e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Converting VIF {"id": "68c13a19-1abc-4771-a498-863d2d0a28b1", "address": "fa:16:3e:41:16:71", "network": {"id": "d83e49d6-7d57-4ee1-97b8-d6cdd3bd57ed", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-566903606-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39d4847e7fda4ce1b3f82fb1983ae222", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68c13a19-1a", "ovs_interfaceid": "68c13a19-1abc-4771-a498-863d2d0a28b1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 29 12:23:51 np0005601226 nova_compute[239456]: 2026-01-29 17:23:51.070 239460 DEBUG nova.network.os_vif_util [None req-89f41f3c-a885-4844-ab3c-f2754c0e3dd7 e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:41:16:71,bridge_name='br-int',has_traffic_filtering=True,id=68c13a19-1abc-4771-a498-863d2d0a28b1,network=Network(d83e49d6-7d57-4ee1-97b8-d6cdd3bd57ed),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68c13a19-1a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 29 12:23:51 np0005601226 nova_compute[239456]: 2026-01-29 17:23:51.071 239460 DEBUG os_vif [None req-89f41f3c-a885-4844-ab3c-f2754c0e3dd7 e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:41:16:71,bridge_name='br-int',has_traffic_filtering=True,id=68c13a19-1abc-4771-a498-863d2d0a28b1,network=Network(d83e49d6-7d57-4ee1-97b8-d6cdd3bd57ed),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68c13a19-1a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 29 12:23:51 np0005601226 nova_compute[239456]: 2026-01-29 17:23:51.072 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:23:51 np0005601226 nova_compute[239456]: 2026-01-29 17:23:51.072 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap68c13a19-1a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:23:51 np0005601226 nova_compute[239456]: 2026-01-29 17:23:51.075 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:23:51 np0005601226 nova_compute[239456]: 2026-01-29 17:23:51.076 239460 INFO os_vif [None req-89f41f3c-a885-4844-ab3c-f2754c0e3dd7 e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:41:16:71,bridge_name='br-int',has_traffic_filtering=True,id=68c13a19-1abc-4771-a498-863d2d0a28b1,network=Network(d83e49d6-7d57-4ee1-97b8-d6cdd3bd57ed),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68c13a19-1a')#033[00m
Jan 29 12:23:51 np0005601226 nova_compute[239456]: 2026-01-29 17:23:51.095 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:23:51 np0005601226 ovn_controller[145556]: 2026-01-29T17:23:51Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c3:48:d2 10.100.0.12
Jan 29 12:23:51 np0005601226 ovn_controller[145556]: 2026-01-29T17:23:51Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c3:48:d2 10.100.0.12
Jan 29 12:23:51 np0005601226 podman[258326]: 2026-01-29 17:23:51.113859508 +0000 UTC m=+0.051982141 container remove 62b03d809e250806656608c0e22c0604c976b45ca81fe6bc9aff189d19b4420c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d83e49d6-7d57-4ee1-97b8-d6cdd3bd57ed, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 29 12:23:51 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:51.118 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[2471673b-aeb6-4126-9100-80649738767b]: (4, ('Thu Jan 29 05:23:50 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-d83e49d6-7d57-4ee1-97b8-d6cdd3bd57ed (62b03d809e250806656608c0e22c0604c976b45ca81fe6bc9aff189d19b4420c)\n62b03d809e250806656608c0e22c0604c976b45ca81fe6bc9aff189d19b4420c\nThu Jan 29 05:23:51 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-d83e49d6-7d57-4ee1-97b8-d6cdd3bd57ed (62b03d809e250806656608c0e22c0604c976b45ca81fe6bc9aff189d19b4420c)\n62b03d809e250806656608c0e22c0604c976b45ca81fe6bc9aff189d19b4420c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:23:51 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:51.120 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[00d3d61d-a512-4274-8c44-bae27ba5c150]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:23:51 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:51.122 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd83e49d6-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:23:51 np0005601226 nova_compute[239456]: 2026-01-29 17:23:51.125 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:23:51 np0005601226 kernel: tapd83e49d6-70: left promiscuous mode
Jan 29 12:23:51 np0005601226 nova_compute[239456]: 2026-01-29 17:23:51.135 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:23:51 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:51.138 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[0e5afbb4-edf1-4452-a345-601e99bb1812]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:23:51 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:51.152 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[ae200f10-c51c-44ae-b3a1-afa15bb40189]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:23:51 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:51.153 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[91753b90-6181-489d-ac1b-5aa182fef1dd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:23:51 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:51.163 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[a26b7011-e2f7-4cbe-91eb-bdaf1ccf0eff]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 473749, 'reachable_time': 36510, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258359, 'error': None, 'target': 'ovnmeta-d83e49d6-7d57-4ee1-97b8-d6cdd3bd57ed', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:23:51 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:51.165 156164 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d83e49d6-7d57-4ee1-97b8-d6cdd3bd57ed deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 29 12:23:51 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:23:51.165 156164 DEBUG oslo.privsep.daemon [-] privsep: reply[207089ab-0e7b-4661-8bd8-2122e74cbd56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:23:51 np0005601226 systemd[1]: run-netns-ovnmeta\x2dd83e49d6\x2d7d57\x2d4ee1\x2d97b8\x2dd6cdd3bd57ed.mount: Deactivated successfully.
Jan 29 12:23:51 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:23:51 np0005601226 nova_compute[239456]: 2026-01-29 17:23:51.350 239460 INFO nova.virt.libvirt.driver [None req-89f41f3c-a885-4844-ab3c-f2754c0e3dd7 e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Deleting instance files /var/lib/nova/instances/53e39297-e2d7-48cf-9623-7be3b0d6b2f3_del#033[00m
Jan 29 12:23:51 np0005601226 nova_compute[239456]: 2026-01-29 17:23:51.351 239460 INFO nova.virt.libvirt.driver [None req-89f41f3c-a885-4844-ab3c-f2754c0e3dd7 e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Deletion of /var/lib/nova/instances/53e39297-e2d7-48cf-9623-7be3b0d6b2f3_del complete#033[00m
Jan 29 12:23:51 np0005601226 nova_compute[239456]: 2026-01-29 17:23:51.428 239460 INFO nova.compute.manager [None req-89f41f3c-a885-4844-ab3c-f2754c0e3dd7 e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Took 0.61 seconds to destroy the instance on the hypervisor.#033[00m
Jan 29 12:23:51 np0005601226 nova_compute[239456]: 2026-01-29 17:23:51.428 239460 DEBUG oslo.service.loopingcall [None req-89f41f3c-a885-4844-ab3c-f2754c0e3dd7 e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 29 12:23:51 np0005601226 nova_compute[239456]: 2026-01-29 17:23:51.431 239460 DEBUG nova.compute.manager [-] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 29 12:23:51 np0005601226 nova_compute[239456]: 2026-01-29 17:23:51.431 239460 DEBUG nova.network.neutron [-] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 29 12:23:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 12:23:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:23:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 12:23:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:23:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0012656529216613591 of space, bias 1.0, pg target 0.37969587649840775 quantized to 32 (current 32)
Jan 29 12:23:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:23:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00025064210729641847 of space, bias 1.0, pg target 0.07519263218892554 quantized to 32 (current 32)
Jan 29 12:23:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:23:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 9.462999057092883e-07 of space, bias 1.0, pg target 0.00028388997171278647 quantized to 32 (current 32)
Jan 29 12:23:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:23:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006660232710374118 of space, bias 1.0, pg target 0.19980698131122354 quantized to 32 (current 32)
Jan 29 12:23:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:23:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.2488059950362365e-06 of space, bias 4.0, pg target 0.0014985671940434839 quantized to 16 (current 16)
Jan 29 12:23:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:23:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:23:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:23:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 29 12:23:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:23:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 29 12:23:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:23:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:23:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:23:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 29 12:23:51 np0005601226 nova_compute[239456]: 2026-01-29 17:23:51.527 239460 DEBUG nova.compute.manager [req-7ce09a03-aaec-4460-93d9-9f6dd116c72e req-a95e6dec-eade-407a-b54d-9354c9e0e05c 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Received event network-vif-unplugged-68c13a19-1abc-4771-a498-863d2d0a28b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:23:51 np0005601226 nova_compute[239456]: 2026-01-29 17:23:51.528 239460 DEBUG oslo_concurrency.lockutils [req-7ce09a03-aaec-4460-93d9-9f6dd116c72e req-a95e6dec-eade-407a-b54d-9354c9e0e05c 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "53e39297-e2d7-48cf-9623-7be3b0d6b2f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:23:51 np0005601226 nova_compute[239456]: 2026-01-29 17:23:51.528 239460 DEBUG oslo_concurrency.lockutils [req-7ce09a03-aaec-4460-93d9-9f6dd116c72e req-a95e6dec-eade-407a-b54d-9354c9e0e05c 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "53e39297-e2d7-48cf-9623-7be3b0d6b2f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:23:51 np0005601226 nova_compute[239456]: 2026-01-29 17:23:51.528 239460 DEBUG oslo_concurrency.lockutils [req-7ce09a03-aaec-4460-93d9-9f6dd116c72e req-a95e6dec-eade-407a-b54d-9354c9e0e05c 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "53e39297-e2d7-48cf-9623-7be3b0d6b2f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:23:51 np0005601226 nova_compute[239456]: 2026-01-29 17:23:51.528 239460 DEBUG nova.compute.manager [req-7ce09a03-aaec-4460-93d9-9f6dd116c72e req-a95e6dec-eade-407a-b54d-9354c9e0e05c 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] No waiting events found dispatching network-vif-unplugged-68c13a19-1abc-4771-a498-863d2d0a28b1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 29 12:23:51 np0005601226 nova_compute[239456]: 2026-01-29 17:23:51.528 239460 DEBUG nova.compute.manager [req-7ce09a03-aaec-4460-93d9-9f6dd116c72e req-a95e6dec-eade-407a-b54d-9354c9e0e05c 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Received event network-vif-unplugged-68c13a19-1abc-4771-a498-863d2d0a28b1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 29 12:23:51 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1295: 305 pgs: 305 active+clean; 205 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 2.5 MiB/s wr, 112 op/s
Jan 29 12:23:52 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:23:52 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1628467335' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:23:52 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:23:52 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1628467335' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:23:52 np0005601226 nova_compute[239456]: 2026-01-29 17:23:52.390 239460 DEBUG nova.network.neutron [-] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:23:52 np0005601226 nova_compute[239456]: 2026-01-29 17:23:52.408 239460 INFO nova.compute.manager [-] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Took 0.98 seconds to deallocate network for instance.#033[00m
Jan 29 12:23:52 np0005601226 nova_compute[239456]: 2026-01-29 17:23:52.444 239460 DEBUG oslo_concurrency.lockutils [None req-89f41f3c-a885-4844-ab3c-f2754c0e3dd7 e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:23:52 np0005601226 nova_compute[239456]: 2026-01-29 17:23:52.444 239460 DEBUG oslo_concurrency.lockutils [None req-89f41f3c-a885-4844-ab3c-f2754c0e3dd7 e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:23:52 np0005601226 nova_compute[239456]: 2026-01-29 17:23:52.514 239460 DEBUG oslo_concurrency.processutils [None req-89f41f3c-a885-4844-ab3c-f2754c0e3dd7 e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:23:53 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:23:53 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2138567424' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:23:53 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:23:53 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3044168736' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:23:53 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:23:53 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2138567424' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:23:53 np0005601226 nova_compute[239456]: 2026-01-29 17:23:53.053 239460 DEBUG oslo_concurrency.processutils [None req-89f41f3c-a885-4844-ab3c-f2754c0e3dd7 e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:23:53 np0005601226 nova_compute[239456]: 2026-01-29 17:23:53.058 239460 DEBUG nova.compute.provider_tree [None req-89f41f3c-a885-4844-ab3c-f2754c0e3dd7 e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:23:53 np0005601226 nova_compute[239456]: 2026-01-29 17:23:53.072 239460 DEBUG nova.scheduler.client.report [None req-89f41f3c-a885-4844-ab3c-f2754c0e3dd7 e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:23:53 np0005601226 nova_compute[239456]: 2026-01-29 17:23:53.097 239460 DEBUG oslo_concurrency.lockutils [None req-89f41f3c-a885-4844-ab3c-f2754c0e3dd7 e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:23:53 np0005601226 nova_compute[239456]: 2026-01-29 17:23:53.117 239460 INFO nova.scheduler.client.report [None req-89f41f3c-a885-4844-ab3c-f2754c0e3dd7 e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Deleted allocations for instance 53e39297-e2d7-48cf-9623-7be3b0d6b2f3#033[00m
Jan 29 12:23:53 np0005601226 nova_compute[239456]: 2026-01-29 17:23:53.177 239460 DEBUG oslo_concurrency.lockutils [None req-89f41f3c-a885-4844-ab3c-f2754c0e3dd7 e94c4027707149bebaa91488b942641b 39d4847e7fda4ce1b3f82fb1983ae222 - - default default] Lock "53e39297-e2d7-48cf-9623-7be3b0d6b2f3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.360s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:23:53 np0005601226 nova_compute[239456]: 2026-01-29 17:23:53.626 239460 DEBUG nova.compute.manager [req-972b9b64-0a52-469c-8c40-c313fe7f40cf req-96e0679a-05a5-4ff9-804f-87b03dc32d4d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Received event network-vif-plugged-68c13a19-1abc-4771-a498-863d2d0a28b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:23:53 np0005601226 nova_compute[239456]: 2026-01-29 17:23:53.626 239460 DEBUG oslo_concurrency.lockutils [req-972b9b64-0a52-469c-8c40-c313fe7f40cf req-96e0679a-05a5-4ff9-804f-87b03dc32d4d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "53e39297-e2d7-48cf-9623-7be3b0d6b2f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:23:53 np0005601226 nova_compute[239456]: 2026-01-29 17:23:53.627 239460 DEBUG oslo_concurrency.lockutils [req-972b9b64-0a52-469c-8c40-c313fe7f40cf req-96e0679a-05a5-4ff9-804f-87b03dc32d4d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "53e39297-e2d7-48cf-9623-7be3b0d6b2f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:23:53 np0005601226 nova_compute[239456]: 2026-01-29 17:23:53.627 239460 DEBUG oslo_concurrency.lockutils [req-972b9b64-0a52-469c-8c40-c313fe7f40cf req-96e0679a-05a5-4ff9-804f-87b03dc32d4d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "53e39297-e2d7-48cf-9623-7be3b0d6b2f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:23:53 np0005601226 nova_compute[239456]: 2026-01-29 17:23:53.627 239460 DEBUG nova.compute.manager [req-972b9b64-0a52-469c-8c40-c313fe7f40cf req-96e0679a-05a5-4ff9-804f-87b03dc32d4d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] No waiting events found dispatching network-vif-plugged-68c13a19-1abc-4771-a498-863d2d0a28b1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 29 12:23:53 np0005601226 nova_compute[239456]: 2026-01-29 17:23:53.627 239460 WARNING nova.compute.manager [req-972b9b64-0a52-469c-8c40-c313fe7f40cf req-96e0679a-05a5-4ff9-804f-87b03dc32d4d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Received unexpected event network-vif-plugged-68c13a19-1abc-4771-a498-863d2d0a28b1 for instance with vm_state deleted and task_state None.#033[00m
Jan 29 12:23:53 np0005601226 nova_compute[239456]: 2026-01-29 17:23:53.627 239460 DEBUG nova.compute.manager [req-972b9b64-0a52-469c-8c40-c313fe7f40cf req-96e0679a-05a5-4ff9-804f-87b03dc32d4d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Received event network-vif-deleted-68c13a19-1abc-4771-a498-863d2d0a28b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:23:53 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1296: 305 pgs: 305 active+clean; 204 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 4.0 MiB/s wr, 155 op/s
Jan 29 12:23:55 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:23:55 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2870221086' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:23:55 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:23:55 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2870221086' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:23:55 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e278 do_prune osdmap full prune enabled
Jan 29 12:23:55 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e279 e279: 3 total, 3 up, 3 in
Jan 29 12:23:55 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e279: 3 total, 3 up, 3 in
Jan 29 12:23:55 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1298: 305 pgs: 305 active+clean; 167 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 5.9 MiB/s wr, 276 op/s
Jan 29 12:23:56 np0005601226 nova_compute[239456]: 2026-01-29 17:23:56.076 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:23:56 np0005601226 nova_compute[239456]: 2026-01-29 17:23:56.097 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:23:56 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:23:57 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:23:57 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1141705081' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:23:57 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:23:57 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1141705081' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:23:57 np0005601226 nova_compute[239456]: 2026-01-29 17:23:57.321 239460 DEBUG oslo_concurrency.lockutils [None req-dcd7bc45-7067-4ce1-a22c-7e9828bb9f28 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Acquiring lock "54ae1aee-2aec-49fb-981c-904cceb59a9d" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:23:57 np0005601226 nova_compute[239456]: 2026-01-29 17:23:57.322 239460 DEBUG oslo_concurrency.lockutils [None req-dcd7bc45-7067-4ce1-a22c-7e9828bb9f28 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Lock "54ae1aee-2aec-49fb-981c-904cceb59a9d" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:23:57 np0005601226 nova_compute[239456]: 2026-01-29 17:23:57.339 239460 DEBUG nova.objects.instance [None req-dcd7bc45-7067-4ce1-a22c-7e9828bb9f28 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Lazy-loading 'flavor' on Instance uuid 54ae1aee-2aec-49fb-981c-904cceb59a9d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:23:57 np0005601226 nova_compute[239456]: 2026-01-29 17:23:57.375 239460 DEBUG oslo_concurrency.lockutils [None req-dcd7bc45-7067-4ce1-a22c-7e9828bb9f28 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Lock "54ae1aee-2aec-49fb-981c-904cceb59a9d" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.053s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:23:57 np0005601226 nova_compute[239456]: 2026-01-29 17:23:57.564 239460 DEBUG oslo_concurrency.lockutils [None req-dcd7bc45-7067-4ce1-a22c-7e9828bb9f28 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Acquiring lock "54ae1aee-2aec-49fb-981c-904cceb59a9d" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:23:57 np0005601226 nova_compute[239456]: 2026-01-29 17:23:57.565 239460 DEBUG oslo_concurrency.lockutils [None req-dcd7bc45-7067-4ce1-a22c-7e9828bb9f28 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Lock "54ae1aee-2aec-49fb-981c-904cceb59a9d" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:23:57 np0005601226 nova_compute[239456]: 2026-01-29 17:23:57.565 239460 INFO nova.compute.manager [None req-dcd7bc45-7067-4ce1-a22c-7e9828bb9f28 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Attaching volume 64b0fc3e-59ac-49fb-9e9e-ecec1fd09ec5 to /dev/vdb#033[00m
Jan 29 12:23:57 np0005601226 nova_compute[239456]: 2026-01-29 17:23:57.704 239460 DEBUG os_brick.utils [None req-dcd7bc45-7067-4ce1-a22c-7e9828bb9f28 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 29 12:23:57 np0005601226 nova_compute[239456]: 2026-01-29 17:23:57.705 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:23:57 np0005601226 nova_compute[239456]: 2026-01-29 17:23:57.713 249612 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:23:57 np0005601226 nova_compute[239456]: 2026-01-29 17:23:57.713 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[b8408799-14c9-42bb-be53-fb181d528c61]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:23:57 np0005601226 nova_compute[239456]: 2026-01-29 17:23:57.714 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:23:57 np0005601226 nova_compute[239456]: 2026-01-29 17:23:57.719 249612 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:23:57 np0005601226 nova_compute[239456]: 2026-01-29 17:23:57.719 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[ac2c69ba-deac-4b9a-bd9e-875ae2942e88]: (4, ('InitiatorName=iqn.1994-05.com.redhat:29fa340538f', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:23:57 np0005601226 nova_compute[239456]: 2026-01-29 17:23:57.720 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:23:57 np0005601226 nova_compute[239456]: 2026-01-29 17:23:57.726 249612 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:23:57 np0005601226 nova_compute[239456]: 2026-01-29 17:23:57.726 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[bbcae8cd-3b8a-49dc-9546-adde0b95febc]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:23:57 np0005601226 nova_compute[239456]: 2026-01-29 17:23:57.727 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[58f8d3e8-3f2e-4254-804b-ea5f5625a30a]: (4, '3d58286e-1b14-486e-8cad-0bdb2d2969c4') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:23:57 np0005601226 nova_compute[239456]: 2026-01-29 17:23:57.728 239460 DEBUG oslo_concurrency.processutils [None req-dcd7bc45-7067-4ce1-a22c-7e9828bb9f28 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:23:57 np0005601226 nova_compute[239456]: 2026-01-29 17:23:57.744 239460 DEBUG oslo_concurrency.processutils [None req-dcd7bc45-7067-4ce1-a22c-7e9828bb9f28 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] CMD "nvme version" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:23:57 np0005601226 nova_compute[239456]: 2026-01-29 17:23:57.746 239460 DEBUG os_brick.initiator.connectors.lightos [None req-dcd7bc45-7067-4ce1-a22c-7e9828bb9f28 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 29 12:23:57 np0005601226 nova_compute[239456]: 2026-01-29 17:23:57.746 239460 DEBUG os_brick.initiator.connectors.lightos [None req-dcd7bc45-7067-4ce1-a22c-7e9828bb9f28 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 29 12:23:57 np0005601226 nova_compute[239456]: 2026-01-29 17:23:57.746 239460 DEBUG os_brick.initiator.connectors.lightos [None req-dcd7bc45-7067-4ce1-a22c-7e9828bb9f28 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 29 12:23:57 np0005601226 nova_compute[239456]: 2026-01-29 17:23:57.747 239460 DEBUG os_brick.utils [None req-dcd7bc45-7067-4ce1-a22c-7e9828bb9f28 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] <== get_connector_properties: return (42ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:29fa340538f', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '3d58286e-1b14-486e-8cad-0bdb2d2969c4', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 29 12:23:57 np0005601226 nova_compute[239456]: 2026-01-29 17:23:57.747 239460 DEBUG nova.virt.block_device [None req-dcd7bc45-7067-4ce1-a22c-7e9828bb9f28 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Updating existing volume attachment record: df1ae0fb-4f1b-4959-92e9-b0fdcf5a5c8c _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 29 12:23:57 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1299: 305 pgs: 305 active+clean; 167 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 496 KiB/s rd, 2.8 MiB/s wr, 204 op/s
Jan 29 12:23:58 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:23:58 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3688757125' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:23:58 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:23:58 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3688757125' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:23:58 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:23:58 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/828646378' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:23:58 np0005601226 nova_compute[239456]: 2026-01-29 17:23:58.560 239460 DEBUG nova.objects.instance [None req-dcd7bc45-7067-4ce1-a22c-7e9828bb9f28 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Lazy-loading 'flavor' on Instance uuid 54ae1aee-2aec-49fb-981c-904cceb59a9d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:23:58 np0005601226 nova_compute[239456]: 2026-01-29 17:23:58.582 239460 DEBUG nova.virt.libvirt.driver [None req-dcd7bc45-7067-4ce1-a22c-7e9828bb9f28 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Attempting to attach volume 64b0fc3e-59ac-49fb-9e9e-ecec1fd09ec5 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Jan 29 12:23:58 np0005601226 nova_compute[239456]: 2026-01-29 17:23:58.585 239460 DEBUG nova.virt.libvirt.guest [None req-dcd7bc45-7067-4ce1-a22c-7e9828bb9f28 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] attach device xml: <disk type="network" device="disk">
Jan 29 12:23:58 np0005601226 nova_compute[239456]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 29 12:23:58 np0005601226 nova_compute[239456]:  <source protocol="rbd" name="volumes/volume-64b0fc3e-59ac-49fb-9e9e-ecec1fd09ec5">
Jan 29 12:23:58 np0005601226 nova_compute[239456]:    <host name="192.168.122.100" port="6789"/>
Jan 29 12:23:58 np0005601226 nova_compute[239456]:  </source>
Jan 29 12:23:58 np0005601226 nova_compute[239456]:  <auth username="openstack">
Jan 29 12:23:58 np0005601226 nova_compute[239456]:    <secret type="ceph" uuid="cc5c72e3-31e0-58b9-8731-456117d38f4a"/>
Jan 29 12:23:58 np0005601226 nova_compute[239456]:  </auth>
Jan 29 12:23:58 np0005601226 nova_compute[239456]:  <target dev="vdb" bus="virtio"/>
Jan 29 12:23:58 np0005601226 nova_compute[239456]:  <serial>64b0fc3e-59ac-49fb-9e9e-ecec1fd09ec5</serial>
Jan 29 12:23:58 np0005601226 nova_compute[239456]: </disk>
Jan 29 12:23:58 np0005601226 nova_compute[239456]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Jan 29 12:23:58 np0005601226 nova_compute[239456]: 2026-01-29 17:23:58.679 239460 DEBUG nova.virt.libvirt.driver [None req-dcd7bc45-7067-4ce1-a22c-7e9828bb9f28 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:23:58 np0005601226 nova_compute[239456]: 2026-01-29 17:23:58.680 239460 DEBUG nova.virt.libvirt.driver [None req-dcd7bc45-7067-4ce1-a22c-7e9828bb9f28 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:23:58 np0005601226 nova_compute[239456]: 2026-01-29 17:23:58.680 239460 DEBUG nova.virt.libvirt.driver [None req-dcd7bc45-7067-4ce1-a22c-7e9828bb9f28 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:23:58 np0005601226 nova_compute[239456]: 2026-01-29 17:23:58.680 239460 DEBUG nova.virt.libvirt.driver [None req-dcd7bc45-7067-4ce1-a22c-7e9828bb9f28 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] No VIF found with MAC fa:16:3e:c3:48:d2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 29 12:23:58 np0005601226 ovn_controller[145556]: 2026-01-29T17:23:58Z|00114|binding|INFO|Releasing lport 0442c862-051a-4100-a371-ef7e19ea6eba from this chassis (sb_readonly=0)
Jan 29 12:23:58 np0005601226 nova_compute[239456]: 2026-01-29 17:23:58.796 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:23:58 np0005601226 nova_compute[239456]: 2026-01-29 17:23:58.905 239460 DEBUG oslo_concurrency.lockutils [None req-dcd7bc45-7067-4ce1-a22c-7e9828bb9f28 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Lock "54ae1aee-2aec-49fb-981c-904cceb59a9d" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.340s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:23:59 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1300: 305 pgs: 305 active+clean; 167 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 501 KiB/s rd, 2.5 MiB/s wr, 266 op/s
Jan 29 12:24:01 np0005601226 nova_compute[239456]: 2026-01-29 17:24:01.080 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:24:01 np0005601226 nova_compute[239456]: 2026-01-29 17:24:01.099 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:24:01 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:24:01 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1301: 305 pgs: 305 active+clean; 167 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 458 KiB/s rd, 2.2 MiB/s wr, 243 op/s
Jan 29 12:24:02 np0005601226 nova_compute[239456]: 2026-01-29 17:24:02.181 239460 DEBUG oslo_concurrency.lockutils [None req-e0995b2b-308b-4e1a-83c9-6aa514528480 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Acquiring lock "54ae1aee-2aec-49fb-981c-904cceb59a9d" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:24:02 np0005601226 nova_compute[239456]: 2026-01-29 17:24:02.181 239460 DEBUG oslo_concurrency.lockutils [None req-e0995b2b-308b-4e1a-83c9-6aa514528480 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Lock "54ae1aee-2aec-49fb-981c-904cceb59a9d" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:24:02 np0005601226 nova_compute[239456]: 2026-01-29 17:24:02.233 239460 INFO nova.compute.manager [None req-e0995b2b-308b-4e1a-83c9-6aa514528480 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Detaching volume 64b0fc3e-59ac-49fb-9e9e-ecec1fd09ec5#033[00m
Jan 29 12:24:02 np0005601226 nova_compute[239456]: 2026-01-29 17:24:02.341 239460 INFO nova.virt.block_device [None req-e0995b2b-308b-4e1a-83c9-6aa514528480 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Attempting to driver detach volume 64b0fc3e-59ac-49fb-9e9e-ecec1fd09ec5 from mountpoint /dev/vdb#033[00m
Jan 29 12:24:02 np0005601226 nova_compute[239456]: 2026-01-29 17:24:02.350 239460 DEBUG nova.virt.libvirt.driver [None req-e0995b2b-308b-4e1a-83c9-6aa514528480 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Attempting to detach device vdb from instance 54ae1aee-2aec-49fb-981c-904cceb59a9d from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Jan 29 12:24:02 np0005601226 nova_compute[239456]: 2026-01-29 17:24:02.351 239460 DEBUG nova.virt.libvirt.guest [None req-e0995b2b-308b-4e1a-83c9-6aa514528480 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] detach device xml: <disk type="network" device="disk">
Jan 29 12:24:02 np0005601226 nova_compute[239456]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 29 12:24:02 np0005601226 nova_compute[239456]:  <source protocol="rbd" name="volumes/volume-64b0fc3e-59ac-49fb-9e9e-ecec1fd09ec5">
Jan 29 12:24:02 np0005601226 nova_compute[239456]:    <host name="192.168.122.100" port="6789"/>
Jan 29 12:24:02 np0005601226 nova_compute[239456]:  </source>
Jan 29 12:24:02 np0005601226 nova_compute[239456]:  <target dev="vdb" bus="virtio"/>
Jan 29 12:24:02 np0005601226 nova_compute[239456]:  <serial>64b0fc3e-59ac-49fb-9e9e-ecec1fd09ec5</serial>
Jan 29 12:24:02 np0005601226 nova_compute[239456]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 29 12:24:02 np0005601226 nova_compute[239456]: </disk>
Jan 29 12:24:02 np0005601226 nova_compute[239456]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 29 12:24:02 np0005601226 nova_compute[239456]: 2026-01-29 17:24:02.357 239460 INFO nova.virt.libvirt.driver [None req-e0995b2b-308b-4e1a-83c9-6aa514528480 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Successfully detached device vdb from instance 54ae1aee-2aec-49fb-981c-904cceb59a9d from the persistent domain config.
Jan 29 12:24:02 np0005601226 nova_compute[239456]: 2026-01-29 17:24:02.358 239460 DEBUG nova.virt.libvirt.driver [None req-e0995b2b-308b-4e1a-83c9-6aa514528480 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 54ae1aee-2aec-49fb-981c-904cceb59a9d from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 29 12:24:02 np0005601226 nova_compute[239456]: 2026-01-29 17:24:02.358 239460 DEBUG nova.virt.libvirt.guest [None req-e0995b2b-308b-4e1a-83c9-6aa514528480 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] detach device xml: <disk type="network" device="disk">
Jan 29 12:24:02 np0005601226 nova_compute[239456]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 29 12:24:02 np0005601226 nova_compute[239456]:  <source protocol="rbd" name="volumes/volume-64b0fc3e-59ac-49fb-9e9e-ecec1fd09ec5">
Jan 29 12:24:02 np0005601226 nova_compute[239456]:    <host name="192.168.122.100" port="6789"/>
Jan 29 12:24:02 np0005601226 nova_compute[239456]:  </source>
Jan 29 12:24:02 np0005601226 nova_compute[239456]:  <target dev="vdb" bus="virtio"/>
Jan 29 12:24:02 np0005601226 nova_compute[239456]:  <serial>64b0fc3e-59ac-49fb-9e9e-ecec1fd09ec5</serial>
Jan 29 12:24:02 np0005601226 nova_compute[239456]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 29 12:24:02 np0005601226 nova_compute[239456]: </disk>
Jan 29 12:24:02 np0005601226 nova_compute[239456]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 29 12:24:02 np0005601226 nova_compute[239456]: 2026-01-29 17:24:02.464 239460 DEBUG nova.virt.libvirt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Received event <DeviceRemovedEvent: 1769707442.464542, 54ae1aee-2aec-49fb-981c-904cceb59a9d => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Jan 29 12:24:02 np0005601226 nova_compute[239456]: 2026-01-29 17:24:02.466 239460 DEBUG nova.virt.libvirt.driver [None req-e0995b2b-308b-4e1a-83c9-6aa514528480 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 54ae1aee-2aec-49fb-981c-904cceb59a9d _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Jan 29 12:24:02 np0005601226 nova_compute[239456]: 2026-01-29 17:24:02.468 239460 INFO nova.virt.libvirt.driver [None req-e0995b2b-308b-4e1a-83c9-6aa514528480 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Successfully detached device vdb from instance 54ae1aee-2aec-49fb-981c-904cceb59a9d from the live domain config.
Jan 29 12:24:02 np0005601226 ovn_controller[145556]: 2026-01-29T17:24:02Z|00115|binding|INFO|Releasing lport 0442c862-051a-4100-a371-ef7e19ea6eba from this chassis (sb_readonly=0)
Jan 29 12:24:02 np0005601226 nova_compute[239456]: 2026-01-29 17:24:02.716 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:24:02 np0005601226 nova_compute[239456]: 2026-01-29 17:24:02.785 239460 DEBUG nova.objects.instance [None req-e0995b2b-308b-4e1a-83c9-6aa514528480 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Lazy-loading 'flavor' on Instance uuid 54ae1aee-2aec-49fb-981c-904cceb59a9d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 29 12:24:02 np0005601226 nova_compute[239456]: 2026-01-29 17:24:02.831 239460 DEBUG oslo_concurrency.lockutils [None req-e0995b2b-308b-4e1a-83c9-6aa514528480 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Lock "54ae1aee-2aec-49fb-981c-904cceb59a9d" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.649s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 12:24:03 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1302: 305 pgs: 305 active+clean; 167 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 337 KiB/s rd, 714 KiB/s wr, 175 op/s
Jan 29 12:24:04 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e279 do_prune osdmap full prune enabled
Jan 29 12:24:04 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e280 e280: 3 total, 3 up, 3 in
Jan 29 12:24:04 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e280: 3 total, 3 up, 3 in
Jan 29 12:24:05 np0005601226 nova_compute[239456]: 2026-01-29 17:24:05.254 239460 DEBUG nova.compute.manager [None req-20a05efc-9d3c-41cd-997f-768d860e8b51 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 29 12:24:05 np0005601226 nova_compute[239456]: 2026-01-29 17:24:05.300 239460 INFO nova.compute.manager [None req-20a05efc-9d3c-41cd-997f-768d860e8b51 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] instance snapshotting
Jan 29 12:24:05 np0005601226 nova_compute[239456]: 2026-01-29 17:24:05.583 239460 INFO nova.virt.libvirt.driver [None req-20a05efc-9d3c-41cd-997f-768d860e8b51 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Beginning live snapshot process
Jan 29 12:24:05 np0005601226 nova_compute[239456]: 2026-01-29 17:24:05.715 239460 DEBUG nova.virt.libvirt.imagebackend [None req-20a05efc-9d3c-41cd-997f-768d860e8b51 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] No parent info for 71879218-5462-43bb-aba6-6319695b24fd; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 29 12:24:05 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1304: 305 pgs: 305 active+clean; 169 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 249 KiB/s wr, 91 op/s
Jan 29 12:24:05 np0005601226 nova_compute[239456]: 2026-01-29 17:24:05.890 239460 DEBUG nova.storage.rbd_utils [None req-20a05efc-9d3c-41cd-997f-768d860e8b51 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] creating snapshot(4354aedc14884c4d84badb280887e8fe) on rbd image(54ae1aee-2aec-49fb-981c-904cceb59a9d_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 29 12:24:06 np0005601226 nova_compute[239456]: 2026-01-29 17:24:06.045 239460 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769707431.0443773, 53e39297-e2d7-48cf-9623-7be3b0d6b2f3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 29 12:24:06 np0005601226 nova_compute[239456]: 2026-01-29 17:24:06.046 239460 INFO nova.compute.manager [-] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] VM Stopped (Lifecycle Event)
Jan 29 12:24:06 np0005601226 nova_compute[239456]: 2026-01-29 17:24:06.064 239460 DEBUG nova.compute.manager [None req-42aa69f9-ba06-458f-a86d-a40118a5f51e - - - - - -] [instance: 53e39297-e2d7-48cf-9623-7be3b0d6b2f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 29 12:24:06 np0005601226 nova_compute[239456]: 2026-01-29 17:24:06.084 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:24:06 np0005601226 nova_compute[239456]: 2026-01-29 17:24:06.101 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:24:06 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:24:06 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e280 do_prune osdmap full prune enabled
Jan 29 12:24:06 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e281 e281: 3 total, 3 up, 3 in
Jan 29 12:24:06 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e281: 3 total, 3 up, 3 in
Jan 29 12:24:06 np0005601226 nova_compute[239456]: 2026-01-29 17:24:06.386 239460 DEBUG nova.storage.rbd_utils [None req-20a05efc-9d3c-41cd-997f-768d860e8b51 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] cloning vms/54ae1aee-2aec-49fb-981c-904cceb59a9d_disk@4354aedc14884c4d84badb280887e8fe to images/6c19a175-0f51-4960-b93b-bdb33e6773d5 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 29 12:24:07 np0005601226 nova_compute[239456]: 2026-01-29 17:24:07.154 239460 DEBUG nova.storage.rbd_utils [None req-20a05efc-9d3c-41cd-997f-768d860e8b51 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] flattening images/6c19a175-0f51-4960-b93b-bdb33e6773d5 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 29 12:24:07 np0005601226 nova_compute[239456]: 2026-01-29 17:24:07.686 239460 DEBUG nova.storage.rbd_utils [None req-20a05efc-9d3c-41cd-997f-768d860e8b51 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] removing snapshot(4354aedc14884c4d84badb280887e8fe) on rbd image(54ae1aee-2aec-49fb-981c-904cceb59a9d_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 29 12:24:07 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1306: 305 pgs: 305 active+clean; 169 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 289 KiB/s wr, 13 op/s
Jan 29 12:24:08 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e281 do_prune osdmap full prune enabled
Jan 29 12:24:08 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e282 e282: 3 total, 3 up, 3 in
Jan 29 12:24:08 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e282: 3 total, 3 up, 3 in
Jan 29 12:24:08 np0005601226 nova_compute[239456]: 2026-01-29 17:24:08.432 239460 DEBUG nova.storage.rbd_utils [None req-20a05efc-9d3c-41cd-997f-768d860e8b51 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] creating snapshot(snap) on rbd image(6c19a175-0f51-4960-b93b-bdb33e6773d5) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 29 12:24:09 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e282 do_prune osdmap full prune enabled
Jan 29 12:24:09 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e283 e283: 3 total, 3 up, 3 in
Jan 29 12:24:09 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e283: 3 total, 3 up, 3 in
Jan 29 12:24:09 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1309: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 248 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 12 MiB/s rd, 8.6 MiB/s wr, 190 op/s
Jan 29 12:24:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:24:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:24:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:24:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:24:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:24:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:24:10 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:24:10 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/545604931' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:24:10 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:24:10 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/545604931' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:24:10 np0005601226 nova_compute[239456]: 2026-01-29 17:24:10.693 239460 INFO nova.virt.libvirt.driver [None req-20a05efc-9d3c-41cd-997f-768d860e8b51 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Snapshot image upload complete
Jan 29 12:24:10 np0005601226 nova_compute[239456]: 2026-01-29 17:24:10.693 239460 INFO nova.compute.manager [None req-20a05efc-9d3c-41cd-997f-768d860e8b51 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Took 5.39 seconds to snapshot the instance on the hypervisor.
Jan 29 12:24:11 np0005601226 nova_compute[239456]: 2026-01-29 17:24:11.087 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:24:11 np0005601226 nova_compute[239456]: 2026-01-29 17:24:11.103 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:24:11 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:24:11 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1310: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 248 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 11 MiB/s rd, 7.8 MiB/s wr, 172 op/s
Jan 29 12:24:11 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:24:11 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/917373310' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:24:11 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:24:11 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/917373310' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:24:12 np0005601226 ceph-mon[75233]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Jan 29 12:24:12 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:24:12.889770) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 29 12:24:12 np0005601226 ceph-mon[75233]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Jan 29 12:24:12 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769707452889836, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 1094, "num_deletes": 505, "total_data_size": 1063819, "memory_usage": 1095888, "flush_reason": "Manual Compaction"}
Jan 29 12:24:12 np0005601226 ceph-mon[75233]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Jan 29 12:24:12 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769707452902887, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 844412, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25544, "largest_seqno": 26637, "table_properties": {"data_size": 839695, "index_size": 1793, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14498, "raw_average_key_size": 19, "raw_value_size": 827999, "raw_average_value_size": 1128, "num_data_blocks": 79, "num_entries": 734, "num_filter_entries": 734, "num_deletions": 505, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769707402, "oldest_key_time": 1769707402, "file_creation_time": 1769707452, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "affa2982-d59d-4189-b5dd-817a80fada55", "db_session_id": "3LVBT2JQJ5HZ0LRVKGW6", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Jan 29 12:24:12 np0005601226 ceph-mon[75233]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 13157 microseconds, and 2414 cpu microseconds.
Jan 29 12:24:12 np0005601226 ceph-mon[75233]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 29 12:24:12 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:24:12.902933) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 844412 bytes OK
Jan 29 12:24:12 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:24:12.902951) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Jan 29 12:24:12 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:24:12.919727) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Jan 29 12:24:12 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:24:12.919778) EVENT_LOG_v1 {"time_micros": 1769707452919770, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 29 12:24:12 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:24:12.919803) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 29 12:24:12 np0005601226 ceph-mon[75233]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 1057564, prev total WAL file size 1057564, number of live WAL files 2.
Jan 29 12:24:12 np0005601226 ceph-mon[75233]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 12:24:12 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:24:12.920474) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Jan 29 12:24:12 np0005601226 ceph-mon[75233]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 29 12:24:12 np0005601226 ceph-mon[75233]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(824KB)], [56(11MB)]
Jan 29 12:24:12 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769707452920533, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 13105052, "oldest_snapshot_seqno": -1}
Jan 29 12:24:13 np0005601226 ceph-mon[75233]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 5205 keys, 8158954 bytes, temperature: kUnknown
Jan 29 12:24:13 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769707453003397, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 8158954, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8121822, "index_size": 23006, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13061, "raw_key_size": 131035, "raw_average_key_size": 25, "raw_value_size": 8025579, "raw_average_value_size": 1541, "num_data_blocks": 935, "num_entries": 5205, "num_filter_entries": 5205, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769705351, "oldest_key_time": 0, "file_creation_time": 1769707452, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "affa2982-d59d-4189-b5dd-817a80fada55", "db_session_id": "3LVBT2JQJ5HZ0LRVKGW6", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Jan 29 12:24:13 np0005601226 ceph-mon[75233]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 29 12:24:13 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:24:13.003634) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 8158954 bytes
Jan 29 12:24:13 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:24:13.015217) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 158.0 rd, 98.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 11.7 +0.0 blob) out(7.8 +0.0 blob), read-write-amplify(25.2) write-amplify(9.7) OK, records in: 6215, records dropped: 1010 output_compression: NoCompression
Jan 29 12:24:13 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:24:13.015240) EVENT_LOG_v1 {"time_micros": 1769707453015230, "job": 30, "event": "compaction_finished", "compaction_time_micros": 82969, "compaction_time_cpu_micros": 18223, "output_level": 6, "num_output_files": 1, "total_output_size": 8158954, "num_input_records": 6215, "num_output_records": 5205, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 29 12:24:13 np0005601226 ceph-mon[75233]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 12:24:13 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769707453015467, "job": 30, "event": "table_file_deletion", "file_number": 58}
Jan 29 12:24:13 np0005601226 ceph-mon[75233]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 12:24:13 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769707453016412, "job": 30, "event": "table_file_deletion", "file_number": 56}
Jan 29 12:24:13 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:24:12.920352) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:24:13 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:24:13.016458) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:24:13 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:24:13.016461) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:24:13 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:24:13.016463) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:24:13 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:24:13.016465) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:24:13 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:24:13.016467) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:24:13 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:24:13 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/924480286' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:24:13 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:24:13 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/924480286' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:24:13 np0005601226 nova_compute[239456]: 2026-01-29 17:24:13.511 239460 DEBUG oslo_concurrency.lockutils [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Acquiring lock "0ac4b31b-2f69-4c16-997b-57dc53aa29b2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 12:24:13 np0005601226 nova_compute[239456]: 2026-01-29 17:24:13.511 239460 DEBUG oslo_concurrency.lockutils [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Lock "0ac4b31b-2f69-4c16-997b-57dc53aa29b2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 12:24:13 np0005601226 nova_compute[239456]: 2026-01-29 17:24:13.548 239460 DEBUG nova.compute.manager [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 29 12:24:13 np0005601226 nova_compute[239456]: 2026-01-29 17:24:13.673 239460 DEBUG oslo_concurrency.lockutils [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 12:24:13 np0005601226 nova_compute[239456]: 2026-01-29 17:24:13.674 239460 DEBUG oslo_concurrency.lockutils [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 12:24:13 np0005601226 nova_compute[239456]: 2026-01-29 17:24:13.681 239460 DEBUG nova.virt.hardware [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 29 12:24:13 np0005601226 nova_compute[239456]: 2026-01-29 17:24:13.682 239460 INFO nova.compute.claims [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] Claim successful on node compute-0.ctlplane.example.com
Jan 29 12:24:13 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1311: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 248 MiB data, 391 MiB used, 60 GiB / 60 GiB avail; 9.1 MiB/s rd, 6.8 MiB/s wr, 167 op/s
Jan 29 12:24:13 np0005601226 nova_compute[239456]: 2026-01-29 17:24:13.835 239460 DEBUG oslo_concurrency.processutils [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 29 12:24:14 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:24:14 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3544828811' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:24:14 np0005601226 nova_compute[239456]: 2026-01-29 17:24:14.371 239460 DEBUG oslo_concurrency.processutils [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:24:14 np0005601226 nova_compute[239456]: 2026-01-29 17:24:14.377 239460 DEBUG nova.compute.provider_tree [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:24:14 np0005601226 nova_compute[239456]: 2026-01-29 17:24:14.456 239460 DEBUG nova.scheduler.client.report [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:24:14 np0005601226 nova_compute[239456]: 2026-01-29 17:24:14.487 239460 DEBUG oslo_concurrency.lockutils [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.813s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:24:14 np0005601226 nova_compute[239456]: 2026-01-29 17:24:14.488 239460 DEBUG nova.compute.manager [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 29 12:24:14 np0005601226 nova_compute[239456]: 2026-01-29 17:24:14.679 239460 DEBUG nova.compute.manager [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 29 12:24:14 np0005601226 nova_compute[239456]: 2026-01-29 17:24:14.680 239460 DEBUG nova.network.neutron [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 29 12:24:14 np0005601226 nova_compute[239456]: 2026-01-29 17:24:14.822 239460 INFO nova.virt.libvirt.driver [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 29 12:24:14 np0005601226 nova_compute[239456]: 2026-01-29 17:24:14.972 239460 DEBUG nova.policy [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '66a034221acf4c559a731fcc84a54c53', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f2a1daea29d845c4b1c58f0e6610e767', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 29 12:24:14 np0005601226 nova_compute[239456]: 2026-01-29 17:24:14.975 239460 DEBUG nova.compute.manager [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 29 12:24:15 np0005601226 nova_compute[239456]: 2026-01-29 17:24:15.196 239460 DEBUG nova.compute.manager [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 29 12:24:15 np0005601226 nova_compute[239456]: 2026-01-29 17:24:15.197 239460 DEBUG nova.virt.libvirt.driver [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 29 12:24:15 np0005601226 nova_compute[239456]: 2026-01-29 17:24:15.198 239460 INFO nova.virt.libvirt.driver [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] Creating image(s)#033[00m
Jan 29 12:24:15 np0005601226 nova_compute[239456]: 2026-01-29 17:24:15.221 239460 DEBUG nova.storage.rbd_utils [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] rbd image 0ac4b31b-2f69-4c16-997b-57dc53aa29b2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:24:15 np0005601226 nova_compute[239456]: 2026-01-29 17:24:15.251 239460 DEBUG nova.storage.rbd_utils [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] rbd image 0ac4b31b-2f69-4c16-997b-57dc53aa29b2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:24:15 np0005601226 nova_compute[239456]: 2026-01-29 17:24:15.274 239460 DEBUG nova.storage.rbd_utils [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] rbd image 0ac4b31b-2f69-4c16-997b-57dc53aa29b2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:24:15 np0005601226 nova_compute[239456]: 2026-01-29 17:24:15.278 239460 DEBUG oslo_concurrency.lockutils [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Acquiring lock "31af49a2e81e0ea44bc56277a5eb1fdb7a2037e8" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:24:15 np0005601226 nova_compute[239456]: 2026-01-29 17:24:15.278 239460 DEBUG oslo_concurrency.lockutils [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Lock "31af49a2e81e0ea44bc56277a5eb1fdb7a2037e8" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:24:15 np0005601226 nova_compute[239456]: 2026-01-29 17:24:15.545 239460 DEBUG nova.virt.libvirt.imagebackend [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Image locations are: [{'url': 'rbd://cc5c72e3-31e0-58b9-8731-456117d38f4a/images/6c19a175-0f51-4960-b93b-bdb33e6773d5/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://cc5c72e3-31e0-58b9-8731-456117d38f4a/images/6c19a175-0f51-4960-b93b-bdb33e6773d5/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Jan 29 12:24:15 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e283 do_prune osdmap full prune enabled
Jan 29 12:24:15 np0005601226 nova_compute[239456]: 2026-01-29 17:24:15.614 239460 DEBUG nova.virt.libvirt.imagebackend [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Selected location: {'url': 'rbd://cc5c72e3-31e0-58b9-8731-456117d38f4a/images/6c19a175-0f51-4960-b93b-bdb33e6773d5/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Jan 29 12:24:15 np0005601226 nova_compute[239456]: 2026-01-29 17:24:15.614 239460 DEBUG nova.storage.rbd_utils [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] cloning images/6c19a175-0f51-4960-b93b-bdb33e6773d5@snap to None/0ac4b31b-2f69-4c16-997b-57dc53aa29b2_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 29 12:24:15 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e284 e284: 3 total, 3 up, 3 in
Jan 29 12:24:15 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e284: 3 total, 3 up, 3 in
Jan 29 12:24:15 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1313: 305 pgs: 305 active+clean; 248 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 6.8 MiB/s wr, 290 op/s
Jan 29 12:24:15 np0005601226 podman[258696]: 2026-01-29 17:24:15.908181814 +0000 UTC m=+0.067478091 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 29 12:24:15 np0005601226 podman[258697]: 2026-01-29 17:24:15.908604045 +0000 UTC m=+0.066331110 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 29 12:24:16 np0005601226 nova_compute[239456]: 2026-01-29 17:24:16.089 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:24:16 np0005601226 nova_compute[239456]: 2026-01-29 17:24:16.104 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:24:16 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:24:16 np0005601226 nova_compute[239456]: 2026-01-29 17:24:16.420 239460 DEBUG oslo_concurrency.lockutils [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Lock "31af49a2e81e0ea44bc56277a5eb1fdb7a2037e8" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.141s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:24:16 np0005601226 nova_compute[239456]: 2026-01-29 17:24:16.514 239460 DEBUG nova.objects.instance [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Lazy-loading 'migration_context' on Instance uuid 0ac4b31b-2f69-4c16-997b-57dc53aa29b2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:24:16 np0005601226 nova_compute[239456]: 2026-01-29 17:24:16.543 239460 DEBUG nova.virt.libvirt.driver [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 29 12:24:16 np0005601226 nova_compute[239456]: 2026-01-29 17:24:16.544 239460 DEBUG nova.virt.libvirt.driver [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] Ensure instance console log exists: /var/lib/nova/instances/0ac4b31b-2f69-4c16-997b-57dc53aa29b2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 29 12:24:16 np0005601226 nova_compute[239456]: 2026-01-29 17:24:16.544 239460 DEBUG oslo_concurrency.lockutils [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:24:16 np0005601226 nova_compute[239456]: 2026-01-29 17:24:16.544 239460 DEBUG oslo_concurrency.lockutils [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:24:16 np0005601226 nova_compute[239456]: 2026-01-29 17:24:16.544 239460 DEBUG oslo_concurrency.lockutils [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:24:16 np0005601226 nova_compute[239456]: 2026-01-29 17:24:16.611 239460 DEBUG nova.network.neutron [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] Successfully created port: 2f63240d-7525-40fb-b23f-9ab98ab1f446 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 29 12:24:17 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1314: 305 pgs: 305 active+clean; 248 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 110 KiB/s rd, 2.6 MiB/s wr, 155 op/s
Jan 29 12:24:17 np0005601226 nova_compute[239456]: 2026-01-29 17:24:17.870 239460 DEBUG nova.network.neutron [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] Successfully updated port: 2f63240d-7525-40fb-b23f-9ab98ab1f446 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 29 12:24:17 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e284 do_prune osdmap full prune enabled
Jan 29 12:24:17 np0005601226 nova_compute[239456]: 2026-01-29 17:24:17.992 239460 DEBUG oslo_concurrency.lockutils [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Acquiring lock "refresh_cache-0ac4b31b-2f69-4c16-997b-57dc53aa29b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:24:17 np0005601226 nova_compute[239456]: 2026-01-29 17:24:17.992 239460 DEBUG oslo_concurrency.lockutils [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Acquired lock "refresh_cache-0ac4b31b-2f69-4c16-997b-57dc53aa29b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:24:17 np0005601226 nova_compute[239456]: 2026-01-29 17:24:17.992 239460 DEBUG nova.network.neutron [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 29 12:24:18 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e285 e285: 3 total, 3 up, 3 in
Jan 29 12:24:18 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e285: 3 total, 3 up, 3 in
Jan 29 12:24:18 np0005601226 nova_compute[239456]: 2026-01-29 17:24:18.076 239460 DEBUG nova.compute.manager [req-3a6d56a7-fcf3-46d3-a696-7f61fb565b1f req-e232b30d-5c88-477f-8399-9676be3e892c 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] Received event network-changed-2f63240d-7525-40fb-b23f-9ab98ab1f446 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:24:18 np0005601226 nova_compute[239456]: 2026-01-29 17:24:18.076 239460 DEBUG nova.compute.manager [req-3a6d56a7-fcf3-46d3-a696-7f61fb565b1f req-e232b30d-5c88-477f-8399-9676be3e892c 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] Refreshing instance network info cache due to event network-changed-2f63240d-7525-40fb-b23f-9ab98ab1f446. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 29 12:24:18 np0005601226 nova_compute[239456]: 2026-01-29 17:24:18.077 239460 DEBUG oslo_concurrency.lockutils [req-3a6d56a7-fcf3-46d3-a696-7f61fb565b1f req-e232b30d-5c88-477f-8399-9676be3e892c 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "refresh_cache-0ac4b31b-2f69-4c16-997b-57dc53aa29b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:24:18 np0005601226 nova_compute[239456]: 2026-01-29 17:24:18.328 239460 DEBUG nova.network.neutron [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 29 12:24:18 np0005601226 nova_compute[239456]: 2026-01-29 17:24:18.987 239460 DEBUG nova.network.neutron [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] Updating instance_info_cache with network_info: [{"id": "2f63240d-7525-40fb-b23f-9ab98ab1f446", "address": "fa:16:3e:a6:25:d7", "network": {"id": "3c884cc1-e1d2-418b-8bb8-bae78dab7018", "bridge": "br-int", "label": "tempest-TestStampPattern-2089746871-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f2a1daea29d845c4b1c58f0e6610e767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f63240d-75", "ovs_interfaceid": "2f63240d-7525-40fb-b23f-9ab98ab1f446", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:24:19 np0005601226 nova_compute[239456]: 2026-01-29 17:24:19.103 239460 DEBUG oslo_concurrency.lockutils [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Releasing lock "refresh_cache-0ac4b31b-2f69-4c16-997b-57dc53aa29b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:24:19 np0005601226 nova_compute[239456]: 2026-01-29 17:24:19.104 239460 DEBUG nova.compute.manager [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] Instance network_info: |[{"id": "2f63240d-7525-40fb-b23f-9ab98ab1f446", "address": "fa:16:3e:a6:25:d7", "network": {"id": "3c884cc1-e1d2-418b-8bb8-bae78dab7018", "bridge": "br-int", "label": "tempest-TestStampPattern-2089746871-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f2a1daea29d845c4b1c58f0e6610e767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f63240d-75", "ovs_interfaceid": "2f63240d-7525-40fb-b23f-9ab98ab1f446", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 29 12:24:19 np0005601226 nova_compute[239456]: 2026-01-29 17:24:19.104 239460 DEBUG oslo_concurrency.lockutils [req-3a6d56a7-fcf3-46d3-a696-7f61fb565b1f req-e232b30d-5c88-477f-8399-9676be3e892c 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquired lock "refresh_cache-0ac4b31b-2f69-4c16-997b-57dc53aa29b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:24:19 np0005601226 nova_compute[239456]: 2026-01-29 17:24:19.105 239460 DEBUG nova.network.neutron [req-3a6d56a7-fcf3-46d3-a696-7f61fb565b1f req-e232b30d-5c88-477f-8399-9676be3e892c 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] Refreshing network info cache for port 2f63240d-7525-40fb-b23f-9ab98ab1f446 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 29 12:24:19 np0005601226 nova_compute[239456]: 2026-01-29 17:24:19.107 239460 DEBUG nova.virt.libvirt.driver [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] Start _get_guest_xml network_info=[{"id": "2f63240d-7525-40fb-b23f-9ab98ab1f446", "address": "fa:16:3e:a6:25:d7", "network": {"id": "3c884cc1-e1d2-418b-8bb8-bae78dab7018", "bridge": "br-int", "label": "tempest-TestStampPattern-2089746871-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f2a1daea29d845c4b1c58f0e6610e767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f63240d-75", "ovs_interfaceid": "2f63240d-7525-40fb-b23f-9ab98ab1f446", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2026-01-29T17:24:05Z,direct_url=<?>,disk_format='raw',id=6c19a175-0f51-4960-b93b-bdb33e6773d5,min_disk=1,min_ram=0,name='tempest-TestStampPatternsnapshot-1540656652',owner='f2a1daea29d845c4b1c58f0e6610e767',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-29T17:24:10Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'boot_index': 0, 'encrypted': False, 'image_id': '6c19a175-0f51-4960-b93b-bdb33e6773d5'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 29 12:24:19 np0005601226 nova_compute[239456]: 2026-01-29 17:24:19.111 239460 WARNING nova.virt.libvirt.driver [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 29 12:24:19 np0005601226 nova_compute[239456]: 2026-01-29 17:24:19.117 239460 DEBUG nova.virt.libvirt.host [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 29 12:24:19 np0005601226 nova_compute[239456]: 2026-01-29 17:24:19.118 239460 DEBUG nova.virt.libvirt.host [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 29 12:24:19 np0005601226 nova_compute[239456]: 2026-01-29 17:24:19.122 239460 DEBUG nova.virt.libvirt.host [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 29 12:24:19 np0005601226 nova_compute[239456]: 2026-01-29 17:24:19.123 239460 DEBUG nova.virt.libvirt.host [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 29 12:24:19 np0005601226 nova_compute[239456]: 2026-01-29 17:24:19.123 239460 DEBUG nova.virt.libvirt.driver [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 29 12:24:19 np0005601226 nova_compute[239456]: 2026-01-29 17:24:19.124 239460 DEBUG nova.virt.hardware [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-29T17:13:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='e367d44e-e23e-4b8e-90d1-56d09c8403b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2026-01-29T17:24:05Z,direct_url=<?>,disk_format='raw',id=6c19a175-0f51-4960-b93b-bdb33e6773d5,min_disk=1,min_ram=0,name='tempest-TestStampPatternsnapshot-1540656652',owner='f2a1daea29d845c4b1c58f0e6610e767',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-29T17:24:10Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 29 12:24:19 np0005601226 nova_compute[239456]: 2026-01-29 17:24:19.124 239460 DEBUG nova.virt.hardware [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 29 12:24:19 np0005601226 nova_compute[239456]: 2026-01-29 17:24:19.124 239460 DEBUG nova.virt.hardware [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 29 12:24:19 np0005601226 nova_compute[239456]: 2026-01-29 17:24:19.125 239460 DEBUG nova.virt.hardware [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 29 12:24:19 np0005601226 nova_compute[239456]: 2026-01-29 17:24:19.125 239460 DEBUG nova.virt.hardware [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 29 12:24:19 np0005601226 nova_compute[239456]: 2026-01-29 17:24:19.125 239460 DEBUG nova.virt.hardware [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 29 12:24:19 np0005601226 nova_compute[239456]: 2026-01-29 17:24:19.125 239460 DEBUG nova.virt.hardware [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 29 12:24:19 np0005601226 nova_compute[239456]: 2026-01-29 17:24:19.126 239460 DEBUG nova.virt.hardware [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 29 12:24:19 np0005601226 nova_compute[239456]: 2026-01-29 17:24:19.126 239460 DEBUG nova.virt.hardware [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 29 12:24:19 np0005601226 nova_compute[239456]: 2026-01-29 17:24:19.126 239460 DEBUG nova.virt.hardware [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 29 12:24:19 np0005601226 nova_compute[239456]: 2026-01-29 17:24:19.126 239460 DEBUG nova.virt.hardware [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 29 12:24:19 np0005601226 nova_compute[239456]: 2026-01-29 17:24:19.129 239460 DEBUG oslo_concurrency.processutils [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:24:19 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:24:19 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/707200189' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:24:19 np0005601226 nova_compute[239456]: 2026-01-29 17:24:19.735 239460 DEBUG oslo_concurrency.processutils [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.607s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:24:19 np0005601226 nova_compute[239456]: 2026-01-29 17:24:19.753 239460 DEBUG nova.storage.rbd_utils [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] rbd image 0ac4b31b-2f69-4c16-997b-57dc53aa29b2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:24:19 np0005601226 nova_compute[239456]: 2026-01-29 17:24:19.756 239460 DEBUG oslo_concurrency.processutils [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:24:19 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1316: 305 pgs: 305 active+clean; 248 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 180 KiB/s rd, 2.7 MiB/s wr, 247 op/s
Jan 29 12:24:20 np0005601226 nova_compute[239456]: 2026-01-29 17:24:20.268 239460 DEBUG nova.network.neutron [req-3a6d56a7-fcf3-46d3-a696-7f61fb565b1f req-e232b30d-5c88-477f-8399-9676be3e892c 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] Updated VIF entry in instance network info cache for port 2f63240d-7525-40fb-b23f-9ab98ab1f446. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 29 12:24:20 np0005601226 nova_compute[239456]: 2026-01-29 17:24:20.269 239460 DEBUG nova.network.neutron [req-3a6d56a7-fcf3-46d3-a696-7f61fb565b1f req-e232b30d-5c88-477f-8399-9676be3e892c 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] Updating instance_info_cache with network_info: [{"id": "2f63240d-7525-40fb-b23f-9ab98ab1f446", "address": "fa:16:3e:a6:25:d7", "network": {"id": "3c884cc1-e1d2-418b-8bb8-bae78dab7018", "bridge": "br-int", "label": "tempest-TestStampPattern-2089746871-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f2a1daea29d845c4b1c58f0e6610e767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f63240d-75", "ovs_interfaceid": "2f63240d-7525-40fb-b23f-9ab98ab1f446", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:24:20 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:24:20 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2745936572' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:24:20 np0005601226 nova_compute[239456]: 2026-01-29 17:24:20.285 239460 DEBUG oslo_concurrency.lockutils [req-3a6d56a7-fcf3-46d3-a696-7f61fb565b1f req-e232b30d-5c88-477f-8399-9676be3e892c 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Releasing lock "refresh_cache-0ac4b31b-2f69-4c16-997b-57dc53aa29b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:24:20 np0005601226 nova_compute[239456]: 2026-01-29 17:24:20.293 239460 DEBUG oslo_concurrency.processutils [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:24:20 np0005601226 nova_compute[239456]: 2026-01-29 17:24:20.295 239460 DEBUG nova.virt.libvirt.vif [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-29T17:24:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-1118559770',display_name='tempest-TestStampPattern-server-1118559770',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-1118559770',id=11,image_ref='6c19a175-0f51-4960-b93b-bdb33e6773d5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCntQFYGg1tN9Lkltvq06uP6PbTSdiUSw2rpV4DVMQfDXGCpCCbqNspsVT5fc2Gf5/3l4zc3WW9mGuuTy6awOxbpJd54hg8vvKJT9WsymmM3odJoG0L/624VsKwRCgcRrg==',key_name='tempest-TestStampPattern-1877159583',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f2a1daea29d845c4b1c58f0e6610e767',ramdisk_id='',reservation_id='r-rwib1oa3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='71879218-5462-43bb-aba6-6319695b24fd',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='54ae1aee-2aec-49fb-981c-904cceb59a9d',image_min_disk='1',image_min_ram='0',image_owner_id='f2a1daea29d845c4b1c58f0e6610e767',image_owner_project_name='tempest-TestStampPattern-907219493',image_owner_user_name='tempest-TestStampPattern-907219493-project-member',image_user_id='66a034221acf4c559a731fcc84a54c53',network_allocated='True',owner_project_name='tempest-TestStampPattern-907219493',owner_user_name='tempest-TestStampPattern-907219493-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-29T17:24:15Z,user_data=None,user_id='66a034221acf4c559a731fcc84a54c53',uuid=0ac4b31b-2f69-4c16-997b-57dc53aa29b2,vcpu_mode
l=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2f63240d-7525-40fb-b23f-9ab98ab1f446", "address": "fa:16:3e:a6:25:d7", "network": {"id": "3c884cc1-e1d2-418b-8bb8-bae78dab7018", "bridge": "br-int", "label": "tempest-TestStampPattern-2089746871-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f2a1daea29d845c4b1c58f0e6610e767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f63240d-75", "ovs_interfaceid": "2f63240d-7525-40fb-b23f-9ab98ab1f446", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 29 12:24:20 np0005601226 nova_compute[239456]: 2026-01-29 17:24:20.295 239460 DEBUG nova.network.os_vif_util [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Converting VIF {"id": "2f63240d-7525-40fb-b23f-9ab98ab1f446", "address": "fa:16:3e:a6:25:d7", "network": {"id": "3c884cc1-e1d2-418b-8bb8-bae78dab7018", "bridge": "br-int", "label": "tempest-TestStampPattern-2089746871-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f2a1daea29d845c4b1c58f0e6610e767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f63240d-75", "ovs_interfaceid": "2f63240d-7525-40fb-b23f-9ab98ab1f446", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 29 12:24:20 np0005601226 nova_compute[239456]: 2026-01-29 17:24:20.296 239460 DEBUG nova.network.os_vif_util [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a6:25:d7,bridge_name='br-int',has_traffic_filtering=True,id=2f63240d-7525-40fb-b23f-9ab98ab1f446,network=Network(3c884cc1-e1d2-418b-8bb8-bae78dab7018),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f63240d-75') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 29 12:24:20 np0005601226 nova_compute[239456]: 2026-01-29 17:24:20.297 239460 DEBUG nova.objects.instance [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0ac4b31b-2f69-4c16-997b-57dc53aa29b2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:24:20 np0005601226 nova_compute[239456]: 2026-01-29 17:24:20.312 239460 DEBUG nova.virt.libvirt.driver [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] End _get_guest_xml xml=<domain type="kvm">
Jan 29 12:24:20 np0005601226 nova_compute[239456]:  <uuid>0ac4b31b-2f69-4c16-997b-57dc53aa29b2</uuid>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:  <name>instance-0000000b</name>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:  <memory>131072</memory>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:  <vcpu>1</vcpu>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:  <metadata>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 29 12:24:20 np0005601226 nova_compute[239456]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:      <nova:name>tempest-TestStampPattern-server-1118559770</nova:name>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:      <nova:creationTime>2026-01-29 17:24:19</nova:creationTime>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:      <nova:flavor name="m1.nano">
Jan 29 12:24:20 np0005601226 nova_compute[239456]:        <nova:memory>128</nova:memory>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:        <nova:disk>1</nova:disk>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:        <nova:swap>0</nova:swap>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:        <nova:ephemeral>0</nova:ephemeral>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:        <nova:vcpus>1</nova:vcpus>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:      </nova:flavor>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:      <nova:owner>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:        <nova:user uuid="66a034221acf4c559a731fcc84a54c53">tempest-TestStampPattern-907219493-project-member</nova:user>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:        <nova:project uuid="f2a1daea29d845c4b1c58f0e6610e767">tempest-TestStampPattern-907219493</nova:project>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:      </nova:owner>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:      <nova:root type="image" uuid="6c19a175-0f51-4960-b93b-bdb33e6773d5"/>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:      <nova:ports>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:        <nova:port uuid="2f63240d-7525-40fb-b23f-9ab98ab1f446">
Jan 29 12:24:20 np0005601226 nova_compute[239456]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:        </nova:port>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:      </nova:ports>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:    </nova:instance>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:  </metadata>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:  <sysinfo type="smbios">
Jan 29 12:24:20 np0005601226 nova_compute[239456]:    <system>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:      <entry name="manufacturer">RDO</entry>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:      <entry name="product">OpenStack Compute</entry>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:      <entry name="serial">0ac4b31b-2f69-4c16-997b-57dc53aa29b2</entry>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:      <entry name="uuid">0ac4b31b-2f69-4c16-997b-57dc53aa29b2</entry>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:      <entry name="family">Virtual Machine</entry>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:    </system>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:  </sysinfo>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:  <os>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:    <boot dev="hd"/>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:    <smbios mode="sysinfo"/>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:  </os>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:  <features>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:    <acpi/>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:    <apic/>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:    <vmcoreinfo/>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:  </features>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:  <clock offset="utc">
Jan 29 12:24:20 np0005601226 nova_compute[239456]:    <timer name="pit" tickpolicy="delay"/>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:    <timer name="hpet" present="no"/>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:  </clock>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:  <cpu mode="host-model" match="exact">
Jan 29 12:24:20 np0005601226 nova_compute[239456]:    <topology sockets="1" cores="1" threads="1"/>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:  </cpu>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:  <devices>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:    <disk type="network" device="disk">
Jan 29 12:24:20 np0005601226 nova_compute[239456]:      <driver type="raw" cache="none"/>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:      <source protocol="rbd" name="vms/0ac4b31b-2f69-4c16-997b-57dc53aa29b2_disk">
Jan 29 12:24:20 np0005601226 nova_compute[239456]:        <host name="192.168.122.100" port="6789"/>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:      </source>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:      <auth username="openstack">
Jan 29 12:24:20 np0005601226 nova_compute[239456]:        <secret type="ceph" uuid="cc5c72e3-31e0-58b9-8731-456117d38f4a"/>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:      </auth>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:      <target dev="vda" bus="virtio"/>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:    </disk>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:    <disk type="network" device="cdrom">
Jan 29 12:24:20 np0005601226 nova_compute[239456]:      <driver type="raw" cache="none"/>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:      <source protocol="rbd" name="vms/0ac4b31b-2f69-4c16-997b-57dc53aa29b2_disk.config">
Jan 29 12:24:20 np0005601226 nova_compute[239456]:        <host name="192.168.122.100" port="6789"/>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:      </source>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:      <auth username="openstack">
Jan 29 12:24:20 np0005601226 nova_compute[239456]:        <secret type="ceph" uuid="cc5c72e3-31e0-58b9-8731-456117d38f4a"/>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:      </auth>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:      <target dev="sda" bus="sata"/>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:    </disk>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:    <interface type="ethernet">
Jan 29 12:24:20 np0005601226 nova_compute[239456]:      <mac address="fa:16:3e:a6:25:d7"/>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:      <model type="virtio"/>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:      <driver name="vhost" rx_queue_size="512"/>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:      <mtu size="1442"/>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:      <target dev="tap2f63240d-75"/>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:    </interface>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:    <serial type="pty">
Jan 29 12:24:20 np0005601226 nova_compute[239456]:      <log file="/var/lib/nova/instances/0ac4b31b-2f69-4c16-997b-57dc53aa29b2/console.log" append="off"/>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:    </serial>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:    <video>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:      <model type="virtio"/>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:    </video>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:    <input type="tablet" bus="usb"/>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:    <input type="keyboard" bus="usb"/>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:    <rng model="virtio">
Jan 29 12:24:20 np0005601226 nova_compute[239456]:      <backend model="random">/dev/urandom</backend>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:    </rng>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root"/>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:    <controller type="usb" index="0"/>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:    <memballoon model="virtio">
Jan 29 12:24:20 np0005601226 nova_compute[239456]:      <stats period="10"/>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:    </memballoon>
Jan 29 12:24:20 np0005601226 nova_compute[239456]:  </devices>
Jan 29 12:24:20 np0005601226 nova_compute[239456]: </domain>
Jan 29 12:24:20 np0005601226 nova_compute[239456]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 29 12:24:20 np0005601226 nova_compute[239456]: 2026-01-29 17:24:20.313 239460 DEBUG nova.compute.manager [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] Preparing to wait for external event network-vif-plugged-2f63240d-7525-40fb-b23f-9ab98ab1f446 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 29 12:24:20 np0005601226 nova_compute[239456]: 2026-01-29 17:24:20.314 239460 DEBUG oslo_concurrency.lockutils [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Acquiring lock "0ac4b31b-2f69-4c16-997b-57dc53aa29b2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:24:20 np0005601226 nova_compute[239456]: 2026-01-29 17:24:20.314 239460 DEBUG oslo_concurrency.lockutils [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Lock "0ac4b31b-2f69-4c16-997b-57dc53aa29b2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:24:20 np0005601226 nova_compute[239456]: 2026-01-29 17:24:20.314 239460 DEBUG oslo_concurrency.lockutils [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Lock "0ac4b31b-2f69-4c16-997b-57dc53aa29b2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:24:20 np0005601226 nova_compute[239456]: 2026-01-29 17:24:20.315 239460 DEBUG nova.virt.libvirt.vif [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-29T17:24:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-1118559770',display_name='tempest-TestStampPattern-server-1118559770',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-1118559770',id=11,image_ref='6c19a175-0f51-4960-b93b-bdb33e6773d5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCntQFYGg1tN9Lkltvq06uP6PbTSdiUSw2rpV4DVMQfDXGCpCCbqNspsVT5fc2Gf5/3l4zc3WW9mGuuTy6awOxbpJd54hg8vvKJT9WsymmM3odJoG0L/624VsKwRCgcRrg==',key_name='tempest-TestStampPattern-1877159583',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f2a1daea29d845c4b1c58f0e6610e767',ramdisk_id='',reservation_id='r-rwib1oa3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='71879218-5462-43bb-aba6-6319695b24fd',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='54ae1aee-2aec-49fb-981c-904cceb59a9d',image_min_disk='1',image_min_ram='0',image_owner_id='f2a1daea29d845c4b1c58f0e6610e767',image_owner_project_name='tempest-TestStampPattern-907219493',image_owner_user_name='tempest-TestStampPattern-907219493-project-member',image_user_id='66a034221acf4c559a731fcc84a54c53',network_allocated='True',owner_project_name='tempest-TestStampPattern-907219493',owner_user_name='tempest-TestStampPattern-907219493-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-29T17:24:15Z,user_data=None,user_id='66a034221acf4c559a731fcc84a54c53',uuid=0ac4b31b-2f69-4c16-997b-57dc53aa29b2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2f63240d-7525-40fb-b23f-9ab98ab1f446", "address": "fa:16:3e:a6:25:d7", "network": {"id": "3c884cc1-e1d2-418b-8bb8-bae78dab7018", "bridge": "br-int", "label": "tempest-TestStampPattern-2089746871-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f2a1daea29d845c4b1c58f0e6610e767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f63240d-75", "ovs_interfaceid": "2f63240d-7525-40fb-b23f-9ab98ab1f446", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 29 12:24:20 np0005601226 nova_compute[239456]: 2026-01-29 17:24:20.315 239460 DEBUG nova.network.os_vif_util [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Converting VIF {"id": "2f63240d-7525-40fb-b23f-9ab98ab1f446", "address": "fa:16:3e:a6:25:d7", "network": {"id": "3c884cc1-e1d2-418b-8bb8-bae78dab7018", "bridge": "br-int", "label": "tempest-TestStampPattern-2089746871-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f2a1daea29d845c4b1c58f0e6610e767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f63240d-75", "ovs_interfaceid": "2f63240d-7525-40fb-b23f-9ab98ab1f446", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 29 12:24:20 np0005601226 nova_compute[239456]: 2026-01-29 17:24:20.316 239460 DEBUG nova.network.os_vif_util [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a6:25:d7,bridge_name='br-int',has_traffic_filtering=True,id=2f63240d-7525-40fb-b23f-9ab98ab1f446,network=Network(3c884cc1-e1d2-418b-8bb8-bae78dab7018),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f63240d-75') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 29 12:24:20 np0005601226 nova_compute[239456]: 2026-01-29 17:24:20.317 239460 DEBUG os_vif [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:25:d7,bridge_name='br-int',has_traffic_filtering=True,id=2f63240d-7525-40fb-b23f-9ab98ab1f446,network=Network(3c884cc1-e1d2-418b-8bb8-bae78dab7018),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f63240d-75') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 29 12:24:20 np0005601226 nova_compute[239456]: 2026-01-29 17:24:20.319 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:24:20 np0005601226 nova_compute[239456]: 2026-01-29 17:24:20.319 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:24:20 np0005601226 nova_compute[239456]: 2026-01-29 17:24:20.320 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 29 12:24:20 np0005601226 nova_compute[239456]: 2026-01-29 17:24:20.322 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:24:20 np0005601226 nova_compute[239456]: 2026-01-29 17:24:20.323 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2f63240d-75, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:24:20 np0005601226 nova_compute[239456]: 2026-01-29 17:24:20.323 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2f63240d-75, col_values=(('external_ids', {'iface-id': '2f63240d-7525-40fb-b23f-9ab98ab1f446', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a6:25:d7', 'vm-uuid': '0ac4b31b-2f69-4c16-997b-57dc53aa29b2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:24:20 np0005601226 nova_compute[239456]: 2026-01-29 17:24:20.325 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:24:20 np0005601226 NetworkManager[49020]: <info>  [1769707460.3259] manager: (tap2f63240d-75): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/67)
Jan 29 12:24:20 np0005601226 nova_compute[239456]: 2026-01-29 17:24:20.327 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 29 12:24:20 np0005601226 nova_compute[239456]: 2026-01-29 17:24:20.331 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:24:20 np0005601226 nova_compute[239456]: 2026-01-29 17:24:20.332 239460 INFO os_vif [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:25:d7,bridge_name='br-int',has_traffic_filtering=True,id=2f63240d-7525-40fb-b23f-9ab98ab1f446,network=Network(3c884cc1-e1d2-418b-8bb8-bae78dab7018),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f63240d-75')#033[00m
Jan 29 12:24:20 np0005601226 nova_compute[239456]: 2026-01-29 17:24:20.465 239460 DEBUG nova.virt.libvirt.driver [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:24:20 np0005601226 nova_compute[239456]: 2026-01-29 17:24:20.466 239460 DEBUG nova.virt.libvirt.driver [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:24:20 np0005601226 nova_compute[239456]: 2026-01-29 17:24:20.466 239460 DEBUG nova.virt.libvirt.driver [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] No VIF found with MAC fa:16:3e:a6:25:d7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 29 12:24:20 np0005601226 nova_compute[239456]: 2026-01-29 17:24:20.467 239460 INFO nova.virt.libvirt.driver [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] Using config drive#033[00m
Jan 29 12:24:20 np0005601226 nova_compute[239456]: 2026-01-29 17:24:20.489 239460 DEBUG nova.storage.rbd_utils [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] rbd image 0ac4b31b-2f69-4c16-997b-57dc53aa29b2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:24:20 np0005601226 nova_compute[239456]: 2026-01-29 17:24:20.951 239460 INFO nova.virt.libvirt.driver [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] Creating config drive at /var/lib/nova/instances/0ac4b31b-2f69-4c16-997b-57dc53aa29b2/disk.config#033[00m
Jan 29 12:24:20 np0005601226 nova_compute[239456]: 2026-01-29 17:24:20.954 239460 DEBUG oslo_concurrency.processutils [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0ac4b31b-2f69-4c16-997b-57dc53aa29b2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmper647bru execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:24:21 np0005601226 nova_compute[239456]: 2026-01-29 17:24:21.073 239460 DEBUG oslo_concurrency.processutils [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0ac4b31b-2f69-4c16-997b-57dc53aa29b2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmper647bru" returned: 0 in 0.119s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:24:21 np0005601226 nova_compute[239456]: 2026-01-29 17:24:21.094 239460 DEBUG nova.storage.rbd_utils [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] rbd image 0ac4b31b-2f69-4c16-997b-57dc53aa29b2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:24:21 np0005601226 nova_compute[239456]: 2026-01-29 17:24:21.097 239460 DEBUG oslo_concurrency.processutils [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0ac4b31b-2f69-4c16-997b-57dc53aa29b2/disk.config 0ac4b31b-2f69-4c16-997b-57dc53aa29b2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:24:21 np0005601226 nova_compute[239456]: 2026-01-29 17:24:21.111 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:24:21 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:24:21 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2234947977' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:24:21 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:24:21 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2234947977' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:24:21 np0005601226 nova_compute[239456]: 2026-01-29 17:24:21.206 239460 DEBUG oslo_concurrency.processutils [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0ac4b31b-2f69-4c16-997b-57dc53aa29b2/disk.config 0ac4b31b-2f69-4c16-997b-57dc53aa29b2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.109s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:24:21 np0005601226 nova_compute[239456]: 2026-01-29 17:24:21.206 239460 INFO nova.virt.libvirt.driver [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] Deleting local config drive /var/lib/nova/instances/0ac4b31b-2f69-4c16-997b-57dc53aa29b2/disk.config because it was imported into RBD.#033[00m
Jan 29 12:24:21 np0005601226 kernel: tap2f63240d-75: entered promiscuous mode
Jan 29 12:24:21 np0005601226 ovn_controller[145556]: 2026-01-29T17:24:21Z|00116|binding|INFO|Claiming lport 2f63240d-7525-40fb-b23f-9ab98ab1f446 for this chassis.
Jan 29 12:24:21 np0005601226 ovn_controller[145556]: 2026-01-29T17:24:21Z|00117|binding|INFO|2f63240d-7525-40fb-b23f-9ab98ab1f446: Claiming fa:16:3e:a6:25:d7 10.100.0.6
Jan 29 12:24:21 np0005601226 nova_compute[239456]: 2026-01-29 17:24:21.238 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:24:21 np0005601226 NetworkManager[49020]: <info>  [1769707461.2387] manager: (tap2f63240d-75): new Tun device (/org/freedesktop/NetworkManager/Devices/68)
Jan 29 12:24:21 np0005601226 ovn_controller[145556]: 2026-01-29T17:24:21Z|00118|binding|INFO|Setting lport 2f63240d-7525-40fb-b23f-9ab98ab1f446 ovn-installed in OVS
Jan 29 12:24:21 np0005601226 ovn_controller[145556]: 2026-01-29T17:24:21Z|00119|binding|INFO|Setting lport 2f63240d-7525-40fb-b23f-9ab98ab1f446 up in Southbound
Jan 29 12:24:21 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:24:21.246 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a6:25:d7 10.100.0.6'], port_security=['fa:16:3e:a6:25:d7 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '0ac4b31b-2f69-4c16-997b-57dc53aa29b2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3c884cc1-e1d2-418b-8bb8-bae78dab7018', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f2a1daea29d845c4b1c58f0e6610e767', 'neutron:revision_number': '2', 'neutron:security_group_ids': '58fc09dd-a146-490e-a131-265322bed80e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=627df87c-0fcf-4d89-b573-9b0d1cecf486, chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], logical_port=2f63240d-7525-40fb-b23f-9ab98ab1f446) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:24:21 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:24:21.247 155625 INFO neutron.agent.ovn.metadata.agent [-] Port 2f63240d-7525-40fb-b23f-9ab98ab1f446 in datapath 3c884cc1-e1d2-418b-8bb8-bae78dab7018 bound to our chassis#033[00m
Jan 29 12:24:21 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:24:21.249 155625 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3c884cc1-e1d2-418b-8bb8-bae78dab7018#033[00m
Jan 29 12:24:21 np0005601226 nova_compute[239456]: 2026-01-29 17:24:21.248 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:24:21 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:24:21.260 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[22e8d623-7857-4e25-9924-5a809787d61f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:24:21 np0005601226 systemd-machined[207561]: New machine qemu-11-instance-0000000b.
Jan 29 12:24:21 np0005601226 systemd-udevd[258932]: Network interface NamePolicy= disabled on kernel command line.
Jan 29 12:24:21 np0005601226 systemd[1]: Started Virtual Machine qemu-11-instance-0000000b.
Jan 29 12:24:21 np0005601226 NetworkManager[49020]: <info>  [1769707461.2743] device (tap2f63240d-75): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 29 12:24:21 np0005601226 NetworkManager[49020]: <info>  [1769707461.2751] device (tap2f63240d-75): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 29 12:24:21 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e285 do_prune osdmap full prune enabled
Jan 29 12:24:21 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:24:21.282 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[37c0db53-ae3b-4d88-b3c8-e42491407a62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:24:21 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:24:21.285 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[1fa7518a-9fbd-4c20-b048-b135ae47323a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:24:21 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:24:21.303 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[e8702c2b-d0d1-4120-859c-b8a5405a5def]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:24:21 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:24:21.315 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[c4f54355-9e3d-4935-9d23-01d81d18f1e9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3c884cc1-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:2e:b9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 39], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 477058, 'reachable_time': 17483, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258944, 'error': None, 'target': 'ovnmeta-3c884cc1-e1d2-418b-8bb8-bae78dab7018', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:24:21 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:24:21.326 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[3fed25cd-26da-46ab-b4ab-515b78192955]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3c884cc1-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 477065, 'tstamp': 477065}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258945, 'error': None, 'target': 'ovnmeta-3c884cc1-e1d2-418b-8bb8-bae78dab7018', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3c884cc1-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 477067, 'tstamp': 477067}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258945, 'error': None, 'target': 'ovnmeta-3c884cc1-e1d2-418b-8bb8-bae78dab7018', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:24:21 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:24:21.328 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c884cc1-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:24:21 np0005601226 nova_compute[239456]: 2026-01-29 17:24:21.329 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:24:21 np0005601226 nova_compute[239456]: 2026-01-29 17:24:21.331 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:24:21 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:24:21.331 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3c884cc1-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:24:21 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:24:21.332 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 29 12:24:21 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:24:21.332 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3c884cc1-e0, col_values=(('external_ids', {'iface-id': '0442c862-051a-4100-a371-ef7e19ea6eba'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:24:21 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:24:21.332 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 29 12:24:21 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e286 e286: 3 total, 3 up, 3 in
Jan 29 12:24:21 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e286: 3 total, 3 up, 3 in
Jan 29 12:24:21 np0005601226 nova_compute[239456]: 2026-01-29 17:24:21.524 239460 DEBUG nova.compute.manager [req-21b94d1c-c01e-4160-a93c-82d24e533a4c req-4a46206f-b791-4ef4-b838-3d302462f6e8 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] Received event network-vif-plugged-2f63240d-7525-40fb-b23f-9ab98ab1f446 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:24:21 np0005601226 nova_compute[239456]: 2026-01-29 17:24:21.525 239460 DEBUG oslo_concurrency.lockutils [req-21b94d1c-c01e-4160-a93c-82d24e533a4c req-4a46206f-b791-4ef4-b838-3d302462f6e8 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "0ac4b31b-2f69-4c16-997b-57dc53aa29b2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:24:21 np0005601226 nova_compute[239456]: 2026-01-29 17:24:21.525 239460 DEBUG oslo_concurrency.lockutils [req-21b94d1c-c01e-4160-a93c-82d24e533a4c req-4a46206f-b791-4ef4-b838-3d302462f6e8 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "0ac4b31b-2f69-4c16-997b-57dc53aa29b2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:24:21 np0005601226 nova_compute[239456]: 2026-01-29 17:24:21.525 239460 DEBUG oslo_concurrency.lockutils [req-21b94d1c-c01e-4160-a93c-82d24e533a4c req-4a46206f-b791-4ef4-b838-3d302462f6e8 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "0ac4b31b-2f69-4c16-997b-57dc53aa29b2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:24:21 np0005601226 nova_compute[239456]: 2026-01-29 17:24:21.525 239460 DEBUG nova.compute.manager [req-21b94d1c-c01e-4160-a93c-82d24e533a4c req-4a46206f-b791-4ef4-b838-3d302462f6e8 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] Processing event network-vif-plugged-2f63240d-7525-40fb-b23f-9ab98ab1f446 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 29 12:24:21 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1318: 305 pgs: 305 active+clean; 248 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 86 KiB/s rd, 6.0 KiB/s wr, 112 op/s
Jan 29 12:24:22 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:24:22 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1257054694' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:24:22 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:24:22 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1257054694' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:24:22 np0005601226 nova_compute[239456]: 2026-01-29 17:24:22.436 239460 DEBUG nova.compute.manager [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 29 12:24:22 np0005601226 nova_compute[239456]: 2026-01-29 17:24:22.436 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769707462.4356866, 0ac4b31b-2f69-4c16-997b-57dc53aa29b2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:24:22 np0005601226 nova_compute[239456]: 2026-01-29 17:24:22.437 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] VM Started (Lifecycle Event)#033[00m
Jan 29 12:24:22 np0005601226 nova_compute[239456]: 2026-01-29 17:24:22.439 239460 DEBUG nova.virt.libvirt.driver [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 29 12:24:22 np0005601226 nova_compute[239456]: 2026-01-29 17:24:22.442 239460 INFO nova.virt.libvirt.driver [-] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] Instance spawned successfully.#033[00m
Jan 29 12:24:22 np0005601226 nova_compute[239456]: 2026-01-29 17:24:22.442 239460 INFO nova.compute.manager [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] Took 7.25 seconds to spawn the instance on the hypervisor.#033[00m
Jan 29 12:24:22 np0005601226 nova_compute[239456]: 2026-01-29 17:24:22.443 239460 DEBUG nova.compute.manager [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:24:22 np0005601226 nova_compute[239456]: 2026-01-29 17:24:22.467 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:24:22 np0005601226 nova_compute[239456]: 2026-01-29 17:24:22.470 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 29 12:24:22 np0005601226 nova_compute[239456]: 2026-01-29 17:24:22.496 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 29 12:24:22 np0005601226 nova_compute[239456]: 2026-01-29 17:24:22.496 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769707462.4394844, 0ac4b31b-2f69-4c16-997b-57dc53aa29b2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:24:22 np0005601226 nova_compute[239456]: 2026-01-29 17:24:22.496 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] VM Paused (Lifecycle Event)#033[00m
Jan 29 12:24:22 np0005601226 nova_compute[239456]: 2026-01-29 17:24:22.506 239460 INFO nova.compute.manager [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] Took 8.86 seconds to build instance.#033[00m
Jan 29 12:24:22 np0005601226 nova_compute[239456]: 2026-01-29 17:24:22.517 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:24:22 np0005601226 nova_compute[239456]: 2026-01-29 17:24:22.520 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769707462.4396122, 0ac4b31b-2f69-4c16-997b-57dc53aa29b2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:24:22 np0005601226 nova_compute[239456]: 2026-01-29 17:24:22.520 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] VM Resumed (Lifecycle Event)#033[00m
Jan 29 12:24:22 np0005601226 nova_compute[239456]: 2026-01-29 17:24:22.523 239460 DEBUG oslo_concurrency.lockutils [None req-51a17caa-6b9f-48f7-96e5-1b4b7ae25366 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Lock "0ac4b31b-2f69-4c16-997b-57dc53aa29b2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.012s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:24:22 np0005601226 nova_compute[239456]: 2026-01-29 17:24:22.539 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:24:22 np0005601226 nova_compute[239456]: 2026-01-29 17:24:22.542 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 29 12:24:22 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:24:22 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1802478362' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:24:22 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:24:22 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1802478362' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:24:23 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:24:23 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/271546584' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:24:23 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:24:23 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/271546584' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:24:23 np0005601226 nova_compute[239456]: 2026-01-29 17:24:23.697 239460 DEBUG nova.compute.manager [req-881971e8-0079-4f39-bad3-1f4ba6b4f29d req-447ce35c-3c3b-4214-a1ad-79c68cea475d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] Received event network-vif-plugged-2f63240d-7525-40fb-b23f-9ab98ab1f446 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:24:23 np0005601226 nova_compute[239456]: 2026-01-29 17:24:23.697 239460 DEBUG oslo_concurrency.lockutils [req-881971e8-0079-4f39-bad3-1f4ba6b4f29d req-447ce35c-3c3b-4214-a1ad-79c68cea475d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "0ac4b31b-2f69-4c16-997b-57dc53aa29b2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:24:23 np0005601226 nova_compute[239456]: 2026-01-29 17:24:23.698 239460 DEBUG oslo_concurrency.lockutils [req-881971e8-0079-4f39-bad3-1f4ba6b4f29d req-447ce35c-3c3b-4214-a1ad-79c68cea475d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "0ac4b31b-2f69-4c16-997b-57dc53aa29b2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:24:23 np0005601226 nova_compute[239456]: 2026-01-29 17:24:23.698 239460 DEBUG oslo_concurrency.lockutils [req-881971e8-0079-4f39-bad3-1f4ba6b4f29d req-447ce35c-3c3b-4214-a1ad-79c68cea475d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "0ac4b31b-2f69-4c16-997b-57dc53aa29b2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:24:23 np0005601226 nova_compute[239456]: 2026-01-29 17:24:23.698 239460 DEBUG nova.compute.manager [req-881971e8-0079-4f39-bad3-1f4ba6b4f29d req-447ce35c-3c3b-4214-a1ad-79c68cea475d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] No waiting events found dispatching network-vif-plugged-2f63240d-7525-40fb-b23f-9ab98ab1f446 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 29 12:24:23 np0005601226 nova_compute[239456]: 2026-01-29 17:24:23.698 239460 WARNING nova.compute.manager [req-881971e8-0079-4f39-bad3-1f4ba6b4f29d req-447ce35c-3c3b-4214-a1ad-79c68cea475d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] Received unexpected event network-vif-plugged-2f63240d-7525-40fb-b23f-9ab98ab1f446 for instance with vm_state active and task_state None.#033[00m
Jan 29 12:24:23 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1319: 305 pgs: 305 active+clean; 248 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 854 KiB/s rd, 5.1 KiB/s wr, 149 op/s
Jan 29 12:24:25 np0005601226 nova_compute[239456]: 2026-01-29 17:24:25.326 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:24:25 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1320: 305 pgs: 305 active+clean; 248 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 26 KiB/s wr, 290 op/s
Jan 29 12:24:26 np0005601226 nova_compute[239456]: 2026-01-29 17:24:26.106 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:24:26 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:24:26 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e286 do_prune osdmap full prune enabled
Jan 29 12:24:26 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e287 e287: 3 total, 3 up, 3 in
Jan 29 12:24:26 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e287: 3 total, 3 up, 3 in
Jan 29 12:24:26 np0005601226 nova_compute[239456]: 2026-01-29 17:24:26.889 239460 DEBUG nova.compute.manager [req-17c0eb6d-055e-4301-b9d7-a6921ef24dfb req-bd309b7c-18e5-4daa-80cf-c841b720a761 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] Received event network-changed-2f63240d-7525-40fb-b23f-9ab98ab1f446 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:24:26 np0005601226 nova_compute[239456]: 2026-01-29 17:24:26.889 239460 DEBUG nova.compute.manager [req-17c0eb6d-055e-4301-b9d7-a6921ef24dfb req-bd309b7c-18e5-4daa-80cf-c841b720a761 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] Refreshing instance network info cache due to event network-changed-2f63240d-7525-40fb-b23f-9ab98ab1f446. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 29 12:24:26 np0005601226 nova_compute[239456]: 2026-01-29 17:24:26.889 239460 DEBUG oslo_concurrency.lockutils [req-17c0eb6d-055e-4301-b9d7-a6921ef24dfb req-bd309b7c-18e5-4daa-80cf-c841b720a761 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "refresh_cache-0ac4b31b-2f69-4c16-997b-57dc53aa29b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:24:26 np0005601226 nova_compute[239456]: 2026-01-29 17:24:26.889 239460 DEBUG oslo_concurrency.lockutils [req-17c0eb6d-055e-4301-b9d7-a6921ef24dfb req-bd309b7c-18e5-4daa-80cf-c841b720a761 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquired lock "refresh_cache-0ac4b31b-2f69-4c16-997b-57dc53aa29b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:24:26 np0005601226 nova_compute[239456]: 2026-01-29 17:24:26.890 239460 DEBUG nova.network.neutron [req-17c0eb6d-055e-4301-b9d7-a6921ef24dfb req-bd309b7c-18e5-4daa-80cf-c841b720a761 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] Refreshing network info cache for port 2f63240d-7525-40fb-b23f-9ab98ab1f446 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 29 12:24:27 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1322: 305 pgs: 305 active+clean; 248 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 22 KiB/s wr, 205 op/s
Jan 29 12:24:28 np0005601226 nova_compute[239456]: 2026-01-29 17:24:28.603 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:24:28 np0005601226 nova_compute[239456]: 2026-01-29 17:24:28.629 239460 DEBUG nova.network.neutron [req-17c0eb6d-055e-4301-b9d7-a6921ef24dfb req-bd309b7c-18e5-4daa-80cf-c841b720a761 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] Updated VIF entry in instance network info cache for port 2f63240d-7525-40fb-b23f-9ab98ab1f446. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 29 12:24:28 np0005601226 nova_compute[239456]: 2026-01-29 17:24:28.630 239460 DEBUG nova.network.neutron [req-17c0eb6d-055e-4301-b9d7-a6921ef24dfb req-bd309b7c-18e5-4daa-80cf-c841b720a761 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] Updating instance_info_cache with network_info: [{"id": "2f63240d-7525-40fb-b23f-9ab98ab1f446", "address": "fa:16:3e:a6:25:d7", "network": {"id": "3c884cc1-e1d2-418b-8bb8-bae78dab7018", "bridge": "br-int", "label": "tempest-TestStampPattern-2089746871-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f2a1daea29d845c4b1c58f0e6610e767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f63240d-75", "ovs_interfaceid": "2f63240d-7525-40fb-b23f-9ab98ab1f446", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:24:29 np0005601226 nova_compute[239456]: 2026-01-29 17:24:29.603 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:24:29 np0005601226 nova_compute[239456]: 2026-01-29 17:24:29.604 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 29 12:24:29 np0005601226 nova_compute[239456]: 2026-01-29 17:24:29.604 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 29 12:24:29 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1323: 305 pgs: 305 active+clean; 248 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 5.3 MiB/s rd, 22 KiB/s wr, 212 op/s
Jan 29 12:24:30 np0005601226 nova_compute[239456]: 2026-01-29 17:24:30.234 239460 DEBUG oslo_concurrency.lockutils [req-17c0eb6d-055e-4301-b9d7-a6921ef24dfb req-bd309b7c-18e5-4daa-80cf-c841b720a761 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Releasing lock "refresh_cache-0ac4b31b-2f69-4c16-997b-57dc53aa29b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:24:30 np0005601226 nova_compute[239456]: 2026-01-29 17:24:30.328 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:24:30 np0005601226 nova_compute[239456]: 2026-01-29 17:24:30.473 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquiring lock "refresh_cache-54ae1aee-2aec-49fb-981c-904cceb59a9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:24:30 np0005601226 nova_compute[239456]: 2026-01-29 17:24:30.473 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquired lock "refresh_cache-54ae1aee-2aec-49fb-981c-904cceb59a9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:24:30 np0005601226 nova_compute[239456]: 2026-01-29 17:24:30.473 239460 DEBUG nova.network.neutron [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 29 12:24:30 np0005601226 nova_compute[239456]: 2026-01-29 17:24:30.474 239460 DEBUG nova.objects.instance [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 54ae1aee-2aec-49fb-981c-904cceb59a9d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:24:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:24:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2688608500' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:24:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:24:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2688608500' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:24:31 np0005601226 nova_compute[239456]: 2026-01-29 17:24:31.108 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:24:31 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e287 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:24:31 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e287 do_prune osdmap full prune enabled
Jan 29 12:24:31 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e288 e288: 3 total, 3 up, 3 in
Jan 29 12:24:31 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e288: 3 total, 3 up, 3 in
Jan 29 12:24:31 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1325: 305 pgs: 305 active+clean; 248 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 4.8 MiB/s rd, 22 KiB/s wr, 157 op/s
Jan 29 12:24:32 np0005601226 nova_compute[239456]: 2026-01-29 17:24:32.618 239460 DEBUG nova.network.neutron [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Updating instance_info_cache with network_info: [{"id": "dd0e38fb-6c55-46b2-944f-3b2cf8f87929", "address": "fa:16:3e:c3:48:d2", "network": {"id": "3c884cc1-e1d2-418b-8bb8-bae78dab7018", "bridge": "br-int", "label": "tempest-TestStampPattern-2089746871-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f2a1daea29d845c4b1c58f0e6610e767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd0e38fb-6c", "ovs_interfaceid": "dd0e38fb-6c55-46b2-944f-3b2cf8f87929", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:24:32 np0005601226 nova_compute[239456]: 2026-01-29 17:24:32.643 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Releasing lock "refresh_cache-54ae1aee-2aec-49fb-981c-904cceb59a9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:24:32 np0005601226 nova_compute[239456]: 2026-01-29 17:24:32.643 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 29 12:24:32 np0005601226 nova_compute[239456]: 2026-01-29 17:24:32.644 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:24:32 np0005601226 nova_compute[239456]: 2026-01-29 17:24:32.644 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:24:32 np0005601226 nova_compute[239456]: 2026-01-29 17:24:32.644 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:24:32 np0005601226 nova_compute[239456]: 2026-01-29 17:24:32.644 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 29 12:24:32 np0005601226 nova_compute[239456]: 2026-01-29 17:24:32.645 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:24:32 np0005601226 nova_compute[239456]: 2026-01-29 17:24:32.669 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:24:32 np0005601226 nova_compute[239456]: 2026-01-29 17:24:32.670 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:24:32 np0005601226 nova_compute[239456]: 2026-01-29 17:24:32.670 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:24:32 np0005601226 nova_compute[239456]: 2026-01-29 17:24:32.670 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 29 12:24:32 np0005601226 nova_compute[239456]: 2026-01-29 17:24:32.671 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:24:33 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:24:33 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3250915922' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:24:33 np0005601226 nova_compute[239456]: 2026-01-29 17:24:33.227 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:24:33 np0005601226 nova_compute[239456]: 2026-01-29 17:24:33.298 239460 DEBUG nova.virt.libvirt.driver [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 29 12:24:33 np0005601226 nova_compute[239456]: 2026-01-29 17:24:33.299 239460 DEBUG nova.virt.libvirt.driver [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 29 12:24:33 np0005601226 nova_compute[239456]: 2026-01-29 17:24:33.301 239460 DEBUG nova.virt.libvirt.driver [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 29 12:24:33 np0005601226 nova_compute[239456]: 2026-01-29 17:24:33.302 239460 DEBUG nova.virt.libvirt.driver [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 29 12:24:33 np0005601226 nova_compute[239456]: 2026-01-29 17:24:33.458 239460 WARNING nova.virt.libvirt.driver [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 29 12:24:33 np0005601226 nova_compute[239456]: 2026-01-29 17:24:33.461 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4134MB free_disk=59.9424331812188GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 29 12:24:33 np0005601226 nova_compute[239456]: 2026-01-29 17:24:33.461 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:24:33 np0005601226 nova_compute[239456]: 2026-01-29 17:24:33.461 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:24:33 np0005601226 nova_compute[239456]: 2026-01-29 17:24:33.525 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Instance 54ae1aee-2aec-49fb-981c-904cceb59a9d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 29 12:24:33 np0005601226 nova_compute[239456]: 2026-01-29 17:24:33.525 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Instance 0ac4b31b-2f69-4c16-997b-57dc53aa29b2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 29 12:24:33 np0005601226 nova_compute[239456]: 2026-01-29 17:24:33.543 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Instance 68efac02-4b20-467c-9485-cc94a679579b has been scheduled to this compute host, the scheduler has made an allocation against this compute node but the instance has yet to start. Skipping heal of allocation: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1692#033[00m
Jan 29 12:24:33 np0005601226 nova_compute[239456]: 2026-01-29 17:24:33.544 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 29 12:24:33 np0005601226 nova_compute[239456]: 2026-01-29 17:24:33.544 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 29 12:24:33 np0005601226 nova_compute[239456]: 2026-01-29 17:24:33.611 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:24:33 np0005601226 nova_compute[239456]: 2026-01-29 17:24:33.632 239460 DEBUG oslo_concurrency.lockutils [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] Acquiring lock "68efac02-4b20-467c-9485-cc94a679579b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:24:33 np0005601226 nova_compute[239456]: 2026-01-29 17:24:33.633 239460 DEBUG oslo_concurrency.lockutils [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] Lock "68efac02-4b20-467c-9485-cc94a679579b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:24:33 np0005601226 nova_compute[239456]: 2026-01-29 17:24:33.649 239460 DEBUG nova.compute.manager [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] [instance: 68efac02-4b20-467c-9485-cc94a679579b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 29 12:24:33 np0005601226 nova_compute[239456]: 2026-01-29 17:24:33.723 239460 DEBUG oslo_concurrency.lockutils [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:24:33 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1326: 305 pgs: 305 active+clean; 264 MiB data, 394 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 732 KiB/s wr, 41 op/s
Jan 29 12:24:34 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:24:34 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3890222578' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:24:34 np0005601226 nova_compute[239456]: 2026-01-29 17:24:34.157 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:24:34 np0005601226 nova_compute[239456]: 2026-01-29 17:24:34.163 239460 DEBUG nova.compute.provider_tree [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:24:34 np0005601226 nova_compute[239456]: 2026-01-29 17:24:34.180 239460 DEBUG nova.scheduler.client.report [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:24:34 np0005601226 nova_compute[239456]: 2026-01-29 17:24:34.210 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 29 12:24:34 np0005601226 nova_compute[239456]: 2026-01-29 17:24:34.210 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.749s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:24:34 np0005601226 nova_compute[239456]: 2026-01-29 17:24:34.211 239460 DEBUG oslo_concurrency.lockutils [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.488s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:24:34 np0005601226 nova_compute[239456]: 2026-01-29 17:24:34.218 239460 DEBUG nova.virt.hardware [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 29 12:24:34 np0005601226 nova_compute[239456]: 2026-01-29 17:24:34.218 239460 INFO nova.compute.claims [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] [instance: 68efac02-4b20-467c-9485-cc94a679579b] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 29 12:24:34 np0005601226 ovn_controller[145556]: 2026-01-29T17:24:34Z|00018|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.12 does not match offer 10.100.0.6
Jan 29 12:24:34 np0005601226 ovn_controller[145556]: 2026-01-29T17:24:34Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:a6:25:d7 10.100.0.6
Jan 29 12:24:34 np0005601226 nova_compute[239456]: 2026-01-29 17:24:34.341 239460 DEBUG oslo_concurrency.processutils [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:24:34 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:24:34 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2234888233' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:24:34 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:24:34 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2234888233' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:24:34 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:24:34 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2695409351' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:24:34 np0005601226 nova_compute[239456]: 2026-01-29 17:24:34.927 239460 DEBUG oslo_concurrency.processutils [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:24:34 np0005601226 nova_compute[239456]: 2026-01-29 17:24:34.932 239460 DEBUG nova.compute.provider_tree [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:24:34 np0005601226 nova_compute[239456]: 2026-01-29 17:24:34.950 239460 DEBUG nova.scheduler.client.report [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:24:34 np0005601226 nova_compute[239456]: 2026-01-29 17:24:34.977 239460 DEBUG oslo_concurrency.lockutils [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.766s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:24:34 np0005601226 nova_compute[239456]: 2026-01-29 17:24:34.978 239460 DEBUG nova.compute.manager [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] [instance: 68efac02-4b20-467c-9485-cc94a679579b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 29 12:24:35 np0005601226 nova_compute[239456]: 2026-01-29 17:24:35.038 239460 DEBUG nova.compute.manager [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] [instance: 68efac02-4b20-467c-9485-cc94a679579b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 29 12:24:35 np0005601226 nova_compute[239456]: 2026-01-29 17:24:35.038 239460 DEBUG nova.network.neutron [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] [instance: 68efac02-4b20-467c-9485-cc94a679579b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 29 12:24:35 np0005601226 nova_compute[239456]: 2026-01-29 17:24:35.068 239460 INFO nova.virt.libvirt.driver [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] [instance: 68efac02-4b20-467c-9485-cc94a679579b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 29 12:24:35 np0005601226 nova_compute[239456]: 2026-01-29 17:24:35.090 239460 DEBUG nova.compute.manager [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] [instance: 68efac02-4b20-467c-9485-cc94a679579b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 29 12:24:35 np0005601226 nova_compute[239456]: 2026-01-29 17:24:35.190 239460 DEBUG nova.compute.manager [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] [instance: 68efac02-4b20-467c-9485-cc94a679579b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 29 12:24:35 np0005601226 nova_compute[239456]: 2026-01-29 17:24:35.191 239460 DEBUG nova.virt.libvirt.driver [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] [instance: 68efac02-4b20-467c-9485-cc94a679579b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 29 12:24:35 np0005601226 nova_compute[239456]: 2026-01-29 17:24:35.191 239460 INFO nova.virt.libvirt.driver [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] [instance: 68efac02-4b20-467c-9485-cc94a679579b] Creating image(s)#033[00m
Jan 29 12:24:35 np0005601226 nova_compute[239456]: 2026-01-29 17:24:35.209 239460 DEBUG nova.storage.rbd_utils [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] rbd image 68efac02-4b20-467c-9485-cc94a679579b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:24:35 np0005601226 nova_compute[239456]: 2026-01-29 17:24:35.231 239460 DEBUG nova.storage.rbd_utils [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] rbd image 68efac02-4b20-467c-9485-cc94a679579b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:24:35 np0005601226 nova_compute[239456]: 2026-01-29 17:24:35.250 239460 DEBUG nova.storage.rbd_utils [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] rbd image 68efac02-4b20-467c-9485-cc94a679579b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:24:35 np0005601226 nova_compute[239456]: 2026-01-29 17:24:35.252 239460 DEBUG oslo_concurrency.processutils [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/619359f3f53a439a222a6a2a89408201f4394e5d --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:24:35 np0005601226 nova_compute[239456]: 2026-01-29 17:24:35.301 239460 DEBUG oslo_concurrency.processutils [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/619359f3f53a439a222a6a2a89408201f4394e5d --force-share --output=json" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:24:35 np0005601226 nova_compute[239456]: 2026-01-29 17:24:35.302 239460 DEBUG oslo_concurrency.lockutils [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] Acquiring lock "619359f3f53a439a222a6a2a89408201f4394e5d" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:24:35 np0005601226 nova_compute[239456]: 2026-01-29 17:24:35.303 239460 DEBUG oslo_concurrency.lockutils [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] Lock "619359f3f53a439a222a6a2a89408201f4394e5d" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:24:35 np0005601226 nova_compute[239456]: 2026-01-29 17:24:35.303 239460 DEBUG oslo_concurrency.lockutils [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] Lock "619359f3f53a439a222a6a2a89408201f4394e5d" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:24:35 np0005601226 nova_compute[239456]: 2026-01-29 17:24:35.323 239460 DEBUG nova.storage.rbd_utils [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] rbd image 68efac02-4b20-467c-9485-cc94a679579b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:24:35 np0005601226 nova_compute[239456]: 2026-01-29 17:24:35.328 239460 DEBUG oslo_concurrency.processutils [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/619359f3f53a439a222a6a2a89408201f4394e5d 68efac02-4b20-467c-9485-cc94a679579b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:24:35 np0005601226 nova_compute[239456]: 2026-01-29 17:24:35.343 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:24:35 np0005601226 nova_compute[239456]: 2026-01-29 17:24:35.654 239460 DEBUG nova.network.neutron [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] [instance: 68efac02-4b20-467c-9485-cc94a679579b] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Jan 29 12:24:35 np0005601226 nova_compute[239456]: 2026-01-29 17:24:35.654 239460 DEBUG nova.compute.manager [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] [instance: 68efac02-4b20-467c-9485-cc94a679579b] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 29 12:24:35 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1327: 305 pgs: 305 active+clean; 308 MiB data, 411 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 2.9 MiB/s wr, 104 op/s
Jan 29 12:24:35 np0005601226 nova_compute[239456]: 2026-01-29 17:24:35.858 239460 DEBUG oslo_concurrency.processutils [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/619359f3f53a439a222a6a2a89408201f4394e5d 68efac02-4b20-467c-9485-cc94a679579b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:24:35 np0005601226 nova_compute[239456]: 2026-01-29 17:24:35.930 239460 DEBUG nova.storage.rbd_utils [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] resizing rbd image 68efac02-4b20-467c-9485-cc94a679579b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 29 12:24:36 np0005601226 nova_compute[239456]: 2026-01-29 17:24:36.005 239460 DEBUG nova.objects.instance [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] Lazy-loading 'migration_context' on Instance uuid 68efac02-4b20-467c-9485-cc94a679579b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:24:36 np0005601226 nova_compute[239456]: 2026-01-29 17:24:36.019 239460 DEBUG nova.virt.libvirt.driver [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] [instance: 68efac02-4b20-467c-9485-cc94a679579b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 29 12:24:36 np0005601226 nova_compute[239456]: 2026-01-29 17:24:36.020 239460 DEBUG nova.virt.libvirt.driver [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] [instance: 68efac02-4b20-467c-9485-cc94a679579b] Ensure instance console log exists: /var/lib/nova/instances/68efac02-4b20-467c-9485-cc94a679579b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 29 12:24:36 np0005601226 nova_compute[239456]: 2026-01-29 17:24:36.021 239460 DEBUG oslo_concurrency.lockutils [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:24:36 np0005601226 nova_compute[239456]: 2026-01-29 17:24:36.021 239460 DEBUG oslo_concurrency.lockutils [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:24:36 np0005601226 nova_compute[239456]: 2026-01-29 17:24:36.021 239460 DEBUG oslo_concurrency.lockutils [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:24:36 np0005601226 nova_compute[239456]: 2026-01-29 17:24:36.022 239460 DEBUG nova.virt.libvirt.driver [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] [instance: 68efac02-4b20-467c-9485-cc94a679579b] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-29T17:13:26Z,direct_url=<?>,disk_format='qcow2',id=71879218-5462-43bb-aba6-6319695b24fd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='218d87653c0f4776a3f1900d36945229',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-29T17:13:29Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'boot_index': 0, 'encrypted': False, 'image_id': '71879218-5462-43bb-aba6-6319695b24fd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 29 12:24:36 np0005601226 nova_compute[239456]: 2026-01-29 17:24:36.027 239460 WARNING nova.virt.libvirt.driver [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 29 12:24:36 np0005601226 nova_compute[239456]: 2026-01-29 17:24:36.031 239460 DEBUG nova.virt.libvirt.host [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 29 12:24:36 np0005601226 nova_compute[239456]: 2026-01-29 17:24:36.031 239460 DEBUG nova.virt.libvirt.host [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 29 12:24:36 np0005601226 nova_compute[239456]: 2026-01-29 17:24:36.035 239460 DEBUG nova.virt.libvirt.host [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 29 12:24:36 np0005601226 nova_compute[239456]: 2026-01-29 17:24:36.037 239460 DEBUG nova.virt.libvirt.host [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 29 12:24:36 np0005601226 nova_compute[239456]: 2026-01-29 17:24:36.037 239460 DEBUG nova.virt.libvirt.driver [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 29 12:24:36 np0005601226 nova_compute[239456]: 2026-01-29 17:24:36.038 239460 DEBUG nova.virt.hardware [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-29T17:13:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='e367d44e-e23e-4b8e-90d1-56d09c8403b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-29T17:13:26Z,direct_url=<?>,disk_format='qcow2',id=71879218-5462-43bb-aba6-6319695b24fd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='218d87653c0f4776a3f1900d36945229',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-29T17:13:29Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 29 12:24:36 np0005601226 nova_compute[239456]: 2026-01-29 17:24:36.038 239460 DEBUG nova.virt.hardware [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 29 12:24:36 np0005601226 nova_compute[239456]: 2026-01-29 17:24:36.039 239460 DEBUG nova.virt.hardware [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 29 12:24:36 np0005601226 nova_compute[239456]: 2026-01-29 17:24:36.039 239460 DEBUG nova.virt.hardware [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 29 12:24:36 np0005601226 nova_compute[239456]: 2026-01-29 17:24:36.039 239460 DEBUG nova.virt.hardware [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 29 12:24:36 np0005601226 nova_compute[239456]: 2026-01-29 17:24:36.039 239460 DEBUG nova.virt.hardware [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 29 12:24:36 np0005601226 nova_compute[239456]: 2026-01-29 17:24:36.040 239460 DEBUG nova.virt.hardware [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 29 12:24:36 np0005601226 nova_compute[239456]: 2026-01-29 17:24:36.040 239460 DEBUG nova.virt.hardware [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 29 12:24:36 np0005601226 nova_compute[239456]: 2026-01-29 17:24:36.040 239460 DEBUG nova.virt.hardware [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 29 12:24:36 np0005601226 nova_compute[239456]: 2026-01-29 17:24:36.040 239460 DEBUG nova.virt.hardware [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 29 12:24:36 np0005601226 nova_compute[239456]: 2026-01-29 17:24:36.041 239460 DEBUG nova.virt.hardware [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 29 12:24:36 np0005601226 nova_compute[239456]: 2026-01-29 17:24:36.043 239460 DEBUG oslo_concurrency.processutils [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:24:36 np0005601226 nova_compute[239456]: 2026-01-29 17:24:36.110 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:24:36 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:24:36 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/48136997' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:24:36 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:24:36 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/48136997' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:24:36 np0005601226 nova_compute[239456]: 2026-01-29 17:24:36.206 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:24:36 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:24:36 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 12:24:36 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 12:24:36 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 12:24:36 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 12:24:36 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 12:24:36 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:24:36 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 12:24:36 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 12:24:36 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 12:24:36 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 12:24:36 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 12:24:36 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 12:24:36 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 12:24:36 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:24:36 np0005601226 nova_compute[239456]: 2026-01-29 17:24:36.603 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:24:36 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:24:36 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2454544828' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:24:36 np0005601226 nova_compute[239456]: 2026-01-29 17:24:36.623 239460 DEBUG oslo_concurrency.processutils [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:24:36 np0005601226 nova_compute[239456]: 2026-01-29 17:24:36.645 239460 DEBUG nova.storage.rbd_utils [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] rbd image 68efac02-4b20-467c-9485-cc94a679579b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:24:36 np0005601226 nova_compute[239456]: 2026-01-29 17:24:36.650 239460 DEBUG oslo_concurrency.processutils [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:24:36 np0005601226 podman[259425]: 2026-01-29 17:24:36.912362377 +0000 UTC m=+0.060938093 container create b8cede0a61e77556f16084eedbc1d6e67e2382dfa629041da3bc525bff104a42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_hopper, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:24:36 np0005601226 systemd[1]: Started libpod-conmon-b8cede0a61e77556f16084eedbc1d6e67e2382dfa629041da3bc525bff104a42.scope.
Jan 29 12:24:36 np0005601226 podman[259425]: 2026-01-29 17:24:36.869795783 +0000 UTC m=+0.018371519 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:24:36 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:24:36 np0005601226 podman[259425]: 2026-01-29 17:24:36.990474586 +0000 UTC m=+0.139050332 container init b8cede0a61e77556f16084eedbc1d6e67e2382dfa629041da3bc525bff104a42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_hopper, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 29 12:24:36 np0005601226 podman[259425]: 2026-01-29 17:24:36.995559084 +0000 UTC m=+0.144134790 container start b8cede0a61e77556f16084eedbc1d6e67e2382dfa629041da3bc525bff104a42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_hopper, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 29 12:24:36 np0005601226 eloquent_hopper[259442]: 167 167
Jan 29 12:24:37 np0005601226 podman[259425]: 2026-01-29 17:24:37.000052476 +0000 UTC m=+0.148628202 container attach b8cede0a61e77556f16084eedbc1d6e67e2382dfa629041da3bc525bff104a42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_hopper, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 29 12:24:37 np0005601226 systemd[1]: libpod-b8cede0a61e77556f16084eedbc1d6e67e2382dfa629041da3bc525bff104a42.scope: Deactivated successfully.
Jan 29 12:24:37 np0005601226 podman[259425]: 2026-01-29 17:24:37.002294947 +0000 UTC m=+0.150870673 container died b8cede0a61e77556f16084eedbc1d6e67e2382dfa629041da3bc525bff104a42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 29 12:24:37 np0005601226 systemd[1]: var-lib-containers-storage-overlay-54654dd22a02607222737121bc2e96444a104e3afd058a31e2c4bb6f2425306e-merged.mount: Deactivated successfully.
Jan 29 12:24:37 np0005601226 podman[259425]: 2026-01-29 17:24:37.058629015 +0000 UTC m=+0.207204731 container remove b8cede0a61e77556f16084eedbc1d6e67e2382dfa629041da3bc525bff104a42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_hopper, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:24:37 np0005601226 systemd[1]: libpod-conmon-b8cede0a61e77556f16084eedbc1d6e67e2382dfa629041da3bc525bff104a42.scope: Deactivated successfully.
Jan 29 12:24:37 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:24:37 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1507912674' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:24:37 np0005601226 nova_compute[239456]: 2026-01-29 17:24:37.181 239460 DEBUG oslo_concurrency.processutils [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:24:37 np0005601226 nova_compute[239456]: 2026-01-29 17:24:37.183 239460 DEBUG nova.objects.instance [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] Lazy-loading 'pci_devices' on Instance uuid 68efac02-4b20-467c-9485-cc94a679579b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:24:37 np0005601226 nova_compute[239456]: 2026-01-29 17:24:37.228 239460 DEBUG nova.virt.libvirt.driver [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] [instance: 68efac02-4b20-467c-9485-cc94a679579b] End _get_guest_xml xml=<domain type="kvm">
Jan 29 12:24:37 np0005601226 nova_compute[239456]:  <uuid>68efac02-4b20-467c-9485-cc94a679579b</uuid>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:  <name>instance-0000000c</name>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:  <memory>131072</memory>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:  <vcpu>1</vcpu>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:  <metadata>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 29 12:24:37 np0005601226 nova_compute[239456]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:      <nova:name>tempest-VolumesNegativeTest-instance-2109895420</nova:name>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:      <nova:creationTime>2026-01-29 17:24:36</nova:creationTime>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:      <nova:flavor name="m1.nano">
Jan 29 12:24:37 np0005601226 nova_compute[239456]:        <nova:memory>128</nova:memory>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:        <nova:disk>1</nova:disk>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:        <nova:swap>0</nova:swap>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:        <nova:ephemeral>0</nova:ephemeral>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:        <nova:vcpus>1</nova:vcpus>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:      </nova:flavor>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:      <nova:owner>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:        <nova:user uuid="24b79b4b96fa4530a8a978473e0160d3">tempest-VolumesNegativeTest-1133945381-project-member</nova:user>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:        <nova:project uuid="abeeb646289346f2add0328ded6d730c">tempest-VolumesNegativeTest-1133945381</nova:project>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:      </nova:owner>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:      <nova:root type="image" uuid="71879218-5462-43bb-aba6-6319695b24fd"/>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:      <nova:ports/>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:    </nova:instance>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:  </metadata>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:  <sysinfo type="smbios">
Jan 29 12:24:37 np0005601226 nova_compute[239456]:    <system>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:      <entry name="manufacturer">RDO</entry>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:      <entry name="product">OpenStack Compute</entry>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:      <entry name="serial">68efac02-4b20-467c-9485-cc94a679579b</entry>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:      <entry name="uuid">68efac02-4b20-467c-9485-cc94a679579b</entry>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:      <entry name="family">Virtual Machine</entry>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:    </system>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:  </sysinfo>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:  <os>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:    <boot dev="hd"/>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:    <smbios mode="sysinfo"/>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:  </os>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:  <features>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:    <acpi/>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:    <apic/>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:    <vmcoreinfo/>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:  </features>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:  <clock offset="utc">
Jan 29 12:24:37 np0005601226 nova_compute[239456]:    <timer name="pit" tickpolicy="delay"/>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:    <timer name="hpet" present="no"/>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:  </clock>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:  <cpu mode="host-model" match="exact">
Jan 29 12:24:37 np0005601226 nova_compute[239456]:    <topology sockets="1" cores="1" threads="1"/>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:  </cpu>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:  <devices>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:    <disk type="network" device="disk">
Jan 29 12:24:37 np0005601226 nova_compute[239456]:      <driver type="raw" cache="none"/>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:      <source protocol="rbd" name="vms/68efac02-4b20-467c-9485-cc94a679579b_disk">
Jan 29 12:24:37 np0005601226 nova_compute[239456]:        <host name="192.168.122.100" port="6789"/>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:      </source>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:      <auth username="openstack">
Jan 29 12:24:37 np0005601226 nova_compute[239456]:        <secret type="ceph" uuid="cc5c72e3-31e0-58b9-8731-456117d38f4a"/>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:      </auth>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:      <target dev="vda" bus="virtio"/>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:    </disk>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:    <disk type="network" device="cdrom">
Jan 29 12:24:37 np0005601226 nova_compute[239456]:      <driver type="raw" cache="none"/>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:      <source protocol="rbd" name="vms/68efac02-4b20-467c-9485-cc94a679579b_disk.config">
Jan 29 12:24:37 np0005601226 nova_compute[239456]:        <host name="192.168.122.100" port="6789"/>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:      </source>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:      <auth username="openstack">
Jan 29 12:24:37 np0005601226 nova_compute[239456]:        <secret type="ceph" uuid="cc5c72e3-31e0-58b9-8731-456117d38f4a"/>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:      </auth>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:      <target dev="sda" bus="sata"/>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:    </disk>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:    <serial type="pty">
Jan 29 12:24:37 np0005601226 nova_compute[239456]:      <log file="/var/lib/nova/instances/68efac02-4b20-467c-9485-cc94a679579b/console.log" append="off"/>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:    </serial>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:    <video>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:      <model type="virtio"/>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:    </video>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:    <input type="tablet" bus="usb"/>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:    <rng model="virtio">
Jan 29 12:24:37 np0005601226 nova_compute[239456]:      <backend model="random">/dev/urandom</backend>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:    </rng>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root"/>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:    <controller type="usb" index="0"/>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:    <memballoon model="virtio">
Jan 29 12:24:37 np0005601226 nova_compute[239456]:      <stats period="10"/>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:    </memballoon>
Jan 29 12:24:37 np0005601226 nova_compute[239456]:  </devices>
Jan 29 12:24:37 np0005601226 nova_compute[239456]: </domain>
Jan 29 12:24:37 np0005601226 nova_compute[239456]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 29 12:24:37 np0005601226 podman[259468]: 2026-01-29 17:24:37.240346563 +0000 UTC m=+0.100197928 container create 8d45c58465eddec657d16d0538a1332a74e2017b5f0ea37e8edd82279cb2a161 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 29 12:24:37 np0005601226 podman[259468]: 2026-01-29 17:24:37.160522569 +0000 UTC m=+0.020373954 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:24:37 np0005601226 systemd[1]: Started libpod-conmon-8d45c58465eddec657d16d0538a1332a74e2017b5f0ea37e8edd82279cb2a161.scope.
Jan 29 12:24:37 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:24:37 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/caabd3692b85d442da470b5a58e95f0bff81069965f421ba3e5f270c50f4bc64/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:24:37 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/caabd3692b85d442da470b5a58e95f0bff81069965f421ba3e5f270c50f4bc64/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:24:37 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/caabd3692b85d442da470b5a58e95f0bff81069965f421ba3e5f270c50f4bc64/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:24:37 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/caabd3692b85d442da470b5a58e95f0bff81069965f421ba3e5f270c50f4bc64/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:24:37 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/caabd3692b85d442da470b5a58e95f0bff81069965f421ba3e5f270c50f4bc64/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 12:24:37 np0005601226 podman[259468]: 2026-01-29 17:24:37.314647849 +0000 UTC m=+0.174499214 container init 8d45c58465eddec657d16d0538a1332a74e2017b5f0ea37e8edd82279cb2a161 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Jan 29 12:24:37 np0005601226 podman[259468]: 2026-01-29 17:24:37.323536029 +0000 UTC m=+0.183387394 container start 8d45c58465eddec657d16d0538a1332a74e2017b5f0ea37e8edd82279cb2a161 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_sanderson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 29 12:24:37 np0005601226 podman[259468]: 2026-01-29 17:24:37.327517597 +0000 UTC m=+0.187368972 container attach 8d45c58465eddec657d16d0538a1332a74e2017b5f0ea37e8edd82279cb2a161 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 29 12:24:37 np0005601226 nova_compute[239456]: 2026-01-29 17:24:37.433 239460 DEBUG nova.virt.libvirt.driver [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 29 12:24:37 np0005601226 nova_compute[239456]: 2026-01-29 17:24:37.434 239460 DEBUG nova.virt.libvirt.driver [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 29 12:24:37 np0005601226 nova_compute[239456]: 2026-01-29 17:24:37.434 239460 INFO nova.virt.libvirt.driver [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] [instance: 68efac02-4b20-467c-9485-cc94a679579b] Using config drive
Jan 29 12:24:37 np0005601226 nova_compute[239456]: 2026-01-29 17:24:37.456 239460 DEBUG nova.storage.rbd_utils [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] rbd image 68efac02-4b20-467c-9485-cc94a679579b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 29 12:24:37 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 12:24:37 np0005601226 nova_compute[239456]: 2026-01-29 17:24:37.682 239460 INFO nova.virt.libvirt.driver [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] [instance: 68efac02-4b20-467c-9485-cc94a679579b] Creating config drive at /var/lib/nova/instances/68efac02-4b20-467c-9485-cc94a679579b/disk.config
Jan 29 12:24:37 np0005601226 nova_compute[239456]: 2026-01-29 17:24:37.686 239460 DEBUG oslo_concurrency.processutils [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/68efac02-4b20-467c-9485-cc94a679579b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpprpwhqcm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 29 12:24:37 np0005601226 awesome_sanderson[259489]: --> passed data devices: 0 physical, 3 LVM
Jan 29 12:24:37 np0005601226 awesome_sanderson[259489]: --> All data devices are unavailable
Jan 29 12:24:37 np0005601226 systemd[1]: libpod-8d45c58465eddec657d16d0538a1332a74e2017b5f0ea37e8edd82279cb2a161.scope: Deactivated successfully.
Jan 29 12:24:37 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1328: 305 pgs: 305 active+clean; 308 MiB data, 411 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.7 MiB/s wr, 98 op/s
Jan 29 12:24:37 np0005601226 podman[259530]: 2026-01-29 17:24:37.776131894 +0000 UTC m=+0.021058842 container died 8d45c58465eddec657d16d0538a1332a74e2017b5f0ea37e8edd82279cb2a161 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_sanderson, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 29 12:24:37 np0005601226 systemd[1]: var-lib-containers-storage-overlay-caabd3692b85d442da470b5a58e95f0bff81069965f421ba3e5f270c50f4bc64-merged.mount: Deactivated successfully.
Jan 29 12:24:37 np0005601226 podman[259530]: 2026-01-29 17:24:37.807437324 +0000 UTC m=+0.052364272 container remove 8d45c58465eddec657d16d0538a1332a74e2017b5f0ea37e8edd82279cb2a161 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 29 12:24:37 np0005601226 nova_compute[239456]: 2026-01-29 17:24:37.808 239460 DEBUG oslo_concurrency.processutils [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/68efac02-4b20-467c-9485-cc94a679579b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpprpwhqcm" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 29 12:24:37 np0005601226 systemd[1]: libpod-conmon-8d45c58465eddec657d16d0538a1332a74e2017b5f0ea37e8edd82279cb2a161.scope: Deactivated successfully.
Jan 29 12:24:37 np0005601226 nova_compute[239456]: 2026-01-29 17:24:37.829 239460 DEBUG nova.storage.rbd_utils [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] rbd image 68efac02-4b20-467c-9485-cc94a679579b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 29 12:24:37 np0005601226 nova_compute[239456]: 2026-01-29 17:24:37.832 239460 DEBUG oslo_concurrency.processutils [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/68efac02-4b20-467c-9485-cc94a679579b/disk.config 68efac02-4b20-467c-9485-cc94a679579b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 29 12:24:37 np0005601226 nova_compute[239456]: 2026-01-29 17:24:37.960 239460 DEBUG oslo_concurrency.processutils [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/68efac02-4b20-467c-9485-cc94a679579b/disk.config 68efac02-4b20-467c-9485-cc94a679579b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 29 12:24:37 np0005601226 nova_compute[239456]: 2026-01-29 17:24:37.961 239460 INFO nova.virt.libvirt.driver [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] [instance: 68efac02-4b20-467c-9485-cc94a679579b] Deleting local config drive /var/lib/nova/instances/68efac02-4b20-467c-9485-cc94a679579b/disk.config because it was imported into RBD.
Jan 29 12:24:38 np0005601226 systemd-machined[207561]: New machine qemu-12-instance-0000000c.
Jan 29 12:24:38 np0005601226 systemd[1]: Started Virtual Machine qemu-12-instance-0000000c.
Jan 29 12:24:38 np0005601226 podman[259659]: 2026-01-29 17:24:38.202024705 +0000 UTC m=+0.038766921 container create 2c16689cf34543bed4a221bf402d5b7fcc09272d04bb9c94b65fb3b5f94237ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_banzai, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:24:38 np0005601226 systemd[1]: Started libpod-conmon-2c16689cf34543bed4a221bf402d5b7fcc09272d04bb9c94b65fb3b5f94237ef.scope.
Jan 29 12:24:38 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:24:38 np0005601226 podman[259659]: 2026-01-29 17:24:38.263345099 +0000 UTC m=+0.100087305 container init 2c16689cf34543bed4a221bf402d5b7fcc09272d04bb9c94b65fb3b5f94237ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030)
Jan 29 12:24:38 np0005601226 podman[259659]: 2026-01-29 17:24:38.269696321 +0000 UTC m=+0.106438537 container start 2c16689cf34543bed4a221bf402d5b7fcc09272d04bb9c94b65fb3b5f94237ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_banzai, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:24:38 np0005601226 recursing_banzai[259712]: 167 167
Jan 29 12:24:38 np0005601226 systemd[1]: libpod-2c16689cf34543bed4a221bf402d5b7fcc09272d04bb9c94b65fb3b5f94237ef.scope: Deactivated successfully.
Jan 29 12:24:38 np0005601226 conmon[259712]: conmon 2c16689cf34543bed4a2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2c16689cf34543bed4a221bf402d5b7fcc09272d04bb9c94b65fb3b5f94237ef.scope/container/memory.events
Jan 29 12:24:38 np0005601226 podman[259659]: 2026-01-29 17:24:38.274565453 +0000 UTC m=+0.111307699 container attach 2c16689cf34543bed4a221bf402d5b7fcc09272d04bb9c94b65fb3b5f94237ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_banzai, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 29 12:24:38 np0005601226 podman[259659]: 2026-01-29 17:24:38.275062077 +0000 UTC m=+0.111804303 container died 2c16689cf34543bed4a221bf402d5b7fcc09272d04bb9c94b65fb3b5f94237ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_banzai, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:24:38 np0005601226 podman[259659]: 2026-01-29 17:24:38.183413701 +0000 UTC m=+0.020155947 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:24:38 np0005601226 systemd[1]: var-lib-containers-storage-overlay-784eeccb9bf8e128cab38b19293b42a145a98b092c248e3a1418b3299f3f3c22-merged.mount: Deactivated successfully.
Jan 29 12:24:38 np0005601226 podman[259659]: 2026-01-29 17:24:38.308357439 +0000 UTC m=+0.145099655 container remove 2c16689cf34543bed4a221bf402d5b7fcc09272d04bb9c94b65fb3b5f94237ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_banzai, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 29 12:24:38 np0005601226 nova_compute[239456]: 2026-01-29 17:24:38.311 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769707478.310724, 68efac02-4b20-467c-9485-cc94a679579b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 29 12:24:38 np0005601226 nova_compute[239456]: 2026-01-29 17:24:38.313 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 68efac02-4b20-467c-9485-cc94a679579b] VM Resumed (Lifecycle Event)
Jan 29 12:24:38 np0005601226 nova_compute[239456]: 2026-01-29 17:24:38.316 239460 DEBUG nova.compute.manager [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] [instance: 68efac02-4b20-467c-9485-cc94a679579b] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 29 12:24:38 np0005601226 nova_compute[239456]: 2026-01-29 17:24:38.316 239460 DEBUG nova.virt.libvirt.driver [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] [instance: 68efac02-4b20-467c-9485-cc94a679579b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 29 12:24:38 np0005601226 systemd[1]: libpod-conmon-2c16689cf34543bed4a221bf402d5b7fcc09272d04bb9c94b65fb3b5f94237ef.scope: Deactivated successfully.
Jan 29 12:24:38 np0005601226 nova_compute[239456]: 2026-01-29 17:24:38.320 239460 INFO nova.virt.libvirt.driver [-] [instance: 68efac02-4b20-467c-9485-cc94a679579b] Instance spawned successfully.
Jan 29 12:24:38 np0005601226 nova_compute[239456]: 2026-01-29 17:24:38.320 239460 DEBUG nova.virt.libvirt.driver [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] [instance: 68efac02-4b20-467c-9485-cc94a679579b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 29 12:24:38 np0005601226 nova_compute[239456]: 2026-01-29 17:24:38.396 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 68efac02-4b20-467c-9485-cc94a679579b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 29 12:24:38 np0005601226 nova_compute[239456]: 2026-01-29 17:24:38.400 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 68efac02-4b20-467c-9485-cc94a679579b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 29 12:24:38 np0005601226 podman[259742]: 2026-01-29 17:24:38.448992314 +0000 UTC m=+0.054664933 container create d7cb1a8dd9bb47359a34064f1f6f7471a88c1f4797c743bf513b7fae7ebfb462 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True)
Jan 29 12:24:38 np0005601226 systemd[1]: Started libpod-conmon-d7cb1a8dd9bb47359a34064f1f6f7471a88c1f4797c743bf513b7fae7ebfb462.scope.
Jan 29 12:24:38 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:24:38 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c4f2acb2b6e3b4060ce0e4efe0c27cce9855a5651ff8cbe23881f800043bfa2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:24:38 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c4f2acb2b6e3b4060ce0e4efe0c27cce9855a5651ff8cbe23881f800043bfa2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:24:38 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c4f2acb2b6e3b4060ce0e4efe0c27cce9855a5651ff8cbe23881f800043bfa2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:24:38 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c4f2acb2b6e3b4060ce0e4efe0c27cce9855a5651ff8cbe23881f800043bfa2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:24:38 np0005601226 podman[259742]: 2026-01-29 17:24:38.515680392 +0000 UTC m=+0.121353051 container init d7cb1a8dd9bb47359a34064f1f6f7471a88c1f4797c743bf513b7fae7ebfb462 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_cerf, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:24:38 np0005601226 podman[259742]: 2026-01-29 17:24:38.522130208 +0000 UTC m=+0.127802837 container start d7cb1a8dd9bb47359a34064f1f6f7471a88c1f4797c743bf513b7fae7ebfb462 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_cerf, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 12:24:38 np0005601226 podman[259742]: 2026-01-29 17:24:38.525874069 +0000 UTC m=+0.131546708 container attach d7cb1a8dd9bb47359a34064f1f6f7471a88c1f4797c743bf513b7fae7ebfb462 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_cerf, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:24:38 np0005601226 podman[259742]: 2026-01-29 17:24:38.434367117 +0000 UTC m=+0.040039766 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:24:38 np0005601226 nova_compute[239456]: 2026-01-29 17:24:38.550 239460 DEBUG nova.virt.libvirt.driver [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] [instance: 68efac02-4b20-467c-9485-cc94a679579b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 29 12:24:38 np0005601226 nova_compute[239456]: 2026-01-29 17:24:38.551 239460 DEBUG nova.virt.libvirt.driver [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] [instance: 68efac02-4b20-467c-9485-cc94a679579b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 29 12:24:38 np0005601226 nova_compute[239456]: 2026-01-29 17:24:38.551 239460 DEBUG nova.virt.libvirt.driver [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] [instance: 68efac02-4b20-467c-9485-cc94a679579b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 29 12:24:38 np0005601226 nova_compute[239456]: 2026-01-29 17:24:38.552 239460 DEBUG nova.virt.libvirt.driver [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] [instance: 68efac02-4b20-467c-9485-cc94a679579b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 29 12:24:38 np0005601226 nova_compute[239456]: 2026-01-29 17:24:38.552 239460 DEBUG nova.virt.libvirt.driver [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] [instance: 68efac02-4b20-467c-9485-cc94a679579b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 29 12:24:38 np0005601226 nova_compute[239456]: 2026-01-29 17:24:38.553 239460 DEBUG nova.virt.libvirt.driver [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] [instance: 68efac02-4b20-467c-9485-cc94a679579b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 29 12:24:38 np0005601226 nova_compute[239456]: 2026-01-29 17:24:38.629 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 68efac02-4b20-467c-9485-cc94a679579b] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 29 12:24:38 np0005601226 nova_compute[239456]: 2026-01-29 17:24:38.630 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769707478.312482, 68efac02-4b20-467c-9485-cc94a679579b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 29 12:24:38 np0005601226 nova_compute[239456]: 2026-01-29 17:24:38.630 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 68efac02-4b20-467c-9485-cc94a679579b] VM Started (Lifecycle Event)
Jan 29 12:24:38 np0005601226 nova_compute[239456]: 2026-01-29 17:24:38.663 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 68efac02-4b20-467c-9485-cc94a679579b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 29 12:24:38 np0005601226 nova_compute[239456]: 2026-01-29 17:24:38.667 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 68efac02-4b20-467c-9485-cc94a679579b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 29 12:24:38 np0005601226 nova_compute[239456]: 2026-01-29 17:24:38.694 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 68efac02-4b20-467c-9485-cc94a679579b] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 29 12:24:38 np0005601226 nova_compute[239456]: 2026-01-29 17:24:38.699 239460 INFO nova.compute.manager [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] [instance: 68efac02-4b20-467c-9485-cc94a679579b] Took 3.51 seconds to spawn the instance on the hypervisor.
Jan 29 12:24:38 np0005601226 nova_compute[239456]: 2026-01-29 17:24:38.700 239460 DEBUG nova.compute.manager [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] [instance: 68efac02-4b20-467c-9485-cc94a679579b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 29 12:24:38 np0005601226 nova_compute[239456]: 2026-01-29 17:24:38.762 239460 INFO nova.compute.manager [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] [instance: 68efac02-4b20-467c-9485-cc94a679579b] Took 5.06 seconds to build instance.
Jan 29 12:24:38 np0005601226 nova_compute[239456]: 2026-01-29 17:24:38.785 239460 DEBUG oslo_concurrency.lockutils [None req-6827fe15-900e-48e1-99f1-524c4c714ec9 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] Lock "68efac02-4b20-467c-9485-cc94a679579b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.152s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]: {
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:    "0": [
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:        {
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:            "devices": [
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:                "/dev/loop3"
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:            ],
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:            "lv_name": "ceph_lv0",
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:            "lv_size": "21470642176",
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:            "lv_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:            "name": "ceph_lv0",
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:            "tags": {
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:                "ceph.block_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:                "ceph.cluster_name": "ceph",
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:                "ceph.crush_device_class": "",
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:                "ceph.encrypted": "0",
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:                "ceph.objectstore": "bluestore",
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:                "ceph.osd_fsid": "0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1",
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:                "ceph.osd_id": "0",
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:                "ceph.type": "block",
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:                "ceph.vdo": "0",
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:                "ceph.with_tpm": "0"
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:            },
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:            "type": "block",
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:            "vg_name": "ceph_vg0"
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:        }
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:    ],
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:    "1": [
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:        {
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:            "devices": [
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:                "/dev/loop4"
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:            ],
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:            "lv_name": "ceph_lv1",
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:            "lv_size": "21470642176",
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b59b9ee3-7bef-4274-a8bf-0f9cce011ae7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:            "lv_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:            "name": "ceph_lv1",
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:            "tags": {
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:                "ceph.block_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:                "ceph.cluster_name": "ceph",
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:                "ceph.crush_device_class": "",
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:                "ceph.encrypted": "0",
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:                "ceph.objectstore": "bluestore",
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:                "ceph.osd_fsid": "b59b9ee3-7bef-4274-a8bf-0f9cce011ae7",
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:                "ceph.osd_id": "1",
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:                "ceph.type": "block",
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:                "ceph.vdo": "0",
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:                "ceph.with_tpm": "0"
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:            },
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:            "type": "block",
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:            "vg_name": "ceph_vg1"
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:        }
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:    ],
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:    "2": [
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:        {
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:            "devices": [
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:                "/dev/loop5"
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:            ],
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:            "lv_name": "ceph_lv2",
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:            "lv_size": "21470642176",
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=791d3808-828d-4f85-a3de-28df49f6a6ef,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:            "lv_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:            "name": "ceph_lv2",
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:            "tags": {
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:                "ceph.block_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:                "ceph.cluster_name": "ceph",
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:                "ceph.crush_device_class": "",
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:                "ceph.encrypted": "0",
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:                "ceph.objectstore": "bluestore",
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:                "ceph.osd_fsid": "791d3808-828d-4f85-a3de-28df49f6a6ef",
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:                "ceph.osd_id": "2",
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:                "ceph.type": "block",
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:                "ceph.vdo": "0",
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:                "ceph.with_tpm": "0"
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:            },
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:            "type": "block",
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:            "vg_name": "ceph_vg2"
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:        }
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]:    ]
Jan 29 12:24:38 np0005601226 quirky_cerf[259758]: }
Jan 29 12:24:38 np0005601226 systemd[1]: libpod-d7cb1a8dd9bb47359a34064f1f6f7471a88c1f4797c743bf513b7fae7ebfb462.scope: Deactivated successfully.
Jan 29 12:24:38 np0005601226 podman[259767]: 2026-01-29 17:24:38.879388297 +0000 UTC m=+0.028647817 container died d7cb1a8dd9bb47359a34064f1f6f7471a88c1f4797c743bf513b7fae7ebfb462 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_cerf, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 12:24:38 np0005601226 systemd[1]: var-lib-containers-storage-overlay-5c4f2acb2b6e3b4060ce0e4efe0c27cce9855a5651ff8cbe23881f800043bfa2-merged.mount: Deactivated successfully.
Jan 29 12:24:38 np0005601226 podman[259767]: 2026-01-29 17:24:38.920485612 +0000 UTC m=+0.069745102 container remove d7cb1a8dd9bb47359a34064f1f6f7471a88c1f4797c743bf513b7fae7ebfb462 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_cerf, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 29 12:24:38 np0005601226 systemd[1]: libpod-conmon-d7cb1a8dd9bb47359a34064f1f6f7471a88c1f4797c743bf513b7fae7ebfb462.scope: Deactivated successfully.
Jan 29 12:24:38 np0005601226 ovn_controller[145556]: 2026-01-29T17:24:38Z|00020|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.12 does not match offer 10.100.0.6
Jan 29 12:24:38 np0005601226 ovn_controller[145556]: 2026-01-29T17:24:38Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:a6:25:d7 10.100.0.6
Jan 29 12:24:39 np0005601226 ovn_controller[145556]: 2026-01-29T17:24:39Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a6:25:d7 10.100.0.6
Jan 29 12:24:39 np0005601226 ovn_controller[145556]: 2026-01-29T17:24:39Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a6:25:d7 10.100.0.6
Jan 29 12:24:39 np0005601226 podman[259844]: 2026-01-29 17:24:39.342807656 +0000 UTC m=+0.039043340 container create 52e8d4aca0cde16d82cb00eb5fdf99080390e7fb2980dafc477ac858e7bdffdb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 29 12:24:39 np0005601226 systemd[1]: Started libpod-conmon-52e8d4aca0cde16d82cb00eb5fdf99080390e7fb2980dafc477ac858e7bdffdb.scope.
Jan 29 12:24:39 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:24:39 np0005601226 podman[259844]: 2026-01-29 17:24:39.325922828 +0000 UTC m=+0.022158522 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:24:39 np0005601226 podman[259844]: 2026-01-29 17:24:39.426045124 +0000 UTC m=+0.122280828 container init 52e8d4aca0cde16d82cb00eb5fdf99080390e7fb2980dafc477ac858e7bdffdb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:24:39 np0005601226 podman[259844]: 2026-01-29 17:24:39.432093498 +0000 UTC m=+0.128329222 container start 52e8d4aca0cde16d82cb00eb5fdf99080390e7fb2980dafc477ac858e7bdffdb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 29 12:24:39 np0005601226 keen_solomon[259860]: 167 167
Jan 29 12:24:39 np0005601226 systemd[1]: libpod-52e8d4aca0cde16d82cb00eb5fdf99080390e7fb2980dafc477ac858e7bdffdb.scope: Deactivated successfully.
Jan 29 12:24:39 np0005601226 conmon[259860]: conmon 52e8d4aca0cde16d82cb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-52e8d4aca0cde16d82cb00eb5fdf99080390e7fb2980dafc477ac858e7bdffdb.scope/container/memory.events
Jan 29 12:24:39 np0005601226 podman[259844]: 2026-01-29 17:24:39.437178466 +0000 UTC m=+0.133414190 container attach 52e8d4aca0cde16d82cb00eb5fdf99080390e7fb2980dafc477ac858e7bdffdb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 29 12:24:39 np0005601226 podman[259844]: 2026-01-29 17:24:39.437635128 +0000 UTC m=+0.133870842 container died 52e8d4aca0cde16d82cb00eb5fdf99080390e7fb2980dafc477ac858e7bdffdb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 29 12:24:39 np0005601226 systemd[1]: var-lib-containers-storage-overlay-0a2b90a3656d7cec36cc883d91c065fe8a60702fcff28338bdd34752a0d669ac-merged.mount: Deactivated successfully.
Jan 29 12:24:39 np0005601226 podman[259844]: 2026-01-29 17:24:39.478923478 +0000 UTC m=+0.175159162 container remove 52e8d4aca0cde16d82cb00eb5fdf99080390e7fb2980dafc477ac858e7bdffdb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_solomon, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 29 12:24:39 np0005601226 systemd[1]: libpod-conmon-52e8d4aca0cde16d82cb00eb5fdf99080390e7fb2980dafc477ac858e7bdffdb.scope: Deactivated successfully.
Jan 29 12:24:39 np0005601226 podman[259884]: 2026-01-29 17:24:39.611312009 +0000 UTC m=+0.047864509 container create 09fdcafd93a039c9a2af2868e13976735e29b2cdb1219fbc6fb540861302bde1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_nightingale, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 12:24:39 np0005601226 systemd[1]: Started libpod-conmon-09fdcafd93a039c9a2af2868e13976735e29b2cdb1219fbc6fb540861302bde1.scope.
Jan 29 12:24:39 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:24:39 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55e760510ad632bdea290697f51f4881ef4832a56ae1bbd27f063381f7ebf796/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:24:39 np0005601226 podman[259884]: 2026-01-29 17:24:39.595607413 +0000 UTC m=+0.032159933 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:24:39 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55e760510ad632bdea290697f51f4881ef4832a56ae1bbd27f063381f7ebf796/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:24:39 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55e760510ad632bdea290697f51f4881ef4832a56ae1bbd27f063381f7ebf796/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:24:39 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55e760510ad632bdea290697f51f4881ef4832a56ae1bbd27f063381f7ebf796/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:24:39 np0005601226 podman[259884]: 2026-01-29 17:24:39.696316624 +0000 UTC m=+0.132869154 container init 09fdcafd93a039c9a2af2868e13976735e29b2cdb1219fbc6fb540861302bde1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:24:39 np0005601226 podman[259884]: 2026-01-29 17:24:39.702141932 +0000 UTC m=+0.138694432 container start 09fdcafd93a039c9a2af2868e13976735e29b2cdb1219fbc6fb540861302bde1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_nightingale, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:24:39 np0005601226 podman[259884]: 2026-01-29 17:24:39.706256564 +0000 UTC m=+0.142809084 container attach 09fdcafd93a039c9a2af2868e13976735e29b2cdb1219fbc6fb540861302bde1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 29 12:24:39 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1329: 305 pgs: 305 active+clean; 309 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 4.9 MiB/s wr, 190 op/s
Jan 29 12:24:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:24:40.286 155625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 12:24:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:24:40.287 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 12:24:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:24:40.287 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 12:24:40 np0005601226 lvm[259975]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 12:24:40 np0005601226 lvm[259975]: VG ceph_vg0 finished
Jan 29 12:24:40 np0005601226 lvm[259978]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 12:24:40 np0005601226 lvm[259978]: VG ceph_vg1 finished
Jan 29 12:24:40 np0005601226 nova_compute[239456]: 2026-01-29 17:24:40.346 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:24:40 np0005601226 lvm[259979]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 12:24:40 np0005601226 lvm[259979]: VG ceph_vg2 finished
Jan 29 12:24:40 np0005601226 optimistic_nightingale[259900]: {}
Jan 29 12:24:40 np0005601226 systemd[1]: libpod-09fdcafd93a039c9a2af2868e13976735e29b2cdb1219fbc6fb540861302bde1.scope: Deactivated successfully.
Jan 29 12:24:40 np0005601226 systemd[1]: libpod-09fdcafd93a039c9a2af2868e13976735e29b2cdb1219fbc6fb540861302bde1.scope: Consumed 1.111s CPU time.
Jan 29 12:24:40 np0005601226 conmon[259900]: conmon 09fdcafd93a039c9a2af <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-09fdcafd93a039c9a2af2868e13976735e29b2cdb1219fbc6fb540861302bde1.scope/container/memory.events
Jan 29 12:24:40 np0005601226 nova_compute[239456]: 2026-01-29 17:24:40.535 239460 DEBUG oslo_concurrency.lockutils [None req-6dd86bd4-e23c-45bf-9bd4-9fadf6776862 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] Acquiring lock "68efac02-4b20-467c-9485-cc94a679579b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 12:24:40 np0005601226 nova_compute[239456]: 2026-01-29 17:24:40.535 239460 DEBUG oslo_concurrency.lockutils [None req-6dd86bd4-e23c-45bf-9bd4-9fadf6776862 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] Lock "68efac02-4b20-467c-9485-cc94a679579b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 12:24:40 np0005601226 nova_compute[239456]: 2026-01-29 17:24:40.535 239460 DEBUG oslo_concurrency.lockutils [None req-6dd86bd4-e23c-45bf-9bd4-9fadf6776862 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] Acquiring lock "68efac02-4b20-467c-9485-cc94a679579b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 12:24:40 np0005601226 nova_compute[239456]: 2026-01-29 17:24:40.536 239460 DEBUG oslo_concurrency.lockutils [None req-6dd86bd4-e23c-45bf-9bd4-9fadf6776862 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] Lock "68efac02-4b20-467c-9485-cc94a679579b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 12:24:40 np0005601226 nova_compute[239456]: 2026-01-29 17:24:40.536 239460 DEBUG oslo_concurrency.lockutils [None req-6dd86bd4-e23c-45bf-9bd4-9fadf6776862 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] Lock "68efac02-4b20-467c-9485-cc94a679579b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 12:24:40 np0005601226 nova_compute[239456]: 2026-01-29 17:24:40.537 239460 INFO nova.compute.manager [None req-6dd86bd4-e23c-45bf-9bd4-9fadf6776862 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] [instance: 68efac02-4b20-467c-9485-cc94a679579b] Terminating instance
Jan 29 12:24:40 np0005601226 nova_compute[239456]: 2026-01-29 17:24:40.538 239460 DEBUG oslo_concurrency.lockutils [None req-6dd86bd4-e23c-45bf-9bd4-9fadf6776862 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] Acquiring lock "refresh_cache-68efac02-4b20-467c-9485-cc94a679579b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 29 12:24:40 np0005601226 nova_compute[239456]: 2026-01-29 17:24:40.538 239460 DEBUG oslo_concurrency.lockutils [None req-6dd86bd4-e23c-45bf-9bd4-9fadf6776862 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] Acquired lock "refresh_cache-68efac02-4b20-467c-9485-cc94a679579b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 29 12:24:40 np0005601226 nova_compute[239456]: 2026-01-29 17:24:40.538 239460 DEBUG nova.network.neutron [None req-6dd86bd4-e23c-45bf-9bd4-9fadf6776862 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] [instance: 68efac02-4b20-467c-9485-cc94a679579b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 29 12:24:40 np0005601226 podman[259982]: 2026-01-29 17:24:40.543481671 +0000 UTC m=+0.024837075 container died 09fdcafd93a039c9a2af2868e13976735e29b2cdb1219fbc6fb540861302bde1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_nightingale, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:24:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:24:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:24:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:24:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:24:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:24:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:24:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2026-01-29_17:24:40
Jan 29 12:24:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 12:24:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] do_upmap
Jan 29 12:24:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', 'volumes', 'images', 'cephfs.cephfs.data', 'backups', '.mgr', 'vms', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.meta']
Jan 29 12:24:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 12:24:40 np0005601226 systemd[1]: var-lib-containers-storage-overlay-55e760510ad632bdea290697f51f4881ef4832a56ae1bbd27f063381f7ebf796-merged.mount: Deactivated successfully.
Jan 29 12:24:40 np0005601226 podman[259982]: 2026-01-29 17:24:40.6065113 +0000 UTC m=+0.087866684 container remove 09fdcafd93a039c9a2af2868e13976735e29b2cdb1219fbc6fb540861302bde1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_nightingale, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 29 12:24:40 np0005601226 systemd[1]: libpod-conmon-09fdcafd93a039c9a2af2868e13976735e29b2cdb1219fbc6fb540861302bde1.scope: Deactivated successfully.
Jan 29 12:24:40 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 12:24:40 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:24:40 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 12:24:40 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:24:40 np0005601226 nova_compute[239456]: 2026-01-29 17:24:40.729 239460 DEBUG nova.network.neutron [None req-6dd86bd4-e23c-45bf-9bd4-9fadf6776862 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] [instance: 68efac02-4b20-467c-9485-cc94a679579b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 29 12:24:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 12:24:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:24:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 12:24:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:24:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:24:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:24:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:24:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:24:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:24:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:24:41 np0005601226 nova_compute[239456]: 2026-01-29 17:24:41.086 239460 DEBUG nova.network.neutron [None req-6dd86bd4-e23c-45bf-9bd4-9fadf6776862 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] [instance: 68efac02-4b20-467c-9485-cc94a679579b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:24:41 np0005601226 nova_compute[239456]: 2026-01-29 17:24:41.103 239460 DEBUG oslo_concurrency.lockutils [None req-6dd86bd4-e23c-45bf-9bd4-9fadf6776862 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] Releasing lock "refresh_cache-68efac02-4b20-467c-9485-cc94a679579b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:24:41 np0005601226 nova_compute[239456]: 2026-01-29 17:24:41.103 239460 DEBUG nova.compute.manager [None req-6dd86bd4-e23c-45bf-9bd4-9fadf6776862 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] [instance: 68efac02-4b20-467c-9485-cc94a679579b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 29 12:24:41 np0005601226 nova_compute[239456]: 2026-01-29 17:24:41.112 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:24:41 np0005601226 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Jan 29 12:24:41 np0005601226 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000c.scope: Consumed 3.109s CPU time.
Jan 29 12:24:41 np0005601226 systemd-machined[207561]: Machine qemu-12-instance-0000000c terminated.
Jan 29 12:24:41 np0005601226 nova_compute[239456]: 2026-01-29 17:24:41.321 239460 INFO nova.virt.libvirt.driver [-] [instance: 68efac02-4b20-467c-9485-cc94a679579b] Instance destroyed successfully.#033[00m
Jan 29 12:24:41 np0005601226 nova_compute[239456]: 2026-01-29 17:24:41.321 239460 DEBUG nova.objects.instance [None req-6dd86bd4-e23c-45bf-9bd4-9fadf6776862 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] Lazy-loading 'resources' on Instance uuid 68efac02-4b20-467c-9485-cc94a679579b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:24:41 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:24:41 np0005601226 nova_compute[239456]: 2026-01-29 17:24:41.593 239460 INFO nova.virt.libvirt.driver [None req-6dd86bd4-e23c-45bf-9bd4-9fadf6776862 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] [instance: 68efac02-4b20-467c-9485-cc94a679579b] Deleting instance files /var/lib/nova/instances/68efac02-4b20-467c-9485-cc94a679579b_del#033[00m
Jan 29 12:24:41 np0005601226 nova_compute[239456]: 2026-01-29 17:24:41.594 239460 INFO nova.virt.libvirt.driver [None req-6dd86bd4-e23c-45bf-9bd4-9fadf6776862 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] [instance: 68efac02-4b20-467c-9485-cc94a679579b] Deletion of /var/lib/nova/instances/68efac02-4b20-467c-9485-cc94a679579b_del complete#033[00m
Jan 29 12:24:41 np0005601226 nova_compute[239456]: 2026-01-29 17:24:41.645 239460 INFO nova.compute.manager [None req-6dd86bd4-e23c-45bf-9bd4-9fadf6776862 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] [instance: 68efac02-4b20-467c-9485-cc94a679579b] Took 0.54 seconds to destroy the instance on the hypervisor.#033[00m
Jan 29 12:24:41 np0005601226 nova_compute[239456]: 2026-01-29 17:24:41.646 239460 DEBUG oslo.service.loopingcall [None req-6dd86bd4-e23c-45bf-9bd4-9fadf6776862 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 29 12:24:41 np0005601226 nova_compute[239456]: 2026-01-29 17:24:41.646 239460 DEBUG nova.compute.manager [-] [instance: 68efac02-4b20-467c-9485-cc94a679579b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 29 12:24:41 np0005601226 nova_compute[239456]: 2026-01-29 17:24:41.646 239460 DEBUG nova.network.neutron [-] [instance: 68efac02-4b20-467c-9485-cc94a679579b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 29 12:24:41 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:24:41 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:24:41 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1330: 305 pgs: 305 active+clean; 309 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 4.7 MiB/s wr, 183 op/s
Jan 29 12:24:42 np0005601226 nova_compute[239456]: 2026-01-29 17:24:42.497 239460 DEBUG nova.network.neutron [-] [instance: 68efac02-4b20-467c-9485-cc94a679579b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 29 12:24:42 np0005601226 nova_compute[239456]: 2026-01-29 17:24:42.509 239460 DEBUG nova.network.neutron [-] [instance: 68efac02-4b20-467c-9485-cc94a679579b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:24:42 np0005601226 nova_compute[239456]: 2026-01-29 17:24:42.524 239460 INFO nova.compute.manager [-] [instance: 68efac02-4b20-467c-9485-cc94a679579b] Took 0.88 seconds to deallocate network for instance.#033[00m
Jan 29 12:24:42 np0005601226 nova_compute[239456]: 2026-01-29 17:24:42.573 239460 DEBUG oslo_concurrency.lockutils [None req-6dd86bd4-e23c-45bf-9bd4-9fadf6776862 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:24:42 np0005601226 nova_compute[239456]: 2026-01-29 17:24:42.574 239460 DEBUG oslo_concurrency.lockutils [None req-6dd86bd4-e23c-45bf-9bd4-9fadf6776862 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:24:42 np0005601226 nova_compute[239456]: 2026-01-29 17:24:42.657 239460 DEBUG oslo_concurrency.processutils [None req-6dd86bd4-e23c-45bf-9bd4-9fadf6776862 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:24:43 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:24:43 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2398675785' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:24:43 np0005601226 nova_compute[239456]: 2026-01-29 17:24:43.187 239460 DEBUG oslo_concurrency.processutils [None req-6dd86bd4-e23c-45bf-9bd4-9fadf6776862 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:24:43 np0005601226 nova_compute[239456]: 2026-01-29 17:24:43.194 239460 DEBUG nova.compute.provider_tree [None req-6dd86bd4-e23c-45bf-9bd4-9fadf6776862 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:24:43 np0005601226 nova_compute[239456]: 2026-01-29 17:24:43.210 239460 DEBUG nova.scheduler.client.report [None req-6dd86bd4-e23c-45bf-9bd4-9fadf6776862 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:24:43 np0005601226 nova_compute[239456]: 2026-01-29 17:24:43.232 239460 DEBUG oslo_concurrency.lockutils [None req-6dd86bd4-e23c-45bf-9bd4-9fadf6776862 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.658s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:24:43 np0005601226 nova_compute[239456]: 2026-01-29 17:24:43.263 239460 INFO nova.scheduler.client.report [None req-6dd86bd4-e23c-45bf-9bd4-9fadf6776862 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] Deleted allocations for instance 68efac02-4b20-467c-9485-cc94a679579b#033[00m
Jan 29 12:24:43 np0005601226 nova_compute[239456]: 2026-01-29 17:24:43.318 239460 DEBUG oslo_concurrency.lockutils [None req-6dd86bd4-e23c-45bf-9bd4-9fadf6776862 24b79b4b96fa4530a8a978473e0160d3 abeeb646289346f2add0328ded6d730c - - default default] Lock "68efac02-4b20-467c-9485-cc94a679579b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.782s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:24:43 np0005601226 nova_compute[239456]: 2026-01-29 17:24:43.604 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:24:43 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1331: 305 pgs: 305 active+clean; 298 MiB data, 409 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 4.0 MiB/s wr, 183 op/s
Jan 29 12:24:44 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e288 do_prune osdmap full prune enabled
Jan 29 12:24:44 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e289 e289: 3 total, 3 up, 3 in
Jan 29 12:24:44 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e289: 3 total, 3 up, 3 in
Jan 29 12:24:45 np0005601226 nova_compute[239456]: 2026-01-29 17:24:45.348 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:24:45 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1333: 305 pgs: 305 active+clean; 266 MiB data, 394 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.2 MiB/s wr, 217 op/s
Jan 29 12:24:46 np0005601226 nova_compute[239456]: 2026-01-29 17:24:46.115 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:24:46 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:24:46 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e289 do_prune osdmap full prune enabled
Jan 29 12:24:46 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e290 e290: 3 total, 3 up, 3 in
Jan 29 12:24:46 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e290: 3 total, 3 up, 3 in
Jan 29 12:24:46 np0005601226 podman[260065]: 2026-01-29 17:24:46.904888807 +0000 UTC m=+0.071521951 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:24:46 np0005601226 podman[260064]: 2026-01-29 17:24:46.904902988 +0000 UTC m=+0.071410108 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 29 12:24:47 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1335: 305 pgs: 305 active+clean; 266 MiB data, 394 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 71 KiB/s wr, 139 op/s
Jan 29 12:24:47 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e290 do_prune osdmap full prune enabled
Jan 29 12:24:47 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e291 e291: 3 total, 3 up, 3 in
Jan 29 12:24:47 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e291: 3 total, 3 up, 3 in
Jan 29 12:24:49 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1337: 305 pgs: 305 active+clean; 266 MiB data, 394 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 111 KiB/s wr, 223 op/s
Jan 29 12:24:49 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e291 do_prune osdmap full prune enabled
Jan 29 12:24:49 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e292 e292: 3 total, 3 up, 3 in
Jan 29 12:24:49 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e292: 3 total, 3 up, 3 in
Jan 29 12:24:50 np0005601226 nova_compute[239456]: 2026-01-29 17:24:50.351 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:24:51 np0005601226 nova_compute[239456]: 2026-01-29 17:24:51.117 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:24:51 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:24:51 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e292 do_prune osdmap full prune enabled
Jan 29 12:24:51 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e293 e293: 3 total, 3 up, 3 in
Jan 29 12:24:51 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e293: 3 total, 3 up, 3 in
Jan 29 12:24:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 12:24:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:24:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 12:24:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:24:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008663996406898467 of space, bias 1.0, pg target 0.259919892206954 quantized to 32 (current 32)
Jan 29 12:24:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:24:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00038920143133899836 of space, bias 1.0, pg target 0.1167604294016995 quantized to 32 (current 32)
Jan 29 12:24:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:24:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 9.134333216135659e-07 of space, bias 1.0, pg target 0.00027402999648406975 quantized to 32 (current 32)
Jan 29 12:24:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:24:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0014249631077266572 of space, bias 1.0, pg target 0.4274889323179972 quantized to 32 (current 32)
Jan 29 12:24:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:24:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4354639735827031e-06 of space, bias 4.0, pg target 0.0017225567682992438 quantized to 16 (current 16)
Jan 29 12:24:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:24:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:24:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:24:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 29 12:24:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:24:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 29 12:24:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:24:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:24:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:24:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 29 12:24:51 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1340: 305 pgs: 305 active+clean; 266 MiB data, 394 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 20 KiB/s wr, 102 op/s
Jan 29 12:24:53 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1341: 305 pgs: 305 active+clean; 266 MiB data, 394 MiB used, 60 GiB / 60 GiB avail; 66 KiB/s rd, 18 KiB/s wr, 101 op/s
Jan 29 12:24:53 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:24:53 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/54756184' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:24:53 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:24:53 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/54756184' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:24:53 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:24:53 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1369831186' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:24:53 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:24:53 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1369831186' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:24:55 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:24:55 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2572080216' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:24:55 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:24:55 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2572080216' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:24:55 np0005601226 nova_compute[239456]: 2026-01-29 17:24:55.353 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:24:55 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1342: 305 pgs: 305 active+clean; 266 MiB data, 394 MiB used, 60 GiB / 60 GiB avail; 97 KiB/s rd, 17 KiB/s wr, 136 op/s
Jan 29 12:24:56 np0005601226 nova_compute[239456]: 2026-01-29 17:24:56.118 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:24:56 np0005601226 ovn_controller[145556]: 2026-01-29T17:24:56Z|00120|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Jan 29 12:24:56 np0005601226 nova_compute[239456]: 2026-01-29 17:24:56.320 239460 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769707481.319285, 68efac02-4b20-467c-9485-cc94a679579b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:24:56 np0005601226 nova_compute[239456]: 2026-01-29 17:24:56.321 239460 INFO nova.compute.manager [-] [instance: 68efac02-4b20-467c-9485-cc94a679579b] VM Stopped (Lifecycle Event)#033[00m
Jan 29 12:24:56 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:24:56 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1100968279' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:24:56 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:24:56 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1100968279' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:24:56 np0005601226 nova_compute[239456]: 2026-01-29 17:24:56.340 239460 DEBUG nova.compute.manager [None req-f2c2c2ce-61d5-4f83-bf46-a7c0e5ee3c1a - - - - - -] [instance: 68efac02-4b20-467c-9485-cc94a679579b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:24:56 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:24:56 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e293 do_prune osdmap full prune enabled
Jan 29 12:24:56 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e294 e294: 3 total, 3 up, 3 in
Jan 29 12:24:56 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e294: 3 total, 3 up, 3 in
Jan 29 12:24:57 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1344: 305 pgs: 305 active+clean; 266 MiB data, 394 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 4.0 KiB/s wr, 70 op/s
Jan 29 12:24:59 np0005601226 nova_compute[239456]: 2026-01-29 17:24:59.389 239460 DEBUG oslo_concurrency.lockutils [None req-b2ebf85a-67fc-4768-a27f-a54a75a98d03 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Acquiring lock "0ac4b31b-2f69-4c16-997b-57dc53aa29b2" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:24:59 np0005601226 nova_compute[239456]: 2026-01-29 17:24:59.389 239460 DEBUG oslo_concurrency.lockutils [None req-b2ebf85a-67fc-4768-a27f-a54a75a98d03 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Lock "0ac4b31b-2f69-4c16-997b-57dc53aa29b2" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:24:59 np0005601226 nova_compute[239456]: 2026-01-29 17:24:59.403 239460 DEBUG nova.objects.instance [None req-b2ebf85a-67fc-4768-a27f-a54a75a98d03 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Lazy-loading 'flavor' on Instance uuid 0ac4b31b-2f69-4c16-997b-57dc53aa29b2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:24:59 np0005601226 nova_compute[239456]: 2026-01-29 17:24:59.438 239460 DEBUG oslo_concurrency.lockutils [None req-b2ebf85a-67fc-4768-a27f-a54a75a98d03 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Lock "0ac4b31b-2f69-4c16-997b-57dc53aa29b2" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.049s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:24:59 np0005601226 nova_compute[239456]: 2026-01-29 17:24:59.617 239460 DEBUG oslo_concurrency.lockutils [None req-b2ebf85a-67fc-4768-a27f-a54a75a98d03 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Acquiring lock "0ac4b31b-2f69-4c16-997b-57dc53aa29b2" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:24:59 np0005601226 nova_compute[239456]: 2026-01-29 17:24:59.618 239460 DEBUG oslo_concurrency.lockutils [None req-b2ebf85a-67fc-4768-a27f-a54a75a98d03 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Lock "0ac4b31b-2f69-4c16-997b-57dc53aa29b2" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:24:59 np0005601226 nova_compute[239456]: 2026-01-29 17:24:59.618 239460 INFO nova.compute.manager [None req-b2ebf85a-67fc-4768-a27f-a54a75a98d03 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] Attaching volume 7aa97a15-9d09-45ff-92a5-789ab3bffa7c to /dev/vdb#033[00m
Jan 29 12:24:59 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1345: 305 pgs: 305 active+clean; 266 MiB data, 394 MiB used, 60 GiB / 60 GiB avail; 80 KiB/s rd, 4.6 KiB/s wr, 107 op/s
Jan 29 12:24:59 np0005601226 nova_compute[239456]: 2026-01-29 17:24:59.843 239460 DEBUG os_brick.utils [None req-b2ebf85a-67fc-4768-a27f-a54a75a98d03 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 29 12:24:59 np0005601226 nova_compute[239456]: 2026-01-29 17:24:59.845 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:24:59 np0005601226 nova_compute[239456]: 2026-01-29 17:24:59.853 249612 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:24:59 np0005601226 nova_compute[239456]: 2026-01-29 17:24:59.853 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[1fe3c461-fae4-4a88-8fe1-bed23c403d36]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:24:59 np0005601226 nova_compute[239456]: 2026-01-29 17:24:59.855 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:24:59 np0005601226 nova_compute[239456]: 2026-01-29 17:24:59.862 249612 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:24:59 np0005601226 nova_compute[239456]: 2026-01-29 17:24:59.863 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[950efe00-8db4-423d-8fd9-ed5afb88d5d5]: (4, ('InitiatorName=iqn.1994-05.com.redhat:29fa340538f', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:24:59 np0005601226 nova_compute[239456]: 2026-01-29 17:24:59.864 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:24:59 np0005601226 nova_compute[239456]: 2026-01-29 17:24:59.872 249612 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:24:59 np0005601226 nova_compute[239456]: 2026-01-29 17:24:59.872 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[fe288ce3-9d82-45c8-9c50-8ce01ffaf2f6]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:24:59 np0005601226 nova_compute[239456]: 2026-01-29 17:24:59.874 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[1dc392f5-f7bd-4960-a9bf-69c6b1c781f9]: (4, '3d58286e-1b14-486e-8cad-0bdb2d2969c4') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:24:59 np0005601226 nova_compute[239456]: 2026-01-29 17:24:59.875 239460 DEBUG oslo_concurrency.processutils [None req-b2ebf85a-67fc-4768-a27f-a54a75a98d03 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:24:59 np0005601226 nova_compute[239456]: 2026-01-29 17:24:59.892 239460 DEBUG oslo_concurrency.processutils [None req-b2ebf85a-67fc-4768-a27f-a54a75a98d03 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] CMD "nvme version" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:24:59 np0005601226 nova_compute[239456]: 2026-01-29 17:24:59.894 239460 DEBUG os_brick.initiator.connectors.lightos [None req-b2ebf85a-67fc-4768-a27f-a54a75a98d03 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 29 12:24:59 np0005601226 nova_compute[239456]: 2026-01-29 17:24:59.894 239460 DEBUG os_brick.initiator.connectors.lightos [None req-b2ebf85a-67fc-4768-a27f-a54a75a98d03 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 29 12:24:59 np0005601226 nova_compute[239456]: 2026-01-29 17:24:59.894 239460 DEBUG os_brick.initiator.connectors.lightos [None req-b2ebf85a-67fc-4768-a27f-a54a75a98d03 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 29 12:24:59 np0005601226 nova_compute[239456]: 2026-01-29 17:24:59.894 239460 DEBUG os_brick.utils [None req-b2ebf85a-67fc-4768-a27f-a54a75a98d03 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] <== get_connector_properties: return (50ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:29fa340538f', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '3d58286e-1b14-486e-8cad-0bdb2d2969c4', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 29 12:24:59 np0005601226 nova_compute[239456]: 2026-01-29 17:24:59.895 239460 DEBUG nova.virt.block_device [None req-b2ebf85a-67fc-4768-a27f-a54a75a98d03 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] Updating existing volume attachment record: b294f214-dbc9-444a-9d99-db375978d841 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 29 12:25:00 np0005601226 nova_compute[239456]: 2026-01-29 17:25:00.356 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:25:00 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:25:00 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1390822805' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:25:00 np0005601226 nova_compute[239456]: 2026-01-29 17:25:00.823 239460 DEBUG nova.objects.instance [None req-b2ebf85a-67fc-4768-a27f-a54a75a98d03 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Lazy-loading 'flavor' on Instance uuid 0ac4b31b-2f69-4c16-997b-57dc53aa29b2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:25:00 np0005601226 nova_compute[239456]: 2026-01-29 17:25:00.847 239460 DEBUG nova.virt.libvirt.driver [None req-b2ebf85a-67fc-4768-a27f-a54a75a98d03 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] Attempting to attach volume 7aa97a15-9d09-45ff-92a5-789ab3bffa7c with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Jan 29 12:25:00 np0005601226 nova_compute[239456]: 2026-01-29 17:25:00.849 239460 DEBUG nova.virt.libvirt.guest [None req-b2ebf85a-67fc-4768-a27f-a54a75a98d03 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] attach device xml: <disk type="network" device="disk">
Jan 29 12:25:00 np0005601226 nova_compute[239456]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 29 12:25:00 np0005601226 nova_compute[239456]:  <source protocol="rbd" name="volumes/volume-7aa97a15-9d09-45ff-92a5-789ab3bffa7c">
Jan 29 12:25:00 np0005601226 nova_compute[239456]:    <host name="192.168.122.100" port="6789"/>
Jan 29 12:25:00 np0005601226 nova_compute[239456]:  </source>
Jan 29 12:25:00 np0005601226 nova_compute[239456]:  <auth username="openstack">
Jan 29 12:25:00 np0005601226 nova_compute[239456]:    <secret type="ceph" uuid="cc5c72e3-31e0-58b9-8731-456117d38f4a"/>
Jan 29 12:25:00 np0005601226 nova_compute[239456]:  </auth>
Jan 29 12:25:00 np0005601226 nova_compute[239456]:  <target dev="vdb" bus="virtio"/>
Jan 29 12:25:00 np0005601226 nova_compute[239456]:  <serial>7aa97a15-9d09-45ff-92a5-789ab3bffa7c</serial>
Jan 29 12:25:00 np0005601226 nova_compute[239456]: </disk>
Jan 29 12:25:00 np0005601226 nova_compute[239456]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Jan 29 12:25:01 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:25:01 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/687798961' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:25:01 np0005601226 nova_compute[239456]: 2026-01-29 17:25:01.119 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:25:01 np0005601226 nova_compute[239456]: 2026-01-29 17:25:01.143 239460 DEBUG nova.virt.libvirt.driver [None req-b2ebf85a-67fc-4768-a27f-a54a75a98d03 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:25:01 np0005601226 nova_compute[239456]: 2026-01-29 17:25:01.144 239460 DEBUG nova.virt.libvirt.driver [None req-b2ebf85a-67fc-4768-a27f-a54a75a98d03 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:25:01 np0005601226 nova_compute[239456]: 2026-01-29 17:25:01.144 239460 DEBUG nova.virt.libvirt.driver [None req-b2ebf85a-67fc-4768-a27f-a54a75a98d03 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:25:01 np0005601226 nova_compute[239456]: 2026-01-29 17:25:01.144 239460 DEBUG nova.virt.libvirt.driver [None req-b2ebf85a-67fc-4768-a27f-a54a75a98d03 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] No VIF found with MAC fa:16:3e:a6:25:d7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 29 12:25:01 np0005601226 nova_compute[239456]: 2026-01-29 17:25:01.334 239460 DEBUG oslo_concurrency.lockutils [None req-b2ebf85a-67fc-4768-a27f-a54a75a98d03 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Lock "0ac4b31b-2f69-4c16-997b-57dc53aa29b2" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.716s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:25:01 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:25:01 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1346: 305 pgs: 305 active+clean; 266 MiB data, 394 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 3.8 KiB/s wr, 89 op/s
Jan 29 12:25:01 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e294 do_prune osdmap full prune enabled
Jan 29 12:25:02 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e295 e295: 3 total, 3 up, 3 in
Jan 29 12:25:02 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e295: 3 total, 3 up, 3 in
Jan 29 12:25:02 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:25:02 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1959127962' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:25:03 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e295 do_prune osdmap full prune enabled
Jan 29 12:25:03 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e296 e296: 3 total, 3 up, 3 in
Jan 29 12:25:03 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e296: 3 total, 3 up, 3 in
Jan 29 12:25:03 np0005601226 nova_compute[239456]: 2026-01-29 17:25:03.107 239460 DEBUG oslo_concurrency.lockutils [None req-fc364833-de7c-443d-afe2-f346c6ad783c 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Acquiring lock "0ac4b31b-2f69-4c16-997b-57dc53aa29b2" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:25:03 np0005601226 nova_compute[239456]: 2026-01-29 17:25:03.108 239460 DEBUG oslo_concurrency.lockutils [None req-fc364833-de7c-443d-afe2-f346c6ad783c 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Lock "0ac4b31b-2f69-4c16-997b-57dc53aa29b2" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:25:03 np0005601226 nova_compute[239456]: 2026-01-29 17:25:03.130 239460 INFO nova.compute.manager [None req-fc364833-de7c-443d-afe2-f346c6ad783c 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] Detaching volume 7aa97a15-9d09-45ff-92a5-789ab3bffa7c#033[00m
Jan 29 12:25:03 np0005601226 nova_compute[239456]: 2026-01-29 17:25:03.261 239460 INFO nova.virt.block_device [None req-fc364833-de7c-443d-afe2-f346c6ad783c 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] Attempting to driver detach volume 7aa97a15-9d09-45ff-92a5-789ab3bffa7c from mountpoint /dev/vdb#033[00m
Jan 29 12:25:03 np0005601226 nova_compute[239456]: 2026-01-29 17:25:03.273 239460 DEBUG nova.virt.libvirt.driver [None req-fc364833-de7c-443d-afe2-f346c6ad783c 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Attempting to detach device vdb from instance 0ac4b31b-2f69-4c16-997b-57dc53aa29b2 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Jan 29 12:25:03 np0005601226 nova_compute[239456]: 2026-01-29 17:25:03.274 239460 DEBUG nova.virt.libvirt.guest [None req-fc364833-de7c-443d-afe2-f346c6ad783c 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] detach device xml: <disk type="network" device="disk">
Jan 29 12:25:03 np0005601226 nova_compute[239456]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 29 12:25:03 np0005601226 nova_compute[239456]:  <source protocol="rbd" name="volumes/volume-7aa97a15-9d09-45ff-92a5-789ab3bffa7c">
Jan 29 12:25:03 np0005601226 nova_compute[239456]:    <host name="192.168.122.100" port="6789"/>
Jan 29 12:25:03 np0005601226 nova_compute[239456]:  </source>
Jan 29 12:25:03 np0005601226 nova_compute[239456]:  <target dev="vdb" bus="virtio"/>
Jan 29 12:25:03 np0005601226 nova_compute[239456]:  <serial>7aa97a15-9d09-45ff-92a5-789ab3bffa7c</serial>
Jan 29 12:25:03 np0005601226 nova_compute[239456]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 29 12:25:03 np0005601226 nova_compute[239456]: </disk>
Jan 29 12:25:03 np0005601226 nova_compute[239456]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 29 12:25:03 np0005601226 nova_compute[239456]: 2026-01-29 17:25:03.284 239460 INFO nova.virt.libvirt.driver [None req-fc364833-de7c-443d-afe2-f346c6ad783c 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Successfully detached device vdb from instance 0ac4b31b-2f69-4c16-997b-57dc53aa29b2 from the persistent domain config.#033[00m
Jan 29 12:25:03 np0005601226 nova_compute[239456]: 2026-01-29 17:25:03.285 239460 DEBUG nova.virt.libvirt.driver [None req-fc364833-de7c-443d-afe2-f346c6ad783c 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 0ac4b31b-2f69-4c16-997b-57dc53aa29b2 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Jan 29 12:25:03 np0005601226 nova_compute[239456]: 2026-01-29 17:25:03.285 239460 DEBUG nova.virt.libvirt.guest [None req-fc364833-de7c-443d-afe2-f346c6ad783c 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] detach device xml: <disk type="network" device="disk">
Jan 29 12:25:03 np0005601226 nova_compute[239456]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 29 12:25:03 np0005601226 nova_compute[239456]:  <source protocol="rbd" name="volumes/volume-7aa97a15-9d09-45ff-92a5-789ab3bffa7c">
Jan 29 12:25:03 np0005601226 nova_compute[239456]:    <host name="192.168.122.100" port="6789"/>
Jan 29 12:25:03 np0005601226 nova_compute[239456]:  </source>
Jan 29 12:25:03 np0005601226 nova_compute[239456]:  <target dev="vdb" bus="virtio"/>
Jan 29 12:25:03 np0005601226 nova_compute[239456]:  <serial>7aa97a15-9d09-45ff-92a5-789ab3bffa7c</serial>
Jan 29 12:25:03 np0005601226 nova_compute[239456]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 29 12:25:03 np0005601226 nova_compute[239456]: </disk>
Jan 29 12:25:03 np0005601226 nova_compute[239456]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 29 12:25:03 np0005601226 nova_compute[239456]: 2026-01-29 17:25:03.397 239460 DEBUG nova.virt.libvirt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Received event <DeviceRemovedEvent: 1769707503.3973153, 0ac4b31b-2f69-4c16-997b-57dc53aa29b2 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Jan 29 12:25:03 np0005601226 nova_compute[239456]: 2026-01-29 17:25:03.398 239460 DEBUG nova.virt.libvirt.driver [None req-fc364833-de7c-443d-afe2-f346c6ad783c 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 0ac4b31b-2f69-4c16-997b-57dc53aa29b2 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Jan 29 12:25:03 np0005601226 nova_compute[239456]: 2026-01-29 17:25:03.401 239460 INFO nova.virt.libvirt.driver [None req-fc364833-de7c-443d-afe2-f346c6ad783c 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Successfully detached device vdb from instance 0ac4b31b-2f69-4c16-997b-57dc53aa29b2 from the live domain config.#033[00m
Jan 29 12:25:03 np0005601226 nova_compute[239456]: 2026-01-29 17:25:03.531 239460 DEBUG nova.objects.instance [None req-fc364833-de7c-443d-afe2-f346c6ad783c 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Lazy-loading 'flavor' on Instance uuid 0ac4b31b-2f69-4c16-997b-57dc53aa29b2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:25:03 np0005601226 nova_compute[239456]: 2026-01-29 17:25:03.571 239460 DEBUG oslo_concurrency.lockutils [None req-fc364833-de7c-443d-afe2-f346c6ad783c 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Lock "0ac4b31b-2f69-4c16-997b-57dc53aa29b2" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.463s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:25:03 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1349: 305 pgs: 305 active+clean; 268 MiB data, 397 MiB used, 60 GiB / 60 GiB avail; 230 KiB/s rd, 282 KiB/s wr, 60 op/s
Jan 29 12:25:04 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e296 do_prune osdmap full prune enabled
Jan 29 12:25:04 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e297 e297: 3 total, 3 up, 3 in
Jan 29 12:25:04 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e297: 3 total, 3 up, 3 in
Jan 29 12:25:04 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:25:04.981 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '7a:bb:ca', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ee:2a:91:86:08:da'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:25:04 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:25:04.982 155625 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 29 12:25:04 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:25:04.983 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ea6bcc65-2563-4fe6-9039-bca7261f4cf7, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:25:04 np0005601226 nova_compute[239456]: 2026-01-29 17:25:04.984 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:25:05 np0005601226 nova_compute[239456]: 2026-01-29 17:25:05.228 239460 DEBUG nova.compute.manager [req-dad254ad-57e8-4535-9cc2-ed6bee83b124 req-283e9391-bc63-46a4-bdee-8afeb4e5e88a 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] Received event network-changed-2f63240d-7525-40fb-b23f-9ab98ab1f446 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:25:05 np0005601226 nova_compute[239456]: 2026-01-29 17:25:05.228 239460 DEBUG nova.compute.manager [req-dad254ad-57e8-4535-9cc2-ed6bee83b124 req-283e9391-bc63-46a4-bdee-8afeb4e5e88a 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] Refreshing instance network info cache due to event network-changed-2f63240d-7525-40fb-b23f-9ab98ab1f446. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 29 12:25:05 np0005601226 nova_compute[239456]: 2026-01-29 17:25:05.229 239460 DEBUG oslo_concurrency.lockutils [req-dad254ad-57e8-4535-9cc2-ed6bee83b124 req-283e9391-bc63-46a4-bdee-8afeb4e5e88a 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "refresh_cache-0ac4b31b-2f69-4c16-997b-57dc53aa29b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:25:05 np0005601226 nova_compute[239456]: 2026-01-29 17:25:05.229 239460 DEBUG oslo_concurrency.lockutils [req-dad254ad-57e8-4535-9cc2-ed6bee83b124 req-283e9391-bc63-46a4-bdee-8afeb4e5e88a 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquired lock "refresh_cache-0ac4b31b-2f69-4c16-997b-57dc53aa29b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:25:05 np0005601226 nova_compute[239456]: 2026-01-29 17:25:05.229 239460 DEBUG nova.network.neutron [req-dad254ad-57e8-4535-9cc2-ed6bee83b124 req-283e9391-bc63-46a4-bdee-8afeb4e5e88a 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] Refreshing network info cache for port 2f63240d-7525-40fb-b23f-9ab98ab1f446 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 29 12:25:05 np0005601226 nova_compute[239456]: 2026-01-29 17:25:05.294 239460 DEBUG oslo_concurrency.lockutils [None req-6d30ffaf-c095-4ea5-9215-6d3ce88337bf 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Acquiring lock "0ac4b31b-2f69-4c16-997b-57dc53aa29b2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:25:05 np0005601226 nova_compute[239456]: 2026-01-29 17:25:05.295 239460 DEBUG oslo_concurrency.lockutils [None req-6d30ffaf-c095-4ea5-9215-6d3ce88337bf 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Lock "0ac4b31b-2f69-4c16-997b-57dc53aa29b2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:25:05 np0005601226 nova_compute[239456]: 2026-01-29 17:25:05.295 239460 DEBUG oslo_concurrency.lockutils [None req-6d30ffaf-c095-4ea5-9215-6d3ce88337bf 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Acquiring lock "0ac4b31b-2f69-4c16-997b-57dc53aa29b2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:25:05 np0005601226 nova_compute[239456]: 2026-01-29 17:25:05.295 239460 DEBUG oslo_concurrency.lockutils [None req-6d30ffaf-c095-4ea5-9215-6d3ce88337bf 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Lock "0ac4b31b-2f69-4c16-997b-57dc53aa29b2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:25:05 np0005601226 nova_compute[239456]: 2026-01-29 17:25:05.296 239460 DEBUG oslo_concurrency.lockutils [None req-6d30ffaf-c095-4ea5-9215-6d3ce88337bf 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Lock "0ac4b31b-2f69-4c16-997b-57dc53aa29b2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:25:05 np0005601226 nova_compute[239456]: 2026-01-29 17:25:05.297 239460 INFO nova.compute.manager [None req-6d30ffaf-c095-4ea5-9215-6d3ce88337bf 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] Terminating instance#033[00m
Jan 29 12:25:05 np0005601226 nova_compute[239456]: 2026-01-29 17:25:05.297 239460 DEBUG nova.compute.manager [None req-6d30ffaf-c095-4ea5-9215-6d3ce88337bf 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 29 12:25:05 np0005601226 kernel: tap2f63240d-75 (unregistering): left promiscuous mode
Jan 29 12:25:05 np0005601226 NetworkManager[49020]: <info>  [1769707505.3424] device (tap2f63240d-75): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 29 12:25:05 np0005601226 ovn_controller[145556]: 2026-01-29T17:25:05Z|00121|binding|INFO|Releasing lport 2f63240d-7525-40fb-b23f-9ab98ab1f446 from this chassis (sb_readonly=0)
Jan 29 12:25:05 np0005601226 ovn_controller[145556]: 2026-01-29T17:25:05Z|00122|binding|INFO|Setting lport 2f63240d-7525-40fb-b23f-9ab98ab1f446 down in Southbound
Jan 29 12:25:05 np0005601226 ovn_controller[145556]: 2026-01-29T17:25:05Z|00123|binding|INFO|Removing iface tap2f63240d-75 ovn-installed in OVS
Jan 29 12:25:05 np0005601226 nova_compute[239456]: 2026-01-29 17:25:05.349 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:25:05 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:25:05.355 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a6:25:d7 10.100.0.6'], port_security=['fa:16:3e:a6:25:d7 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '0ac4b31b-2f69-4c16-997b-57dc53aa29b2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3c884cc1-e1d2-418b-8bb8-bae78dab7018', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f2a1daea29d845c4b1c58f0e6610e767', 'neutron:revision_number': '4', 'neutron:security_group_ids': '58fc09dd-a146-490e-a131-265322bed80e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=627df87c-0fcf-4d89-b573-9b0d1cecf486, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], logical_port=2f63240d-7525-40fb-b23f-9ab98ab1f446) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:25:05 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:25:05.356 155625 INFO neutron.agent.ovn.metadata.agent [-] Port 2f63240d-7525-40fb-b23f-9ab98ab1f446 in datapath 3c884cc1-e1d2-418b-8bb8-bae78dab7018 unbound from our chassis#033[00m
Jan 29 12:25:05 np0005601226 nova_compute[239456]: 2026-01-29 17:25:05.357 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:25:05 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:25:05.359 155625 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3c884cc1-e1d2-418b-8bb8-bae78dab7018#033[00m
Jan 29 12:25:05 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:25:05.376 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[30748952-6f52-48cf-9270-d1cdef25c455]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:25:05 np0005601226 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Jan 29 12:25:05 np0005601226 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Consumed 14.685s CPU time.
Jan 29 12:25:05 np0005601226 systemd-machined[207561]: Machine qemu-11-instance-0000000b terminated.
Jan 29 12:25:05 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:25:05.408 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[7e8a8d0d-e2df-4bc4-aa2d-d8a5180f6ca2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:25:05 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:25:05.412 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[442ce71b-4954-42d5-9353-2556e53a9234]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:25:05 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:25:05.439 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[3ab4cc7e-e380-4275-9de4-1b819c89f7a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:25:05 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:25:05.457 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[8d412d6b-fc01-42e1-a4d5-34fe7067e624]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3c884cc1-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:2e:b9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 39], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 477058, 'reachable_time': 17652, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260145, 'error': None, 'target': 'ovnmeta-3c884cc1-e1d2-418b-8bb8-bae78dab7018', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:25:05 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:25:05.474 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[abd758cd-4723-4074-b67b-69fb717befce]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3c884cc1-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 477065, 'tstamp': 477065}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 260146, 'error': None, 'target': 'ovnmeta-3c884cc1-e1d2-418b-8bb8-bae78dab7018', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3c884cc1-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 477067, 'tstamp': 477067}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 260146, 'error': None, 'target': 'ovnmeta-3c884cc1-e1d2-418b-8bb8-bae78dab7018', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:25:05 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:25:05.476 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c884cc1-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:25:05 np0005601226 nova_compute[239456]: 2026-01-29 17:25:05.478 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:25:05 np0005601226 nova_compute[239456]: 2026-01-29 17:25:05.483 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:25:05 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:25:05.484 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3c884cc1-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:25:05 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:25:05.485 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 29 12:25:05 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:25:05.485 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3c884cc1-e0, col_values=(('external_ids', {'iface-id': '0442c862-051a-4100-a371-ef7e19ea6eba'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:25:05 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:25:05.486 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 29 12:25:05 np0005601226 nova_compute[239456]: 2026-01-29 17:25:05.537 239460 INFO nova.virt.libvirt.driver [-] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] Instance destroyed successfully.#033[00m
Jan 29 12:25:05 np0005601226 nova_compute[239456]: 2026-01-29 17:25:05.537 239460 DEBUG nova.objects.instance [None req-6d30ffaf-c095-4ea5-9215-6d3ce88337bf 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Lazy-loading 'resources' on Instance uuid 0ac4b31b-2f69-4c16-997b-57dc53aa29b2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:25:05 np0005601226 nova_compute[239456]: 2026-01-29 17:25:05.549 239460 DEBUG nova.virt.libvirt.vif [None req-6d30ffaf-c095-4ea5-9215-6d3ce88337bf 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-29T17:24:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestStampPattern-server-1118559770',display_name='tempest-TestStampPattern-server-1118559770',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-1118559770',id=11,image_ref='6c19a175-0f51-4960-b93b-bdb33e6773d5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCntQFYGg1tN9Lkltvq06uP6PbTSdiUSw2rpV4DVMQfDXGCpCCbqNspsVT5fc2Gf5/3l4zc3WW9mGuuTy6awOxbpJd54hg8vvKJT9WsymmM3odJoG0L/624VsKwRCgcRrg==',key_name='tempest-TestStampPattern-1877159583',keypairs=<?>,launch_index=0,launched_at=2026-01-29T17:24:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f2a1daea29d845c4b1c58f0e6610e767',ramdisk_id='',reservation_id='r-rwib1oa3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='71879218-5462-43bb-aba6-6319695b24fd',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='54ae1aee-2aec-49fb-981c-904cceb59a9d',image_min_disk='1',image_min_ram='0',image_owner_id='f2a1daea29d845c4b1c58f0e6610e767',image_owner_project_name='tempest-TestStampPattern-907219493',image_owner_user_name='tempest-TestStampPattern-907219493-project-member',image_user_id='66a034221acf4c559a731fcc84a54c53',owner_project_name='tempest-TestStampPattern-907219493',owner_user_name='tempest-TestStampPattern-907219493-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-29T17:24:22Z,user_data=None,user_id='66a034221acf4c559a731fcc84a54c53',uuid=0ac4b31b-2f69-4c16-997b-57dc53aa29b2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='ac
tive') vif={"id": "2f63240d-7525-40fb-b23f-9ab98ab1f446", "address": "fa:16:3e:a6:25:d7", "network": {"id": "3c884cc1-e1d2-418b-8bb8-bae78dab7018", "bridge": "br-int", "label": "tempest-TestStampPattern-2089746871-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f2a1daea29d845c4b1c58f0e6610e767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f63240d-75", "ovs_interfaceid": "2f63240d-7525-40fb-b23f-9ab98ab1f446", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 29 12:25:05 np0005601226 nova_compute[239456]: 2026-01-29 17:25:05.549 239460 DEBUG nova.network.os_vif_util [None req-6d30ffaf-c095-4ea5-9215-6d3ce88337bf 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Converting VIF {"id": "2f63240d-7525-40fb-b23f-9ab98ab1f446", "address": "fa:16:3e:a6:25:d7", "network": {"id": "3c884cc1-e1d2-418b-8bb8-bae78dab7018", "bridge": "br-int", "label": "tempest-TestStampPattern-2089746871-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f2a1daea29d845c4b1c58f0e6610e767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f63240d-75", "ovs_interfaceid": "2f63240d-7525-40fb-b23f-9ab98ab1f446", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 29 12:25:05 np0005601226 nova_compute[239456]: 2026-01-29 17:25:05.550 239460 DEBUG nova.network.os_vif_util [None req-6d30ffaf-c095-4ea5-9215-6d3ce88337bf 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a6:25:d7,bridge_name='br-int',has_traffic_filtering=True,id=2f63240d-7525-40fb-b23f-9ab98ab1f446,network=Network(3c884cc1-e1d2-418b-8bb8-bae78dab7018),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f63240d-75') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 29 12:25:05 np0005601226 nova_compute[239456]: 2026-01-29 17:25:05.550 239460 DEBUG os_vif [None req-6d30ffaf-c095-4ea5-9215-6d3ce88337bf 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a6:25:d7,bridge_name='br-int',has_traffic_filtering=True,id=2f63240d-7525-40fb-b23f-9ab98ab1f446,network=Network(3c884cc1-e1d2-418b-8bb8-bae78dab7018),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f63240d-75') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 29 12:25:05 np0005601226 nova_compute[239456]: 2026-01-29 17:25:05.552 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:25:05 np0005601226 nova_compute[239456]: 2026-01-29 17:25:05.552 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2f63240d-75, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:25:05 np0005601226 nova_compute[239456]: 2026-01-29 17:25:05.555 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:25:05 np0005601226 nova_compute[239456]: 2026-01-29 17:25:05.557 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 29 12:25:05 np0005601226 nova_compute[239456]: 2026-01-29 17:25:05.559 239460 INFO os_vif [None req-6d30ffaf-c095-4ea5-9215-6d3ce88337bf 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a6:25:d7,bridge_name='br-int',has_traffic_filtering=True,id=2f63240d-7525-40fb-b23f-9ab98ab1f446,network=Network(3c884cc1-e1d2-418b-8bb8-bae78dab7018),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f63240d-75')#033[00m
Jan 29 12:25:05 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1351: 305 pgs: 305 active+clean; 268 MiB data, 397 MiB used, 60 GiB / 60 GiB avail; 819 KiB/s rd, 349 KiB/s wr, 94 op/s
Jan 29 12:25:05 np0005601226 nova_compute[239456]: 2026-01-29 17:25:05.893 239460 INFO nova.virt.libvirt.driver [None req-6d30ffaf-c095-4ea5-9215-6d3ce88337bf 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] Deleting instance files /var/lib/nova/instances/0ac4b31b-2f69-4c16-997b-57dc53aa29b2_del#033[00m
Jan 29 12:25:05 np0005601226 nova_compute[239456]: 2026-01-29 17:25:05.894 239460 INFO nova.virt.libvirt.driver [None req-6d30ffaf-c095-4ea5-9215-6d3ce88337bf 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] Deletion of /var/lib/nova/instances/0ac4b31b-2f69-4c16-997b-57dc53aa29b2_del complete#033[00m
Jan 29 12:25:05 np0005601226 nova_compute[239456]: 2026-01-29 17:25:05.946 239460 INFO nova.compute.manager [None req-6d30ffaf-c095-4ea5-9215-6d3ce88337bf 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] Took 0.65 seconds to destroy the instance on the hypervisor.#033[00m
Jan 29 12:25:05 np0005601226 nova_compute[239456]: 2026-01-29 17:25:05.947 239460 DEBUG oslo.service.loopingcall [None req-6d30ffaf-c095-4ea5-9215-6d3ce88337bf 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 29 12:25:05 np0005601226 nova_compute[239456]: 2026-01-29 17:25:05.947 239460 DEBUG nova.compute.manager [-] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 29 12:25:05 np0005601226 nova_compute[239456]: 2026-01-29 17:25:05.947 239460 DEBUG nova.network.neutron [-] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 29 12:25:06 np0005601226 nova_compute[239456]: 2026-01-29 17:25:06.167 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:25:06 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:25:06 np0005601226 nova_compute[239456]: 2026-01-29 17:25:06.453 239460 DEBUG nova.network.neutron [req-dad254ad-57e8-4535-9cc2-ed6bee83b124 req-283e9391-bc63-46a4-bdee-8afeb4e5e88a 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] Updated VIF entry in instance network info cache for port 2f63240d-7525-40fb-b23f-9ab98ab1f446. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 29 12:25:06 np0005601226 nova_compute[239456]: 2026-01-29 17:25:06.454 239460 DEBUG nova.network.neutron [req-dad254ad-57e8-4535-9cc2-ed6bee83b124 req-283e9391-bc63-46a4-bdee-8afeb4e5e88a 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] Updating instance_info_cache with network_info: [{"id": "2f63240d-7525-40fb-b23f-9ab98ab1f446", "address": "fa:16:3e:a6:25:d7", "network": {"id": "3c884cc1-e1d2-418b-8bb8-bae78dab7018", "bridge": "br-int", "label": "tempest-TestStampPattern-2089746871-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f2a1daea29d845c4b1c58f0e6610e767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f63240d-75", "ovs_interfaceid": "2f63240d-7525-40fb-b23f-9ab98ab1f446", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:25:06 np0005601226 nova_compute[239456]: 2026-01-29 17:25:06.481 239460 DEBUG oslo_concurrency.lockutils [req-dad254ad-57e8-4535-9cc2-ed6bee83b124 req-283e9391-bc63-46a4-bdee-8afeb4e5e88a 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Releasing lock "refresh_cache-0ac4b31b-2f69-4c16-997b-57dc53aa29b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:25:06 np0005601226 nova_compute[239456]: 2026-01-29 17:25:06.747 239460 DEBUG nova.network.neutron [-] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:25:06 np0005601226 nova_compute[239456]: 2026-01-29 17:25:06.763 239460 INFO nova.compute.manager [-] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] Took 0.82 seconds to deallocate network for instance.#033[00m
Jan 29 12:25:06 np0005601226 nova_compute[239456]: 2026-01-29 17:25:06.807 239460 DEBUG oslo_concurrency.lockutils [None req-6d30ffaf-c095-4ea5-9215-6d3ce88337bf 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:25:06 np0005601226 nova_compute[239456]: 2026-01-29 17:25:06.807 239460 DEBUG oslo_concurrency.lockutils [None req-6d30ffaf-c095-4ea5-9215-6d3ce88337bf 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:25:06 np0005601226 nova_compute[239456]: 2026-01-29 17:25:06.888 239460 DEBUG oslo_concurrency.processutils [None req-6d30ffaf-c095-4ea5-9215-6d3ce88337bf 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:25:07 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e297 do_prune osdmap full prune enabled
Jan 29 12:25:07 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e298 e298: 3 total, 3 up, 3 in
Jan 29 12:25:07 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e298: 3 total, 3 up, 3 in
Jan 29 12:25:07 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:25:07 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3549132218' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:25:07 np0005601226 nova_compute[239456]: 2026-01-29 17:25:07.299 239460 DEBUG nova.compute.manager [req-ddb7b2e0-29e2-435b-ae2d-e1a4e39788fe req-c78ad1f6-97ca-482b-9901-767588f17c52 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] Received event network-vif-unplugged-2f63240d-7525-40fb-b23f-9ab98ab1f446 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:25:07 np0005601226 nova_compute[239456]: 2026-01-29 17:25:07.300 239460 DEBUG oslo_concurrency.lockutils [req-ddb7b2e0-29e2-435b-ae2d-e1a4e39788fe req-c78ad1f6-97ca-482b-9901-767588f17c52 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "0ac4b31b-2f69-4c16-997b-57dc53aa29b2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:25:07 np0005601226 nova_compute[239456]: 2026-01-29 17:25:07.300 239460 DEBUG oslo_concurrency.lockutils [req-ddb7b2e0-29e2-435b-ae2d-e1a4e39788fe req-c78ad1f6-97ca-482b-9901-767588f17c52 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "0ac4b31b-2f69-4c16-997b-57dc53aa29b2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:25:07 np0005601226 nova_compute[239456]: 2026-01-29 17:25:07.300 239460 DEBUG oslo_concurrency.lockutils [req-ddb7b2e0-29e2-435b-ae2d-e1a4e39788fe req-c78ad1f6-97ca-482b-9901-767588f17c52 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "0ac4b31b-2f69-4c16-997b-57dc53aa29b2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:25:07 np0005601226 nova_compute[239456]: 2026-01-29 17:25:07.301 239460 DEBUG nova.compute.manager [req-ddb7b2e0-29e2-435b-ae2d-e1a4e39788fe req-c78ad1f6-97ca-482b-9901-767588f17c52 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] No waiting events found dispatching network-vif-unplugged-2f63240d-7525-40fb-b23f-9ab98ab1f446 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 29 12:25:07 np0005601226 nova_compute[239456]: 2026-01-29 17:25:07.301 239460 WARNING nova.compute.manager [req-ddb7b2e0-29e2-435b-ae2d-e1a4e39788fe req-c78ad1f6-97ca-482b-9901-767588f17c52 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] Received unexpected event network-vif-unplugged-2f63240d-7525-40fb-b23f-9ab98ab1f446 for instance with vm_state deleted and task_state None.#033[00m
Jan 29 12:25:07 np0005601226 nova_compute[239456]: 2026-01-29 17:25:07.301 239460 DEBUG nova.compute.manager [req-ddb7b2e0-29e2-435b-ae2d-e1a4e39788fe req-c78ad1f6-97ca-482b-9901-767588f17c52 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] Received event network-vif-plugged-2f63240d-7525-40fb-b23f-9ab98ab1f446 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:25:07 np0005601226 nova_compute[239456]: 2026-01-29 17:25:07.301 239460 DEBUG oslo_concurrency.lockutils [req-ddb7b2e0-29e2-435b-ae2d-e1a4e39788fe req-c78ad1f6-97ca-482b-9901-767588f17c52 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "0ac4b31b-2f69-4c16-997b-57dc53aa29b2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:25:07 np0005601226 nova_compute[239456]: 2026-01-29 17:25:07.301 239460 DEBUG oslo_concurrency.lockutils [req-ddb7b2e0-29e2-435b-ae2d-e1a4e39788fe req-c78ad1f6-97ca-482b-9901-767588f17c52 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "0ac4b31b-2f69-4c16-997b-57dc53aa29b2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:25:07 np0005601226 nova_compute[239456]: 2026-01-29 17:25:07.302 239460 DEBUG oslo_concurrency.lockutils [req-ddb7b2e0-29e2-435b-ae2d-e1a4e39788fe req-c78ad1f6-97ca-482b-9901-767588f17c52 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "0ac4b31b-2f69-4c16-997b-57dc53aa29b2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:25:07 np0005601226 nova_compute[239456]: 2026-01-29 17:25:07.302 239460 DEBUG nova.compute.manager [req-ddb7b2e0-29e2-435b-ae2d-e1a4e39788fe req-c78ad1f6-97ca-482b-9901-767588f17c52 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] No waiting events found dispatching network-vif-plugged-2f63240d-7525-40fb-b23f-9ab98ab1f446 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 29 12:25:07 np0005601226 nova_compute[239456]: 2026-01-29 17:25:07.302 239460 WARNING nova.compute.manager [req-ddb7b2e0-29e2-435b-ae2d-e1a4e39788fe req-c78ad1f6-97ca-482b-9901-767588f17c52 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] Received unexpected event network-vif-plugged-2f63240d-7525-40fb-b23f-9ab98ab1f446 for instance with vm_state deleted and task_state None.#033[00m
Jan 29 12:25:07 np0005601226 nova_compute[239456]: 2026-01-29 17:25:07.302 239460 DEBUG nova.compute.manager [req-ddb7b2e0-29e2-435b-ae2d-e1a4e39788fe req-c78ad1f6-97ca-482b-9901-767588f17c52 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] Received event network-vif-deleted-2f63240d-7525-40fb-b23f-9ab98ab1f446 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:25:07 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:25:07 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/312415796' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:25:07 np0005601226 nova_compute[239456]: 2026-01-29 17:25:07.470 239460 DEBUG oslo_concurrency.processutils [None req-6d30ffaf-c095-4ea5-9215-6d3ce88337bf 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:25:07 np0005601226 nova_compute[239456]: 2026-01-29 17:25:07.475 239460 DEBUG nova.compute.provider_tree [None req-6d30ffaf-c095-4ea5-9215-6d3ce88337bf 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:25:07 np0005601226 nova_compute[239456]: 2026-01-29 17:25:07.499 239460 DEBUG nova.scheduler.client.report [None req-6d30ffaf-c095-4ea5-9215-6d3ce88337bf 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:25:07 np0005601226 nova_compute[239456]: 2026-01-29 17:25:07.538 239460 DEBUG oslo_concurrency.lockutils [None req-6d30ffaf-c095-4ea5-9215-6d3ce88337bf 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.731s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:25:07 np0005601226 nova_compute[239456]: 2026-01-29 17:25:07.567 239460 INFO nova.scheduler.client.report [None req-6d30ffaf-c095-4ea5-9215-6d3ce88337bf 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Deleted allocations for instance 0ac4b31b-2f69-4c16-997b-57dc53aa29b2#033[00m
Jan 29 12:25:07 np0005601226 nova_compute[239456]: 2026-01-29 17:25:07.645 239460 DEBUG oslo_concurrency.lockutils [None req-6d30ffaf-c095-4ea5-9215-6d3ce88337bf 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Lock "0ac4b31b-2f69-4c16-997b-57dc53aa29b2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.350s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:25:07 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1353: 305 pgs: 305 active+clean; 268 MiB data, 397 MiB used, 60 GiB / 60 GiB avail; 855 KiB/s rd, 364 KiB/s wr, 98 op/s
Jan 29 12:25:08 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:25:08 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1528984152' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:25:08 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:25:08 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1528984152' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:25:08 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e298 do_prune osdmap full prune enabled
Jan 29 12:25:08 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e299 e299: 3 total, 3 up, 3 in
Jan 29 12:25:08 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e299: 3 total, 3 up, 3 in
Jan 29 12:25:08 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:25:08 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1836443950' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:25:08 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:25:08 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1836443950' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:25:09 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e299 do_prune osdmap full prune enabled
Jan 29 12:25:09 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e300 e300: 3 total, 3 up, 3 in
Jan 29 12:25:09 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e300: 3 total, 3 up, 3 in
Jan 29 12:25:09 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:25:09 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/640078035' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:25:09 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:25:09 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/640078035' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:25:09 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1356: 305 pgs: 305 active+clean; 250 MiB data, 390 MiB used, 60 GiB / 60 GiB avail; 749 KiB/s rd, 413 KiB/s wr, 299 op/s
Jan 29 12:25:10 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e300 do_prune osdmap full prune enabled
Jan 29 12:25:10 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e301 e301: 3 total, 3 up, 3 in
Jan 29 12:25:10 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e301: 3 total, 3 up, 3 in
Jan 29 12:25:10 np0005601226 nova_compute[239456]: 2026-01-29 17:25:10.554 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:25:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:25:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:25:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:25:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:25:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:25:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:25:10 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:25:10 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3749702145' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:25:10 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:25:10 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3749702145' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:25:11 np0005601226 nova_compute[239456]: 2026-01-29 17:25:11.169 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:25:11 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e301 do_prune osdmap full prune enabled
Jan 29 12:25:11 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e302 e302: 3 total, 3 up, 3 in
Jan 29 12:25:11 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e302: 3 total, 3 up, 3 in
Jan 29 12:25:11 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:25:11 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e302 do_prune osdmap full prune enabled
Jan 29 12:25:11 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e303 e303: 3 total, 3 up, 3 in
Jan 29 12:25:11 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e303: 3 total, 3 up, 3 in
Jan 29 12:25:11 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1360: 305 pgs: 305 active+clean; 250 MiB data, 390 MiB used, 60 GiB / 60 GiB avail; 908 KiB/s rd, 648 KiB/s wr, 412 op/s
Jan 29 12:25:12 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:25:12 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/716786583' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:25:12 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:25:12 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/716786583' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:25:13 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:25:13 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/591646953' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:25:13 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:25:13 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/591646953' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:25:13 np0005601226 nova_compute[239456]: 2026-01-29 17:25:13.203 239460 DEBUG nova.compute.manager [req-878e9454-7f4e-42dc-9a08-77f1eba817fe req-3882ca47-7918-4832-8c12-140b9f119659 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Received event network-changed-dd0e38fb-6c55-46b2-944f-3b2cf8f87929 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:25:13 np0005601226 nova_compute[239456]: 2026-01-29 17:25:13.203 239460 DEBUG nova.compute.manager [req-878e9454-7f4e-42dc-9a08-77f1eba817fe req-3882ca47-7918-4832-8c12-140b9f119659 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Refreshing instance network info cache due to event network-changed-dd0e38fb-6c55-46b2-944f-3b2cf8f87929. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 29 12:25:13 np0005601226 nova_compute[239456]: 2026-01-29 17:25:13.204 239460 DEBUG oslo_concurrency.lockutils [req-878e9454-7f4e-42dc-9a08-77f1eba817fe req-3882ca47-7918-4832-8c12-140b9f119659 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "refresh_cache-54ae1aee-2aec-49fb-981c-904cceb59a9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:25:13 np0005601226 nova_compute[239456]: 2026-01-29 17:25:13.204 239460 DEBUG oslo_concurrency.lockutils [req-878e9454-7f4e-42dc-9a08-77f1eba817fe req-3882ca47-7918-4832-8c12-140b9f119659 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquired lock "refresh_cache-54ae1aee-2aec-49fb-981c-904cceb59a9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:25:13 np0005601226 nova_compute[239456]: 2026-01-29 17:25:13.204 239460 DEBUG nova.network.neutron [req-878e9454-7f4e-42dc-9a08-77f1eba817fe req-3882ca47-7918-4832-8c12-140b9f119659 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Refreshing network info cache for port dd0e38fb-6c55-46b2-944f-3b2cf8f87929 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 29 12:25:13 np0005601226 nova_compute[239456]: 2026-01-29 17:25:13.279 239460 DEBUG oslo_concurrency.lockutils [None req-9f099b67-5549-4e68-96de-1feddca47b41 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Acquiring lock "54ae1aee-2aec-49fb-981c-904cceb59a9d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:25:13 np0005601226 nova_compute[239456]: 2026-01-29 17:25:13.279 239460 DEBUG oslo_concurrency.lockutils [None req-9f099b67-5549-4e68-96de-1feddca47b41 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Lock "54ae1aee-2aec-49fb-981c-904cceb59a9d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:25:13 np0005601226 nova_compute[239456]: 2026-01-29 17:25:13.280 239460 DEBUG oslo_concurrency.lockutils [None req-9f099b67-5549-4e68-96de-1feddca47b41 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Acquiring lock "54ae1aee-2aec-49fb-981c-904cceb59a9d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:25:13 np0005601226 nova_compute[239456]: 2026-01-29 17:25:13.280 239460 DEBUG oslo_concurrency.lockutils [None req-9f099b67-5549-4e68-96de-1feddca47b41 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Lock "54ae1aee-2aec-49fb-981c-904cceb59a9d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:25:13 np0005601226 nova_compute[239456]: 2026-01-29 17:25:13.280 239460 DEBUG oslo_concurrency.lockutils [None req-9f099b67-5549-4e68-96de-1feddca47b41 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Lock "54ae1aee-2aec-49fb-981c-904cceb59a9d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:25:13 np0005601226 nova_compute[239456]: 2026-01-29 17:25:13.281 239460 INFO nova.compute.manager [None req-9f099b67-5549-4e68-96de-1feddca47b41 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Terminating instance#033[00m
Jan 29 12:25:13 np0005601226 nova_compute[239456]: 2026-01-29 17:25:13.282 239460 DEBUG nova.compute.manager [None req-9f099b67-5549-4e68-96de-1feddca47b41 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 29 12:25:13 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:25:13 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1205298256' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:25:13 np0005601226 kernel: tapdd0e38fb-6c (unregistering): left promiscuous mode
Jan 29 12:25:13 np0005601226 NetworkManager[49020]: <info>  [1769707513.3789] device (tapdd0e38fb-6c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 29 12:25:13 np0005601226 nova_compute[239456]: 2026-01-29 17:25:13.383 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:25:13 np0005601226 ovn_controller[145556]: 2026-01-29T17:25:13Z|00124|binding|INFO|Releasing lport dd0e38fb-6c55-46b2-944f-3b2cf8f87929 from this chassis (sb_readonly=0)
Jan 29 12:25:13 np0005601226 ovn_controller[145556]: 2026-01-29T17:25:13Z|00125|binding|INFO|Setting lport dd0e38fb-6c55-46b2-944f-3b2cf8f87929 down in Southbound
Jan 29 12:25:13 np0005601226 ovn_controller[145556]: 2026-01-29T17:25:13Z|00126|binding|INFO|Removing iface tapdd0e38fb-6c ovn-installed in OVS
Jan 29 12:25:13 np0005601226 nova_compute[239456]: 2026-01-29 17:25:13.385 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:25:13 np0005601226 nova_compute[239456]: 2026-01-29 17:25:13.391 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:25:13 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:25:13.394 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c3:48:d2 10.100.0.12'], port_security=['fa:16:3e:c3:48:d2 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '54ae1aee-2aec-49fb-981c-904cceb59a9d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3c884cc1-e1d2-418b-8bb8-bae78dab7018', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f2a1daea29d845c4b1c58f0e6610e767', 'neutron:revision_number': '4', 'neutron:security_group_ids': '58fc09dd-a146-490e-a131-265322bed80e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=627df87c-0fcf-4d89-b573-9b0d1cecf486, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], logical_port=dd0e38fb-6c55-46b2-944f-3b2cf8f87929) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:25:13 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:25:13.395 155625 INFO neutron.agent.ovn.metadata.agent [-] Port dd0e38fb-6c55-46b2-944f-3b2cf8f87929 in datapath 3c884cc1-e1d2-418b-8bb8-bae78dab7018 unbound from our chassis#033[00m
Jan 29 12:25:13 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:25:13.396 155625 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3c884cc1-e1d2-418b-8bb8-bae78dab7018, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 29 12:25:13 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:25:13.397 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[50bc3539-b810-47b0-a64b-eab160d00485]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:25:13 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:25:13.398 155625 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3c884cc1-e1d2-418b-8bb8-bae78dab7018 namespace which is not needed anymore#033[00m
Jan 29 12:25:13 np0005601226 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Jan 29 12:25:13 np0005601226 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Consumed 15.435s CPU time.
Jan 29 12:25:13 np0005601226 systemd-machined[207561]: Machine qemu-10-instance-0000000a terminated.
Jan 29 12:25:13 np0005601226 neutron-haproxy-ovnmeta-3c884cc1-e1d2-418b-8bb8-bae78dab7018[258174]: [NOTICE]   (258180) : haproxy version is 2.8.14-c23fe91
Jan 29 12:25:13 np0005601226 neutron-haproxy-ovnmeta-3c884cc1-e1d2-418b-8bb8-bae78dab7018[258174]: [NOTICE]   (258180) : path to executable is /usr/sbin/haproxy
Jan 29 12:25:13 np0005601226 neutron-haproxy-ovnmeta-3c884cc1-e1d2-418b-8bb8-bae78dab7018[258174]: [WARNING]  (258180) : Exiting Master process...
Jan 29 12:25:13 np0005601226 neutron-haproxy-ovnmeta-3c884cc1-e1d2-418b-8bb8-bae78dab7018[258174]: [ALERT]    (258180) : Current worker (258182) exited with code 143 (Terminated)
Jan 29 12:25:13 np0005601226 neutron-haproxy-ovnmeta-3c884cc1-e1d2-418b-8bb8-bae78dab7018[258174]: [WARNING]  (258180) : All workers exited. Exiting... (0)
Jan 29 12:25:13 np0005601226 systemd[1]: libpod-97acf73e463552784eb7baeb7f0ecdf92bfa78f767192a16d8a8d15e05fa7fb4.scope: Deactivated successfully.
Jan 29 12:25:13 np0005601226 nova_compute[239456]: 2026-01-29 17:25:13.508 239460 INFO nova.virt.libvirt.driver [-] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Instance destroyed successfully.#033[00m
Jan 29 12:25:13 np0005601226 podman[260224]: 2026-01-29 17:25:13.51278601 +0000 UTC m=+0.047158094 container died 97acf73e463552784eb7baeb7f0ecdf92bfa78f767192a16d8a8d15e05fa7fb4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c884cc1-e1d2-418b-8bb8-bae78dab7018, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Jan 29 12:25:13 np0005601226 nova_compute[239456]: 2026-01-29 17:25:13.513 239460 DEBUG nova.objects.instance [None req-9f099b67-5549-4e68-96de-1feddca47b41 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Lazy-loading 'resources' on Instance uuid 54ae1aee-2aec-49fb-981c-904cceb59a9d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:25:13 np0005601226 nova_compute[239456]: 2026-01-29 17:25:13.569 239460 DEBUG nova.virt.libvirt.vif [None req-9f099b67-5549-4e68-96de-1feddca47b41 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-29T17:23:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestStampPattern-server-1377494089',display_name='tempest-TestStampPattern-server-1377494089',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-1377494089',id=10,image_ref='71879218-5462-43bb-aba6-6319695b24fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCntQFYGg1tN9Lkltvq06uP6PbTSdiUSw2rpV4DVMQfDXGCpCCbqNspsVT5fc2Gf5/3l4zc3WW9mGuuTy6awOxbpJd54hg8vvKJT9WsymmM3odJoG0L/624VsKwRCgcRrg==',key_name='tempest-TestStampPattern-1877159583',keypairs=<?>,launch_index=0,launched_at=2026-01-29T17:23:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f2a1daea29d845c4b1c58f0e6610e767',ramdisk_id='',reservation_id='r-l08m7xok',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='71879218-5462-43bb-aba6-6319695b24fd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestStampPattern-907219493',owner_user_name='tempest-TestStampPattern-907219493-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-29T17:24:10Z,user_data=None,user_id='66a034221acf4c559a731fcc84a54c53',uuid=54ae1aee-2aec-49fb-981c-904cceb59a9d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "dd0e38fb-6c55-46b2-944f-3b2cf8f87929", "address": "fa:16:3e:c3:48:d2", "network": {"id": "3c884cc1-e1d2-418b-8bb8-bae78dab7018", "bridge": "br-int", "label": "tempest-TestStampPattern-2089746871-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f2a1daea29d845c4b1c58f0e6610e767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd0e38fb-6c", "ovs_interfaceid": "dd0e38fb-6c55-46b2-944f-3b2cf8f87929", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 29 12:25:13 np0005601226 nova_compute[239456]: 2026-01-29 17:25:13.569 239460 DEBUG nova.network.os_vif_util [None req-9f099b67-5549-4e68-96de-1feddca47b41 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Converting VIF {"id": "dd0e38fb-6c55-46b2-944f-3b2cf8f87929", "address": "fa:16:3e:c3:48:d2", "network": {"id": "3c884cc1-e1d2-418b-8bb8-bae78dab7018", "bridge": "br-int", "label": "tempest-TestStampPattern-2089746871-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f2a1daea29d845c4b1c58f0e6610e767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd0e38fb-6c", "ovs_interfaceid": "dd0e38fb-6c55-46b2-944f-3b2cf8f87929", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 29 12:25:13 np0005601226 nova_compute[239456]: 2026-01-29 17:25:13.570 239460 DEBUG nova.network.os_vif_util [None req-9f099b67-5549-4e68-96de-1feddca47b41 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c3:48:d2,bridge_name='br-int',has_traffic_filtering=True,id=dd0e38fb-6c55-46b2-944f-3b2cf8f87929,network=Network(3c884cc1-e1d2-418b-8bb8-bae78dab7018),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdd0e38fb-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 29 12:25:13 np0005601226 nova_compute[239456]: 2026-01-29 17:25:13.570 239460 DEBUG os_vif [None req-9f099b67-5549-4e68-96de-1feddca47b41 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c3:48:d2,bridge_name='br-int',has_traffic_filtering=True,id=dd0e38fb-6c55-46b2-944f-3b2cf8f87929,network=Network(3c884cc1-e1d2-418b-8bb8-bae78dab7018),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdd0e38fb-6c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 29 12:25:13 np0005601226 nova_compute[239456]: 2026-01-29 17:25:13.572 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:25:13 np0005601226 nova_compute[239456]: 2026-01-29 17:25:13.572 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdd0e38fb-6c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:25:13 np0005601226 nova_compute[239456]: 2026-01-29 17:25:13.574 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:25:13 np0005601226 nova_compute[239456]: 2026-01-29 17:25:13.575 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:25:13 np0005601226 nova_compute[239456]: 2026-01-29 17:25:13.577 239460 INFO os_vif [None req-9f099b67-5549-4e68-96de-1feddca47b41 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c3:48:d2,bridge_name='br-int',has_traffic_filtering=True,id=dd0e38fb-6c55-46b2-944f-3b2cf8f87929,network=Network(3c884cc1-e1d2-418b-8bb8-bae78dab7018),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdd0e38fb-6c')#033[00m
Jan 29 12:25:13 np0005601226 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-97acf73e463552784eb7baeb7f0ecdf92bfa78f767192a16d8a8d15e05fa7fb4-userdata-shm.mount: Deactivated successfully.
Jan 29 12:25:13 np0005601226 systemd[1]: var-lib-containers-storage-overlay-3b1dfeb348e53290b247eb3503bebcb768db27b328c79781e9d5eabbffaae789-merged.mount: Deactivated successfully.
Jan 29 12:25:13 np0005601226 podman[260224]: 2026-01-29 17:25:13.666181194 +0000 UTC m=+0.200553278 container cleanup 97acf73e463552784eb7baeb7f0ecdf92bfa78f767192a16d8a8d15e05fa7fb4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c884cc1-e1d2-418b-8bb8-bae78dab7018, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Jan 29 12:25:13 np0005601226 podman[260280]: 2026-01-29 17:25:13.78039205 +0000 UTC m=+0.097484790 container remove 97acf73e463552784eb7baeb7f0ecdf92bfa78f767192a16d8a8d15e05fa7fb4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c884cc1-e1d2-418b-8bb8-bae78dab7018, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 29 12:25:13 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1361: 305 pgs: 305 active+clean; 212 MiB data, 370 MiB used, 60 GiB / 60 GiB avail; 116 KiB/s rd, 9.5 KiB/s wr, 168 op/s
Jan 29 12:25:13 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:25:13.783 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[c3617539-f3d7-41cd-82b9-f1194e3853d9]: (4, ('Thu Jan 29 05:25:13 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-3c884cc1-e1d2-418b-8bb8-bae78dab7018 (97acf73e463552784eb7baeb7f0ecdf92bfa78f767192a16d8a8d15e05fa7fb4)\n97acf73e463552784eb7baeb7f0ecdf92bfa78f767192a16d8a8d15e05fa7fb4\nThu Jan 29 05:25:13 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-3c884cc1-e1d2-418b-8bb8-bae78dab7018 (97acf73e463552784eb7baeb7f0ecdf92bfa78f767192a16d8a8d15e05fa7fb4)\n97acf73e463552784eb7baeb7f0ecdf92bfa78f767192a16d8a8d15e05fa7fb4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:25:13 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:25:13.785 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[ebc2e205-61b4-46cb-b259-ea039a5eb886]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:25:13 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:25:13.785 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c884cc1-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:25:13 np0005601226 kernel: tap3c884cc1-e0: left promiscuous mode
Jan 29 12:25:13 np0005601226 nova_compute[239456]: 2026-01-29 17:25:13.787 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:25:13 np0005601226 systemd[1]: libpod-conmon-97acf73e463552784eb7baeb7f0ecdf92bfa78f767192a16d8a8d15e05fa7fb4.scope: Deactivated successfully.
Jan 29 12:25:13 np0005601226 nova_compute[239456]: 2026-01-29 17:25:13.792 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:25:13 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:25:13.795 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[5dfe5ea8-0bb6-4e1e-b485-6db0543178c5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:25:13 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:25:13.809 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[e0ab1868-a395-4310-8d82-a8849a2df9f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:25:13 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:25:13.811 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[373126e6-5836-416e-9aea-99f36289c5fc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:25:13 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:25:13.822 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[8fe2ea9e-2c53-4a5b-a92f-5596b3cad3f8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 477052, 'reachable_time': 43793, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260295, 'error': None, 'target': 'ovnmeta-3c884cc1-e1d2-418b-8bb8-bae78dab7018', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:25:13 np0005601226 systemd[1]: run-netns-ovnmeta\x2d3c884cc1\x2de1d2\x2d418b\x2d8bb8\x2dbae78dab7018.mount: Deactivated successfully.
Jan 29 12:25:13 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:25:13.825 156164 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3c884cc1-e1d2-418b-8bb8-bae78dab7018 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 29 12:25:13 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:25:13.825 156164 DEBUG oslo.privsep.daemon [-] privsep: reply[b36bee7f-2fec-4296-bd40-6d5d308091de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:25:14 np0005601226 nova_compute[239456]: 2026-01-29 17:25:14.202 239460 INFO nova.virt.libvirt.driver [None req-9f099b67-5549-4e68-96de-1feddca47b41 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Deleting instance files /var/lib/nova/instances/54ae1aee-2aec-49fb-981c-904cceb59a9d_del#033[00m
Jan 29 12:25:14 np0005601226 nova_compute[239456]: 2026-01-29 17:25:14.202 239460 INFO nova.virt.libvirt.driver [None req-9f099b67-5549-4e68-96de-1feddca47b41 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Deletion of /var/lib/nova/instances/54ae1aee-2aec-49fb-981c-904cceb59a9d_del complete#033[00m
Jan 29 12:25:14 np0005601226 nova_compute[239456]: 2026-01-29 17:25:14.471 239460 INFO nova.compute.manager [None req-9f099b67-5549-4e68-96de-1feddca47b41 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Took 1.19 seconds to destroy the instance on the hypervisor.#033[00m
Jan 29 12:25:14 np0005601226 nova_compute[239456]: 2026-01-29 17:25:14.472 239460 DEBUG oslo.service.loopingcall [None req-9f099b67-5549-4e68-96de-1feddca47b41 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 29 12:25:14 np0005601226 nova_compute[239456]: 2026-01-29 17:25:14.472 239460 DEBUG nova.compute.manager [-] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 29 12:25:14 np0005601226 nova_compute[239456]: 2026-01-29 17:25:14.473 239460 DEBUG nova.network.neutron [-] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 29 12:25:14 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e303 do_prune osdmap full prune enabled
Jan 29 12:25:14 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e304 e304: 3 total, 3 up, 3 in
Jan 29 12:25:14 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e304: 3 total, 3 up, 3 in
Jan 29 12:25:14 np0005601226 nova_compute[239456]: 2026-01-29 17:25:14.753 239460 DEBUG nova.network.neutron [req-878e9454-7f4e-42dc-9a08-77f1eba817fe req-3882ca47-7918-4832-8c12-140b9f119659 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Updated VIF entry in instance network info cache for port dd0e38fb-6c55-46b2-944f-3b2cf8f87929. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 29 12:25:14 np0005601226 nova_compute[239456]: 2026-01-29 17:25:14.753 239460 DEBUG nova.network.neutron [req-878e9454-7f4e-42dc-9a08-77f1eba817fe req-3882ca47-7918-4832-8c12-140b9f119659 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Updating instance_info_cache with network_info: [{"id": "dd0e38fb-6c55-46b2-944f-3b2cf8f87929", "address": "fa:16:3e:c3:48:d2", "network": {"id": "3c884cc1-e1d2-418b-8bb8-bae78dab7018", "bridge": "br-int", "label": "tempest-TestStampPattern-2089746871-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f2a1daea29d845c4b1c58f0e6610e767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd0e38fb-6c", "ovs_interfaceid": "dd0e38fb-6c55-46b2-944f-3b2cf8f87929", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:25:14 np0005601226 nova_compute[239456]: 2026-01-29 17:25:14.778 239460 DEBUG oslo_concurrency.lockutils [req-878e9454-7f4e-42dc-9a08-77f1eba817fe req-3882ca47-7918-4832-8c12-140b9f119659 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Releasing lock "refresh_cache-54ae1aee-2aec-49fb-981c-904cceb59a9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:25:15 np0005601226 nova_compute[239456]: 2026-01-29 17:25:15.236 239460 DEBUG nova.network.neutron [-] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:25:15 np0005601226 nova_compute[239456]: 2026-01-29 17:25:15.254 239460 INFO nova.compute.manager [-] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Took 0.78 seconds to deallocate network for instance.#033[00m
Jan 29 12:25:15 np0005601226 nova_compute[239456]: 2026-01-29 17:25:15.292 239460 DEBUG nova.compute.manager [req-55acbb18-cbc7-48e8-9917-0824966bac06 req-7c4ff8fb-b5f8-4d30-bad0-198aa38501f3 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Received event network-vif-unplugged-dd0e38fb-6c55-46b2-944f-3b2cf8f87929 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:25:15 np0005601226 nova_compute[239456]: 2026-01-29 17:25:15.292 239460 DEBUG oslo_concurrency.lockutils [req-55acbb18-cbc7-48e8-9917-0824966bac06 req-7c4ff8fb-b5f8-4d30-bad0-198aa38501f3 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "54ae1aee-2aec-49fb-981c-904cceb59a9d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:25:15 np0005601226 nova_compute[239456]: 2026-01-29 17:25:15.292 239460 DEBUG oslo_concurrency.lockutils [req-55acbb18-cbc7-48e8-9917-0824966bac06 req-7c4ff8fb-b5f8-4d30-bad0-198aa38501f3 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "54ae1aee-2aec-49fb-981c-904cceb59a9d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:25:15 np0005601226 nova_compute[239456]: 2026-01-29 17:25:15.293 239460 DEBUG oslo_concurrency.lockutils [req-55acbb18-cbc7-48e8-9917-0824966bac06 req-7c4ff8fb-b5f8-4d30-bad0-198aa38501f3 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "54ae1aee-2aec-49fb-981c-904cceb59a9d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:25:15 np0005601226 nova_compute[239456]: 2026-01-29 17:25:15.293 239460 DEBUG nova.compute.manager [req-55acbb18-cbc7-48e8-9917-0824966bac06 req-7c4ff8fb-b5f8-4d30-bad0-198aa38501f3 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] No waiting events found dispatching network-vif-unplugged-dd0e38fb-6c55-46b2-944f-3b2cf8f87929 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 29 12:25:15 np0005601226 nova_compute[239456]: 2026-01-29 17:25:15.293 239460 DEBUG nova.compute.manager [req-55acbb18-cbc7-48e8-9917-0824966bac06 req-7c4ff8fb-b5f8-4d30-bad0-198aa38501f3 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Received event network-vif-unplugged-dd0e38fb-6c55-46b2-944f-3b2cf8f87929 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 29 12:25:15 np0005601226 nova_compute[239456]: 2026-01-29 17:25:15.293 239460 DEBUG nova.compute.manager [req-55acbb18-cbc7-48e8-9917-0824966bac06 req-7c4ff8fb-b5f8-4d30-bad0-198aa38501f3 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Received event network-vif-plugged-dd0e38fb-6c55-46b2-944f-3b2cf8f87929 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:25:15 np0005601226 nova_compute[239456]: 2026-01-29 17:25:15.294 239460 DEBUG oslo_concurrency.lockutils [req-55acbb18-cbc7-48e8-9917-0824966bac06 req-7c4ff8fb-b5f8-4d30-bad0-198aa38501f3 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "54ae1aee-2aec-49fb-981c-904cceb59a9d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:25:15 np0005601226 nova_compute[239456]: 2026-01-29 17:25:15.294 239460 DEBUG oslo_concurrency.lockutils [req-55acbb18-cbc7-48e8-9917-0824966bac06 req-7c4ff8fb-b5f8-4d30-bad0-198aa38501f3 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "54ae1aee-2aec-49fb-981c-904cceb59a9d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:25:15 np0005601226 nova_compute[239456]: 2026-01-29 17:25:15.294 239460 DEBUG oslo_concurrency.lockutils [req-55acbb18-cbc7-48e8-9917-0824966bac06 req-7c4ff8fb-b5f8-4d30-bad0-198aa38501f3 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "54ae1aee-2aec-49fb-981c-904cceb59a9d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:25:15 np0005601226 nova_compute[239456]: 2026-01-29 17:25:15.294 239460 DEBUG nova.compute.manager [req-55acbb18-cbc7-48e8-9917-0824966bac06 req-7c4ff8fb-b5f8-4d30-bad0-198aa38501f3 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] No waiting events found dispatching network-vif-plugged-dd0e38fb-6c55-46b2-944f-3b2cf8f87929 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 29 12:25:15 np0005601226 nova_compute[239456]: 2026-01-29 17:25:15.294 239460 WARNING nova.compute.manager [req-55acbb18-cbc7-48e8-9917-0824966bac06 req-7c4ff8fb-b5f8-4d30-bad0-198aa38501f3 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Received unexpected event network-vif-plugged-dd0e38fb-6c55-46b2-944f-3b2cf8f87929 for instance with vm_state active and task_state deleting.#033[00m
Jan 29 12:25:15 np0005601226 nova_compute[239456]: 2026-01-29 17:25:15.295 239460 DEBUG nova.compute.manager [req-55acbb18-cbc7-48e8-9917-0824966bac06 req-7c4ff8fb-b5f8-4d30-bad0-198aa38501f3 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Received event network-vif-deleted-dd0e38fb-6c55-46b2-944f-3b2cf8f87929 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:25:15 np0005601226 nova_compute[239456]: 2026-01-29 17:25:15.302 239460 DEBUG oslo_concurrency.lockutils [None req-9f099b67-5549-4e68-96de-1feddca47b41 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:25:15 np0005601226 nova_compute[239456]: 2026-01-29 17:25:15.302 239460 DEBUG oslo_concurrency.lockutils [None req-9f099b67-5549-4e68-96de-1feddca47b41 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:25:15 np0005601226 nova_compute[239456]: 2026-01-29 17:25:15.354 239460 DEBUG oslo_concurrency.processutils [None req-9f099b67-5549-4e68-96de-1feddca47b41 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:25:15 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e304 do_prune osdmap full prune enabled
Jan 29 12:25:15 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e305 e305: 3 total, 3 up, 3 in
Jan 29 12:25:15 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e305: 3 total, 3 up, 3 in
Jan 29 12:25:15 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1364: 305 pgs: 305 active+clean; 151 MiB data, 342 MiB used, 60 GiB / 60 GiB avail; 223 KiB/s rd, 11 KiB/s wr, 311 op/s
Jan 29 12:25:15 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:25:15 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3584645270' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:25:15 np0005601226 nova_compute[239456]: 2026-01-29 17:25:15.897 239460 DEBUG oslo_concurrency.processutils [None req-9f099b67-5549-4e68-96de-1feddca47b41 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:25:15 np0005601226 nova_compute[239456]: 2026-01-29 17:25:15.901 239460 DEBUG nova.compute.provider_tree [None req-9f099b67-5549-4e68-96de-1feddca47b41 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:25:15 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:25:15 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3991616824' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:25:15 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:25:15 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3991616824' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:25:15 np0005601226 nova_compute[239456]: 2026-01-29 17:25:15.919 239460 DEBUG nova.scheduler.client.report [None req-9f099b67-5549-4e68-96de-1feddca47b41 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:25:15 np0005601226 nova_compute[239456]: 2026-01-29 17:25:15.940 239460 DEBUG oslo_concurrency.lockutils [None req-9f099b67-5549-4e68-96de-1feddca47b41 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.638s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:25:15 np0005601226 nova_compute[239456]: 2026-01-29 17:25:15.974 239460 INFO nova.scheduler.client.report [None req-9f099b67-5549-4e68-96de-1feddca47b41 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Deleted allocations for instance 54ae1aee-2aec-49fb-981c-904cceb59a9d#033[00m
Jan 29 12:25:16 np0005601226 nova_compute[239456]: 2026-01-29 17:25:16.053 239460 DEBUG oslo_concurrency.lockutils [None req-9f099b67-5549-4e68-96de-1feddca47b41 66a034221acf4c559a731fcc84a54c53 f2a1daea29d845c4b1c58f0e6610e767 - - default default] Lock "54ae1aee-2aec-49fb-981c-904cceb59a9d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.774s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:25:16 np0005601226 nova_compute[239456]: 2026-01-29 17:25:16.170 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:25:16 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:25:16 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:25:16 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/581156934' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:25:16 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:25:16 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1295999236' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:25:16 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:25:16 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1295999236' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:25:17 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e305 do_prune osdmap full prune enabled
Jan 29 12:25:17 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e306 e306: 3 total, 3 up, 3 in
Jan 29 12:25:17 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e306: 3 total, 3 up, 3 in
Jan 29 12:25:17 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1366: 305 pgs: 305 active+clean; 151 MiB data, 342 MiB used, 60 GiB / 60 GiB avail; 171 KiB/s rd, 8.2 KiB/s wr, 238 op/s
Jan 29 12:25:17 np0005601226 podman[260321]: 2026-01-29 17:25:17.879883059 +0000 UTC m=+0.048755716 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 29 12:25:17 np0005601226 podman[260322]: 2026-01-29 17:25:17.953242612 +0000 UTC m=+0.123616199 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, 
managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 29 12:25:18 np0005601226 nova_compute[239456]: 2026-01-29 17:25:18.575 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:25:18 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e306 do_prune osdmap full prune enabled
Jan 29 12:25:18 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e307 e307: 3 total, 3 up, 3 in
Jan 29 12:25:18 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e307: 3 total, 3 up, 3 in
Jan 29 12:25:19 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1368: 305 pgs: 305 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 113 KiB/s rd, 9.2 KiB/s wr, 160 op/s
Jan 29 12:25:20 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:25:20 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2002516305' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:25:20 np0005601226 nova_compute[239456]: 2026-01-29 17:25:20.533 239460 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769707505.5320454, 0ac4b31b-2f69-4c16-997b-57dc53aa29b2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:25:20 np0005601226 nova_compute[239456]: 2026-01-29 17:25:20.533 239460 INFO nova.compute.manager [-] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] VM Stopped (Lifecycle Event)#033[00m
Jan 29 12:25:20 np0005601226 nova_compute[239456]: 2026-01-29 17:25:20.566 239460 DEBUG nova.compute.manager [None req-a6361e88-e590-4db2-9880-d6c40b65cd7b - - - - - -] [instance: 0ac4b31b-2f69-4c16-997b-57dc53aa29b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:25:20 np0005601226 nova_compute[239456]: 2026-01-29 17:25:20.943 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:25:21 np0005601226 nova_compute[239456]: 2026-01-29 17:25:21.010 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:25:21 np0005601226 nova_compute[239456]: 2026-01-29 17:25:21.172 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:25:21 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:25:21 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e307 do_prune osdmap full prune enabled
Jan 29 12:25:21 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e308 e308: 3 total, 3 up, 3 in
Jan 29 12:25:21 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e308: 3 total, 3 up, 3 in
Jan 29 12:25:21 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1370: 305 pgs: 305 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 98 KiB/s rd, 8.0 KiB/s wr, 139 op/s
Jan 29 12:25:22 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e308 do_prune osdmap full prune enabled
Jan 29 12:25:22 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e309 e309: 3 total, 3 up, 3 in
Jan 29 12:25:22 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e309: 3 total, 3 up, 3 in
Jan 29 12:25:22 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:25:22 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4041102906' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:25:22 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:25:22 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4041102906' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:25:23 np0005601226 nova_compute[239456]: 2026-01-29 17:25:23.576 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:25:23 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1372: 305 pgs: 305 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 159 KiB/s rd, 11 KiB/s wr, 220 op/s
Jan 29 12:25:23 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:25:23 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/798225657' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:25:23 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:25:23 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/798225657' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:25:24 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:25:24 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1026597681' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:25:24 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e309 do_prune osdmap full prune enabled
Jan 29 12:25:24 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e310 e310: 3 total, 3 up, 3 in
Jan 29 12:25:24 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e310: 3 total, 3 up, 3 in
Jan 29 12:25:24 np0005601226 nova_compute[239456]: 2026-01-29 17:25:24.600 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:25:25 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:25:25 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2326735014' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:25:25 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:25:25 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2326735014' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:25:25 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e310 do_prune osdmap full prune enabled
Jan 29 12:25:25 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e311 e311: 3 total, 3 up, 3 in
Jan 29 12:25:25 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e311: 3 total, 3 up, 3 in
Jan 29 12:25:25 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1375: 305 pgs: 305 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 8.7 KiB/s wr, 145 op/s
Jan 29 12:25:26 np0005601226 nova_compute[239456]: 2026-01-29 17:25:26.173 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:25:26 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e311 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:25:26 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e311 do_prune osdmap full prune enabled
Jan 29 12:25:26 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e312 e312: 3 total, 3 up, 3 in
Jan 29 12:25:26 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e312: 3 total, 3 up, 3 in
Jan 29 12:25:26 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:25:26 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3059460132' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:25:26 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:25:26 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3059460132' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:25:27 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e312 do_prune osdmap full prune enabled
Jan 29 12:25:27 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e313 e313: 3 total, 3 up, 3 in
Jan 29 12:25:27 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e313: 3 total, 3 up, 3 in
Jan 29 12:25:27 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1378: 305 pgs: 305 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 4.5 KiB/s wr, 36 op/s
Jan 29 12:25:27 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:25:27 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3489115857' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:25:27 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:25:27 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3489115857' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:25:28 np0005601226 nova_compute[239456]: 2026-01-29 17:25:28.506 239460 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769707513.5055974, 54ae1aee-2aec-49fb-981c-904cceb59a9d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:25:28 np0005601226 nova_compute[239456]: 2026-01-29 17:25:28.507 239460 INFO nova.compute.manager [-] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] VM Stopped (Lifecycle Event)#033[00m
Jan 29 12:25:28 np0005601226 nova_compute[239456]: 2026-01-29 17:25:28.536 239460 DEBUG nova.compute.manager [None req-a5f1fa01-2837-4241-9cd7-d8f9ac103e32 - - - - - -] [instance: 54ae1aee-2aec-49fb-981c-904cceb59a9d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:25:28 np0005601226 nova_compute[239456]: 2026-01-29 17:25:28.579 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:25:29 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:25:29 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4235882254' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:25:29 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:25:29 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4235882254' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:25:29 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e313 do_prune osdmap full prune enabled
Jan 29 12:25:29 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e314 e314: 3 total, 3 up, 3 in
Jan 29 12:25:29 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e314: 3 total, 3 up, 3 in
Jan 29 12:25:29 np0005601226 nova_compute[239456]: 2026-01-29 17:25:29.604 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:25:29 np0005601226 nova_compute[239456]: 2026-01-29 17:25:29.604 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:25:29 np0005601226 nova_compute[239456]: 2026-01-29 17:25:29.604 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 29 12:25:29 np0005601226 nova_compute[239456]: 2026-01-29 17:25:29.604 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:25:29 np0005601226 nova_compute[239456]: 2026-01-29 17:25:29.628 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:25:29 np0005601226 nova_compute[239456]: 2026-01-29 17:25:29.628 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:25:29 np0005601226 nova_compute[239456]: 2026-01-29 17:25:29.628 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:25:29 np0005601226 nova_compute[239456]: 2026-01-29 17:25:29.628 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 29 12:25:29 np0005601226 nova_compute[239456]: 2026-01-29 17:25:29.629 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:25:29 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1380: 305 pgs: 305 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 208 KiB/s rd, 7.2 KiB/s wr, 279 op/s
Jan 29 12:25:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:25:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3336185616' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:25:30 np0005601226 nova_compute[239456]: 2026-01-29 17:25:30.139 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:25:30 np0005601226 nova_compute[239456]: 2026-01-29 17:25:30.268 239460 WARNING nova.virt.libvirt.driver [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 29 12:25:30 np0005601226 nova_compute[239456]: 2026-01-29 17:25:30.269 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4558MB free_disk=59.98821992892772GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 29 12:25:30 np0005601226 nova_compute[239456]: 2026-01-29 17:25:30.269 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:25:30 np0005601226 nova_compute[239456]: 2026-01-29 17:25:30.269 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:25:30 np0005601226 nova_compute[239456]: 2026-01-29 17:25:30.323 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 29 12:25:30 np0005601226 nova_compute[239456]: 2026-01-29 17:25:30.323 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 29 12:25:30 np0005601226 nova_compute[239456]: 2026-01-29 17:25:30.340 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:25:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:25:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3536290450' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:25:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:25:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3536290450' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:25:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:25:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/203818863' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:25:30 np0005601226 nova_compute[239456]: 2026-01-29 17:25:30.893 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:25:30 np0005601226 nova_compute[239456]: 2026-01-29 17:25:30.897 239460 DEBUG nova.compute.provider_tree [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:25:30 np0005601226 nova_compute[239456]: 2026-01-29 17:25:30.912 239460 DEBUG nova.scheduler.client.report [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:25:30 np0005601226 nova_compute[239456]: 2026-01-29 17:25:30.944 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 29 12:25:30 np0005601226 nova_compute[239456]: 2026-01-29 17:25:30.944 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.675s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:25:31 np0005601226 nova_compute[239456]: 2026-01-29 17:25:31.175 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:25:31 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:25:31 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e314 do_prune osdmap full prune enabled
Jan 29 12:25:31 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e315 e315: 3 total, 3 up, 3 in
Jan 29 12:25:31 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e315: 3 total, 3 up, 3 in
Jan 29 12:25:31 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1382: 305 pgs: 305 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 170 KiB/s rd, 5.9 KiB/s wr, 227 op/s
Jan 29 12:25:31 np0005601226 nova_compute[239456]: 2026-01-29 17:25:31.944 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 12:25:31 np0005601226 nova_compute[239456]: 2026-01-29 17:25:31.945 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 12:25:31 np0005601226 nova_compute[239456]: 2026-01-29 17:25:31.945 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 29 12:25:31 np0005601226 nova_compute[239456]: 2026-01-29 17:25:31.962 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 29 12:25:31 np0005601226 nova_compute[239456]: 2026-01-29 17:25:31.963 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 12:25:32 np0005601226 nova_compute[239456]: 2026-01-29 17:25:32.604 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 12:25:33 np0005601226 nova_compute[239456]: 2026-01-29 17:25:33.581 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:25:33 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e315 do_prune osdmap full prune enabled
Jan 29 12:25:33 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e316 e316: 3 total, 3 up, 3 in
Jan 29 12:25:33 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e316: 3 total, 3 up, 3 in
Jan 29 12:25:33 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1384: 305 pgs: 305 active+clean; 88 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 183 KiB/s rd, 5.8 KiB/s wr, 245 op/s
Jan 29 12:25:34 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e316 do_prune osdmap full prune enabled
Jan 29 12:25:34 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e317 e317: 3 total, 3 up, 3 in
Jan 29 12:25:34 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e317: 3 total, 3 up, 3 in
Jan 29 12:25:35 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e317 do_prune osdmap full prune enabled
Jan 29 12:25:35 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e318 e318: 3 total, 3 up, 3 in
Jan 29 12:25:35 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e318: 3 total, 3 up, 3 in
Jan 29 12:25:35 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1387: 305 pgs: 305 active+clean; 88 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 3.2 KiB/s wr, 111 op/s
Jan 29 12:25:36 np0005601226 nova_compute[239456]: 2026-01-29 17:25:36.177 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:25:36 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e318 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:25:36 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e318 do_prune osdmap full prune enabled
Jan 29 12:25:36 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e319 e319: 3 total, 3 up, 3 in
Jan 29 12:25:36 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e319: 3 total, 3 up, 3 in
Jan 29 12:25:37 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:25:37 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3527382548' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:25:37 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:25:37 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3527382548' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:25:37 np0005601226 nova_compute[239456]: 2026-01-29 17:25:37.604 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 12:25:37 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e319 do_prune osdmap full prune enabled
Jan 29 12:25:37 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e320 e320: 3 total, 3 up, 3 in
Jan 29 12:25:37 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e320: 3 total, 3 up, 3 in
Jan 29 12:25:37 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1390: 305 pgs: 305 active+clean; 88 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 2.5 KiB/s wr, 53 op/s
Jan 29 12:25:38 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:25:38 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/175497223' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:25:38 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:25:38 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/175497223' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:25:38 np0005601226 nova_compute[239456]: 2026-01-29 17:25:38.585 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:25:39 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e320 do_prune osdmap full prune enabled
Jan 29 12:25:39 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e321 e321: 3 total, 3 up, 3 in
Jan 29 12:25:39 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e321: 3 total, 3 up, 3 in
Jan 29 12:25:39 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1392: 305 pgs: 305 active+clean; 88 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 102 KiB/s rd, 5.5 KiB/s wr, 136 op/s
Jan 29 12:25:40 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:25:40 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3329702546' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:25:40 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:25:40 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3329702546' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:25:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:25:40.286 155625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 12:25:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:25:40.287 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 12:25:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:25:40.287 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 12:25:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:25:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:25:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:25:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:25:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:25:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:25:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2026-01-29_17:25:40
Jan 29 12:25:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 12:25:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] do_upmap
Jan 29 12:25:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.control', '.mgr', 'default.rgw.log', 'volumes', 'vms', 'images', 'default.rgw.meta', 'cephfs.cephfs.data', '.rgw.root', 'backups']
Jan 29 12:25:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 12:25:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 12:25:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 12:25:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:25:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:25:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:25:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:25:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:25:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:25:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:25:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:25:41 np0005601226 nova_compute[239456]: 2026-01-29 17:25:41.179 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:25:41 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 12:25:41 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 12:25:41 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 12:25:41 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 12:25:41 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 12:25:41 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:25:41 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 12:25:41 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 12:25:41 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 12:25:41 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 12:25:41 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 12:25:41 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 12:25:41 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:25:41 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e321 do_prune osdmap full prune enabled
Jan 29 12:25:41 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e322 e322: 3 total, 3 up, 3 in
Jan 29 12:25:41 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e322: 3 total, 3 up, 3 in
Jan 29 12:25:41 np0005601226 podman[260559]: 2026-01-29 17:25:41.616870989 +0000 UTC m=+0.038111310 container create a3731c5410977dd927bcb7966285bf999dfa0f744fa3666376c3a114cdf5a453 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_dhawan, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 12:25:41 np0005601226 systemd[1]: Started libpod-conmon-a3731c5410977dd927bcb7966285bf999dfa0f744fa3666376c3a114cdf5a453.scope.
Jan 29 12:25:41 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:25:41 np0005601226 podman[260559]: 2026-01-29 17:25:41.688324981 +0000 UTC m=+0.109565322 container init a3731c5410977dd927bcb7966285bf999dfa0f744fa3666376c3a114cdf5a453 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_dhawan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:25:41 np0005601226 podman[260559]: 2026-01-29 17:25:41.693369477 +0000 UTC m=+0.114609788 container start a3731c5410977dd927bcb7966285bf999dfa0f744fa3666376c3a114cdf5a453 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 29 12:25:41 np0005601226 podman[260559]: 2026-01-29 17:25:41.597866061 +0000 UTC m=+0.019106392 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:25:41 np0005601226 podman[260559]: 2026-01-29 17:25:41.697046295 +0000 UTC m=+0.118286606 container attach a3731c5410977dd927bcb7966285bf999dfa0f744fa3666376c3a114cdf5a453 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 29 12:25:41 np0005601226 eloquent_dhawan[260575]: 167 167
Jan 29 12:25:41 np0005601226 systemd[1]: libpod-a3731c5410977dd927bcb7966285bf999dfa0f744fa3666376c3a114cdf5a453.scope: Deactivated successfully.
Jan 29 12:25:41 np0005601226 podman[260559]: 2026-01-29 17:25:41.698914705 +0000 UTC m=+0.120155026 container died a3731c5410977dd927bcb7966285bf999dfa0f744fa3666376c3a114cdf5a453 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 29 12:25:41 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 12:25:41 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:25:41 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 12:25:41 np0005601226 systemd[1]: var-lib-containers-storage-overlay-32d14fdc9f754325b0a97b5c71d557b63d08425ad24fb7983d942165d82506c8-merged.mount: Deactivated successfully.
Jan 29 12:25:41 np0005601226 podman[260559]: 2026-01-29 17:25:41.746236372 +0000 UTC m=+0.167476683 container remove a3731c5410977dd927bcb7966285bf999dfa0f744fa3666376c3a114cdf5a453 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_dhawan, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:25:41 np0005601226 systemd[1]: libpod-conmon-a3731c5410977dd927bcb7966285bf999dfa0f744fa3666376c3a114cdf5a453.scope: Deactivated successfully.
Jan 29 12:25:41 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1394: 305 pgs: 305 active+clean; 88 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 4.5 KiB/s wr, 110 op/s
Jan 29 12:25:41 np0005601226 podman[260599]: 2026-01-29 17:25:41.884402268 +0000 UTC m=+0.038298875 container create 4fc9d7561c80491ed4cb353994cf9d6afece3366f747970169a1c87e5e6a2af0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 29 12:25:41 np0005601226 systemd[1]: Started libpod-conmon-4fc9d7561c80491ed4cb353994cf9d6afece3366f747970169a1c87e5e6a2af0.scope.
Jan 29 12:25:41 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:25:41 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eb37e6f6a911b03778563e77f37e4addfee624510669c3c9aa6472d7e3cf801/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:25:41 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eb37e6f6a911b03778563e77f37e4addfee624510669c3c9aa6472d7e3cf801/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:25:41 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eb37e6f6a911b03778563e77f37e4addfee624510669c3c9aa6472d7e3cf801/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:25:41 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eb37e6f6a911b03778563e77f37e4addfee624510669c3c9aa6472d7e3cf801/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:25:41 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eb37e6f6a911b03778563e77f37e4addfee624510669c3c9aa6472d7e3cf801/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 12:25:41 np0005601226 podman[260599]: 2026-01-29 17:25:41.865978475 +0000 UTC m=+0.019875122 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:25:41 np0005601226 podman[260599]: 2026-01-29 17:25:41.971742386 +0000 UTC m=+0.125639073 container init 4fc9d7561c80491ed4cb353994cf9d6afece3366f747970169a1c87e5e6a2af0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 29 12:25:41 np0005601226 podman[260599]: 2026-01-29 17:25:41.979232126 +0000 UTC m=+0.133128723 container start 4fc9d7561c80491ed4cb353994cf9d6afece3366f747970169a1c87e5e6a2af0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_bhaskara, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:25:41 np0005601226 podman[260599]: 2026-01-29 17:25:41.9827757 +0000 UTC m=+0.136672317 container attach 4fc9d7561c80491ed4cb353994cf9d6afece3366f747970169a1c87e5e6a2af0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_bhaskara, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Jan 29 12:25:42 np0005601226 friendly_bhaskara[260616]: --> passed data devices: 0 physical, 3 LVM
Jan 29 12:25:42 np0005601226 friendly_bhaskara[260616]: --> All data devices are unavailable
Jan 29 12:25:42 np0005601226 systemd[1]: libpod-4fc9d7561c80491ed4cb353994cf9d6afece3366f747970169a1c87e5e6a2af0.scope: Deactivated successfully.
Jan 29 12:25:42 np0005601226 podman[260636]: 2026-01-29 17:25:42.397762446 +0000 UTC m=+0.021658322 container died 4fc9d7561c80491ed4cb353994cf9d6afece3366f747970169a1c87e5e6a2af0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_bhaskara, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:25:42 np0005601226 systemd[1]: var-lib-containers-storage-overlay-5eb37e6f6a911b03778563e77f37e4addfee624510669c3c9aa6472d7e3cf801-merged.mount: Deactivated successfully.
Jan 29 12:25:42 np0005601226 podman[260636]: 2026-01-29 17:25:42.43456792 +0000 UTC m=+0.058463766 container remove 4fc9d7561c80491ed4cb353994cf9d6afece3366f747970169a1c87e5e6a2af0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_bhaskara, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 29 12:25:42 np0005601226 systemd[1]: libpod-conmon-4fc9d7561c80491ed4cb353994cf9d6afece3366f747970169a1c87e5e6a2af0.scope: Deactivated successfully.
Jan 29 12:25:42 np0005601226 podman[260712]: 2026-01-29 17:25:42.913387043 +0000 UTC m=+0.039419746 container create 21aff4b6abfce0ea5ae0d499ab75a90f0b03967659cf4c8cadd6fdda19460df0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_rhodes, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:25:42 np0005601226 systemd[1]: Started libpod-conmon-21aff4b6abfce0ea5ae0d499ab75a90f0b03967659cf4c8cadd6fdda19460df0.scope.
Jan 29 12:25:42 np0005601226 podman[260712]: 2026-01-29 17:25:42.896919842 +0000 UTC m=+0.022952555 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:25:42 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:25:43 np0005601226 podman[260712]: 2026-01-29 17:25:43.012000163 +0000 UTC m=+0.138032876 container init 21aff4b6abfce0ea5ae0d499ab75a90f0b03967659cf4c8cadd6fdda19460df0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_rhodes, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 29 12:25:43 np0005601226 podman[260712]: 2026-01-29 17:25:43.017621003 +0000 UTC m=+0.143653716 container start 21aff4b6abfce0ea5ae0d499ab75a90f0b03967659cf4c8cadd6fdda19460df0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_rhodes, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 12:25:43 np0005601226 podman[260712]: 2026-01-29 17:25:43.022340139 +0000 UTC m=+0.148372842 container attach 21aff4b6abfce0ea5ae0d499ab75a90f0b03967659cf4c8cadd6fdda19460df0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:25:43 np0005601226 sleepy_rhodes[260729]: 167 167
Jan 29 12:25:43 np0005601226 systemd[1]: libpod-21aff4b6abfce0ea5ae0d499ab75a90f0b03967659cf4c8cadd6fdda19460df0.scope: Deactivated successfully.
Jan 29 12:25:43 np0005601226 conmon[260729]: conmon 21aff4b6abfce0ea5ae0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-21aff4b6abfce0ea5ae0d499ab75a90f0b03967659cf4c8cadd6fdda19460df0.scope/container/memory.events
Jan 29 12:25:43 np0005601226 podman[260712]: 2026-01-29 17:25:43.02610518 +0000 UTC m=+0.152137883 container died 21aff4b6abfce0ea5ae0d499ab75a90f0b03967659cf4c8cadd6fdda19460df0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_rhodes, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 29 12:25:43 np0005601226 systemd[1]: var-lib-containers-storage-overlay-82459cb51345e03092fb12bd7f1288eaa2a3d740dd701d622ab1c0991a087e27-merged.mount: Deactivated successfully.
Jan 29 12:25:43 np0005601226 podman[260712]: 2026-01-29 17:25:43.066999764 +0000 UTC m=+0.193032487 container remove 21aff4b6abfce0ea5ae0d499ab75a90f0b03967659cf4c8cadd6fdda19460df0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_rhodes, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3)
Jan 29 12:25:43 np0005601226 systemd[1]: libpod-conmon-21aff4b6abfce0ea5ae0d499ab75a90f0b03967659cf4c8cadd6fdda19460df0.scope: Deactivated successfully.
Jan 29 12:25:43 np0005601226 podman[260753]: 2026-01-29 17:25:43.219168845 +0000 UTC m=+0.039779765 container create 0dde7364cb5fd99ea3c3d282e9285748ba16850823d492d8e2b9d39717d6701f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_davinci, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:25:43 np0005601226 systemd[1]: Started libpod-conmon-0dde7364cb5fd99ea3c3d282e9285748ba16850823d492d8e2b9d39717d6701f.scope.
Jan 29 12:25:43 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:25:43 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71cbf09107e66dba0ee460f83d3d75bf4803a3dd884a98756549a448afd488a6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:25:43 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71cbf09107e66dba0ee460f83d3d75bf4803a3dd884a98756549a448afd488a6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:25:43 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71cbf09107e66dba0ee460f83d3d75bf4803a3dd884a98756549a448afd488a6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:25:43 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71cbf09107e66dba0ee460f83d3d75bf4803a3dd884a98756549a448afd488a6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:25:43 np0005601226 podman[260753]: 2026-01-29 17:25:43.289456337 +0000 UTC m=+0.110067287 container init 0dde7364cb5fd99ea3c3d282e9285748ba16850823d492d8e2b9d39717d6701f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_davinci, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:25:43 np0005601226 podman[260753]: 2026-01-29 17:25:43.295045856 +0000 UTC m=+0.115656816 container start 0dde7364cb5fd99ea3c3d282e9285748ba16850823d492d8e2b9d39717d6701f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_davinci, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:25:43 np0005601226 podman[260753]: 2026-01-29 17:25:43.299175637 +0000 UTC m=+0.119786607 container attach 0dde7364cb5fd99ea3c3d282e9285748ba16850823d492d8e2b9d39717d6701f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_davinci, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 29 12:25:43 np0005601226 podman[260753]: 2026-01-29 17:25:43.202820938 +0000 UTC m=+0.023431868 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]: {
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:    "0": [
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:        {
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:            "devices": [
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:                "/dev/loop3"
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:            ],
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:            "lv_name": "ceph_lv0",
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:            "lv_size": "21470642176",
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:            "lv_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:            "name": "ceph_lv0",
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:            "tags": {
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:                "ceph.block_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:                "ceph.cluster_name": "ceph",
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:                "ceph.crush_device_class": "",
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:                "ceph.encrypted": "0",
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:                "ceph.objectstore": "bluestore",
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:                "ceph.osd_fsid": "0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1",
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:                "ceph.osd_id": "0",
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:                "ceph.type": "block",
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:                "ceph.vdo": "0",
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:                "ceph.with_tpm": "0"
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:            },
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:            "type": "block",
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:            "vg_name": "ceph_vg0"
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:        }
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:    ],
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:    "1": [
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:        {
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:            "devices": [
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:                "/dev/loop4"
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:            ],
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:            "lv_name": "ceph_lv1",
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:            "lv_size": "21470642176",
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b59b9ee3-7bef-4274-a8bf-0f9cce011ae7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:            "lv_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:            "name": "ceph_lv1",
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:            "tags": {
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:                "ceph.block_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:                "ceph.cluster_name": "ceph",
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:                "ceph.crush_device_class": "",
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:                "ceph.encrypted": "0",
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:                "ceph.objectstore": "bluestore",
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:                "ceph.osd_fsid": "b59b9ee3-7bef-4274-a8bf-0f9cce011ae7",
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:                "ceph.osd_id": "1",
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:                "ceph.type": "block",
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:                "ceph.vdo": "0",
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:                "ceph.with_tpm": "0"
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:            },
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:            "type": "block",
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:            "vg_name": "ceph_vg1"
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:        }
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:    ],
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:    "2": [
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:        {
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:            "devices": [
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:                "/dev/loop5"
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:            ],
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:            "lv_name": "ceph_lv2",
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:            "lv_size": "21470642176",
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=791d3808-828d-4f85-a3de-28df49f6a6ef,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:            "lv_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:            "name": "ceph_lv2",
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:            "tags": {
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:                "ceph.block_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:                "ceph.cluster_name": "ceph",
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:                "ceph.crush_device_class": "",
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:                "ceph.encrypted": "0",
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:                "ceph.objectstore": "bluestore",
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:                "ceph.osd_fsid": "791d3808-828d-4f85-a3de-28df49f6a6ef",
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:                "ceph.osd_id": "2",
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:                "ceph.type": "block",
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:                "ceph.vdo": "0",
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:                "ceph.with_tpm": "0"
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:            },
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:            "type": "block",
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:            "vg_name": "ceph_vg2"
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:        }
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]:    ]
Jan 29 12:25:43 np0005601226 crazy_davinci[260769]: }
Jan 29 12:25:43 np0005601226 systemd[1]: libpod-0dde7364cb5fd99ea3c3d282e9285748ba16850823d492d8e2b9d39717d6701f.scope: Deactivated successfully.
Jan 29 12:25:43 np0005601226 podman[260753]: 2026-01-29 17:25:43.568047442 +0000 UTC m=+0.388658372 container died 0dde7364cb5fd99ea3c3d282e9285748ba16850823d492d8e2b9d39717d6701f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 29 12:25:43 np0005601226 systemd[1]: var-lib-containers-storage-overlay-71cbf09107e66dba0ee460f83d3d75bf4803a3dd884a98756549a448afd488a6-merged.mount: Deactivated successfully.
Jan 29 12:25:43 np0005601226 nova_compute[239456]: 2026-01-29 17:25:43.588 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:25:43 np0005601226 podman[260753]: 2026-01-29 17:25:43.604715493 +0000 UTC m=+0.425326413 container remove 0dde7364cb5fd99ea3c3d282e9285748ba16850823d492d8e2b9d39717d6701f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_davinci, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 29 12:25:43 np0005601226 nova_compute[239456]: 2026-01-29 17:25:43.604 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:25:43 np0005601226 systemd[1]: libpod-conmon-0dde7364cb5fd99ea3c3d282e9285748ba16850823d492d8e2b9d39717d6701f.scope: Deactivated successfully.
Jan 29 12:25:43 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1395: 305 pgs: 305 active+clean; 88 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 91 KiB/s rd, 4.9 KiB/s wr, 122 op/s
Jan 29 12:25:43 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:25:43 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3234373371' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:25:44 np0005601226 podman[260851]: 2026-01-29 17:25:43.942114641 +0000 UTC m=+0.017635603 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:25:44 np0005601226 podman[260851]: 2026-01-29 17:25:44.039807086 +0000 UTC m=+0.115328028 container create 6d328e90e29778fbea85772e01990d8c2e36078cf9e4c7b6874cdb9fa75fe68e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 12:25:44 np0005601226 systemd[1]: Started libpod-conmon-6d328e90e29778fbea85772e01990d8c2e36078cf9e4c7b6874cdb9fa75fe68e.scope.
Jan 29 12:25:44 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:25:44 np0005601226 podman[260851]: 2026-01-29 17:25:44.103297004 +0000 UTC m=+0.178817976 container init 6d328e90e29778fbea85772e01990d8c2e36078cf9e4c7b6874cdb9fa75fe68e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2)
Jan 29 12:25:44 np0005601226 podman[260851]: 2026-01-29 17:25:44.107351293 +0000 UTC m=+0.182872245 container start 6d328e90e29778fbea85772e01990d8c2e36078cf9e4c7b6874cdb9fa75fe68e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_ptolemy, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 29 12:25:44 np0005601226 podman[260851]: 2026-01-29 17:25:44.110272701 +0000 UTC m=+0.185793653 container attach 6d328e90e29778fbea85772e01990d8c2e36078cf9e4c7b6874cdb9fa75fe68e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 29 12:25:44 np0005601226 systemd[1]: libpod-6d328e90e29778fbea85772e01990d8c2e36078cf9e4c7b6874cdb9fa75fe68e.scope: Deactivated successfully.
Jan 29 12:25:44 np0005601226 optimistic_ptolemy[260867]: 167 167
Jan 29 12:25:44 np0005601226 conmon[260867]: conmon 6d328e90e29778fbea85 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6d328e90e29778fbea85772e01990d8c2e36078cf9e4c7b6874cdb9fa75fe68e.scope/container/memory.events
Jan 29 12:25:44 np0005601226 podman[260851]: 2026-01-29 17:25:44.112250523 +0000 UTC m=+0.187771485 container died 6d328e90e29778fbea85772e01990d8c2e36078cf9e4c7b6874cdb9fa75fe68e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_ptolemy, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 29 12:25:44 np0005601226 systemd[1]: var-lib-containers-storage-overlay-9f653ad667c2aef73036fcacdee84ce59acb632c61b818e2f421e73006b0fbec-merged.mount: Deactivated successfully.
Jan 29 12:25:44 np0005601226 podman[260851]: 2026-01-29 17:25:44.144376773 +0000 UTC m=+0.219897715 container remove 6d328e90e29778fbea85772e01990d8c2e36078cf9e4c7b6874cdb9fa75fe68e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 29 12:25:44 np0005601226 systemd[1]: libpod-conmon-6d328e90e29778fbea85772e01990d8c2e36078cf9e4c7b6874cdb9fa75fe68e.scope: Deactivated successfully.
Jan 29 12:25:44 np0005601226 podman[260892]: 2026-01-29 17:25:44.252300091 +0000 UTC m=+0.033128727 container create 0d829dfb8d5dd0b1d381277cdcdd72308721291d25bd17a19419d598ce449bc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 29 12:25:44 np0005601226 systemd[1]: Started libpod-conmon-0d829dfb8d5dd0b1d381277cdcdd72308721291d25bd17a19419d598ce449bc8.scope.
Jan 29 12:25:44 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:25:44 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7b1ba1facdd939d685bc113ce730e8d883a94c44c0ef32c9441a251cd896dfa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:25:44 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7b1ba1facdd939d685bc113ce730e8d883a94c44c0ef32c9441a251cd896dfa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:25:44 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7b1ba1facdd939d685bc113ce730e8d883a94c44c0ef32c9441a251cd896dfa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:25:44 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7b1ba1facdd939d685bc113ce730e8d883a94c44c0ef32c9441a251cd896dfa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:25:44 np0005601226 podman[260892]: 2026-01-29 17:25:44.323682951 +0000 UTC m=+0.104511607 container init 0d829dfb8d5dd0b1d381277cdcdd72308721291d25bd17a19419d598ce449bc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 29 12:25:44 np0005601226 podman[260892]: 2026-01-29 17:25:44.330118774 +0000 UTC m=+0.110947410 container start 0d829dfb8d5dd0b1d381277cdcdd72308721291d25bd17a19419d598ce449bc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_lichterman, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:25:44 np0005601226 podman[260892]: 2026-01-29 17:25:44.333055132 +0000 UTC m=+0.113883788 container attach 0d829dfb8d5dd0b1d381277cdcdd72308721291d25bd17a19419d598ce449bc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_lichterman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 29 12:25:44 np0005601226 podman[260892]: 2026-01-29 17:25:44.237609808 +0000 UTC m=+0.018438444 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:25:44 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e322 do_prune osdmap full prune enabled
Jan 29 12:25:44 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e323 e323: 3 total, 3 up, 3 in
Jan 29 12:25:44 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e323: 3 total, 3 up, 3 in
Jan 29 12:25:44 np0005601226 lvm[260985]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 12:25:44 np0005601226 lvm[260987]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 12:25:44 np0005601226 lvm[260985]: VG ceph_vg0 finished
Jan 29 12:25:44 np0005601226 lvm[260987]: VG ceph_vg1 finished
Jan 29 12:25:44 np0005601226 lvm[260989]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 12:25:44 np0005601226 lvm[260989]: VG ceph_vg2 finished
Jan 29 12:25:44 np0005601226 lvm[260990]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 12:25:44 np0005601226 lvm[260990]: VG ceph_vg1 finished
Jan 29 12:25:44 np0005601226 intelligent_lichterman[260908]: {}
Jan 29 12:25:45 np0005601226 systemd[1]: libpod-0d829dfb8d5dd0b1d381277cdcdd72308721291d25bd17a19419d598ce449bc8.scope: Deactivated successfully.
Jan 29 12:25:45 np0005601226 podman[260892]: 2026-01-29 17:25:45.01126612 +0000 UTC m=+0.792094766 container died 0d829dfb8d5dd0b1d381277cdcdd72308721291d25bd17a19419d598ce449bc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_lichterman, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:25:45 np0005601226 systemd[1]: var-lib-containers-storage-overlay-d7b1ba1facdd939d685bc113ce730e8d883a94c44c0ef32c9441a251cd896dfa-merged.mount: Deactivated successfully.
Jan 29 12:25:45 np0005601226 podman[260892]: 2026-01-29 17:25:45.044986874 +0000 UTC m=+0.825815510 container remove 0d829dfb8d5dd0b1d381277cdcdd72308721291d25bd17a19419d598ce449bc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_lichterman, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True)
Jan 29 12:25:45 np0005601226 systemd[1]: libpod-conmon-0d829dfb8d5dd0b1d381277cdcdd72308721291d25bd17a19419d598ce449bc8.scope: Deactivated successfully.
Jan 29 12:25:45 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 12:25:45 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:25:45 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 12:25:45 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:25:45 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1397: 305 pgs: 305 active+clean; 88 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.8 KiB/s wr, 31 op/s
Jan 29 12:25:45 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e323 do_prune osdmap full prune enabled
Jan 29 12:25:45 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e324 e324: 3 total, 3 up, 3 in
Jan 29 12:25:45 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e324: 3 total, 3 up, 3 in
Jan 29 12:25:45 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:25:45 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:25:46 np0005601226 nova_compute[239456]: 2026-01-29 17:25:46.182 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:25:46 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:25:46 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e324 do_prune osdmap full prune enabled
Jan 29 12:25:46 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e325 e325: 3 total, 3 up, 3 in
Jan 29 12:25:46 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e325: 3 total, 3 up, 3 in
Jan 29 12:25:47 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:25:47 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/301679795' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:25:47 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1400: 305 pgs: 305 active+clean; 88 MiB data, 294 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.8 KiB/s wr, 31 op/s
Jan 29 12:25:48 np0005601226 nova_compute[239456]: 2026-01-29 17:25:48.594 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:25:48 np0005601226 podman[261030]: 2026-01-29 17:25:48.896800046 +0000 UTC m=+0.065439313 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:25:48 np0005601226 podman[261031]: 2026-01-29 17:25:48.915792353 +0000 UTC m=+0.084432950 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 29 12:25:49 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1401: 305 pgs: 305 active+clean; 228 MiB data, 378 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 23 MiB/s wr, 176 op/s
Jan 29 12:25:50 np0005601226 ceph-osd[85858]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Jan 29 12:25:51 np0005601226 nova_compute[239456]: 2026-01-29 17:25:51.182 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:25:51 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:25:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 12:25:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:25:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 12:25:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:25:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 7.945888713288448e-07 of space, bias 1.0, pg target 0.00023837666139865345 quantized to 32 (current 32)
Jan 29 12:25:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:25:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002701572478574383 of space, bias 1.0, pg target 0.8104717435723149 quantized to 32 (current 32)
Jan 29 12:25:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:25:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.300768421568302e-06 of space, bias 1.0, pg target 0.00039023052647049056 quantized to 32 (current 32)
Jan 29 12:25:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:25:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006669110879850254 of space, bias 1.0, pg target 0.20007332639550762 quantized to 32 (current 32)
Jan 29 12:25:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:25:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.5048300093595984e-06 of space, bias 4.0, pg target 0.001805796011231518 quantized to 16 (current 16)
Jan 29 12:25:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:25:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:25:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:25:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 29 12:25:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:25:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 29 12:25:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:25:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:25:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:25:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 29 12:25:51 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:25:51 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/312183949' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:25:51 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:25:51 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/312183949' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:25:51 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1402: 305 pgs: 305 active+clean; 228 MiB data, 378 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 20 MiB/s wr, 150 op/s
Jan 29 12:25:53 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e325 do_prune osdmap full prune enabled
Jan 29 12:25:53 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e326 e326: 3 total, 3 up, 3 in
Jan 29 12:25:53 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e326: 3 total, 3 up, 3 in
Jan 29 12:25:53 np0005601226 nova_compute[239456]: 2026-01-29 17:25:53.596 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:25:53 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1404: 305 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 298 active+clean; 546 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 57 MiB/s wr, 231 op/s
Jan 29 12:25:55 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1405: 305 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 298 active+clean; 750 MiB data, 901 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 68 MiB/s wr, 320 op/s
Jan 29 12:25:55 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:25:55 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2061425838' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:25:56 np0005601226 nova_compute[239456]: 2026-01-29 17:25:56.184 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:25:56 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e326 do_prune osdmap full prune enabled
Jan 29 12:25:56 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e327 e327: 3 total, 3 up, 3 in
Jan 29 12:25:56 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e327: 3 total, 3 up, 3 in
Jan 29 12:25:57 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e327 do_prune osdmap full prune enabled
Jan 29 12:25:57 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e328 e328: 3 total, 3 up, 3 in
Jan 29 12:25:57 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e328: 3 total, 3 up, 3 in
Jan 29 12:25:57 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1408: 305 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 298 active+clean; 750 MiB data, 901 MiB used, 59 GiB / 60 GiB avail; 201 KiB/s rd, 83 MiB/s wr, 324 op/s
Jan 29 12:25:58 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e328 do_prune osdmap full prune enabled
Jan 29 12:25:58 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e329 e329: 3 total, 3 up, 3 in
Jan 29 12:25:58 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e329: 3 total, 3 up, 3 in
Jan 29 12:25:58 np0005601226 nova_compute[239456]: 2026-01-29 17:25:58.600 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:25:59 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1410: 305 pgs: 305 active+clean; 1.2 GiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 103 MiB/s wr, 427 op/s
Jan 29 12:26:01 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:26:01 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1742912469' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:26:01 np0005601226 nova_compute[239456]: 2026-01-29 17:26:01.185 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:26:01 np0005601226 ovn_controller[145556]: 2026-01-29T17:26:01Z|00127|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Jan 29 12:26:01 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e329 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:26:01 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e329 do_prune osdmap full prune enabled
Jan 29 12:26:01 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e330 e330: 3 total, 3 up, 3 in
Jan 29 12:26:01 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e330: 3 total, 3 up, 3 in
Jan 29 12:26:01 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:26:01 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3798630685' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:26:01 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:26:01 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3798630685' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:26:01 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:26:01 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2375515375' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:26:01 np0005601226 ceph-osd[86917]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Jan 29 12:26:01 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1412: 305 pgs: 305 active+clean; 1.2 GiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 82 MiB/s wr, 268 op/s
Jan 29 12:26:02 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e330 do_prune osdmap full prune enabled
Jan 29 12:26:02 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e331 e331: 3 total, 3 up, 3 in
Jan 29 12:26:02 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e331: 3 total, 3 up, 3 in
Jan 29 12:26:02 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:26:02 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2310932429' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:26:02 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:26:02 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2310932429' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:26:03 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e331 do_prune osdmap full prune enabled
Jan 29 12:26:03 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e332 e332: 3 total, 3 up, 3 in
Jan 29 12:26:03 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e332: 3 total, 3 up, 3 in
Jan 29 12:26:03 np0005601226 nova_compute[239456]: 2026-01-29 17:26:03.603 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:26:03 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1415: 305 pgs: 305 active+clean; 1.2 GiB data, 1.4 GiB used, 59 GiB / 60 GiB avail; 5.5 MiB/s rd, 63 MiB/s wr, 366 op/s
Jan 29 12:26:04 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:26:04 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/201794465' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:26:04 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:26:04 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/201794465' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:26:04 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:26:04 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4054768881' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:26:05 np0005601226 nova_compute[239456]: 2026-01-29 17:26:05.079 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:26:05 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:26:05.079 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '7a:bb:ca', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ee:2a:91:86:08:da'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:26:05 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:26:05.082 155625 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 29 12:26:05 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:26:05 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3568769147' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:26:05 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:26:05 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3568769147' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:26:05 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e332 do_prune osdmap full prune enabled
Jan 29 12:26:05 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e333 e333: 3 total, 3 up, 3 in
Jan 29 12:26:05 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e333: 3 total, 3 up, 3 in
Jan 29 12:26:05 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1417: 305 pgs: 305 active+clean; 1.2 GiB data, 1.4 GiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 4.9 MiB/s wr, 258 op/s
Jan 29 12:26:06 np0005601226 nova_compute[239456]: 2026-01-29 17:26:06.187 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:26:06 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e333 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:26:06 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e333 do_prune osdmap full prune enabled
Jan 29 12:26:06 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e334 e334: 3 total, 3 up, 3 in
Jan 29 12:26:06 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e334: 3 total, 3 up, 3 in
Jan 29 12:26:06 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:26:06 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/917935102' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:26:06 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:26:06 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/917935102' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:26:07 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1419: 305 pgs: 305 active+clean; 1.2 GiB data, 1.4 GiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 4.1 MiB/s wr, 213 op/s
Jan 29 12:26:08 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:26:08 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/399060786' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:26:08 np0005601226 nova_compute[239456]: 2026-01-29 17:26:08.592 239460 DEBUG oslo_concurrency.lockutils [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] Acquiring lock "da7dfea4-c6b4-4092-833b-3fcb8168ecce" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:26:08 np0005601226 nova_compute[239456]: 2026-01-29 17:26:08.592 239460 DEBUG oslo_concurrency.lockutils [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] Lock "da7dfea4-c6b4-4092-833b-3fcb8168ecce" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:26:08 np0005601226 nova_compute[239456]: 2026-01-29 17:26:08.605 239460 DEBUG nova.compute.manager [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 29 12:26:08 np0005601226 nova_compute[239456]: 2026-01-29 17:26:08.608 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:26:08 np0005601226 nova_compute[239456]: 2026-01-29 17:26:08.687 239460 DEBUG oslo_concurrency.lockutils [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:26:08 np0005601226 nova_compute[239456]: 2026-01-29 17:26:08.688 239460 DEBUG oslo_concurrency.lockutils [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:26:08 np0005601226 nova_compute[239456]: 2026-01-29 17:26:08.697 239460 DEBUG nova.virt.hardware [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 29 12:26:08 np0005601226 nova_compute[239456]: 2026-01-29 17:26:08.698 239460 INFO nova.compute.claims [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 29 12:26:08 np0005601226 nova_compute[239456]: 2026-01-29 17:26:08.786 239460 DEBUG oslo_concurrency.processutils [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:26:09 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:26:09 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2330674116' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:26:09 np0005601226 nova_compute[239456]: 2026-01-29 17:26:09.375 239460 DEBUG oslo_concurrency.processutils [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:26:09 np0005601226 nova_compute[239456]: 2026-01-29 17:26:09.380 239460 DEBUG nova.compute.provider_tree [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:26:09 np0005601226 nova_compute[239456]: 2026-01-29 17:26:09.406 239460 DEBUG nova.scheduler.client.report [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:26:09 np0005601226 nova_compute[239456]: 2026-01-29 17:26:09.453 239460 DEBUG oslo_concurrency.lockutils [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.766s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:26:09 np0005601226 nova_compute[239456]: 2026-01-29 17:26:09.454 239460 DEBUG nova.compute.manager [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 29 12:26:09 np0005601226 nova_compute[239456]: 2026-01-29 17:26:09.503 239460 DEBUG nova.compute.manager [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 29 12:26:09 np0005601226 nova_compute[239456]: 2026-01-29 17:26:09.503 239460 DEBUG nova.network.neutron [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 29 12:26:09 np0005601226 nova_compute[239456]: 2026-01-29 17:26:09.534 239460 INFO nova.virt.libvirt.driver [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 29 12:26:09 np0005601226 nova_compute[239456]: 2026-01-29 17:26:09.553 239460 DEBUG nova.compute.manager [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 29 12:26:09 np0005601226 nova_compute[239456]: 2026-01-29 17:26:09.603 239460 INFO nova.virt.block_device [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Booting with volume de711d4d-cfb7-46d0-afd0-694943824c7d at /dev/vda#033[00m
Jan 29 12:26:09 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1420: 305 pgs: 305 active+clean; 1.2 GiB data, 1.4 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.2 MiB/s wr, 212 op/s
Jan 29 12:26:09 np0005601226 nova_compute[239456]: 2026-01-29 17:26:09.881 239460 DEBUG os_brick.utils [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 29 12:26:09 np0005601226 nova_compute[239456]: 2026-01-29 17:26:09.882 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:26:09 np0005601226 nova_compute[239456]: 2026-01-29 17:26:09.891 249612 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:26:09 np0005601226 nova_compute[239456]: 2026-01-29 17:26:09.891 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[8d7287b6-d22a-4c14-a696-28144a37be7c]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:26:09 np0005601226 nova_compute[239456]: 2026-01-29 17:26:09.892 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:26:09 np0005601226 nova_compute[239456]: 2026-01-29 17:26:09.911 249612 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.019s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:26:09 np0005601226 nova_compute[239456]: 2026-01-29 17:26:09.911 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[3a2acf1a-ba1d-4f99-ad78-328892f44ceb]: (4, ('InitiatorName=iqn.1994-05.com.redhat:29fa340538f', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:26:09 np0005601226 nova_compute[239456]: 2026-01-29 17:26:09.912 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:26:09 np0005601226 nova_compute[239456]: 2026-01-29 17:26:09.918 249612 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:26:09 np0005601226 nova_compute[239456]: 2026-01-29 17:26:09.918 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[1e9371db-697f-46fb-adc7-4ea6b9650ba6]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:26:09 np0005601226 nova_compute[239456]: 2026-01-29 17:26:09.919 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[e462aad7-6dee-46c3-8f09-a6f6ea4e63c6]: (4, '3d58286e-1b14-486e-8cad-0bdb2d2969c4') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:26:09 np0005601226 nova_compute[239456]: 2026-01-29 17:26:09.920 239460 DEBUG oslo_concurrency.processutils [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:26:09 np0005601226 nova_compute[239456]: 2026-01-29 17:26:09.935 239460 DEBUG oslo_concurrency.processutils [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] CMD "nvme version" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:26:09 np0005601226 nova_compute[239456]: 2026-01-29 17:26:09.937 239460 DEBUG os_brick.initiator.connectors.lightos [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 29 12:26:09 np0005601226 nova_compute[239456]: 2026-01-29 17:26:09.937 239460 DEBUG os_brick.initiator.connectors.lightos [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 29 12:26:09 np0005601226 nova_compute[239456]: 2026-01-29 17:26:09.937 239460 DEBUG os_brick.initiator.connectors.lightos [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 29 12:26:09 np0005601226 nova_compute[239456]: 2026-01-29 17:26:09.937 239460 DEBUG os_brick.utils [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] <== get_connector_properties: return (56ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:29fa340538f', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '3d58286e-1b14-486e-8cad-0bdb2d2969c4', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 29 12:26:09 np0005601226 nova_compute[239456]: 2026-01-29 17:26:09.938 239460 DEBUG nova.virt.block_device [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Updating existing volume attachment record: 2103f653-12d4-429b-a623-c1672306f4a1 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 29 12:26:10 np0005601226 nova_compute[239456]: 2026-01-29 17:26:10.059 239460 DEBUG nova.policy [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '676e0657fd9a487a9e331a099119fe7e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f85466673ef54aafa261596930188fc6', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 29 12:26:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:26:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:26:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:26:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:26:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:26:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:26:10 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:26:10 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2475578270' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:26:11 np0005601226 nova_compute[239456]: 2026-01-29 17:26:11.189 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:26:11 np0005601226 nova_compute[239456]: 2026-01-29 17:26:11.200 239460 DEBUG nova.network.neutron [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Successfully created port: 11505d82-9174-4f2c-b0fa-040405d852e3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 29 12:26:11 np0005601226 nova_compute[239456]: 2026-01-29 17:26:11.234 239460 DEBUG nova.compute.manager [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 29 12:26:11 np0005601226 nova_compute[239456]: 2026-01-29 17:26:11.235 239460 DEBUG nova.virt.libvirt.driver [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 29 12:26:11 np0005601226 nova_compute[239456]: 2026-01-29 17:26:11.235 239460 INFO nova.virt.libvirt.driver [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Creating image(s)#033[00m
Jan 29 12:26:11 np0005601226 nova_compute[239456]: 2026-01-29 17:26:11.236 239460 DEBUG nova.virt.libvirt.driver [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Jan 29 12:26:11 np0005601226 nova_compute[239456]: 2026-01-29 17:26:11.236 239460 DEBUG nova.virt.libvirt.driver [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Ensure instance console log exists: /var/lib/nova/instances/da7dfea4-c6b4-4092-833b-3fcb8168ecce/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 29 12:26:11 np0005601226 nova_compute[239456]: 2026-01-29 17:26:11.236 239460 DEBUG oslo_concurrency.lockutils [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:26:11 np0005601226 nova_compute[239456]: 2026-01-29 17:26:11.236 239460 DEBUG oslo_concurrency.lockutils [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:26:11 np0005601226 nova_compute[239456]: 2026-01-29 17:26:11.236 239460 DEBUG oslo_concurrency.lockutils [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:26:11 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e334 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:26:11 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1421: 305 pgs: 305 active+clean; 1.3 GiB data, 1.4 GiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 5.8 MiB/s wr, 184 op/s
Jan 29 12:26:11 np0005601226 nova_compute[239456]: 2026-01-29 17:26:11.828 239460 DEBUG nova.network.neutron [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Successfully updated port: 11505d82-9174-4f2c-b0fa-040405d852e3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 29 12:26:11 np0005601226 nova_compute[239456]: 2026-01-29 17:26:11.842 239460 DEBUG oslo_concurrency.lockutils [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] Acquiring lock "refresh_cache-da7dfea4-c6b4-4092-833b-3fcb8168ecce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:26:11 np0005601226 nova_compute[239456]: 2026-01-29 17:26:11.842 239460 DEBUG oslo_concurrency.lockutils [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] Acquired lock "refresh_cache-da7dfea4-c6b4-4092-833b-3fcb8168ecce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:26:11 np0005601226 nova_compute[239456]: 2026-01-29 17:26:11.842 239460 DEBUG nova.network.neutron [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 29 12:26:11 np0005601226 nova_compute[239456]: 2026-01-29 17:26:11.912 239460 DEBUG nova.compute.manager [req-79664495-b0c6-48b9-9bcd-745b34e13b1b req-9676a4a3-18fe-485c-9734-a54d0689cecd 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Received event network-changed-11505d82-9174-4f2c-b0fa-040405d852e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:26:11 np0005601226 nova_compute[239456]: 2026-01-29 17:26:11.912 239460 DEBUG nova.compute.manager [req-79664495-b0c6-48b9-9bcd-745b34e13b1b req-9676a4a3-18fe-485c-9734-a54d0689cecd 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Refreshing instance network info cache due to event network-changed-11505d82-9174-4f2c-b0fa-040405d852e3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 29 12:26:11 np0005601226 nova_compute[239456]: 2026-01-29 17:26:11.913 239460 DEBUG oslo_concurrency.lockutils [req-79664495-b0c6-48b9-9bcd-745b34e13b1b req-9676a4a3-18fe-485c-9734-a54d0689cecd 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "refresh_cache-da7dfea4-c6b4-4092-833b-3fcb8168ecce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:26:12 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:26:12.084 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ea6bcc65-2563-4fe6-9039-bca7261f4cf7, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:26:12 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e334 do_prune osdmap full prune enabled
Jan 29 12:26:12 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e335 e335: 3 total, 3 up, 3 in
Jan 29 12:26:12 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e335: 3 total, 3 up, 3 in
Jan 29 12:26:12 np0005601226 nova_compute[239456]: 2026-01-29 17:26:12.729 239460 DEBUG nova.network.neutron [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 29 12:26:13 np0005601226 nova_compute[239456]: 2026-01-29 17:26:13.612 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:26:13 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1423: 305 pgs: 305 active+clean; 1.5 GiB data, 1.6 GiB used, 58 GiB / 60 GiB avail; 104 KiB/s rd, 41 MiB/s wr, 153 op/s
Jan 29 12:26:14 np0005601226 nova_compute[239456]: 2026-01-29 17:26:14.182 239460 DEBUG nova.network.neutron [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Updating instance_info_cache with network_info: [{"id": "11505d82-9174-4f2c-b0fa-040405d852e3", "address": "fa:16:3e:6a:8c:8d", "network": {"id": "eb14e0fb-539d-4adf-a363-7578d5d74818", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1491363634-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f85466673ef54aafa261596930188fc6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11505d82-91", "ovs_interfaceid": "11505d82-9174-4f2c-b0fa-040405d852e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:26:14 np0005601226 nova_compute[239456]: 2026-01-29 17:26:14.200 239460 DEBUG oslo_concurrency.lockutils [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] Releasing lock "refresh_cache-da7dfea4-c6b4-4092-833b-3fcb8168ecce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:26:14 np0005601226 nova_compute[239456]: 2026-01-29 17:26:14.201 239460 DEBUG nova.compute.manager [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Instance network_info: |[{"id": "11505d82-9174-4f2c-b0fa-040405d852e3", "address": "fa:16:3e:6a:8c:8d", "network": {"id": "eb14e0fb-539d-4adf-a363-7578d5d74818", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1491363634-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f85466673ef54aafa261596930188fc6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11505d82-91", "ovs_interfaceid": "11505d82-9174-4f2c-b0fa-040405d852e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 29 12:26:14 np0005601226 nova_compute[239456]: 2026-01-29 17:26:14.201 239460 DEBUG oslo_concurrency.lockutils [req-79664495-b0c6-48b9-9bcd-745b34e13b1b req-9676a4a3-18fe-485c-9734-a54d0689cecd 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquired lock "refresh_cache-da7dfea4-c6b4-4092-833b-3fcb8168ecce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:26:14 np0005601226 nova_compute[239456]: 2026-01-29 17:26:14.201 239460 DEBUG nova.network.neutron [req-79664495-b0c6-48b9-9bcd-745b34e13b1b req-9676a4a3-18fe-485c-9734-a54d0689cecd 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Refreshing network info cache for port 11505d82-9174-4f2c-b0fa-040405d852e3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 29 12:26:14 np0005601226 nova_compute[239456]: 2026-01-29 17:26:14.207 239460 DEBUG nova.virt.libvirt.driver [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Start _get_guest_xml network_info=[{"id": "11505d82-9174-4f2c-b0fa-040405d852e3", "address": "fa:16:3e:6a:8c:8d", "network": {"id": "eb14e0fb-539d-4adf-a363-7578d5d74818", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1491363634-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f85466673ef54aafa261596930188fc6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11505d82-91", "ovs_interfaceid": "11505d82-9174-4f2c-b0fa-040405d852e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'guest_format': None, 'mount_device': '/dev/vda', 'device_type': 'disk', 'disk_bus': 'virtio', 'attachment_id': '2103f653-12d4-429b-a623-c1672306f4a1', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-de711d4d-cfb7-46d0-afd0-694943824c7d', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'de711d4d-cfb7-46d0-afd0-694943824c7d', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'da7dfea4-c6b4-4092-833b-3fcb8168ecce', 'attached_at': '', 'detached_at': '', 'volume_id': 'de711d4d-cfb7-46d0-afd0-694943824c7d', 'serial': 'de711d4d-cfb7-46d0-afd0-694943824c7d'}, 'delete_on_termination': False, 'boot_index': 0, 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 29 12:26:14 np0005601226 nova_compute[239456]: 2026-01-29 17:26:14.212 239460 WARNING nova.virt.libvirt.driver [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 29 12:26:14 np0005601226 nova_compute[239456]: 2026-01-29 17:26:14.216 239460 DEBUG nova.virt.libvirt.host [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 29 12:26:14 np0005601226 nova_compute[239456]: 2026-01-29 17:26:14.216 239460 DEBUG nova.virt.libvirt.host [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 29 12:26:14 np0005601226 nova_compute[239456]: 2026-01-29 17:26:14.220 239460 DEBUG nova.virt.libvirt.host [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 29 12:26:14 np0005601226 nova_compute[239456]: 2026-01-29 17:26:14.221 239460 DEBUG nova.virt.libvirt.host [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 29 12:26:14 np0005601226 nova_compute[239456]: 2026-01-29 17:26:14.221 239460 DEBUG nova.virt.libvirt.driver [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 29 12:26:14 np0005601226 nova_compute[239456]: 2026-01-29 17:26:14.221 239460 DEBUG nova.virt.hardware [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-29T17:13:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='e367d44e-e23e-4b8e-90d1-56d09c8403b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 29 12:26:14 np0005601226 nova_compute[239456]: 2026-01-29 17:26:14.222 239460 DEBUG nova.virt.hardware [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 29 12:26:14 np0005601226 nova_compute[239456]: 2026-01-29 17:26:14.222 239460 DEBUG nova.virt.hardware [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 29 12:26:14 np0005601226 nova_compute[239456]: 2026-01-29 17:26:14.222 239460 DEBUG nova.virt.hardware [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 29 12:26:14 np0005601226 nova_compute[239456]: 2026-01-29 17:26:14.222 239460 DEBUG nova.virt.hardware [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 29 12:26:14 np0005601226 nova_compute[239456]: 2026-01-29 17:26:14.223 239460 DEBUG nova.virt.hardware [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 29 12:26:14 np0005601226 nova_compute[239456]: 2026-01-29 17:26:14.223 239460 DEBUG nova.virt.hardware [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 29 12:26:14 np0005601226 nova_compute[239456]: 2026-01-29 17:26:14.223 239460 DEBUG nova.virt.hardware [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 29 12:26:14 np0005601226 nova_compute[239456]: 2026-01-29 17:26:14.223 239460 DEBUG nova.virt.hardware [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 29 12:26:14 np0005601226 nova_compute[239456]: 2026-01-29 17:26:14.223 239460 DEBUG nova.virt.hardware [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 29 12:26:14 np0005601226 nova_compute[239456]: 2026-01-29 17:26:14.224 239460 DEBUG nova.virt.hardware [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 29 12:26:14 np0005601226 nova_compute[239456]: 2026-01-29 17:26:14.247 239460 DEBUG nova.storage.rbd_utils [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] rbd image da7dfea4-c6b4-4092-833b-3fcb8168ecce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:26:14 np0005601226 nova_compute[239456]: 2026-01-29 17:26:14.251 239460 DEBUG oslo_concurrency.processutils [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:26:15 np0005601226 nova_compute[239456]: 2026-01-29 17:26:15.071 239460 DEBUG nova.network.neutron [req-79664495-b0c6-48b9-9bcd-745b34e13b1b req-9676a4a3-18fe-485c-9734-a54d0689cecd 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Updated VIF entry in instance network info cache for port 11505d82-9174-4f2c-b0fa-040405d852e3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 29 12:26:15 np0005601226 nova_compute[239456]: 2026-01-29 17:26:15.072 239460 DEBUG nova.network.neutron [req-79664495-b0c6-48b9-9bcd-745b34e13b1b req-9676a4a3-18fe-485c-9734-a54d0689cecd 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Updating instance_info_cache with network_info: [{"id": "11505d82-9174-4f2c-b0fa-040405d852e3", "address": "fa:16:3e:6a:8c:8d", "network": {"id": "eb14e0fb-539d-4adf-a363-7578d5d74818", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1491363634-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f85466673ef54aafa261596930188fc6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11505d82-91", "ovs_interfaceid": "11505d82-9174-4f2c-b0fa-040405d852e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:26:15 np0005601226 nova_compute[239456]: 2026-01-29 17:26:15.106 239460 DEBUG oslo_concurrency.lockutils [req-79664495-b0c6-48b9-9bcd-745b34e13b1b req-9676a4a3-18fe-485c-9734-a54d0689cecd 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Releasing lock "refresh_cache-da7dfea4-c6b4-4092-833b-3fcb8168ecce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:26:15 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:26:15 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/410409649' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:26:15 np0005601226 nova_compute[239456]: 2026-01-29 17:26:15.220 239460 DEBUG oslo_concurrency.processutils [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.969s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:26:15 np0005601226 nova_compute[239456]: 2026-01-29 17:26:15.241 239460 DEBUG nova.virt.libvirt.vif [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-29T17:26:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBackupRestore-server-1463029357',display_name='tempest-TestVolumeBackupRestore-server-1463029357',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebackuprestore-server-1463029357',id=13,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN655sdUjJyRlyTIjOVqSzCiiC3hRdOQulXLN544fJ+Fu8Qe4J50LAroKbasRmPK104qzQhOmAn9IPWg4P5yk1aDqwYqb7hvQfPaewjT4XMjIiibFm1fpI7EnV8FxiaYYw==',key_name='tempest-TestVolumeBackupRestore-1300787818',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f85466673ef54aafa261596930188fc6',ramdisk_id='',reservation_id='r-mdsa05g5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBackupRestore-1783911643',owner_user_name='tempest-TestVolumeBackupRestore-1783911643-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-29T17:26:09Z,user_data=None,user_id='676e0657fd9a487a9e331a099119fe7e',uuid=da7dfea4-c6b4-4092-833b-3fcb8168ecce,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "11505d82-9174-4f2c-b0fa-040405d852e3", "address": "fa:16:3e:6a:8c:8d", "network": {"id": "eb14e0fb-539d-4adf-a363-7578d5d74818", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1491363634-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"f85466673ef54aafa261596930188fc6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11505d82-91", "ovs_interfaceid": "11505d82-9174-4f2c-b0fa-040405d852e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 29 12:26:15 np0005601226 nova_compute[239456]: 2026-01-29 17:26:15.241 239460 DEBUG nova.network.os_vif_util [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] Converting VIF {"id": "11505d82-9174-4f2c-b0fa-040405d852e3", "address": "fa:16:3e:6a:8c:8d", "network": {"id": "eb14e0fb-539d-4adf-a363-7578d5d74818", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1491363634-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f85466673ef54aafa261596930188fc6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11505d82-91", "ovs_interfaceid": "11505d82-9174-4f2c-b0fa-040405d852e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 29 12:26:15 np0005601226 nova_compute[239456]: 2026-01-29 17:26:15.242 239460 DEBUG nova.network.os_vif_util [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6a:8c:8d,bridge_name='br-int',has_traffic_filtering=True,id=11505d82-9174-4f2c-b0fa-040405d852e3,network=Network(eb14e0fb-539d-4adf-a363-7578d5d74818),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11505d82-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 29 12:26:15 np0005601226 nova_compute[239456]: 2026-01-29 17:26:15.243 239460 DEBUG nova.objects.instance [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] Lazy-loading 'pci_devices' on Instance uuid da7dfea4-c6b4-4092-833b-3fcb8168ecce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:26:15 np0005601226 nova_compute[239456]: 2026-01-29 17:26:15.262 239460 DEBUG nova.virt.libvirt.driver [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] End _get_guest_xml xml=<domain type="kvm">
Jan 29 12:26:15 np0005601226 nova_compute[239456]:  <uuid>da7dfea4-c6b4-4092-833b-3fcb8168ecce</uuid>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:  <name>instance-0000000d</name>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:  <memory>131072</memory>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:  <vcpu>1</vcpu>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:  <metadata>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 29 12:26:15 np0005601226 nova_compute[239456]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:      <nova:name>tempest-TestVolumeBackupRestore-server-1463029357</nova:name>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:      <nova:creationTime>2026-01-29 17:26:14</nova:creationTime>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:      <nova:flavor name="m1.nano">
Jan 29 12:26:15 np0005601226 nova_compute[239456]:        <nova:memory>128</nova:memory>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:        <nova:disk>1</nova:disk>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:        <nova:swap>0</nova:swap>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:        <nova:ephemeral>0</nova:ephemeral>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:        <nova:vcpus>1</nova:vcpus>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:      </nova:flavor>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:      <nova:owner>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:        <nova:user uuid="676e0657fd9a487a9e331a099119fe7e">tempest-TestVolumeBackupRestore-1783911643-project-member</nova:user>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:        <nova:project uuid="f85466673ef54aafa261596930188fc6">tempest-TestVolumeBackupRestore-1783911643</nova:project>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:      </nova:owner>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:      <nova:ports>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:        <nova:port uuid="11505d82-9174-4f2c-b0fa-040405d852e3">
Jan 29 12:26:15 np0005601226 nova_compute[239456]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:        </nova:port>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:      </nova:ports>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:    </nova:instance>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:  </metadata>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:  <sysinfo type="smbios">
Jan 29 12:26:15 np0005601226 nova_compute[239456]:    <system>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:      <entry name="manufacturer">RDO</entry>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:      <entry name="product">OpenStack Compute</entry>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:      <entry name="serial">da7dfea4-c6b4-4092-833b-3fcb8168ecce</entry>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:      <entry name="uuid">da7dfea4-c6b4-4092-833b-3fcb8168ecce</entry>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:      <entry name="family">Virtual Machine</entry>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:    </system>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:  </sysinfo>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:  <os>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:    <boot dev="hd"/>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:    <smbios mode="sysinfo"/>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:  </os>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:  <features>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:    <acpi/>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:    <apic/>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:    <vmcoreinfo/>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:  </features>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:  <clock offset="utc">
Jan 29 12:26:15 np0005601226 nova_compute[239456]:    <timer name="pit" tickpolicy="delay"/>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:    <timer name="hpet" present="no"/>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:  </clock>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:  <cpu mode="host-model" match="exact">
Jan 29 12:26:15 np0005601226 nova_compute[239456]:    <topology sockets="1" cores="1" threads="1"/>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:  </cpu>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:  <devices>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:    <disk type="network" device="cdrom">
Jan 29 12:26:15 np0005601226 nova_compute[239456]:      <driver type="raw" cache="none"/>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:      <source protocol="rbd" name="vms/da7dfea4-c6b4-4092-833b-3fcb8168ecce_disk.config">
Jan 29 12:26:15 np0005601226 nova_compute[239456]:        <host name="192.168.122.100" port="6789"/>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:      </source>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:      <auth username="openstack">
Jan 29 12:26:15 np0005601226 nova_compute[239456]:        <secret type="ceph" uuid="cc5c72e3-31e0-58b9-8731-456117d38f4a"/>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:      </auth>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:      <target dev="sda" bus="sata"/>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:    </disk>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:    <disk type="network" device="disk">
Jan 29 12:26:15 np0005601226 nova_compute[239456]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:      <source protocol="rbd" name="volumes/volume-de711d4d-cfb7-46d0-afd0-694943824c7d">
Jan 29 12:26:15 np0005601226 nova_compute[239456]:        <host name="192.168.122.100" port="6789"/>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:      </source>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:      <auth username="openstack">
Jan 29 12:26:15 np0005601226 nova_compute[239456]:        <secret type="ceph" uuid="cc5c72e3-31e0-58b9-8731-456117d38f4a"/>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:      </auth>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:      <target dev="vda" bus="virtio"/>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:      <serial>de711d4d-cfb7-46d0-afd0-694943824c7d</serial>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:    </disk>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:    <interface type="ethernet">
Jan 29 12:26:15 np0005601226 nova_compute[239456]:      <mac address="fa:16:3e:6a:8c:8d"/>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:      <model type="virtio"/>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:      <driver name="vhost" rx_queue_size="512"/>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:      <mtu size="1442"/>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:      <target dev="tap11505d82-91"/>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:    </interface>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:    <serial type="pty">
Jan 29 12:26:15 np0005601226 nova_compute[239456]:      <log file="/var/lib/nova/instances/da7dfea4-c6b4-4092-833b-3fcb8168ecce/console.log" append="off"/>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:    </serial>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:    <video>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:      <model type="virtio"/>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:    </video>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:    <input type="tablet" bus="usb"/>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:    <rng model="virtio">
Jan 29 12:26:15 np0005601226 nova_compute[239456]:      <backend model="random">/dev/urandom</backend>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:    </rng>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root"/>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:    <controller type="usb" index="0"/>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:    <memballoon model="virtio">
Jan 29 12:26:15 np0005601226 nova_compute[239456]:      <stats period="10"/>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:    </memballoon>
Jan 29 12:26:15 np0005601226 nova_compute[239456]:  </devices>
Jan 29 12:26:15 np0005601226 nova_compute[239456]: </domain>
Jan 29 12:26:15 np0005601226 nova_compute[239456]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 29 12:26:15 np0005601226 nova_compute[239456]: 2026-01-29 17:26:15.264 239460 DEBUG nova.compute.manager [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Preparing to wait for external event network-vif-plugged-11505d82-9174-4f2c-b0fa-040405d852e3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 29 12:26:15 np0005601226 nova_compute[239456]: 2026-01-29 17:26:15.264 239460 DEBUG oslo_concurrency.lockutils [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] Acquiring lock "da7dfea4-c6b4-4092-833b-3fcb8168ecce-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:26:15 np0005601226 nova_compute[239456]: 2026-01-29 17:26:15.264 239460 DEBUG oslo_concurrency.lockutils [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] Lock "da7dfea4-c6b4-4092-833b-3fcb8168ecce-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:26:15 np0005601226 nova_compute[239456]: 2026-01-29 17:26:15.264 239460 DEBUG oslo_concurrency.lockutils [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] Lock "da7dfea4-c6b4-4092-833b-3fcb8168ecce-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:26:15 np0005601226 nova_compute[239456]: 2026-01-29 17:26:15.265 239460 DEBUG nova.virt.libvirt.vif [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-29T17:26:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBackupRestore-server-1463029357',display_name='tempest-TestVolumeBackupRestore-server-1463029357',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebackuprestore-server-1463029357',id=13,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN655sdUjJyRlyTIjOVqSzCiiC3hRdOQulXLN544fJ+Fu8Qe4J50LAroKbasRmPK104qzQhOmAn9IPWg4P5yk1aDqwYqb7hvQfPaewjT4XMjIiibFm1fpI7EnV8FxiaYYw==',key_name='tempest-TestVolumeBackupRestore-1300787818',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f85466673ef54aafa261596930188fc6',ramdisk_id='',reservation_id='r-mdsa05g5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBackupRestore-1783911643',owner_user_name='tempest-TestVolumeBackupRestore-1783911643-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-29T17:26:09Z,user_data=None,user_id='676e0657fd9a487a9e331a099119fe7e',uuid=da7dfea4-c6b4-4092-833b-3fcb8168ecce,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "11505d82-9174-4f2c-b0fa-040405d852e3", "address": "fa:16:3e:6a:8c:8d", "network": {"id": "eb14e0fb-539d-4adf-a363-7578d5d74818", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1491363634-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "f85466673ef54aafa261596930188fc6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11505d82-91", "ovs_interfaceid": "11505d82-9174-4f2c-b0fa-040405d852e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 29 12:26:15 np0005601226 nova_compute[239456]: 2026-01-29 17:26:15.265 239460 DEBUG nova.network.os_vif_util [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] Converting VIF {"id": "11505d82-9174-4f2c-b0fa-040405d852e3", "address": "fa:16:3e:6a:8c:8d", "network": {"id": "eb14e0fb-539d-4adf-a363-7578d5d74818", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1491363634-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f85466673ef54aafa261596930188fc6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11505d82-91", "ovs_interfaceid": "11505d82-9174-4f2c-b0fa-040405d852e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 29 12:26:15 np0005601226 nova_compute[239456]: 2026-01-29 17:26:15.266 239460 DEBUG nova.network.os_vif_util [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6a:8c:8d,bridge_name='br-int',has_traffic_filtering=True,id=11505d82-9174-4f2c-b0fa-040405d852e3,network=Network(eb14e0fb-539d-4adf-a363-7578d5d74818),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11505d82-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 29 12:26:15 np0005601226 nova_compute[239456]: 2026-01-29 17:26:15.266 239460 DEBUG os_vif [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:8c:8d,bridge_name='br-int',has_traffic_filtering=True,id=11505d82-9174-4f2c-b0fa-040405d852e3,network=Network(eb14e0fb-539d-4adf-a363-7578d5d74818),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11505d82-91') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 29 12:26:15 np0005601226 nova_compute[239456]: 2026-01-29 17:26:15.267 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:26:15 np0005601226 nova_compute[239456]: 2026-01-29 17:26:15.267 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:26:15 np0005601226 nova_compute[239456]: 2026-01-29 17:26:15.268 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 29 12:26:15 np0005601226 nova_compute[239456]: 2026-01-29 17:26:15.271 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:26:15 np0005601226 nova_compute[239456]: 2026-01-29 17:26:15.271 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap11505d82-91, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:26:15 np0005601226 nova_compute[239456]: 2026-01-29 17:26:15.272 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap11505d82-91, col_values=(('external_ids', {'iface-id': '11505d82-9174-4f2c-b0fa-040405d852e3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6a:8c:8d', 'vm-uuid': 'da7dfea4-c6b4-4092-833b-3fcb8168ecce'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:26:15 np0005601226 nova_compute[239456]: 2026-01-29 17:26:15.273 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:26:15 np0005601226 NetworkManager[49020]: <info>  [1769707575.2756] manager: (tap11505d82-91): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/69)
Jan 29 12:26:15 np0005601226 nova_compute[239456]: 2026-01-29 17:26:15.276 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 29 12:26:15 np0005601226 nova_compute[239456]: 2026-01-29 17:26:15.279 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:26:15 np0005601226 nova_compute[239456]: 2026-01-29 17:26:15.280 239460 INFO os_vif [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:8c:8d,bridge_name='br-int',has_traffic_filtering=True,id=11505d82-9174-4f2c-b0fa-040405d852e3,network=Network(eb14e0fb-539d-4adf-a363-7578d5d74818),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11505d82-91')#033[00m
Jan 29 12:26:15 np0005601226 nova_compute[239456]: 2026-01-29 17:26:15.378 239460 DEBUG nova.virt.libvirt.driver [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:26:15 np0005601226 nova_compute[239456]: 2026-01-29 17:26:15.378 239460 DEBUG nova.virt.libvirt.driver [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:26:15 np0005601226 nova_compute[239456]: 2026-01-29 17:26:15.378 239460 DEBUG nova.virt.libvirt.driver [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] No VIF found with MAC fa:16:3e:6a:8c:8d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 29 12:26:15 np0005601226 nova_compute[239456]: 2026-01-29 17:26:15.379 239460 INFO nova.virt.libvirt.driver [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Using config drive#033[00m
Jan 29 12:26:15 np0005601226 nova_compute[239456]: 2026-01-29 17:26:15.395 239460 DEBUG nova.storage.rbd_utils [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] rbd image da7dfea4-c6b4-4092-833b-3fcb8168ecce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:26:15 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1424: 305 pgs: 305 active+clean; 1.7 GiB data, 1.8 GiB used, 58 GiB / 60 GiB avail; 169 KiB/s rd, 55 MiB/s wr, 257 op/s
Jan 29 12:26:15 np0005601226 nova_compute[239456]: 2026-01-29 17:26:15.916 239460 INFO nova.virt.libvirt.driver [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Creating config drive at /var/lib/nova/instances/da7dfea4-c6b4-4092-833b-3fcb8168ecce/disk.config#033[00m
Jan 29 12:26:15 np0005601226 nova_compute[239456]: 2026-01-29 17:26:15.919 239460 DEBUG oslo_concurrency.processutils [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/da7dfea4-c6b4-4092-833b-3fcb8168ecce/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvw6jm4af execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:26:16 np0005601226 nova_compute[239456]: 2026-01-29 17:26:16.037 239460 DEBUG oslo_concurrency.processutils [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/da7dfea4-c6b4-4092-833b-3fcb8168ecce/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvw6jm4af" returned: 0 in 0.118s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:26:16 np0005601226 nova_compute[239456]: 2026-01-29 17:26:16.059 239460 DEBUG nova.storage.rbd_utils [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] rbd image da7dfea4-c6b4-4092-833b-3fcb8168ecce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:26:16 np0005601226 nova_compute[239456]: 2026-01-29 17:26:16.062 239460 DEBUG oslo_concurrency.processutils [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/da7dfea4-c6b4-4092-833b-3fcb8168ecce/disk.config da7dfea4-c6b4-4092-833b-3fcb8168ecce_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:26:16 np0005601226 nova_compute[239456]: 2026-01-29 17:26:16.191 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:26:16 np0005601226 nova_compute[239456]: 2026-01-29 17:26:16.455 239460 DEBUG oslo_concurrency.processutils [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/da7dfea4-c6b4-4092-833b-3fcb8168ecce/disk.config da7dfea4-c6b4-4092-833b-3fcb8168ecce_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.393s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:26:16 np0005601226 nova_compute[239456]: 2026-01-29 17:26:16.456 239460 INFO nova.virt.libvirt.driver [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Deleting local config drive /var/lib/nova/instances/da7dfea4-c6b4-4092-833b-3fcb8168ecce/disk.config because it was imported into RBD.#033[00m
Jan 29 12:26:16 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e335 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:26:16 np0005601226 kernel: tap11505d82-91: entered promiscuous mode
Jan 29 12:26:16 np0005601226 NetworkManager[49020]: <info>  [1769707576.5021] manager: (tap11505d82-91): new Tun device (/org/freedesktop/NetworkManager/Devices/70)
Jan 29 12:26:16 np0005601226 nova_compute[239456]: 2026-01-29 17:26:16.502 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:26:16 np0005601226 ovn_controller[145556]: 2026-01-29T17:26:16Z|00128|binding|INFO|Claiming lport 11505d82-9174-4f2c-b0fa-040405d852e3 for this chassis.
Jan 29 12:26:16 np0005601226 ovn_controller[145556]: 2026-01-29T17:26:16Z|00129|binding|INFO|11505d82-9174-4f2c-b0fa-040405d852e3: Claiming fa:16:3e:6a:8c:8d 10.100.0.4
Jan 29 12:26:16 np0005601226 nova_compute[239456]: 2026-01-29 17:26:16.505 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:26:16 np0005601226 nova_compute[239456]: 2026-01-29 17:26:16.508 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:26:16 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:26:16.514 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6a:8c:8d 10.100.0.4'], port_security=['fa:16:3e:6a:8c:8d 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'da7dfea4-c6b4-4092-833b-3fcb8168ecce', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eb14e0fb-539d-4adf-a363-7578d5d74818', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f85466673ef54aafa261596930188fc6', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ae0cd238-8789-4f4d-a3b0-01aadc71310b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eb488830-2e71-4458-ae31-77620b55e59f, chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], logical_port=11505d82-9174-4f2c-b0fa-040405d852e3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:26:16 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:26:16.515 155625 INFO neutron.agent.ovn.metadata.agent [-] Port 11505d82-9174-4f2c-b0fa-040405d852e3 in datapath eb14e0fb-539d-4adf-a363-7578d5d74818 bound to our chassis#033[00m
Jan 29 12:26:16 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:26:16.517 155625 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network eb14e0fb-539d-4adf-a363-7578d5d74818#033[00m
Jan 29 12:26:16 np0005601226 systemd-machined[207561]: New machine qemu-13-instance-0000000d.
Jan 29 12:26:16 np0005601226 systemd-udevd[261223]: Network interface NamePolicy= disabled on kernel command line.
Jan 29 12:26:16 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:26:16.526 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[f83e37f4-040d-4a21-b108-e30094f78b63]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:26:16 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:26:16.527 155625 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapeb14e0fb-51 in ovnmeta-eb14e0fb-539d-4adf-a363-7578d5d74818 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 29 12:26:16 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:26:16.528 246354 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapeb14e0fb-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 29 12:26:16 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:26:16.528 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[0d76b6d4-50f4-416c-870b-bd4964a5e805]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:26:16 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:26:16.529 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[7e19447e-e744-4a3f-9b56-1b568acdd16d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:26:16 np0005601226 NetworkManager[49020]: <info>  [1769707576.5340] device (tap11505d82-91): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 29 12:26:16 np0005601226 NetworkManager[49020]: <info>  [1769707576.5344] device (tap11505d82-91): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 29 12:26:16 np0005601226 systemd[1]: Started Virtual Machine qemu-13-instance-0000000d.
Jan 29 12:26:16 np0005601226 nova_compute[239456]: 2026-01-29 17:26:16.539 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:26:16 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:26:16.538 156164 DEBUG oslo.privsep.daemon [-] privsep: reply[d53e298d-0b07-4a19-a9a7-adc971ee81ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:26:16 np0005601226 ovn_controller[145556]: 2026-01-29T17:26:16Z|00130|binding|INFO|Setting lport 11505d82-9174-4f2c-b0fa-040405d852e3 ovn-installed in OVS
Jan 29 12:26:16 np0005601226 ovn_controller[145556]: 2026-01-29T17:26:16Z|00131|binding|INFO|Setting lport 11505d82-9174-4f2c-b0fa-040405d852e3 up in Southbound
Jan 29 12:26:16 np0005601226 nova_compute[239456]: 2026-01-29 17:26:16.543 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:26:16 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:26:16.549 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[effe4644-8e24-45de-bbe2-67c7c698cac3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:26:16 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:26:16.571 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[a667985e-e964-4a2f-ba02-3f54ceff8d7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:26:16 np0005601226 NetworkManager[49020]: <info>  [1769707576.5772] manager: (tapeb14e0fb-50): new Veth device (/org/freedesktop/NetworkManager/Devices/71)
Jan 29 12:26:16 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:26:16.577 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[357fb931-0737-4675-9cc0-9deed0292b6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:26:16 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:26:16.599 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[cfeb2605-5244-4ba4-8c97-eb0ddf762a9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:26:16 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:26:16.601 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[11e3b557-7385-41d8-abab-5dd3ee6fc03f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:26:16 np0005601226 NetworkManager[49020]: <info>  [1769707576.6165] device (tapeb14e0fb-50): carrier: link connected
Jan 29 12:26:16 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:26:16.620 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[d69a6251-0a5a-4c56-9524-e3939be147e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:26:16 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:26:16.634 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[711ba021-dee3-44d2-935c-178b172e23c2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapeb14e0fb-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ab:4c:47'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 493012, 'reachable_time': 41632, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261255, 'error': None, 'target': 'ovnmeta-eb14e0fb-539d-4adf-a363-7578d5d74818', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:26:16 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:26:16.646 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[55b20958-39b7-4c93-b541-1e8cc295932e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feab:4c47'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 493011, 'tstamp': 493011}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261256, 'error': None, 'target': 'ovnmeta-eb14e0fb-539d-4adf-a363-7578d5d74818', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:26:16 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:26:16.657 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[01307ff5-e769-445b-a32d-e1ea0d5fcfdc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapeb14e0fb-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ab:4c:47'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 493012, 'reachable_time': 41632, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 261257, 'error': None, 'target': 'ovnmeta-eb14e0fb-539d-4adf-a363-7578d5d74818', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:26:16 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:26:16.678 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[c81d220b-8a09-4c28-865f-7bfcf41618e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:26:16 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:26:16.717 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[bd3c089e-c3ba-43f6-894e-387a08f20b6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:26:16 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:26:16.718 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeb14e0fb-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:26:16 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:26:16.719 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 29 12:26:16 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:26:16.719 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapeb14e0fb-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:26:16 np0005601226 nova_compute[239456]: 2026-01-29 17:26:16.720 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:26:16 np0005601226 NetworkManager[49020]: <info>  [1769707576.7216] manager: (tapeb14e0fb-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/72)
Jan 29 12:26:16 np0005601226 kernel: tapeb14e0fb-50: entered promiscuous mode
Jan 29 12:26:16 np0005601226 nova_compute[239456]: 2026-01-29 17:26:16.722 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:26:16 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:26:16.727 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapeb14e0fb-50, col_values=(('external_ids', {'iface-id': 'e21e4231-1978-459b-9dea-17ba78b26f45'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:26:16 np0005601226 nova_compute[239456]: 2026-01-29 17:26:16.728 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:26:16 np0005601226 ovn_controller[145556]: 2026-01-29T17:26:16Z|00132|binding|INFO|Releasing lport e21e4231-1978-459b-9dea-17ba78b26f45 from this chassis (sb_readonly=0)
Jan 29 12:26:16 np0005601226 nova_compute[239456]: 2026-01-29 17:26:16.734 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:26:16 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:26:16.735 155625 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/eb14e0fb-539d-4adf-a363-7578d5d74818.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/eb14e0fb-539d-4adf-a363-7578d5d74818.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 29 12:26:16 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:26:16.736 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[89a4884d-c78a-412d-bb2c-5cb0a6e76919]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:26:16 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:26:16.737 155625 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 29 12:26:16 np0005601226 ovn_metadata_agent[155620]: global
Jan 29 12:26:16 np0005601226 ovn_metadata_agent[155620]:    log         /dev/log local0 debug
Jan 29 12:26:16 np0005601226 ovn_metadata_agent[155620]:    log-tag     haproxy-metadata-proxy-eb14e0fb-539d-4adf-a363-7578d5d74818
Jan 29 12:26:16 np0005601226 ovn_metadata_agent[155620]:    user        root
Jan 29 12:26:16 np0005601226 ovn_metadata_agent[155620]:    group       root
Jan 29 12:26:16 np0005601226 ovn_metadata_agent[155620]:    maxconn     1024
Jan 29 12:26:16 np0005601226 ovn_metadata_agent[155620]:    pidfile     /var/lib/neutron/external/pids/eb14e0fb-539d-4adf-a363-7578d5d74818.pid.haproxy
Jan 29 12:26:16 np0005601226 ovn_metadata_agent[155620]:    daemon
Jan 29 12:26:16 np0005601226 ovn_metadata_agent[155620]: 
Jan 29 12:26:16 np0005601226 ovn_metadata_agent[155620]: defaults
Jan 29 12:26:16 np0005601226 ovn_metadata_agent[155620]:    log global
Jan 29 12:26:16 np0005601226 ovn_metadata_agent[155620]:    mode http
Jan 29 12:26:16 np0005601226 ovn_metadata_agent[155620]:    option httplog
Jan 29 12:26:16 np0005601226 ovn_metadata_agent[155620]:    option dontlognull
Jan 29 12:26:16 np0005601226 ovn_metadata_agent[155620]:    option http-server-close
Jan 29 12:26:16 np0005601226 ovn_metadata_agent[155620]:    option forwardfor
Jan 29 12:26:16 np0005601226 ovn_metadata_agent[155620]:    retries                 3
Jan 29 12:26:16 np0005601226 ovn_metadata_agent[155620]:    timeout http-request    30s
Jan 29 12:26:16 np0005601226 ovn_metadata_agent[155620]:    timeout connect         30s
Jan 29 12:26:16 np0005601226 ovn_metadata_agent[155620]:    timeout client          32s
Jan 29 12:26:16 np0005601226 ovn_metadata_agent[155620]:    timeout server          32s
Jan 29 12:26:16 np0005601226 ovn_metadata_agent[155620]:    timeout http-keep-alive 30s
Jan 29 12:26:16 np0005601226 ovn_metadata_agent[155620]: 
Jan 29 12:26:16 np0005601226 ovn_metadata_agent[155620]: 
Jan 29 12:26:16 np0005601226 ovn_metadata_agent[155620]: listen listener
Jan 29 12:26:16 np0005601226 ovn_metadata_agent[155620]:    bind 169.254.169.254:80
Jan 29 12:26:16 np0005601226 ovn_metadata_agent[155620]:    server metadata /var/lib/neutron/metadata_proxy
Jan 29 12:26:16 np0005601226 ovn_metadata_agent[155620]:    http-request add-header X-OVN-Network-ID eb14e0fb-539d-4adf-a363-7578d5d74818
Jan 29 12:26:16 np0005601226 ovn_metadata_agent[155620]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 29 12:26:16 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:26:16.737 155625 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-eb14e0fb-539d-4adf-a363-7578d5d74818', 'env', 'PROCESS_TAG=haproxy-eb14e0fb-539d-4adf-a363-7578d5d74818', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/eb14e0fb-539d-4adf-a363-7578d5d74818.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 29 12:26:16 np0005601226 nova_compute[239456]: 2026-01-29 17:26:16.886 239460 DEBUG nova.compute.manager [req-2a8111be-5dc3-4fdf-9ebf-121fe7e10373 req-0c50b58a-6061-45e0-baa4-f4882008a735 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Received event network-vif-plugged-11505d82-9174-4f2c-b0fa-040405d852e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:26:16 np0005601226 nova_compute[239456]: 2026-01-29 17:26:16.887 239460 DEBUG oslo_concurrency.lockutils [req-2a8111be-5dc3-4fdf-9ebf-121fe7e10373 req-0c50b58a-6061-45e0-baa4-f4882008a735 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "da7dfea4-c6b4-4092-833b-3fcb8168ecce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:26:16 np0005601226 nova_compute[239456]: 2026-01-29 17:26:16.887 239460 DEBUG oslo_concurrency.lockutils [req-2a8111be-5dc3-4fdf-9ebf-121fe7e10373 req-0c50b58a-6061-45e0-baa4-f4882008a735 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "da7dfea4-c6b4-4092-833b-3fcb8168ecce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:26:16 np0005601226 nova_compute[239456]: 2026-01-29 17:26:16.887 239460 DEBUG oslo_concurrency.lockutils [req-2a8111be-5dc3-4fdf-9ebf-121fe7e10373 req-0c50b58a-6061-45e0-baa4-f4882008a735 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "da7dfea4-c6b4-4092-833b-3fcb8168ecce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:26:16 np0005601226 nova_compute[239456]: 2026-01-29 17:26:16.887 239460 DEBUG nova.compute.manager [req-2a8111be-5dc3-4fdf-9ebf-121fe7e10373 req-0c50b58a-6061-45e0-baa4-f4882008a735 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Processing event network-vif-plugged-11505d82-9174-4f2c-b0fa-040405d852e3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 29 12:26:17 np0005601226 podman[261308]: 2026-01-29 17:26:17.031643943 +0000 UTC m=+0.021850716 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 29 12:26:17 np0005601226 podman[261308]: 2026-01-29 17:26:17.213060247 +0000 UTC m=+0.203266990 container create c9fe0dbd6bb53415ee1f08319fd9d07fe343f916aeae98cb76d47c62338bd825 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eb14e0fb-539d-4adf-a363-7578d5d74818, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:26:17 np0005601226 nova_compute[239456]: 2026-01-29 17:26:17.230 239460 DEBUG nova.compute.manager [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 29 12:26:17 np0005601226 nova_compute[239456]: 2026-01-29 17:26:17.231 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769707577.2302704, da7dfea4-c6b4-4092-833b-3fcb8168ecce => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:26:17 np0005601226 nova_compute[239456]: 2026-01-29 17:26:17.231 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] VM Started (Lifecycle Event)#033[00m
Jan 29 12:26:17 np0005601226 nova_compute[239456]: 2026-01-29 17:26:17.235 239460 DEBUG nova.virt.libvirt.driver [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 29 12:26:17 np0005601226 nova_compute[239456]: 2026-01-29 17:26:17.238 239460 INFO nova.virt.libvirt.driver [-] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Instance spawned successfully.#033[00m
Jan 29 12:26:17 np0005601226 nova_compute[239456]: 2026-01-29 17:26:17.238 239460 DEBUG nova.virt.libvirt.driver [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 29 12:26:17 np0005601226 systemd[1]: Started libpod-conmon-c9fe0dbd6bb53415ee1f08319fd9d07fe343f916aeae98cb76d47c62338bd825.scope.
Jan 29 12:26:17 np0005601226 nova_compute[239456]: 2026-01-29 17:26:17.275 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:26:17 np0005601226 nova_compute[239456]: 2026-01-29 17:26:17.281 239460 DEBUG nova.virt.libvirt.driver [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:26:17 np0005601226 nova_compute[239456]: 2026-01-29 17:26:17.282 239460 DEBUG nova.virt.libvirt.driver [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:26:17 np0005601226 nova_compute[239456]: 2026-01-29 17:26:17.282 239460 DEBUG nova.virt.libvirt.driver [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:26:17 np0005601226 nova_compute[239456]: 2026-01-29 17:26:17.282 239460 DEBUG nova.virt.libvirt.driver [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:26:17 np0005601226 nova_compute[239456]: 2026-01-29 17:26:17.283 239460 DEBUG nova.virt.libvirt.driver [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:26:17 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:26:17 np0005601226 nova_compute[239456]: 2026-01-29 17:26:17.283 239460 DEBUG nova.virt.libvirt.driver [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:26:17 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38b04e386572b177b7afdbff4d0472a008ecaf146c093ff0ca78f52da3a55513/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 29 12:26:17 np0005601226 nova_compute[239456]: 2026-01-29 17:26:17.289 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 29 12:26:17 np0005601226 podman[261308]: 2026-01-29 17:26:17.301997788 +0000 UTC m=+0.292204591 container init c9fe0dbd6bb53415ee1f08319fd9d07fe343f916aeae98cb76d47c62338bd825 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eb14e0fb-539d-4adf-a363-7578d5d74818, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 29 12:26:17 np0005601226 podman[261308]: 2026-01-29 17:26:17.307588887 +0000 UTC m=+0.297795640 container start c9fe0dbd6bb53415ee1f08319fd9d07fe343f916aeae98cb76d47c62338bd825 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eb14e0fb-539d-4adf-a363-7578d5d74818, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 29 12:26:17 np0005601226 nova_compute[239456]: 2026-01-29 17:26:17.317 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 29 12:26:17 np0005601226 nova_compute[239456]: 2026-01-29 17:26:17.317 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769707577.2303863, da7dfea4-c6b4-4092-833b-3fcb8168ecce => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:26:17 np0005601226 nova_compute[239456]: 2026-01-29 17:26:17.318 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] VM Paused (Lifecycle Event)#033[00m
Jan 29 12:26:17 np0005601226 neutron-haproxy-ovnmeta-eb14e0fb-539d-4adf-a363-7578d5d74818[261347]: [NOTICE]   (261351) : New worker (261353) forked
Jan 29 12:26:17 np0005601226 neutron-haproxy-ovnmeta-eb14e0fb-539d-4adf-a363-7578d5d74818[261347]: [NOTICE]   (261351) : Loading success.
Jan 29 12:26:17 np0005601226 nova_compute[239456]: 2026-01-29 17:26:17.343 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:26:17 np0005601226 nova_compute[239456]: 2026-01-29 17:26:17.348 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769707577.234421, da7dfea4-c6b4-4092-833b-3fcb8168ecce => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:26:17 np0005601226 nova_compute[239456]: 2026-01-29 17:26:17.348 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] VM Resumed (Lifecycle Event)#033[00m
Jan 29 12:26:17 np0005601226 nova_compute[239456]: 2026-01-29 17:26:17.351 239460 INFO nova.compute.manager [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Took 6.12 seconds to spawn the instance on the hypervisor.#033[00m
Jan 29 12:26:17 np0005601226 nova_compute[239456]: 2026-01-29 17:26:17.356 239460 DEBUG nova.compute.manager [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:26:17 np0005601226 nova_compute[239456]: 2026-01-29 17:26:17.369 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:26:17 np0005601226 nova_compute[239456]: 2026-01-29 17:26:17.371 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 29 12:26:17 np0005601226 nova_compute[239456]: 2026-01-29 17:26:17.400 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 29 12:26:17 np0005601226 nova_compute[239456]: 2026-01-29 17:26:17.416 239460 INFO nova.compute.manager [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Took 8.76 seconds to build instance.#033[00m
Jan 29 12:26:17 np0005601226 nova_compute[239456]: 2026-01-29 17:26:17.439 239460 DEBUG oslo_concurrency.lockutils [None req-6f2e1735-af03-4b14-91ac-ffc2939d00bd 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] Lock "da7dfea4-c6b4-4092-833b-3fcb8168ecce" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.847s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:26:17 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1425: 305 pgs: 305 active+clean; 1.7 GiB data, 1.8 GiB used, 58 GiB / 60 GiB avail; 157 KiB/s rd, 51 MiB/s wr, 240 op/s
Jan 29 12:26:18 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:26:18 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/349932791' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:26:18 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:26:18 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/349932791' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:26:18 np0005601226 nova_compute[239456]: 2026-01-29 17:26:18.961 239460 DEBUG nova.compute.manager [req-c966c9d7-1985-42e8-84db-c1530bdcd56b req-273c716f-6355-4ceb-b4dd-1149a0e97580 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Received event network-vif-plugged-11505d82-9174-4f2c-b0fa-040405d852e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:26:18 np0005601226 nova_compute[239456]: 2026-01-29 17:26:18.962 239460 DEBUG oslo_concurrency.lockutils [req-c966c9d7-1985-42e8-84db-c1530bdcd56b req-273c716f-6355-4ceb-b4dd-1149a0e97580 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "da7dfea4-c6b4-4092-833b-3fcb8168ecce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:26:18 np0005601226 nova_compute[239456]: 2026-01-29 17:26:18.963 239460 DEBUG oslo_concurrency.lockutils [req-c966c9d7-1985-42e8-84db-c1530bdcd56b req-273c716f-6355-4ceb-b4dd-1149a0e97580 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "da7dfea4-c6b4-4092-833b-3fcb8168ecce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:26:18 np0005601226 nova_compute[239456]: 2026-01-29 17:26:18.963 239460 DEBUG oslo_concurrency.lockutils [req-c966c9d7-1985-42e8-84db-c1530bdcd56b req-273c716f-6355-4ceb-b4dd-1149a0e97580 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "da7dfea4-c6b4-4092-833b-3fcb8168ecce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:26:18 np0005601226 nova_compute[239456]: 2026-01-29 17:26:18.963 239460 DEBUG nova.compute.manager [req-c966c9d7-1985-42e8-84db-c1530bdcd56b req-273c716f-6355-4ceb-b4dd-1149a0e97580 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] No waiting events found dispatching network-vif-plugged-11505d82-9174-4f2c-b0fa-040405d852e3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 29 12:26:18 np0005601226 nova_compute[239456]: 2026-01-29 17:26:18.963 239460 WARNING nova.compute.manager [req-c966c9d7-1985-42e8-84db-c1530bdcd56b req-273c716f-6355-4ceb-b4dd-1149a0e97580 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Received unexpected event network-vif-plugged-11505d82-9174-4f2c-b0fa-040405d852e3 for instance with vm_state active and task_state None.#033[00m
Jan 29 12:26:19 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1426: 305 pgs: 305 active+clean; 2.0 GiB data, 2.1 GiB used, 58 GiB / 60 GiB avail; 1.6 MiB/s rd, 75 MiB/s wr, 283 op/s
Jan 29 12:26:19 np0005601226 podman[261362]: 2026-01-29 17:26:19.872516553 +0000 UTC m=+0.044441431 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:26:19 np0005601226 podman[261363]: 2026-01-29 17:26:19.929385084 +0000 UTC m=+0.098767774 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 29 12:26:20 np0005601226 nova_compute[239456]: 2026-01-29 17:26:20.202 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:26:20 np0005601226 NetworkManager[49020]: <info>  [1769707580.2034] manager: (patch-br-int-to-provnet-87abb107-3ab5-4304-ac4c-e4e3c79e221f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/73)
Jan 29 12:26:20 np0005601226 NetworkManager[49020]: <info>  [1769707580.2042] manager: (patch-provnet-87abb107-3ab5-4304-ac4c-e4e3c79e221f-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/74)
Jan 29 12:26:20 np0005601226 nova_compute[239456]: 2026-01-29 17:26:20.250 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:26:20 np0005601226 ovn_controller[145556]: 2026-01-29T17:26:20Z|00133|binding|INFO|Releasing lport e21e4231-1978-459b-9dea-17ba78b26f45 from this chassis (sb_readonly=0)
Jan 29 12:26:20 np0005601226 nova_compute[239456]: 2026-01-29 17:26:20.264 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:26:20 np0005601226 nova_compute[239456]: 2026-01-29 17:26:20.273 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:26:20 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e335 do_prune osdmap full prune enabled
Jan 29 12:26:20 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e336 e336: 3 total, 3 up, 3 in
Jan 29 12:26:20 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e336: 3 total, 3 up, 3 in
Jan 29 12:26:21 np0005601226 nova_compute[239456]: 2026-01-29 17:26:21.058 239460 DEBUG nova.compute.manager [req-5bb21b6d-ff9a-40c0-93bc-42871427bf6f req-6dc9fa4f-2927-4923-b18f-94ad0c12e026 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Received event network-changed-11505d82-9174-4f2c-b0fa-040405d852e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:26:21 np0005601226 nova_compute[239456]: 2026-01-29 17:26:21.058 239460 DEBUG nova.compute.manager [req-5bb21b6d-ff9a-40c0-93bc-42871427bf6f req-6dc9fa4f-2927-4923-b18f-94ad0c12e026 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Refreshing instance network info cache due to event network-changed-11505d82-9174-4f2c-b0fa-040405d852e3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 29 12:26:21 np0005601226 nova_compute[239456]: 2026-01-29 17:26:21.059 239460 DEBUG oslo_concurrency.lockutils [req-5bb21b6d-ff9a-40c0-93bc-42871427bf6f req-6dc9fa4f-2927-4923-b18f-94ad0c12e026 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "refresh_cache-da7dfea4-c6b4-4092-833b-3fcb8168ecce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:26:21 np0005601226 nova_compute[239456]: 2026-01-29 17:26:21.059 239460 DEBUG oslo_concurrency.lockutils [req-5bb21b6d-ff9a-40c0-93bc-42871427bf6f req-6dc9fa4f-2927-4923-b18f-94ad0c12e026 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquired lock "refresh_cache-da7dfea4-c6b4-4092-833b-3fcb8168ecce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:26:21 np0005601226 nova_compute[239456]: 2026-01-29 17:26:21.059 239460 DEBUG nova.network.neutron [req-5bb21b6d-ff9a-40c0-93bc-42871427bf6f req-6dc9fa4f-2927-4923-b18f-94ad0c12e026 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Refreshing network info cache for port 11505d82-9174-4f2c-b0fa-040405d852e3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 29 12:26:21 np0005601226 nova_compute[239456]: 2026-01-29 17:26:21.192 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:26:21 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e336 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:26:21 np0005601226 nova_compute[239456]: 2026-01-29 17:26:21.524 239460 DEBUG nova.compute.manager [req-242acf8f-5229-43f7-a3db-750bb04110af req-e6ebdeda-daf6-4cf1-8313-911b097a5401 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Received event network-changed-11505d82-9174-4f2c-b0fa-040405d852e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:26:21 np0005601226 nova_compute[239456]: 2026-01-29 17:26:21.525 239460 DEBUG nova.compute.manager [req-242acf8f-5229-43f7-a3db-750bb04110af req-e6ebdeda-daf6-4cf1-8313-911b097a5401 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Refreshing instance network info cache due to event network-changed-11505d82-9174-4f2c-b0fa-040405d852e3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 29 12:26:21 np0005601226 nova_compute[239456]: 2026-01-29 17:26:21.525 239460 DEBUG oslo_concurrency.lockutils [req-242acf8f-5229-43f7-a3db-750bb04110af req-e6ebdeda-daf6-4cf1-8313-911b097a5401 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "refresh_cache-da7dfea4-c6b4-4092-833b-3fcb8168ecce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:26:21 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1428: 305 pgs: 305 active+clean; 2.1 GiB data, 2.3 GiB used, 58 GiB / 60 GiB avail; 2.4 MiB/s rd, 88 MiB/s wr, 354 op/s
Jan 29 12:26:22 np0005601226 nova_compute[239456]: 2026-01-29 17:26:22.073 239460 DEBUG nova.network.neutron [req-5bb21b6d-ff9a-40c0-93bc-42871427bf6f req-6dc9fa4f-2927-4923-b18f-94ad0c12e026 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Updated VIF entry in instance network info cache for port 11505d82-9174-4f2c-b0fa-040405d852e3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 29 12:26:22 np0005601226 nova_compute[239456]: 2026-01-29 17:26:22.074 239460 DEBUG nova.network.neutron [req-5bb21b6d-ff9a-40c0-93bc-42871427bf6f req-6dc9fa4f-2927-4923-b18f-94ad0c12e026 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Updating instance_info_cache with network_info: [{"id": "11505d82-9174-4f2c-b0fa-040405d852e3", "address": "fa:16:3e:6a:8c:8d", "network": {"id": "eb14e0fb-539d-4adf-a363-7578d5d74818", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1491363634-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f85466673ef54aafa261596930188fc6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11505d82-91", "ovs_interfaceid": "11505d82-9174-4f2c-b0fa-040405d852e3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:26:22 np0005601226 nova_compute[239456]: 2026-01-29 17:26:22.094 239460 DEBUG oslo_concurrency.lockutils [req-5bb21b6d-ff9a-40c0-93bc-42871427bf6f req-6dc9fa4f-2927-4923-b18f-94ad0c12e026 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Releasing lock "refresh_cache-da7dfea4-c6b4-4092-833b-3fcb8168ecce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:26:22 np0005601226 nova_compute[239456]: 2026-01-29 17:26:22.095 239460 DEBUG oslo_concurrency.lockutils [req-242acf8f-5229-43f7-a3db-750bb04110af req-e6ebdeda-daf6-4cf1-8313-911b097a5401 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquired lock "refresh_cache-da7dfea4-c6b4-4092-833b-3fcb8168ecce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:26:22 np0005601226 nova_compute[239456]: 2026-01-29 17:26:22.096 239460 DEBUG nova.network.neutron [req-242acf8f-5229-43f7-a3db-750bb04110af req-e6ebdeda-daf6-4cf1-8313-911b097a5401 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Refreshing network info cache for port 11505d82-9174-4f2c-b0fa-040405d852e3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 29 12:26:22 np0005601226 nova_compute[239456]: 2026-01-29 17:26:22.990 239460 DEBUG nova.network.neutron [req-242acf8f-5229-43f7-a3db-750bb04110af req-e6ebdeda-daf6-4cf1-8313-911b097a5401 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Updated VIF entry in instance network info cache for port 11505d82-9174-4f2c-b0fa-040405d852e3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 29 12:26:22 np0005601226 nova_compute[239456]: 2026-01-29 17:26:22.992 239460 DEBUG nova.network.neutron [req-242acf8f-5229-43f7-a3db-750bb04110af req-e6ebdeda-daf6-4cf1-8313-911b097a5401 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Updating instance_info_cache with network_info: [{"id": "11505d82-9174-4f2c-b0fa-040405d852e3", "address": "fa:16:3e:6a:8c:8d", "network": {"id": "eb14e0fb-539d-4adf-a363-7578d5d74818", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1491363634-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f85466673ef54aafa261596930188fc6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11505d82-91", "ovs_interfaceid": "11505d82-9174-4f2c-b0fa-040405d852e3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:26:23 np0005601226 nova_compute[239456]: 2026-01-29 17:26:23.012 239460 DEBUG oslo_concurrency.lockutils [req-242acf8f-5229-43f7-a3db-750bb04110af req-e6ebdeda-daf6-4cf1-8313-911b097a5401 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Releasing lock "refresh_cache-da7dfea4-c6b4-4092-833b-3fcb8168ecce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:26:23 np0005601226 nova_compute[239456]: 2026-01-29 17:26:23.590 239460 DEBUG nova.compute.manager [req-1d2f0c3d-c9a6-43fd-934e-7e1ce08fddcd req-38bb7dc3-c915-42e9-a9a2-c4ce279eeb0d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Received event network-changed-11505d82-9174-4f2c-b0fa-040405d852e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:26:23 np0005601226 nova_compute[239456]: 2026-01-29 17:26:23.591 239460 DEBUG nova.compute.manager [req-1d2f0c3d-c9a6-43fd-934e-7e1ce08fddcd req-38bb7dc3-c915-42e9-a9a2-c4ce279eeb0d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Refreshing instance network info cache due to event network-changed-11505d82-9174-4f2c-b0fa-040405d852e3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 29 12:26:23 np0005601226 nova_compute[239456]: 2026-01-29 17:26:23.592 239460 DEBUG oslo_concurrency.lockutils [req-1d2f0c3d-c9a6-43fd-934e-7e1ce08fddcd req-38bb7dc3-c915-42e9-a9a2-c4ce279eeb0d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "refresh_cache-da7dfea4-c6b4-4092-833b-3fcb8168ecce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:26:23 np0005601226 nova_compute[239456]: 2026-01-29 17:26:23.592 239460 DEBUG oslo_concurrency.lockutils [req-1d2f0c3d-c9a6-43fd-934e-7e1ce08fddcd req-38bb7dc3-c915-42e9-a9a2-c4ce279eeb0d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquired lock "refresh_cache-da7dfea4-c6b4-4092-833b-3fcb8168ecce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:26:23 np0005601226 nova_compute[239456]: 2026-01-29 17:26:23.593 239460 DEBUG nova.network.neutron [req-1d2f0c3d-c9a6-43fd-934e-7e1ce08fddcd req-38bb7dc3-c915-42e9-a9a2-c4ce279eeb0d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Refreshing network info cache for port 11505d82-9174-4f2c-b0fa-040405d852e3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 29 12:26:23 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1429: 305 pgs: 305 active+clean; 2.1 GiB data, 2.3 GiB used, 58 GiB / 60 GiB avail; 2.4 MiB/s rd, 60 MiB/s wr, 331 op/s
Jan 29 12:26:24 np0005601226 nova_compute[239456]: 2026-01-29 17:26:24.545 239460 DEBUG nova.network.neutron [req-1d2f0c3d-c9a6-43fd-934e-7e1ce08fddcd req-38bb7dc3-c915-42e9-a9a2-c4ce279eeb0d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Updated VIF entry in instance network info cache for port 11505d82-9174-4f2c-b0fa-040405d852e3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 29 12:26:24 np0005601226 nova_compute[239456]: 2026-01-29 17:26:24.546 239460 DEBUG nova.network.neutron [req-1d2f0c3d-c9a6-43fd-934e-7e1ce08fddcd req-38bb7dc3-c915-42e9-a9a2-c4ce279eeb0d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Updating instance_info_cache with network_info: [{"id": "11505d82-9174-4f2c-b0fa-040405d852e3", "address": "fa:16:3e:6a:8c:8d", "network": {"id": "eb14e0fb-539d-4adf-a363-7578d5d74818", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1491363634-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f85466673ef54aafa261596930188fc6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11505d82-91", "ovs_interfaceid": "11505d82-9174-4f2c-b0fa-040405d852e3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:26:24 np0005601226 nova_compute[239456]: 2026-01-29 17:26:24.561 239460 DEBUG oslo_concurrency.lockutils [req-1d2f0c3d-c9a6-43fd-934e-7e1ce08fddcd req-38bb7dc3-c915-42e9-a9a2-c4ce279eeb0d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Releasing lock "refresh_cache-da7dfea4-c6b4-4092-833b-3fcb8168ecce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:26:24 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e336 do_prune osdmap full prune enabled
Jan 29 12:26:24 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e337 e337: 3 total, 3 up, 3 in
Jan 29 12:26:24 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e337: 3 total, 3 up, 3 in
Jan 29 12:26:25 np0005601226 nova_compute[239456]: 2026-01-29 17:26:25.276 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:26:25 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1431: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 3.0 MiB/s rd, 64 MiB/s wr, 334 op/s
Jan 29 12:26:25 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e337 do_prune osdmap full prune enabled
Jan 29 12:26:25 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e338 e338: 3 total, 3 up, 3 in
Jan 29 12:26:25 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e338: 3 total, 3 up, 3 in
Jan 29 12:26:26 np0005601226 nova_compute[239456]: 2026-01-29 17:26:26.194 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:26:26 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e338 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:26:27 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e338 do_prune osdmap full prune enabled
Jan 29 12:26:27 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e339 e339: 3 total, 3 up, 3 in
Jan 29 12:26:27 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e339: 3 total, 3 up, 3 in
Jan 29 12:26:27 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1434: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 343 KiB/s rd, 25 MiB/s wr, 103 op/s
Jan 29 12:26:28 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e339 do_prune osdmap full prune enabled
Jan 29 12:26:28 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e340 e340: 3 total, 3 up, 3 in
Jan 29 12:26:28 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e340: 3 total, 3 up, 3 in
Jan 29 12:26:29 np0005601226 nova_compute[239456]: 2026-01-29 17:26:29.603 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:26:29 np0005601226 nova_compute[239456]: 2026-01-29 17:26:29.604 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 29 12:26:29 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1436: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 627 KiB/s rd, 14 MiB/s wr, 275 op/s
Jan 29 12:26:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:26:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4207864708' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:26:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:26:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4207864708' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:26:30 np0005601226 nova_compute[239456]: 2026-01-29 17:26:30.315 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:26:30 np0005601226 ovn_controller[145556]: 2026-01-29T17:26:30Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6a:8c:8d 10.100.0.4
Jan 29 12:26:30 np0005601226 ovn_controller[145556]: 2026-01-29T17:26:30Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6a:8c:8d 10.100.0.4
Jan 29 12:26:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:26:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/314023068' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:26:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:26:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/314023068' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:26:30 np0005601226 nova_compute[239456]: 2026-01-29 17:26:30.604 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:26:30 np0005601226 nova_compute[239456]: 2026-01-29 17:26:30.604 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:26:30 np0005601226 nova_compute[239456]: 2026-01-29 17:26:30.631 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:26:30 np0005601226 nova_compute[239456]: 2026-01-29 17:26:30.632 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:26:30 np0005601226 nova_compute[239456]: 2026-01-29 17:26:30.632 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:26:30 np0005601226 nova_compute[239456]: 2026-01-29 17:26:30.632 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 29 12:26:30 np0005601226 nova_compute[239456]: 2026-01-29 17:26:30.632 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:26:31 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:26:31 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2069953933' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:26:31 np0005601226 nova_compute[239456]: 2026-01-29 17:26:31.159 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:26:31 np0005601226 nova_compute[239456]: 2026-01-29 17:26:31.196 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:26:31 np0005601226 nova_compute[239456]: 2026-01-29 17:26:31.230 239460 DEBUG nova.virt.libvirt.driver [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 29 12:26:31 np0005601226 nova_compute[239456]: 2026-01-29 17:26:31.230 239460 DEBUG nova.virt.libvirt.driver [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 29 12:26:31 np0005601226 nova_compute[239456]: 2026-01-29 17:26:31.369 239460 WARNING nova.virt.libvirt.driver [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 29 12:26:31 np0005601226 nova_compute[239456]: 2026-01-29 17:26:31.370 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4315MB free_disk=59.98808891605586GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 29 12:26:31 np0005601226 nova_compute[239456]: 2026-01-29 17:26:31.370 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:26:31 np0005601226 nova_compute[239456]: 2026-01-29 17:26:31.370 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:26:31 np0005601226 nova_compute[239456]: 2026-01-29 17:26:31.440 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Instance da7dfea4-c6b4-4092-833b-3fcb8168ecce actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 29 12:26:31 np0005601226 nova_compute[239456]: 2026-01-29 17:26:31.440 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 29 12:26:31 np0005601226 nova_compute[239456]: 2026-01-29 17:26:31.441 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 29 12:26:31 np0005601226 nova_compute[239456]: 2026-01-29 17:26:31.458 239460 DEBUG nova.scheduler.client.report [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Refreshing inventories for resource provider 79259295-532c-4a51-8f50-027529735b0c _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 29 12:26:31 np0005601226 nova_compute[239456]: 2026-01-29 17:26:31.473 239460 DEBUG nova.scheduler.client.report [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Updating ProviderTree inventory for provider 79259295-532c-4a51-8f50-027529735b0c from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 29 12:26:31 np0005601226 nova_compute[239456]: 2026-01-29 17:26:31.474 239460 DEBUG nova.compute.provider_tree [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Updating inventory in ProviderTree for provider 79259295-532c-4a51-8f50-027529735b0c with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 29 12:26:31 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:26:31 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e340 do_prune osdmap full prune enabled
Jan 29 12:26:31 np0005601226 nova_compute[239456]: 2026-01-29 17:26:31.487 239460 DEBUG nova.scheduler.client.report [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Refreshing aggregate associations for resource provider 79259295-532c-4a51-8f50-027529735b0c, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 29 12:26:31 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e341 e341: 3 total, 3 up, 3 in
Jan 29 12:26:31 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e341: 3 total, 3 up, 3 in
Jan 29 12:26:31 np0005601226 nova_compute[239456]: 2026-01-29 17:26:31.515 239460 DEBUG nova.scheduler.client.report [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Refreshing trait associations for resource provider 79259295-532c-4a51-8f50-027529735b0c, traits: HW_CPU_X86_SSE4A,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NODE,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AMD_SVM,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_ABM,HW_CPU_X86_MMX,HW_CPU_X86_AVX,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SVM,COMPUTE_RESCUE_BFV,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_BMI2,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE41,HW_CPU_X86_F16C,COMPUTE_DEVICE_TAGGING,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_ISO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 29 12:26:31 np0005601226 nova_compute[239456]: 2026-01-29 17:26:31.546 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 29 12:26:31 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1438: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 610 KiB/s rd, 4.2 MiB/s wr, 275 op/s
Jan 29 12:26:32 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:26:32 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1568933111' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:26:32 np0005601226 nova_compute[239456]: 2026-01-29 17:26:32.033 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 29 12:26:32 np0005601226 nova_compute[239456]: 2026-01-29 17:26:32.038 239460 DEBUG nova.compute.provider_tree [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 29 12:26:32 np0005601226 nova_compute[239456]: 2026-01-29 17:26:32.055 239460 DEBUG nova.scheduler.client.report [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 29 12:26:32 np0005601226 nova_compute[239456]: 2026-01-29 17:26:32.084 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 29 12:26:32 np0005601226 nova_compute[239456]: 2026-01-29 17:26:32.085 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.715s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 12:26:32 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e341 do_prune osdmap full prune enabled
Jan 29 12:26:32 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e342 e342: 3 total, 3 up, 3 in
Jan 29 12:26:32 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e342: 3 total, 3 up, 3 in
Jan 29 12:26:33 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1440: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 4.2 MiB/s rd, 4.3 MiB/s wr, 320 op/s
Jan 29 12:26:34 np0005601226 nova_compute[239456]: 2026-01-29 17:26:34.084 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 12:26:34 np0005601226 nova_compute[239456]: 2026-01-29 17:26:34.085 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 12:26:34 np0005601226 nova_compute[239456]: 2026-01-29 17:26:34.085 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 29 12:26:34 np0005601226 nova_compute[239456]: 2026-01-29 17:26:34.085 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 29 12:26:34 np0005601226 nova_compute[239456]: 2026-01-29 17:26:34.725 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquiring lock "refresh_cache-da7dfea4-c6b4-4092-833b-3fcb8168ecce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 29 12:26:34 np0005601226 nova_compute[239456]: 2026-01-29 17:26:34.725 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquired lock "refresh_cache-da7dfea4-c6b4-4092-833b-3fcb8168ecce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 29 12:26:34 np0005601226 nova_compute[239456]: 2026-01-29 17:26:34.725 239460 DEBUG nova.network.neutron [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 29 12:26:34 np0005601226 nova_compute[239456]: 2026-01-29 17:26:34.726 239460 DEBUG nova.objects.instance [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lazy-loading 'info_cache' on Instance uuid da7dfea4-c6b4-4092-833b-3fcb8168ecce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 29 12:26:35 np0005601226 nova_compute[239456]: 2026-01-29 17:26:35.317 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:26:35 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1441: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 3.2 MiB/s rd, 2.7 MiB/s wr, 303 op/s
Jan 29 12:26:36 np0005601226 nova_compute[239456]: 2026-01-29 17:26:36.015 239460 DEBUG nova.network.neutron [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Updating instance_info_cache with network_info: [{"id": "11505d82-9174-4f2c-b0fa-040405d852e3", "address": "fa:16:3e:6a:8c:8d", "network": {"id": "eb14e0fb-539d-4adf-a363-7578d5d74818", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1491363634-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f85466673ef54aafa261596930188fc6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11505d82-91", "ovs_interfaceid": "11505d82-9174-4f2c-b0fa-040405d852e3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 29 12:26:36 np0005601226 nova_compute[239456]: 2026-01-29 17:26:36.029 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Releasing lock "refresh_cache-da7dfea4-c6b4-4092-833b-3fcb8168ecce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 29 12:26:36 np0005601226 nova_compute[239456]: 2026-01-29 17:26:36.030 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 29 12:26:36 np0005601226 nova_compute[239456]: 2026-01-29 17:26:36.030 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 12:26:36 np0005601226 nova_compute[239456]: 2026-01-29 17:26:36.031 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 12:26:36 np0005601226 nova_compute[239456]: 2026-01-29 17:26:36.199 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:26:36 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:26:36 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e342 do_prune osdmap full prune enabled
Jan 29 12:26:36 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e343 e343: 3 total, 3 up, 3 in
Jan 29 12:26:36 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e343: 3 total, 3 up, 3 in
Jan 29 12:26:36 np0005601226 nova_compute[239456]: 2026-01-29 17:26:36.876 239460 DEBUG nova.compute.manager [req-0683c69f-9700-4650-ac82-292acb827158 req-8d36d58c-7ce9-4218-94aa-20764e79fb50 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Received event network-changed-11505d82-9174-4f2c-b0fa-040405d852e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 29 12:26:36 np0005601226 nova_compute[239456]: 2026-01-29 17:26:36.876 239460 DEBUG nova.compute.manager [req-0683c69f-9700-4650-ac82-292acb827158 req-8d36d58c-7ce9-4218-94aa-20764e79fb50 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Refreshing instance network info cache due to event network-changed-11505d82-9174-4f2c-b0fa-040405d852e3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 29 12:26:36 np0005601226 nova_compute[239456]: 2026-01-29 17:26:36.877 239460 DEBUG oslo_concurrency.lockutils [req-0683c69f-9700-4650-ac82-292acb827158 req-8d36d58c-7ce9-4218-94aa-20764e79fb50 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "refresh_cache-da7dfea4-c6b4-4092-833b-3fcb8168ecce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 29 12:26:36 np0005601226 nova_compute[239456]: 2026-01-29 17:26:36.877 239460 DEBUG oslo_concurrency.lockutils [req-0683c69f-9700-4650-ac82-292acb827158 req-8d36d58c-7ce9-4218-94aa-20764e79fb50 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquired lock "refresh_cache-da7dfea4-c6b4-4092-833b-3fcb8168ecce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 29 12:26:36 np0005601226 nova_compute[239456]: 2026-01-29 17:26:36.877 239460 DEBUG nova.network.neutron [req-0683c69f-9700-4650-ac82-292acb827158 req-8d36d58c-7ce9-4218-94aa-20764e79fb50 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Refreshing network info cache for port 11505d82-9174-4f2c-b0fa-040405d852e3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 29 12:26:36 np0005601226 nova_compute[239456]: 2026-01-29 17:26:36.951 239460 DEBUG oslo_concurrency.lockutils [None req-b781558c-f1b8-4796-bdf0-acacb929d5cf 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] Acquiring lock "da7dfea4-c6b4-4092-833b-3fcb8168ecce" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 12:26:36 np0005601226 nova_compute[239456]: 2026-01-29 17:26:36.952 239460 DEBUG oslo_concurrency.lockutils [None req-b781558c-f1b8-4796-bdf0-acacb929d5cf 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] Lock "da7dfea4-c6b4-4092-833b-3fcb8168ecce" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 12:26:36 np0005601226 nova_compute[239456]: 2026-01-29 17:26:36.952 239460 DEBUG oslo_concurrency.lockutils [None req-b781558c-f1b8-4796-bdf0-acacb929d5cf 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] Acquiring lock "da7dfea4-c6b4-4092-833b-3fcb8168ecce-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 12:26:36 np0005601226 nova_compute[239456]: 2026-01-29 17:26:36.952 239460 DEBUG oslo_concurrency.lockutils [None req-b781558c-f1b8-4796-bdf0-acacb929d5cf 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] Lock "da7dfea4-c6b4-4092-833b-3fcb8168ecce-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 12:26:36 np0005601226 nova_compute[239456]: 2026-01-29 17:26:36.952 239460 DEBUG oslo_concurrency.lockutils [None req-b781558c-f1b8-4796-bdf0-acacb929d5cf 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] Lock "da7dfea4-c6b4-4092-833b-3fcb8168ecce-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 12:26:36 np0005601226 nova_compute[239456]: 2026-01-29 17:26:36.953 239460 INFO nova.compute.manager [None req-b781558c-f1b8-4796-bdf0-acacb929d5cf 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Terminating instance
Jan 29 12:26:36 np0005601226 nova_compute[239456]: 2026-01-29 17:26:36.954 239460 DEBUG nova.compute.manager [None req-b781558c-f1b8-4796-bdf0-acacb929d5cf 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 29 12:26:36 np0005601226 kernel: tap11505d82-91 (unregistering): left promiscuous mode
Jan 29 12:26:37 np0005601226 NetworkManager[49020]: <info>  [1769707597.0009] device (tap11505d82-91): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 29 12:26:37 np0005601226 ovn_controller[145556]: 2026-01-29T17:26:37Z|00134|binding|INFO|Releasing lport 11505d82-9174-4f2c-b0fa-040405d852e3 from this chassis (sb_readonly=0)
Jan 29 12:26:37 np0005601226 ovn_controller[145556]: 2026-01-29T17:26:37Z|00135|binding|INFO|Setting lport 11505d82-9174-4f2c-b0fa-040405d852e3 down in Southbound
Jan 29 12:26:37 np0005601226 ovn_controller[145556]: 2026-01-29T17:26:37Z|00136|binding|INFO|Removing iface tap11505d82-91 ovn-installed in OVS
Jan 29 12:26:37 np0005601226 nova_compute[239456]: 2026-01-29 17:26:37.046 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:26:37 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:26:37.051 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6a:8c:8d 10.100.0.4'], port_security=['fa:16:3e:6a:8c:8d 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'da7dfea4-c6b4-4092-833b-3fcb8168ecce', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eb14e0fb-539d-4adf-a363-7578d5d74818', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f85466673ef54aafa261596930188fc6', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ae0cd238-8789-4f4d-a3b0-01aadc71310b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eb488830-2e71-4458-ae31-77620b55e59f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], logical_port=11505d82-9174-4f2c-b0fa-040405d852e3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 29 12:26:37 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:26:37.052 155625 INFO neutron.agent.ovn.metadata.agent [-] Port 11505d82-9174-4f2c-b0fa-040405d852e3 in datapath eb14e0fb-539d-4adf-a363-7578d5d74818 unbound from our chassis
Jan 29 12:26:37 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:26:37.053 155625 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network eb14e0fb-539d-4adf-a363-7578d5d74818, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 29 12:26:37 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:26:37.054 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[c9bb6b92-751c-47c2-85df-a8de7e342756]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 29 12:26:37 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:26:37.055 155625 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-eb14e0fb-539d-4adf-a363-7578d5d74818 namespace which is not needed anymore
Jan 29 12:26:37 np0005601226 nova_compute[239456]: 2026-01-29 17:26:37.060 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:26:37 np0005601226 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Jan 29 12:26:37 np0005601226 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000d.scope: Consumed 11.994s CPU time.
Jan 29 12:26:37 np0005601226 systemd-machined[207561]: Machine qemu-13-instance-0000000d terminated.
Jan 29 12:26:37 np0005601226 neutron-haproxy-ovnmeta-eb14e0fb-539d-4adf-a363-7578d5d74818[261347]: [NOTICE]   (261351) : haproxy version is 2.8.14-c23fe91
Jan 29 12:26:37 np0005601226 neutron-haproxy-ovnmeta-eb14e0fb-539d-4adf-a363-7578d5d74818[261347]: [NOTICE]   (261351) : path to executable is /usr/sbin/haproxy
Jan 29 12:26:37 np0005601226 neutron-haproxy-ovnmeta-eb14e0fb-539d-4adf-a363-7578d5d74818[261347]: [WARNING]  (261351) : Exiting Master process...
Jan 29 12:26:37 np0005601226 neutron-haproxy-ovnmeta-eb14e0fb-539d-4adf-a363-7578d5d74818[261347]: [WARNING]  (261351) : Exiting Master process...
Jan 29 12:26:37 np0005601226 neutron-haproxy-ovnmeta-eb14e0fb-539d-4adf-a363-7578d5d74818[261347]: [ALERT]    (261351) : Current worker (261353) exited with code 143 (Terminated)
Jan 29 12:26:37 np0005601226 neutron-haproxy-ovnmeta-eb14e0fb-539d-4adf-a363-7578d5d74818[261347]: [WARNING]  (261351) : All workers exited. Exiting... (0)
Jan 29 12:26:37 np0005601226 systemd[1]: libpod-c9fe0dbd6bb53415ee1f08319fd9d07fe343f916aeae98cb76d47c62338bd825.scope: Deactivated successfully.
Jan 29 12:26:37 np0005601226 podman[261481]: 2026-01-29 17:26:37.154454983 +0000 UTC m=+0.037085764 container died c9fe0dbd6bb53415ee1f08319fd9d07fe343f916aeae98cb76d47c62338bd825 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eb14e0fb-539d-4adf-a363-7578d5d74818, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:26:37 np0005601226 nova_compute[239456]: 2026-01-29 17:26:37.168 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:26:37 np0005601226 nova_compute[239456]: 2026-01-29 17:26:37.172 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:26:37 np0005601226 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c9fe0dbd6bb53415ee1f08319fd9d07fe343f916aeae98cb76d47c62338bd825-userdata-shm.mount: Deactivated successfully.
Jan 29 12:26:37 np0005601226 systemd[1]: var-lib-containers-storage-overlay-38b04e386572b177b7afdbff4d0472a008ecaf146c093ff0ca78f52da3a55513-merged.mount: Deactivated successfully.
Jan 29 12:26:37 np0005601226 nova_compute[239456]: 2026-01-29 17:26:37.187 239460 INFO nova.virt.libvirt.driver [-] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Instance destroyed successfully.
Jan 29 12:26:37 np0005601226 nova_compute[239456]: 2026-01-29 17:26:37.187 239460 DEBUG nova.objects.instance [None req-b781558c-f1b8-4796-bdf0-acacb929d5cf 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] Lazy-loading 'resources' on Instance uuid da7dfea4-c6b4-4092-833b-3fcb8168ecce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 29 12:26:37 np0005601226 podman[261481]: 2026-01-29 17:26:37.194547045 +0000 UTC m=+0.077177836 container cleanup c9fe0dbd6bb53415ee1f08319fd9d07fe343f916aeae98cb76d47c62338bd825 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eb14e0fb-539d-4adf-a363-7578d5d74818, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 29 12:26:37 np0005601226 systemd[1]: libpod-conmon-c9fe0dbd6bb53415ee1f08319fd9d07fe343f916aeae98cb76d47c62338bd825.scope: Deactivated successfully.
Jan 29 12:26:37 np0005601226 nova_compute[239456]: 2026-01-29 17:26:37.209 239460 DEBUG nova.virt.libvirt.vif [None req-b781558c-f1b8-4796-bdf0-acacb929d5cf 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-29T17:26:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBackupRestore-server-1463029357',display_name='tempest-TestVolumeBackupRestore-server-1463029357',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebackuprestore-server-1463029357',id=13,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN655sdUjJyRlyTIjOVqSzCiiC3hRdOQulXLN544fJ+Fu8Qe4J50LAroKbasRmPK104qzQhOmAn9IPWg4P5yk1aDqwYqb7hvQfPaewjT4XMjIiibFm1fpI7EnV8FxiaYYw==',key_name='tempest-TestVolumeBackupRestore-1300787818',keypairs=<?>,launch_index=0,launched_at=2026-01-29T17:26:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f85466673ef54aafa261596930188fc6',ramdisk_id='',reservation_id='r-mdsa05g5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBackupRestore-1783911643',owner_user_name='tempest-TestVolumeBackupRestore-1783911643-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-29T17:26:17Z,user_data=None,user_id='676e0657fd9a487a9e331a099119fe7e',uuid=da7dfea4-c6b4-4092-833b-3fcb8168ecce,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "11505d82-9174-4f2c-b0fa-040405d852e3", "address": "fa:16:3e:6a:8c:8d", "network": {"id": "eb14e0fb-539d-4adf-a363-7578d5d74818", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1491363634-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f85466673ef54aafa261596930188fc6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11505d82-91", "ovs_interfaceid": "11505d82-9174-4f2c-b0fa-040405d852e3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 29 12:26:37 np0005601226 nova_compute[239456]: 2026-01-29 17:26:37.209 239460 DEBUG nova.network.os_vif_util [None req-b781558c-f1b8-4796-bdf0-acacb929d5cf 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] Converting VIF {"id": "11505d82-9174-4f2c-b0fa-040405d852e3", "address": "fa:16:3e:6a:8c:8d", "network": {"id": "eb14e0fb-539d-4adf-a363-7578d5d74818", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1491363634-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f85466673ef54aafa261596930188fc6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11505d82-91", "ovs_interfaceid": "11505d82-9174-4f2c-b0fa-040405d852e3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 29 12:26:37 np0005601226 nova_compute[239456]: 2026-01-29 17:26:37.210 239460 DEBUG nova.network.os_vif_util [None req-b781558c-f1b8-4796-bdf0-acacb929d5cf 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6a:8c:8d,bridge_name='br-int',has_traffic_filtering=True,id=11505d82-9174-4f2c-b0fa-040405d852e3,network=Network(eb14e0fb-539d-4adf-a363-7578d5d74818),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11505d82-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 29 12:26:37 np0005601226 nova_compute[239456]: 2026-01-29 17:26:37.210 239460 DEBUG os_vif [None req-b781558c-f1b8-4796-bdf0-acacb929d5cf 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6a:8c:8d,bridge_name='br-int',has_traffic_filtering=True,id=11505d82-9174-4f2c-b0fa-040405d852e3,network=Network(eb14e0fb-539d-4adf-a363-7578d5d74818),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11505d82-91') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 29 12:26:37 np0005601226 nova_compute[239456]: 2026-01-29 17:26:37.212 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:26:37 np0005601226 nova_compute[239456]: 2026-01-29 17:26:37.212 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap11505d82-91, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 29 12:26:37 np0005601226 nova_compute[239456]: 2026-01-29 17:26:37.213 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:26:37 np0005601226 nova_compute[239456]: 2026-01-29 17:26:37.216 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 29 12:26:37 np0005601226 nova_compute[239456]: 2026-01-29 17:26:37.218 239460 INFO os_vif [None req-b781558c-f1b8-4796-bdf0-acacb929d5cf 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6a:8c:8d,bridge_name='br-int',has_traffic_filtering=True,id=11505d82-9174-4f2c-b0fa-040405d852e3,network=Network(eb14e0fb-539d-4adf-a363-7578d5d74818),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11505d82-91')#033[00m
Jan 29 12:26:37 np0005601226 podman[261520]: 2026-01-29 17:26:37.241683367 +0000 UTC m=+0.035697306 container remove c9fe0dbd6bb53415ee1f08319fd9d07fe343f916aeae98cb76d47c62338bd825 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eb14e0fb-539d-4adf-a363-7578d5d74818, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 29 12:26:37 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:26:37.244 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[e861fbee-36c4-44b1-8aac-1b673c23f80e]: (4, ('Thu Jan 29 05:26:37 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-eb14e0fb-539d-4adf-a363-7578d5d74818 (c9fe0dbd6bb53415ee1f08319fd9d07fe343f916aeae98cb76d47c62338bd825)\nc9fe0dbd6bb53415ee1f08319fd9d07fe343f916aeae98cb76d47c62338bd825\nThu Jan 29 05:26:37 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-eb14e0fb-539d-4adf-a363-7578d5d74818 (c9fe0dbd6bb53415ee1f08319fd9d07fe343f916aeae98cb76d47c62338bd825)\nc9fe0dbd6bb53415ee1f08319fd9d07fe343f916aeae98cb76d47c62338bd825\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:26:37 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:26:37.246 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[975f3aee-7e27-4d5f-be89-caf5786761bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:26:37 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:26:37.247 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeb14e0fb-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:26:37 np0005601226 kernel: tapeb14e0fb-50: left promiscuous mode
Jan 29 12:26:37 np0005601226 nova_compute[239456]: 2026-01-29 17:26:37.248 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:26:37 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:26:37.254 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[95067861-e517-4c46-a749-97a8e512c5ec]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:26:37 np0005601226 nova_compute[239456]: 2026-01-29 17:26:37.258 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:26:37 np0005601226 nova_compute[239456]: 2026-01-29 17:26:37.262 239460 DEBUG nova.compute.manager [req-8f1d09b2-579a-4526-be0b-13e447073c5b req-9f104e0e-81f2-497c-8ad8-b7f526d3a519 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Received event network-vif-unplugged-11505d82-9174-4f2c-b0fa-040405d852e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:26:37 np0005601226 nova_compute[239456]: 2026-01-29 17:26:37.262 239460 DEBUG oslo_concurrency.lockutils [req-8f1d09b2-579a-4526-be0b-13e447073c5b req-9f104e0e-81f2-497c-8ad8-b7f526d3a519 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "da7dfea4-c6b4-4092-833b-3fcb8168ecce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:26:37 np0005601226 nova_compute[239456]: 2026-01-29 17:26:37.263 239460 DEBUG oslo_concurrency.lockutils [req-8f1d09b2-579a-4526-be0b-13e447073c5b req-9f104e0e-81f2-497c-8ad8-b7f526d3a519 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "da7dfea4-c6b4-4092-833b-3fcb8168ecce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:26:37 np0005601226 nova_compute[239456]: 2026-01-29 17:26:37.263 239460 DEBUG oslo_concurrency.lockutils [req-8f1d09b2-579a-4526-be0b-13e447073c5b req-9f104e0e-81f2-497c-8ad8-b7f526d3a519 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "da7dfea4-c6b4-4092-833b-3fcb8168ecce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:26:37 np0005601226 nova_compute[239456]: 2026-01-29 17:26:37.263 239460 DEBUG nova.compute.manager [req-8f1d09b2-579a-4526-be0b-13e447073c5b req-9f104e0e-81f2-497c-8ad8-b7f526d3a519 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] No waiting events found dispatching network-vif-unplugged-11505d82-9174-4f2c-b0fa-040405d852e3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 29 12:26:37 np0005601226 nova_compute[239456]: 2026-01-29 17:26:37.263 239460 DEBUG nova.compute.manager [req-8f1d09b2-579a-4526-be0b-13e447073c5b req-9f104e0e-81f2-497c-8ad8-b7f526d3a519 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Received event network-vif-unplugged-11505d82-9174-4f2c-b0fa-040405d852e3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 29 12:26:37 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:26:37.267 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[1fa39f5c-ac6b-4eef-949b-fa289c669aa1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:26:37 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:26:37.268 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[d0d6afb1-e5b7-49a0-a6d1-57b5cbabc1c0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:26:37 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:26:37.279 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[2969861b-539f-4dc5-b649-ec60113baba4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 493007, 'reachable_time': 30765, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261553, 'error': None, 'target': 'ovnmeta-eb14e0fb-539d-4adf-a363-7578d5d74818', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:26:37 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:26:37.282 156164 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-eb14e0fb-539d-4adf-a363-7578d5d74818 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 29 12:26:37 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:26:37.282 156164 DEBUG oslo.privsep.daemon [-] privsep: reply[1041565a-a807-47a1-9edf-3875d4bf0f89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:26:37 np0005601226 systemd[1]: run-netns-ovnmeta\x2deb14e0fb\x2d539d\x2d4adf\x2da363\x2d7578d5d74818.mount: Deactivated successfully.
Jan 29 12:26:37 np0005601226 nova_compute[239456]: 2026-01-29 17:26:37.330 239460 INFO nova.virt.libvirt.driver [None req-b781558c-f1b8-4796-bdf0-acacb929d5cf 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Deleting instance files /var/lib/nova/instances/da7dfea4-c6b4-4092-833b-3fcb8168ecce_del#033[00m
Jan 29 12:26:37 np0005601226 nova_compute[239456]: 2026-01-29 17:26:37.331 239460 INFO nova.virt.libvirt.driver [None req-b781558c-f1b8-4796-bdf0-acacb929d5cf 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Deletion of /var/lib/nova/instances/da7dfea4-c6b4-4092-833b-3fcb8168ecce_del complete#033[00m
Jan 29 12:26:37 np0005601226 nova_compute[239456]: 2026-01-29 17:26:37.496 239460 INFO nova.compute.manager [None req-b781558c-f1b8-4796-bdf0-acacb929d5cf 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Took 0.54 seconds to destroy the instance on the hypervisor.#033[00m
Jan 29 12:26:37 np0005601226 nova_compute[239456]: 2026-01-29 17:26:37.497 239460 DEBUG oslo.service.loopingcall [None req-b781558c-f1b8-4796-bdf0-acacb929d5cf 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 29 12:26:37 np0005601226 nova_compute[239456]: 2026-01-29 17:26:37.497 239460 DEBUG nova.compute.manager [-] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 29 12:26:37 np0005601226 nova_compute[239456]: 2026-01-29 17:26:37.497 239460 DEBUG nova.network.neutron [-] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 29 12:26:37 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:26:37 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3128466442' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:26:37 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e343 do_prune osdmap full prune enabled
Jan 29 12:26:37 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e344 e344: 3 total, 3 up, 3 in
Jan 29 12:26:37 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e344: 3 total, 3 up, 3 in
Jan 29 12:26:37 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1444: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 3.7 MiB/s rd, 1.2 MiB/s wr, 168 op/s
Jan 29 12:26:38 np0005601226 nova_compute[239456]: 2026-01-29 17:26:38.025 239460 DEBUG nova.network.neutron [req-0683c69f-9700-4650-ac82-292acb827158 req-8d36d58c-7ce9-4218-94aa-20764e79fb50 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Updated VIF entry in instance network info cache for port 11505d82-9174-4f2c-b0fa-040405d852e3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 29 12:26:38 np0005601226 nova_compute[239456]: 2026-01-29 17:26:38.026 239460 DEBUG nova.network.neutron [req-0683c69f-9700-4650-ac82-292acb827158 req-8d36d58c-7ce9-4218-94aa-20764e79fb50 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Updating instance_info_cache with network_info: [{"id": "11505d82-9174-4f2c-b0fa-040405d852e3", "address": "fa:16:3e:6a:8c:8d", "network": {"id": "eb14e0fb-539d-4adf-a363-7578d5d74818", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1491363634-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f85466673ef54aafa261596930188fc6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11505d82-91", "ovs_interfaceid": "11505d82-9174-4f2c-b0fa-040405d852e3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:26:38 np0005601226 nova_compute[239456]: 2026-01-29 17:26:38.034 239460 DEBUG nova.network.neutron [-] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:26:38 np0005601226 nova_compute[239456]: 2026-01-29 17:26:38.046 239460 DEBUG oslo_concurrency.lockutils [req-0683c69f-9700-4650-ac82-292acb827158 req-8d36d58c-7ce9-4218-94aa-20764e79fb50 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Releasing lock "refresh_cache-da7dfea4-c6b4-4092-833b-3fcb8168ecce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:26:38 np0005601226 nova_compute[239456]: 2026-01-29 17:26:38.050 239460 INFO nova.compute.manager [-] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Took 0.55 seconds to deallocate network for instance.#033[00m
Jan 29 12:26:38 np0005601226 nova_compute[239456]: 2026-01-29 17:26:38.203 239460 INFO nova.compute.manager [None req-b781558c-f1b8-4796-bdf0-acacb929d5cf 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Took 0.15 seconds to detach 1 volumes for instance.#033[00m
Jan 29 12:26:38 np0005601226 nova_compute[239456]: 2026-01-29 17:26:38.254 239460 DEBUG oslo_concurrency.lockutils [None req-b781558c-f1b8-4796-bdf0-acacb929d5cf 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:26:38 np0005601226 nova_compute[239456]: 2026-01-29 17:26:38.254 239460 DEBUG oslo_concurrency.lockutils [None req-b781558c-f1b8-4796-bdf0-acacb929d5cf 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:26:38 np0005601226 nova_compute[239456]: 2026-01-29 17:26:38.291 239460 DEBUG oslo_concurrency.processutils [None req-b781558c-f1b8-4796-bdf0-acacb929d5cf 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:26:38 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e344 do_prune osdmap full prune enabled
Jan 29 12:26:38 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e345 e345: 3 total, 3 up, 3 in
Jan 29 12:26:38 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e345: 3 total, 3 up, 3 in
Jan 29 12:26:38 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:26:38 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4049307721' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:26:38 np0005601226 nova_compute[239456]: 2026-01-29 17:26:38.828 239460 DEBUG oslo_concurrency.processutils [None req-b781558c-f1b8-4796-bdf0-acacb929d5cf 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:26:38 np0005601226 nova_compute[239456]: 2026-01-29 17:26:38.833 239460 DEBUG nova.compute.provider_tree [None req-b781558c-f1b8-4796-bdf0-acacb929d5cf 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:26:38 np0005601226 nova_compute[239456]: 2026-01-29 17:26:38.846 239460 DEBUG nova.scheduler.client.report [None req-b781558c-f1b8-4796-bdf0-acacb929d5cf 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:26:38 np0005601226 nova_compute[239456]: 2026-01-29 17:26:38.866 239460 DEBUG oslo_concurrency.lockutils [None req-b781558c-f1b8-4796-bdf0-acacb929d5cf 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.612s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:26:38 np0005601226 nova_compute[239456]: 2026-01-29 17:26:38.892 239460 INFO nova.scheduler.client.report [None req-b781558c-f1b8-4796-bdf0-acacb929d5cf 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] Deleted allocations for instance da7dfea4-c6b4-4092-833b-3fcb8168ecce#033[00m
Jan 29 12:26:38 np0005601226 nova_compute[239456]: 2026-01-29 17:26:38.956 239460 DEBUG oslo_concurrency.lockutils [None req-b781558c-f1b8-4796-bdf0-acacb929d5cf 676e0657fd9a487a9e331a099119fe7e f85466673ef54aafa261596930188fc6 - - default default] Lock "da7dfea4-c6b4-4092-833b-3fcb8168ecce" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:26:39 np0005601226 nova_compute[239456]: 2026-01-29 17:26:39.336 239460 DEBUG nova.compute.manager [req-c7f92f5e-92e7-41c5-87ca-d09e6b5d6c6d req-d025b812-47fd-42e6-b252-30cb5c8fa0ec 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Received event network-vif-plugged-11505d82-9174-4f2c-b0fa-040405d852e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:26:39 np0005601226 nova_compute[239456]: 2026-01-29 17:26:39.337 239460 DEBUG oslo_concurrency.lockutils [req-c7f92f5e-92e7-41c5-87ca-d09e6b5d6c6d req-d025b812-47fd-42e6-b252-30cb5c8fa0ec 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "da7dfea4-c6b4-4092-833b-3fcb8168ecce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:26:39 np0005601226 nova_compute[239456]: 2026-01-29 17:26:39.337 239460 DEBUG oslo_concurrency.lockutils [req-c7f92f5e-92e7-41c5-87ca-d09e6b5d6c6d req-d025b812-47fd-42e6-b252-30cb5c8fa0ec 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "da7dfea4-c6b4-4092-833b-3fcb8168ecce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:26:39 np0005601226 nova_compute[239456]: 2026-01-29 17:26:39.338 239460 DEBUG oslo_concurrency.lockutils [req-c7f92f5e-92e7-41c5-87ca-d09e6b5d6c6d req-d025b812-47fd-42e6-b252-30cb5c8fa0ec 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "da7dfea4-c6b4-4092-833b-3fcb8168ecce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:26:39 np0005601226 nova_compute[239456]: 2026-01-29 17:26:39.338 239460 DEBUG nova.compute.manager [req-c7f92f5e-92e7-41c5-87ca-d09e6b5d6c6d req-d025b812-47fd-42e6-b252-30cb5c8fa0ec 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] No waiting events found dispatching network-vif-plugged-11505d82-9174-4f2c-b0fa-040405d852e3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 29 12:26:39 np0005601226 nova_compute[239456]: 2026-01-29 17:26:39.339 239460 WARNING nova.compute.manager [req-c7f92f5e-92e7-41c5-87ca-d09e6b5d6c6d req-d025b812-47fd-42e6-b252-30cb5c8fa0ec 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Received unexpected event network-vif-plugged-11505d82-9174-4f2c-b0fa-040405d852e3 for instance with vm_state deleted and task_state None.#033[00m
Jan 29 12:26:39 np0005601226 nova_compute[239456]: 2026-01-29 17:26:39.339 239460 DEBUG nova.compute.manager [req-c7f92f5e-92e7-41c5-87ca-d09e6b5d6c6d req-d025b812-47fd-42e6-b252-30cb5c8fa0ec 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Received event network-vif-deleted-11505d82-9174-4f2c-b0fa-040405d852e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:26:39 np0005601226 nova_compute[239456]: 2026-01-29 17:26:39.604 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:26:39 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1446: 305 pgs: 305 active+clean; 2.4 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 2.7 MiB/s rd, 6.9 MiB/s wr, 272 op/s
Jan 29 12:26:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:26:40.287 155625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:26:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:26:40.288 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:26:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:26:40.288 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:26:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:26:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:26:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:26:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:26:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:26:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:26:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2026-01-29_17:26:40
Jan 29 12:26:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 12:26:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] do_upmap
Jan 29 12:26:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.control', 'default.rgw.meta', 'volumes', 'backups', 'vms', '.rgw.root', 'cephfs.cephfs.meta', '.mgr', 'cephfs.cephfs.data', 'images']
Jan 29 12:26:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 12:26:40 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e345 do_prune osdmap full prune enabled
Jan 29 12:26:40 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e346 e346: 3 total, 3 up, 3 in
Jan 29 12:26:40 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e346: 3 total, 3 up, 3 in
Jan 29 12:26:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 12:26:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:26:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 12:26:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:26:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:26:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:26:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:26:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:26:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:26:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:26:41 np0005601226 nova_compute[239456]: 2026-01-29 17:26:41.200 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:26:41 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:26:41 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3360987161' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:26:41 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:26:41 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3360987161' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:26:41 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:26:41 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1463933638' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:26:41 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:26:41 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1463933638' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:26:41 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:26:41 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1448: 305 pgs: 305 active+clean; 2.4 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 4.2 MiB/s rd, 6.9 MiB/s wr, 247 op/s
Jan 29 12:26:42 np0005601226 nova_compute[239456]: 2026-01-29 17:26:42.214 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:26:42 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e346 do_prune osdmap full prune enabled
Jan 29 12:26:42 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e347 e347: 3 total, 3 up, 3 in
Jan 29 12:26:42 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e347: 3 total, 3 up, 3 in
Jan 29 12:26:42 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:26:42 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3246637583' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:26:42 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:26:42 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3246637583' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:26:43 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1450: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 3.7 MiB/s rd, 6.1 MiB/s wr, 282 op/s
Jan 29 12:26:44 np0005601226 nova_compute[239456]: 2026-01-29 17:26:44.604 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 12:26:44 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e347 do_prune osdmap full prune enabled
Jan 29 12:26:44 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e348 e348: 3 total, 3 up, 3 in
Jan 29 12:26:44 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e348: 3 total, 3 up, 3 in
Jan 29 12:26:45 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:26:45 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/608012538' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:26:45 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:26:45 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/608012538' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:26:45 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 12:26:45 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 12:26:45 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 12:26:45 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 12:26:45 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 12:26:45 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:26:45 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 12:26:45 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 12:26:45 np0005601226 ceph-mon[75233]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Jan 29 12:26:45 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:26:45.738165) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 29 12:26:45 np0005601226 ceph-mon[75233]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Jan 29 12:26:45 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769707605738228, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2454, "num_deletes": 262, "total_data_size": 3514006, "memory_usage": 3576208, "flush_reason": "Manual Compaction"}
Jan 29 12:26:45 np0005601226 ceph-mon[75233]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Jan 29 12:26:45 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 12:26:45 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 12:26:45 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 12:26:45 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 12:26:45 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769707605762682, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 3444707, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26638, "largest_seqno": 29091, "table_properties": {"data_size": 3432907, "index_size": 7782, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2949, "raw_key_size": 24912, "raw_average_key_size": 21, "raw_value_size": 3409243, "raw_average_value_size": 2944, "num_data_blocks": 335, "num_entries": 1158, "num_filter_entries": 1158, "num_deletions": 262, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769707453, "oldest_key_time": 1769707453, "file_creation_time": 1769707605, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "affa2982-d59d-4189-b5dd-817a80fada55", "db_session_id": "3LVBT2JQJ5HZ0LRVKGW6", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Jan 29 12:26:45 np0005601226 ceph-mon[75233]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 24554 microseconds, and 6538 cpu microseconds.
Jan 29 12:26:45 np0005601226 ceph-mon[75233]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 29 12:26:45 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:26:45.762720) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 3444707 bytes OK
Jan 29 12:26:45 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:26:45.762736) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Jan 29 12:26:45 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:26:45.767749) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Jan 29 12:26:45 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:26:45.767765) EVENT_LOG_v1 {"time_micros": 1769707605767761, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 29 12:26:45 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:26:45.767781) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 29 12:26:45 np0005601226 ceph-mon[75233]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 3503370, prev total WAL file size 3503370, number of live WAL files 2.
Jan 29 12:26:45 np0005601226 ceph-mon[75233]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 12:26:45 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:26:45.768616) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Jan 29 12:26:45 np0005601226 ceph-mon[75233]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 29 12:26:45 np0005601226 ceph-mon[75233]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(3363KB)], [59(7967KB)]
Jan 29 12:26:45 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769707605768672, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 11603661, "oldest_snapshot_seqno": -1}
Jan 29 12:26:45 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1452: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 1.2 MiB/s rd, 200 KiB/s wr, 289 op/s
Jan 29 12:26:45 np0005601226 ceph-mon[75233]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 5831 keys, 9808101 bytes, temperature: kUnknown
Jan 29 12:26:45 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769707605844848, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 9808101, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9764602, "index_size": 27837, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14597, "raw_key_size": 145754, "raw_average_key_size": 24, "raw_value_size": 9655139, "raw_average_value_size": 1655, "num_data_blocks": 1128, "num_entries": 5831, "num_filter_entries": 5831, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769705351, "oldest_key_time": 0, "file_creation_time": 1769707605, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "affa2982-d59d-4189-b5dd-817a80fada55", "db_session_id": "3LVBT2JQJ5HZ0LRVKGW6", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Jan 29 12:26:45 np0005601226 ceph-mon[75233]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 29 12:26:45 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 12:26:45 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:26:45 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 12:26:45 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:26:45.845540) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 9808101 bytes
Jan 29 12:26:45 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:26:45.848760) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 151.2 rd, 127.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 7.8 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(6.2) write-amplify(2.8) OK, records in: 6363, records dropped: 532 output_compression: NoCompression
Jan 29 12:26:45 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:26:45.848795) EVENT_LOG_v1 {"time_micros": 1769707605848777, "job": 32, "event": "compaction_finished", "compaction_time_micros": 76723, "compaction_time_cpu_micros": 15022, "output_level": 6, "num_output_files": 1, "total_output_size": 9808101, "num_input_records": 6363, "num_output_records": 5831, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 29 12:26:45 np0005601226 ceph-mon[75233]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 12:26:45 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769707605849868, "job": 32, "event": "table_file_deletion", "file_number": 61}
Jan 29 12:26:45 np0005601226 ceph-mon[75233]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 12:26:45 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769707605851050, "job": 32, "event": "table_file_deletion", "file_number": 59}
Jan 29 12:26:45 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:26:45.768519) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:26:45 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:26:45.851298) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:26:45 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:26:45.851306) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:26:45 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:26:45.851309) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:26:45 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:26:45.851311) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:26:45 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:26:45.851314) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:26:46 np0005601226 podman[261721]: 2026-01-29 17:26:46.085442438 +0000 UTC m=+0.041328246 container create f5d0f7726f1d86b0b05450c189e163db7ab4a79bf2abe191e62b12e00be036ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 29 12:26:46 np0005601226 nova_compute[239456]: 2026-01-29 17:26:46.105 239460 DEBUG oslo_concurrency.lockutils [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] Acquiring lock "ff8fb3ad-e9ef-400e-9283-ac1884d5aa67" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 12:26:46 np0005601226 nova_compute[239456]: 2026-01-29 17:26:46.106 239460 DEBUG oslo_concurrency.lockutils [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] Lock "ff8fb3ad-e9ef-400e-9283-ac1884d5aa67" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 12:26:46 np0005601226 systemd[1]: Started libpod-conmon-f5d0f7726f1d86b0b05450c189e163db7ab4a79bf2abe191e62b12e00be036ee.scope.
Jan 29 12:26:46 np0005601226 nova_compute[239456]: 2026-01-29 17:26:46.130 239460 DEBUG nova.compute.manager [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 29 12:26:46 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:26:46 np0005601226 podman[261721]: 2026-01-29 17:26:46.151812705 +0000 UTC m=+0.107698543 container init f5d0f7726f1d86b0b05450c189e163db7ab4a79bf2abe191e62b12e00be036ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_gould, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 29 12:26:46 np0005601226 podman[261721]: 2026-01-29 17:26:46.156339386 +0000 UTC m=+0.112225204 container start f5d0f7726f1d86b0b05450c189e163db7ab4a79bf2abe191e62b12e00be036ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_gould, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 29 12:26:46 np0005601226 determined_gould[261737]: 167 167
Jan 29 12:26:46 np0005601226 systemd[1]: libpod-f5d0f7726f1d86b0b05450c189e163db7ab4a79bf2abe191e62b12e00be036ee.scope: Deactivated successfully.
Jan 29 12:26:46 np0005601226 podman[261721]: 2026-01-29 17:26:46.159963623 +0000 UTC m=+0.115849441 container attach f5d0f7726f1d86b0b05450c189e163db7ab4a79bf2abe191e62b12e00be036ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 29 12:26:46 np0005601226 podman[261721]: 2026-01-29 17:26:46.160290542 +0000 UTC m=+0.116176370 container died f5d0f7726f1d86b0b05450c189e163db7ab4a79bf2abe191e62b12e00be036ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 29 12:26:46 np0005601226 podman[261721]: 2026-01-29 17:26:46.068778632 +0000 UTC m=+0.024664470 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:26:46 np0005601226 systemd[1]: var-lib-containers-storage-overlay-993004dca84fdf63962cae3305d284c8af72e1c9dd62f59dcdf0904c28dab9d8-merged.mount: Deactivated successfully.
Jan 29 12:26:46 np0005601226 podman[261721]: 2026-01-29 17:26:46.1957392 +0000 UTC m=+0.151625028 container remove f5d0f7726f1d86b0b05450c189e163db7ab4a79bf2abe191e62b12e00be036ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_gould, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 29 12:26:46 np0005601226 nova_compute[239456]: 2026-01-29 17:26:46.203 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:26:46 np0005601226 systemd[1]: libpod-conmon-f5d0f7726f1d86b0b05450c189e163db7ab4a79bf2abe191e62b12e00be036ee.scope: Deactivated successfully.
Jan 29 12:26:46 np0005601226 nova_compute[239456]: 2026-01-29 17:26:46.211 239460 DEBUG oslo_concurrency.lockutils [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 12:26:46 np0005601226 nova_compute[239456]: 2026-01-29 17:26:46.211 239460 DEBUG oslo_concurrency.lockutils [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 12:26:46 np0005601226 nova_compute[239456]: 2026-01-29 17:26:46.218 239460 DEBUG nova.virt.hardware [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 29 12:26:46 np0005601226 nova_compute[239456]: 2026-01-29 17:26:46.219 239460 INFO nova.compute.claims [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] Claim successful on node compute-0.ctlplane.example.com
Jan 29 12:26:46 np0005601226 podman[261761]: 2026-01-29 17:26:46.307943972 +0000 UTC m=+0.039308673 container create cb056fe212bb4df502ca3464aecde7e21b4cc1b80e5a2b40c64aec0188eb33eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_vaughan, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:26:46 np0005601226 nova_compute[239456]: 2026-01-29 17:26:46.320 239460 DEBUG oslo_concurrency.processutils [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 29 12:26:46 np0005601226 systemd[1]: Started libpod-conmon-cb056fe212bb4df502ca3464aecde7e21b4cc1b80e5a2b40c64aec0188eb33eb.scope.
Jan 29 12:26:46 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:26:46 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2dfe1fef019d9e9ea6843c8a9e1ae7fae81b46a0b6d5e2cae26e0bf0861534d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:26:46 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2dfe1fef019d9e9ea6843c8a9e1ae7fae81b46a0b6d5e2cae26e0bf0861534d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:26:46 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2dfe1fef019d9e9ea6843c8a9e1ae7fae81b46a0b6d5e2cae26e0bf0861534d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:26:46 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2dfe1fef019d9e9ea6843c8a9e1ae7fae81b46a0b6d5e2cae26e0bf0861534d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:26:46 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2dfe1fef019d9e9ea6843c8a9e1ae7fae81b46a0b6d5e2cae26e0bf0861534d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 12:26:46 np0005601226 podman[261761]: 2026-01-29 17:26:46.292719585 +0000 UTC m=+0.024084286 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:26:46 np0005601226 podman[261761]: 2026-01-29 17:26:46.390166513 +0000 UTC m=+0.121531244 container init cb056fe212bb4df502ca3464aecde7e21b4cc1b80e5a2b40c64aec0188eb33eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_vaughan, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:26:46 np0005601226 podman[261761]: 2026-01-29 17:26:46.398713381 +0000 UTC m=+0.130078082 container start cb056fe212bb4df502ca3464aecde7e21b4cc1b80e5a2b40c64aec0188eb33eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_vaughan, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:26:46 np0005601226 podman[261761]: 2026-01-29 17:26:46.402066871 +0000 UTC m=+0.133431562 container attach cb056fe212bb4df502ca3464aecde7e21b4cc1b80e5a2b40c64aec0188eb33eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 29 12:26:46 np0005601226 nova_compute[239456]: 2026-01-29 17:26:46.458 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:26:46 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e348 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:26:46 np0005601226 nova_compute[239456]: 2026-01-29 17:26:46.552 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:26:46 np0005601226 stupefied_vaughan[261778]: --> passed data devices: 0 physical, 3 LVM
Jan 29 12:26:46 np0005601226 stupefied_vaughan[261778]: --> All data devices are unavailable
Jan 29 12:26:46 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:26:46 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/60992385' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:26:46 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:26:46 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/60992385' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:26:46 np0005601226 systemd[1]: libpod-cb056fe212bb4df502ca3464aecde7e21b4cc1b80e5a2b40c64aec0188eb33eb.scope: Deactivated successfully.
Jan 29 12:26:46 np0005601226 conmon[261778]: conmon cb056fe212bb4df502ca <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cb056fe212bb4df502ca3464aecde7e21b4cc1b80e5a2b40c64aec0188eb33eb.scope/container/memory.events
Jan 29 12:26:46 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:26:46 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2739245204' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:26:46 np0005601226 nova_compute[239456]: 2026-01-29 17:26:46.878 239460 DEBUG oslo_concurrency.processutils [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:26:46 np0005601226 podman[261818]: 2026-01-29 17:26:46.882686492 +0000 UTC m=+0.030623560 container died cb056fe212bb4df502ca3464aecde7e21b4cc1b80e5a2b40c64aec0188eb33eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_vaughan, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:26:46 np0005601226 nova_compute[239456]: 2026-01-29 17:26:46.885 239460 DEBUG nova.compute.provider_tree [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:26:46 np0005601226 systemd[1]: var-lib-containers-storage-overlay-c2dfe1fef019d9e9ea6843c8a9e1ae7fae81b46a0b6d5e2cae26e0bf0861534d-merged.mount: Deactivated successfully.
Jan 29 12:26:46 np0005601226 nova_compute[239456]: 2026-01-29 17:26:46.909 239460 DEBUG nova.scheduler.client.report [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:26:46 np0005601226 podman[261818]: 2026-01-29 17:26:46.920232677 +0000 UTC m=+0.068169705 container remove cb056fe212bb4df502ca3464aecde7e21b4cc1b80e5a2b40c64aec0188eb33eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_vaughan, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True)
Jan 29 12:26:46 np0005601226 systemd[1]: libpod-conmon-cb056fe212bb4df502ca3464aecde7e21b4cc1b80e5a2b40c64aec0188eb33eb.scope: Deactivated successfully.
Jan 29 12:26:46 np0005601226 nova_compute[239456]: 2026-01-29 17:26:46.930 239460 DEBUG oslo_concurrency.lockutils [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.719s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:26:46 np0005601226 nova_compute[239456]: 2026-01-29 17:26:46.931 239460 DEBUG nova.compute.manager [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 29 12:26:46 np0005601226 nova_compute[239456]: 2026-01-29 17:26:46.980 239460 DEBUG nova.compute.manager [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 29 12:26:46 np0005601226 nova_compute[239456]: 2026-01-29 17:26:46.980 239460 DEBUG nova.network.neutron [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 29 12:26:47 np0005601226 nova_compute[239456]: 2026-01-29 17:26:47.000 239460 INFO nova.virt.libvirt.driver [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 29 12:26:47 np0005601226 nova_compute[239456]: 2026-01-29 17:26:47.054 239460 DEBUG nova.compute.manager [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 29 12:26:47 np0005601226 nova_compute[239456]: 2026-01-29 17:26:47.096 239460 INFO nova.virt.block_device [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] Booting with volume 0671502e-7b1c-4ff5-b298-52c42bac4b3d at /dev/vdb#033[00m
Jan 29 12:26:47 np0005601226 nova_compute[239456]: 2026-01-29 17:26:47.154 239460 DEBUG nova.policy [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '00a09ccd681d42068127585c610c2bba', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b7e911c19f694429a9441fe3c0072af6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 29 12:26:47 np0005601226 nova_compute[239456]: 2026-01-29 17:26:47.215 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:26:47 np0005601226 nova_compute[239456]: 2026-01-29 17:26:47.221 239460 DEBUG os_brick.utils [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 29 12:26:47 np0005601226 nova_compute[239456]: 2026-01-29 17:26:47.223 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:26:47 np0005601226 nova_compute[239456]: 2026-01-29 17:26:47.236 249612 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:26:47 np0005601226 nova_compute[239456]: 2026-01-29 17:26:47.237 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[06b619e7-3f52-44fd-a2f1-f079e22cbbe2]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:26:47 np0005601226 nova_compute[239456]: 2026-01-29 17:26:47.240 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:26:47 np0005601226 nova_compute[239456]: 2026-01-29 17:26:47.247 249612 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:26:47 np0005601226 nova_compute[239456]: 2026-01-29 17:26:47.248 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[421f7a08-527e-4c87-968f-67e85bcffa5b]: (4, ('InitiatorName=iqn.1994-05.com.redhat:29fa340538f', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:26:47 np0005601226 nova_compute[239456]: 2026-01-29 17:26:47.251 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:26:47 np0005601226 nova_compute[239456]: 2026-01-29 17:26:47.259 249612 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:26:47 np0005601226 nova_compute[239456]: 2026-01-29 17:26:47.259 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[a63d7780-8969-4de2-9d49-31405671e49f]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:26:47 np0005601226 nova_compute[239456]: 2026-01-29 17:26:47.261 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[7512381b-947e-49df-8328-310e62feef14]: (4, '3d58286e-1b14-486e-8cad-0bdb2d2969c4') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:26:47 np0005601226 nova_compute[239456]: 2026-01-29 17:26:47.262 239460 DEBUG oslo_concurrency.processutils [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:26:47 np0005601226 nova_compute[239456]: 2026-01-29 17:26:47.282 239460 DEBUG oslo_concurrency.processutils [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] CMD "nvme version" returned: 0 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:26:47 np0005601226 nova_compute[239456]: 2026-01-29 17:26:47.284 239460 DEBUG os_brick.initiator.connectors.lightos [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 29 12:26:47 np0005601226 nova_compute[239456]: 2026-01-29 17:26:47.284 239460 DEBUG os_brick.initiator.connectors.lightos [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 29 12:26:47 np0005601226 nova_compute[239456]: 2026-01-29 17:26:47.285 239460 DEBUG os_brick.initiator.connectors.lightos [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 29 12:26:47 np0005601226 nova_compute[239456]: 2026-01-29 17:26:47.285 239460 DEBUG os_brick.utils [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] <== get_connector_properties: return (62ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:29fa340538f', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '3d58286e-1b14-486e-8cad-0bdb2d2969c4', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 29 12:26:47 np0005601226 nova_compute[239456]: 2026-01-29 17:26:47.285 239460 DEBUG nova.virt.block_device [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] Updating existing volume attachment record: e19d2708-e0f9-4940-854f-af81a1865bfa _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 29 12:26:47 np0005601226 podman[261905]: 2026-01-29 17:26:47.334073491 +0000 UTC m=+0.038547632 container create 229a2ecd8af4c485ef7fe4482fa3a078773e5525284d0cef281a2b7c615235c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 29 12:26:47 np0005601226 systemd[1]: Started libpod-conmon-229a2ecd8af4c485ef7fe4482fa3a078773e5525284d0cef281a2b7c615235c9.scope.
Jan 29 12:26:47 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:26:47 np0005601226 podman[261905]: 2026-01-29 17:26:47.405357668 +0000 UTC m=+0.109831819 container init 229a2ecd8af4c485ef7fe4482fa3a078773e5525284d0cef281a2b7c615235c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_yalow, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:26:47 np0005601226 podman[261905]: 2026-01-29 17:26:47.410993749 +0000 UTC m=+0.115467870 container start 229a2ecd8af4c485ef7fe4482fa3a078773e5525284d0cef281a2b7c615235c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:26:47 np0005601226 podman[261905]: 2026-01-29 17:26:47.414116182 +0000 UTC m=+0.118590293 container attach 229a2ecd8af4c485ef7fe4482fa3a078773e5525284d0cef281a2b7c615235c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_yalow, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 29 12:26:47 np0005601226 awesome_yalow[261921]: 167 167
Jan 29 12:26:47 np0005601226 systemd[1]: libpod-229a2ecd8af4c485ef7fe4482fa3a078773e5525284d0cef281a2b7c615235c9.scope: Deactivated successfully.
Jan 29 12:26:47 np0005601226 podman[261905]: 2026-01-29 17:26:47.319346007 +0000 UTC m=+0.023820128 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:26:47 np0005601226 podman[261905]: 2026-01-29 17:26:47.415811959 +0000 UTC m=+0.120286090 container died 229a2ecd8af4c485ef7fe4482fa3a078773e5525284d0cef281a2b7c615235c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_yalow, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:26:47 np0005601226 systemd[1]: var-lib-containers-storage-overlay-c1f813615698a1f163f3fa82db7ecdf8286ee5de2a748f73d5b50d34960b83e9-merged.mount: Deactivated successfully.
Jan 29 12:26:47 np0005601226 podman[261905]: 2026-01-29 17:26:47.455652964 +0000 UTC m=+0.160127065 container remove 229a2ecd8af4c485ef7fe4482fa3a078773e5525284d0cef281a2b7c615235c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:26:47 np0005601226 systemd[1]: libpod-conmon-229a2ecd8af4c485ef7fe4482fa3a078773e5525284d0cef281a2b7c615235c9.scope: Deactivated successfully.
Jan 29 12:26:47 np0005601226 podman[261946]: 2026-01-29 17:26:47.596778941 +0000 UTC m=+0.041147093 container create df1445f102305322acff5e66c585fc1c506c05dc776f01188a890911b37bc65a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 29 12:26:47 np0005601226 systemd[1]: Started libpod-conmon-df1445f102305322acff5e66c585fc1c506c05dc776f01188a890911b37bc65a.scope.
Jan 29 12:26:47 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:26:47 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df2f9a7348ea5a45a4a1a81f1002f5accda2e9b4332620bee1355ee3e42cb46b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:26:47 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df2f9a7348ea5a45a4a1a81f1002f5accda2e9b4332620bee1355ee3e42cb46b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:26:47 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df2f9a7348ea5a45a4a1a81f1002f5accda2e9b4332620bee1355ee3e42cb46b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:26:47 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df2f9a7348ea5a45a4a1a81f1002f5accda2e9b4332620bee1355ee3e42cb46b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:26:47 np0005601226 podman[261946]: 2026-01-29 17:26:47.578769909 +0000 UTC m=+0.023138101 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:26:47 np0005601226 podman[261946]: 2026-01-29 17:26:47.681538778 +0000 UTC m=+0.125906940 container init df1445f102305322acff5e66c585fc1c506c05dc776f01188a890911b37bc65a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 29 12:26:47 np0005601226 podman[261946]: 2026-01-29 17:26:47.690044656 +0000 UTC m=+0.134412788 container start df1445f102305322acff5e66c585fc1c506c05dc776f01188a890911b37bc65a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_galois, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 29 12:26:47 np0005601226 podman[261946]: 2026-01-29 17:26:47.693740125 +0000 UTC m=+0.138108257 container attach df1445f102305322acff5e66c585fc1c506c05dc776f01188a890911b37bc65a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_galois, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 29 12:26:47 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1453: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 140 KiB/s rd, 7.0 KiB/s wr, 192 op/s
Jan 29 12:26:47 np0005601226 nova_compute[239456]: 2026-01-29 17:26:47.875 239460 DEBUG nova.network.neutron [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] Successfully created port: 3824d6e0-3bd9-401d-b8dc-efb864f7d883 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]: {
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:    "0": [
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:        {
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:            "devices": [
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:                "/dev/loop3"
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:            ],
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:            "lv_name": "ceph_lv0",
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:            "lv_size": "21470642176",
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:            "lv_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:            "name": "ceph_lv0",
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:            "tags": {
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:                "ceph.block_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:                "ceph.cluster_name": "ceph",
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:                "ceph.crush_device_class": "",
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:                "ceph.encrypted": "0",
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:                "ceph.objectstore": "bluestore",
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:                "ceph.osd_fsid": "0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1",
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:                "ceph.osd_id": "0",
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:                "ceph.type": "block",
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:                "ceph.vdo": "0",
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:                "ceph.with_tpm": "0"
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:            },
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:            "type": "block",
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:            "vg_name": "ceph_vg0"
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:        }
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:    ],
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:    "1": [
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:        {
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:            "devices": [
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:                "/dev/loop4"
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:            ],
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:            "lv_name": "ceph_lv1",
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:            "lv_size": "21470642176",
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b59b9ee3-7bef-4274-a8bf-0f9cce011ae7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:            "lv_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:            "name": "ceph_lv1",
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:            "tags": {
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:                "ceph.block_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:                "ceph.cluster_name": "ceph",
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:                "ceph.crush_device_class": "",
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:                "ceph.encrypted": "0",
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:                "ceph.objectstore": "bluestore",
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:                "ceph.osd_fsid": "b59b9ee3-7bef-4274-a8bf-0f9cce011ae7",
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:                "ceph.osd_id": "1",
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:                "ceph.type": "block",
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:                "ceph.vdo": "0",
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:                "ceph.with_tpm": "0"
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:            },
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:            "type": "block",
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:            "vg_name": "ceph_vg1"
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:        }
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:    ],
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:    "2": [
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:        {
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:            "devices": [
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:                "/dev/loop5"
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:            ],
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:            "lv_name": "ceph_lv2",
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:            "lv_size": "21470642176",
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=791d3808-828d-4f85-a3de-28df49f6a6ef,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:            "lv_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:            "name": "ceph_lv2",
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:            "tags": {
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:                "ceph.block_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:                "ceph.cluster_name": "ceph",
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:                "ceph.crush_device_class": "",
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:                "ceph.encrypted": "0",
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:                "ceph.objectstore": "bluestore",
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:                "ceph.osd_fsid": "791d3808-828d-4f85-a3de-28df49f6a6ef",
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:                "ceph.osd_id": "2",
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:                "ceph.type": "block",
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:                "ceph.vdo": "0",
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:                "ceph.with_tpm": "0"
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:            },
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:            "type": "block",
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:            "vg_name": "ceph_vg2"
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:        }
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]:    ]
Jan 29 12:26:47 np0005601226 vigorous_galois[261963]: }
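The JSON emitted by the short-lived `vigorous_galois` container above has the shape of `ceph-volume lvm list --format json` output: a map of OSD id to a list of LV records, each carrying the backing devices, LV path, and `ceph.*` tags. A minimal sketch (an assumption, not part of the log) of reducing such a listing to a per-OSD summary, using only field names that appear above:

```python
import json

# Abbreviated sample mirroring the structure logged above
# (values taken from the OSD 0 and OSD 1 entries; trimmed for brevity).
raw = """
{
  "0": [{"devices": ["/dev/loop3"], "lv_path": "/dev/ceph_vg0/ceph_lv0",
         "tags": {"ceph.osd_fsid": "0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1"}}],
  "1": [{"devices": ["/dev/loop4"], "lv_path": "/dev/ceph_vg1/ceph_lv1",
         "tags": {"ceph.osd_fsid": "b59b9ee3-7bef-4274-a8bf-0f9cce011ae7"}}]
}
"""

def summarize(listing: dict) -> dict:
    """Map OSD id -> backing devices, LV path, and osd_fsid tag."""
    out = {}
    for osd_id, lvs in listing.items():
        for lv in lvs:
            out[osd_id] = {
                "devices": lv["devices"],
                "lv_path": lv["lv_path"],
                "osd_fsid": lv["tags"].get("ceph.osd_fsid"),
            }
    return out

summary = summarize(json.loads(raw))
print(summary["0"]["lv_path"])  # /dev/ceph_vg0/ceph_lv0
```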
Jan 29 12:26:47 np0005601226 systemd[1]: libpod-df1445f102305322acff5e66c585fc1c506c05dc776f01188a890911b37bc65a.scope: Deactivated successfully.
Jan 29 12:26:47 np0005601226 podman[261946]: 2026-01-29 17:26:47.987609869 +0000 UTC m=+0.431978021 container died df1445f102305322acff5e66c585fc1c506c05dc776f01188a890911b37bc65a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_galois, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:26:48 np0005601226 systemd[1]: var-lib-containers-storage-overlay-df2f9a7348ea5a45a4a1a81f1002f5accda2e9b4332620bee1355ee3e42cb46b-merged.mount: Deactivated successfully.
Jan 29 12:26:48 np0005601226 podman[261946]: 2026-01-29 17:26:48.02650971 +0000 UTC m=+0.470877862 container remove df1445f102305322acff5e66c585fc1c506c05dc776f01188a890911b37bc65a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_galois, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS)
Jan 29 12:26:48 np0005601226 systemd[1]: libpod-conmon-df1445f102305322acff5e66c585fc1c506c05dc776f01188a890911b37bc65a.scope: Deactivated successfully.
Jan 29 12:26:48 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:26:48 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/652712094' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:26:48 np0005601226 podman[262046]: 2026-01-29 17:26:48.41218014 +0000 UTC m=+0.043182017 container create 2c94410e35a13fe05f26fcc4fe1686c9725ef6c62cacdb0f437630041e5a7c4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:26:48 np0005601226 systemd[1]: Started libpod-conmon-2c94410e35a13fe05f26fcc4fe1686c9725ef6c62cacdb0f437630041e5a7c4b.scope.
Jan 29 12:26:48 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:26:48 np0005601226 podman[262046]: 2026-01-29 17:26:48.471990641 +0000 UTC m=+0.102992528 container init 2c94410e35a13fe05f26fcc4fe1686c9725ef6c62cacdb0f437630041e5a7c4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:26:48 np0005601226 podman[262046]: 2026-01-29 17:26:48.476491171 +0000 UTC m=+0.107493038 container start 2c94410e35a13fe05f26fcc4fe1686c9725ef6c62cacdb0f437630041e5a7c4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_faraday, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:26:48 np0005601226 pedantic_faraday[262063]: 167 167
Jan 29 12:26:48 np0005601226 podman[262046]: 2026-01-29 17:26:48.478976798 +0000 UTC m=+0.109978695 container attach 2c94410e35a13fe05f26fcc4fe1686c9725ef6c62cacdb0f437630041e5a7c4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_faraday, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 29 12:26:48 np0005601226 systemd[1]: libpod-2c94410e35a13fe05f26fcc4fe1686c9725ef6c62cacdb0f437630041e5a7c4b.scope: Deactivated successfully.
Jan 29 12:26:48 np0005601226 conmon[262063]: conmon 2c94410e35a13fe05f26 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2c94410e35a13fe05f26fcc4fe1686c9725ef6c62cacdb0f437630041e5a7c4b.scope/container/memory.events
Jan 29 12:26:48 np0005601226 podman[262046]: 2026-01-29 17:26:48.480649852 +0000 UTC m=+0.111651759 container died 2c94410e35a13fe05f26fcc4fe1686c9725ef6c62cacdb0f437630041e5a7c4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 29 12:26:48 np0005601226 podman[262046]: 2026-01-29 17:26:48.396477 +0000 UTC m=+0.027478887 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:26:48 np0005601226 systemd[1]: var-lib-containers-storage-overlay-1118a0d537406f8a8f1ce6a436c33dcacd3ace4ff6473366565a0355e5f2748f-merged.mount: Deactivated successfully.
Jan 29 12:26:48 np0005601226 podman[262046]: 2026-01-29 17:26:48.521858585 +0000 UTC m=+0.152860452 container remove 2c94410e35a13fe05f26fcc4fe1686c9725ef6c62cacdb0f437630041e5a7c4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:26:48 np0005601226 systemd[1]: libpod-conmon-2c94410e35a13fe05f26fcc4fe1686c9725ef6c62cacdb0f437630041e5a7c4b.scope: Deactivated successfully.
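The conmon warning above shows it could not read the container cgroup's `memory.events` file, because the `pedantic_faraday` scope had already been torn down by the time conmon looked. `memory.events` is a flat cgroup-v2 key/value file (one `key value` pair per line, keys such as `oom` and `oom_kill`); a minimal sketch, assuming that format, of parsing it:

```python
def parse_memory_events(text: str) -> dict:
    """Parse a cgroup-v2 memory.events file: one 'key value' pair per line."""
    events = {}
    for line in text.splitlines():
        if not line.strip():
            continue
        key, value = line.split()
        events[key] = int(value)
    return events

# Sample contents of a memory.events file for a container that never
# hit its memory limits (values here are illustrative).
sample = "low 0\nhigh 0\nmax 0\noom 0\noom_kill 0\n"
print(parse_memory_events(sample)["oom_kill"])  # 0
```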
Jan 29 12:26:48 np0005601226 nova_compute[239456]: 2026-01-29 17:26:48.579 239460 DEBUG nova.compute.manager [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 29 12:26:48 np0005601226 nova_compute[239456]: 2026-01-29 17:26:48.582 239460 DEBUG nova.virt.libvirt.driver [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 29 12:26:48 np0005601226 nova_compute[239456]: 2026-01-29 17:26:48.582 239460 INFO nova.virt.libvirt.driver [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] Creating image(s)
Jan 29 12:26:48 np0005601226 nova_compute[239456]: 2026-01-29 17:26:48.603 239460 DEBUG nova.storage.rbd_utils [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] rbd image ff8fb3ad-e9ef-400e-9283-ac1884d5aa67_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 29 12:26:48 np0005601226 podman[262086]: 2026-01-29 17:26:48.622752925 +0000 UTC m=+0.032758628 container create 7f4554eb1f591023cbaac1a594248e06ea20e514dc4ab7a09241ce7fd3e59337 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 12:26:48 np0005601226 nova_compute[239456]: 2026-01-29 17:26:48.622 239460 DEBUG nova.storage.rbd_utils [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] rbd image ff8fb3ad-e9ef-400e-9283-ac1884d5aa67_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 29 12:26:48 np0005601226 nova_compute[239456]: 2026-01-29 17:26:48.643 239460 DEBUG nova.storage.rbd_utils [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] rbd image ff8fb3ad-e9ef-400e-9283-ac1884d5aa67_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 29 12:26:48 np0005601226 nova_compute[239456]: 2026-01-29 17:26:48.648 239460 DEBUG oslo_concurrency.processutils [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/619359f3f53a439a222a6a2a89408201f4394e5d --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 29 12:26:48 np0005601226 systemd[1]: Started libpod-conmon-7f4554eb1f591023cbaac1a594248e06ea20e514dc4ab7a09241ce7fd3e59337.scope.
Jan 29 12:26:48 np0005601226 nova_compute[239456]: 2026-01-29 17:26:48.667 239460 DEBUG nova.network.neutron [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] Successfully updated port: 3824d6e0-3bd9-401d-b8dc-efb864f7d883 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 29 12:26:48 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:26:48 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44c43d04ef48aadadff9a55924805cd0733e4cf811f515ff53dbe561735d6272/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:26:48 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44c43d04ef48aadadff9a55924805cd0733e4cf811f515ff53dbe561735d6272/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:26:48 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44c43d04ef48aadadff9a55924805cd0733e4cf811f515ff53dbe561735d6272/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:26:48 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44c43d04ef48aadadff9a55924805cd0733e4cf811f515ff53dbe561735d6272/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:26:48 np0005601226 nova_compute[239456]: 2026-01-29 17:26:48.689 239460 DEBUG oslo_concurrency.lockutils [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] Acquiring lock "refresh_cache-ff8fb3ad-e9ef-400e-9283-ac1884d5aa67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 29 12:26:48 np0005601226 nova_compute[239456]: 2026-01-29 17:26:48.689 239460 DEBUG oslo_concurrency.lockutils [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] Acquired lock "refresh_cache-ff8fb3ad-e9ef-400e-9283-ac1884d5aa67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 29 12:26:48 np0005601226 nova_compute[239456]: 2026-01-29 17:26:48.689 239460 DEBUG nova.network.neutron [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 29 12:26:48 np0005601226 podman[262086]: 2026-01-29 17:26:48.704275367 +0000 UTC m=+0.114281070 container init 7f4554eb1f591023cbaac1a594248e06ea20e514dc4ab7a09241ce7fd3e59337 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_neumann, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 29 12:26:48 np0005601226 podman[262086]: 2026-01-29 17:26:48.608504003 +0000 UTC m=+0.018509786 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:26:48 np0005601226 podman[262086]: 2026-01-29 17:26:48.709393153 +0000 UTC m=+0.119398856 container start 7f4554eb1f591023cbaac1a594248e06ea20e514dc4ab7a09241ce7fd3e59337 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_neumann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 29 12:26:48 np0005601226 podman[262086]: 2026-01-29 17:26:48.71228669 +0000 UTC m=+0.122292393 container attach 7f4554eb1f591023cbaac1a594248e06ea20e514dc4ab7a09241ce7fd3e59337 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_neumann, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:26:48 np0005601226 nova_compute[239456]: 2026-01-29 17:26:48.723 239460 DEBUG oslo_concurrency.processutils [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/619359f3f53a439a222a6a2a89408201f4394e5d --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 29 12:26:48 np0005601226 nova_compute[239456]: 2026-01-29 17:26:48.723 239460 DEBUG oslo_concurrency.lockutils [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] Acquiring lock "619359f3f53a439a222a6a2a89408201f4394e5d" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 12:26:48 np0005601226 nova_compute[239456]: 2026-01-29 17:26:48.724 239460 DEBUG oslo_concurrency.lockutils [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] Lock "619359f3f53a439a222a6a2a89408201f4394e5d" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 12:26:48 np0005601226 nova_compute[239456]: 2026-01-29 17:26:48.724 239460 DEBUG oslo_concurrency.lockutils [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] Lock "619359f3f53a439a222a6a2a89408201f4394e5d" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 12:26:48 np0005601226 nova_compute[239456]: 2026-01-29 17:26:48.742 239460 DEBUG nova.storage.rbd_utils [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] rbd image ff8fb3ad-e9ef-400e-9283-ac1884d5aa67_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 29 12:26:48 np0005601226 nova_compute[239456]: 2026-01-29 17:26:48.745 239460 DEBUG oslo_concurrency.processutils [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/619359f3f53a439a222a6a2a89408201f4394e5d ff8fb3ad-e9ef-400e-9283-ac1884d5aa67_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 29 12:26:48 np0005601226 nova_compute[239456]: 2026-01-29 17:26:48.768 239460 DEBUG nova.compute.manager [req-2debbe21-5a06-4b96-8dcc-ed66fb394449 req-cb2218a3-c9ff-4b0e-9a4d-0a2e43f8dc08 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] Received event network-changed-3824d6e0-3bd9-401d-b8dc-efb864f7d883 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 29 12:26:48 np0005601226 nova_compute[239456]: 2026-01-29 17:26:48.768 239460 DEBUG nova.compute.manager [req-2debbe21-5a06-4b96-8dcc-ed66fb394449 req-cb2218a3-c9ff-4b0e-9a4d-0a2e43f8dc08 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] Refreshing instance network info cache due to event network-changed-3824d6e0-3bd9-401d-b8dc-efb864f7d883. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 29 12:26:48 np0005601226 nova_compute[239456]: 2026-01-29 17:26:48.769 239460 DEBUG oslo_concurrency.lockutils [req-2debbe21-5a06-4b96-8dcc-ed66fb394449 req-cb2218a3-c9ff-4b0e-9a4d-0a2e43f8dc08 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "refresh_cache-ff8fb3ad-e9ef-400e-9283-ac1884d5aa67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 29 12:26:48 np0005601226 nova_compute[239456]: 2026-01-29 17:26:48.869 239460 DEBUG nova.network.neutron [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 29 12:26:49 np0005601226 nova_compute[239456]: 2026-01-29 17:26:49.066 239460 DEBUG oslo_concurrency.processutils [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/619359f3f53a439a222a6a2a89408201f4394e5d ff8fb3ad-e9ef-400e-9283-ac1884d5aa67_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.321s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:26:49 np0005601226 nova_compute[239456]: 2026-01-29 17:26:49.151 239460 DEBUG nova.storage.rbd_utils [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] resizing rbd image ff8fb3ad-e9ef-400e-9283-ac1884d5aa67_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 29 12:26:49 np0005601226 nova_compute[239456]: 2026-01-29 17:26:49.221 239460 DEBUG nova.objects.instance [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] Lazy-loading 'migration_context' on Instance uuid ff8fb3ad-e9ef-400e-9283-ac1884d5aa67 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:26:49 np0005601226 nova_compute[239456]: 2026-01-29 17:26:49.243 239460 DEBUG nova.virt.libvirt.driver [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 29 12:26:49 np0005601226 nova_compute[239456]: 2026-01-29 17:26:49.244 239460 DEBUG nova.virt.libvirt.driver [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] Ensure instance console log exists: /var/lib/nova/instances/ff8fb3ad-e9ef-400e-9283-ac1884d5aa67/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 29 12:26:49 np0005601226 nova_compute[239456]: 2026-01-29 17:26:49.244 239460 DEBUG oslo_concurrency.lockutils [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:26:49 np0005601226 nova_compute[239456]: 2026-01-29 17:26:49.244 239460 DEBUG oslo_concurrency.lockutils [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:26:49 np0005601226 nova_compute[239456]: 2026-01-29 17:26:49.244 239460 DEBUG oslo_concurrency.lockutils [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:26:49 np0005601226 lvm[262348]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 12:26:49 np0005601226 lvm[262348]: VG ceph_vg0 finished
Jan 29 12:26:49 np0005601226 lvm[262349]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 12:26:49 np0005601226 lvm[262349]: VG ceph_vg1 finished
Jan 29 12:26:49 np0005601226 lvm[262351]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 12:26:49 np0005601226 lvm[262351]: VG ceph_vg2 finished
Jan 29 12:26:49 np0005601226 eager_neumann[262159]: {}
Jan 29 12:26:49 np0005601226 systemd[1]: libpod-7f4554eb1f591023cbaac1a594248e06ea20e514dc4ab7a09241ce7fd3e59337.scope: Deactivated successfully.
Jan 29 12:26:49 np0005601226 podman[262086]: 2026-01-29 17:26:49.429265417 +0000 UTC m=+0.839271140 container died 7f4554eb1f591023cbaac1a594248e06ea20e514dc4ab7a09241ce7fd3e59337 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:26:49 np0005601226 systemd[1]: var-lib-containers-storage-overlay-44c43d04ef48aadadff9a55924805cd0733e4cf811f515ff53dbe561735d6272-merged.mount: Deactivated successfully.
Jan 29 12:26:49 np0005601226 podman[262086]: 2026-01-29 17:26:49.467264764 +0000 UTC m=+0.877270467 container remove 7f4554eb1f591023cbaac1a594248e06ea20e514dc4ab7a09241ce7fd3e59337 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True)
Jan 29 12:26:49 np0005601226 systemd[1]: libpod-conmon-7f4554eb1f591023cbaac1a594248e06ea20e514dc4ab7a09241ce7fd3e59337.scope: Deactivated successfully.
Jan 29 12:26:49 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 12:26:49 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:26:49 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 12:26:49 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:26:49 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1454: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 173 KiB/s rd, 76 KiB/s wr, 237 op/s
Jan 29 12:26:50 np0005601226 nova_compute[239456]: 2026-01-29 17:26:50.046 239460 DEBUG nova.network.neutron [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] Updating instance_info_cache with network_info: [{"id": "3824d6e0-3bd9-401d-b8dc-efb864f7d883", "address": "fa:16:3e:6f:f6:93", "network": {"id": "b7a05385-9eda-4947-9882-d897c0742e88", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-408896947-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7e911c19f694429a9441fe3c0072af6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3824d6e0-3b", "ovs_interfaceid": "3824d6e0-3bd9-401d-b8dc-efb864f7d883", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:26:50 np0005601226 nova_compute[239456]: 2026-01-29 17:26:50.067 239460 DEBUG oslo_concurrency.lockutils [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] Releasing lock "refresh_cache-ff8fb3ad-e9ef-400e-9283-ac1884d5aa67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:26:50 np0005601226 nova_compute[239456]: 2026-01-29 17:26:50.067 239460 DEBUG nova.compute.manager [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] Instance network_info: |[{"id": "3824d6e0-3bd9-401d-b8dc-efb864f7d883", "address": "fa:16:3e:6f:f6:93", "network": {"id": "b7a05385-9eda-4947-9882-d897c0742e88", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-408896947-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7e911c19f694429a9441fe3c0072af6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3824d6e0-3b", "ovs_interfaceid": "3824d6e0-3bd9-401d-b8dc-efb864f7d883", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 29 12:26:50 np0005601226 nova_compute[239456]: 2026-01-29 17:26:50.068 239460 DEBUG oslo_concurrency.lockutils [req-2debbe21-5a06-4b96-8dcc-ed66fb394449 req-cb2218a3-c9ff-4b0e-9a4d-0a2e43f8dc08 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquired lock "refresh_cache-ff8fb3ad-e9ef-400e-9283-ac1884d5aa67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:26:50 np0005601226 nova_compute[239456]: 2026-01-29 17:26:50.068 239460 DEBUG nova.network.neutron [req-2debbe21-5a06-4b96-8dcc-ed66fb394449 req-cb2218a3-c9ff-4b0e-9a4d-0a2e43f8dc08 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] Refreshing network info cache for port 3824d6e0-3bd9-401d-b8dc-efb864f7d883 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 29 12:26:50 np0005601226 nova_compute[239456]: 2026-01-29 17:26:50.075 239460 DEBUG nova.virt.libvirt.driver [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] Start _get_guest_xml network_info=[{"id": "3824d6e0-3bd9-401d-b8dc-efb864f7d883", "address": "fa:16:3e:6f:f6:93", "network": {"id": "b7a05385-9eda-4947-9882-d897c0742e88", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-408896947-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7e911c19f694429a9441fe3c0072af6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3824d6e0-3b", "ovs_interfaceid": "3824d6e0-3bd9-401d-b8dc-efb864f7d883", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-29T17:13:26Z,direct_url=<?>,disk_format='qcow2',id=71879218-5462-43bb-aba6-6319695b24fd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='218d87653c0f4776a3f1900d36945229',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-29T17:13:29Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'boot_index': 0, 'encrypted': False, 'image_id': '71879218-5462-43bb-aba6-6319695b24fd'}], 'ephemerals': [], 'block_device_mapping': [{'guest_format': None, 'mount_device': '/dev/vdb', 'device_type': 'disk', 'disk_bus': 'virtio', 'attachment_id': 'e19d2708-e0f9-4940-854f-af81a1865bfa', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-0671502e-7b1c-4ff5-b298-52c42bac4b3d', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '0671502e-7b1c-4ff5-b298-52c42bac4b3d', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'ff8fb3ad-e9ef-400e-9283-ac1884d5aa67', 'attached_at': '', 'detached_at': '', 'volume_id': '0671502e-7b1c-4ff5-b298-52c42bac4b3d', 'serial': '0671502e-7b1c-4ff5-b298-52c42bac4b3d'}, 'delete_on_termination': False, 'boot_index': -1, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 29 12:26:50 np0005601226 nova_compute[239456]: 2026-01-29 17:26:50.081 239460 WARNING nova.virt.libvirt.driver [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 29 12:26:50 np0005601226 nova_compute[239456]: 2026-01-29 17:26:50.090 239460 DEBUG nova.virt.libvirt.host [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 29 12:26:50 np0005601226 nova_compute[239456]: 2026-01-29 17:26:50.091 239460 DEBUG nova.virt.libvirt.host [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 29 12:26:50 np0005601226 nova_compute[239456]: 2026-01-29 17:26:50.098 239460 DEBUG nova.virt.libvirt.host [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 29 12:26:50 np0005601226 nova_compute[239456]: 2026-01-29 17:26:50.099 239460 DEBUG nova.virt.libvirt.host [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 29 12:26:50 np0005601226 nova_compute[239456]: 2026-01-29 17:26:50.099 239460 DEBUG nova.virt.libvirt.driver [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 29 12:26:50 np0005601226 nova_compute[239456]: 2026-01-29 17:26:50.099 239460 DEBUG nova.virt.hardware [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-29T17:13:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='e367d44e-e23e-4b8e-90d1-56d09c8403b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-29T17:13:26Z,direct_url=<?>,disk_format='qcow2',id=71879218-5462-43bb-aba6-6319695b24fd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='218d87653c0f4776a3f1900d36945229',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-29T17:13:29Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 29 12:26:50 np0005601226 nova_compute[239456]: 2026-01-29 17:26:50.100 239460 DEBUG nova.virt.hardware [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 29 12:26:50 np0005601226 nova_compute[239456]: 2026-01-29 17:26:50.100 239460 DEBUG nova.virt.hardware [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 29 12:26:50 np0005601226 nova_compute[239456]: 2026-01-29 17:26:50.100 239460 DEBUG nova.virt.hardware [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 29 12:26:50 np0005601226 nova_compute[239456]: 2026-01-29 17:26:50.100 239460 DEBUG nova.virt.hardware [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 29 12:26:50 np0005601226 nova_compute[239456]: 2026-01-29 17:26:50.101 239460 DEBUG nova.virt.hardware [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 29 12:26:50 np0005601226 nova_compute[239456]: 2026-01-29 17:26:50.101 239460 DEBUG nova.virt.hardware [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 29 12:26:50 np0005601226 nova_compute[239456]: 2026-01-29 17:26:50.101 239460 DEBUG nova.virt.hardware [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 29 12:26:50 np0005601226 nova_compute[239456]: 2026-01-29 17:26:50.101 239460 DEBUG nova.virt.hardware [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 29 12:26:50 np0005601226 nova_compute[239456]: 2026-01-29 17:26:50.101 239460 DEBUG nova.virt.hardware [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 29 12:26:50 np0005601226 nova_compute[239456]: 2026-01-29 17:26:50.102 239460 DEBUG nova.virt.hardware [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 29 12:26:50 np0005601226 nova_compute[239456]: 2026-01-29 17:26:50.104 239460 DEBUG oslo_concurrency.processutils [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:26:50 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:26:50 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:26:50 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:26:50 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1081559561' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:26:50 np0005601226 nova_compute[239456]: 2026-01-29 17:26:50.642 239460 DEBUG oslo_concurrency.processutils [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:26:50 np0005601226 nova_compute[239456]: 2026-01-29 17:26:50.658 239460 DEBUG nova.storage.rbd_utils [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] rbd image ff8fb3ad-e9ef-400e-9283-ac1884d5aa67_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:26:50 np0005601226 nova_compute[239456]: 2026-01-29 17:26:50.660 239460 DEBUG oslo_concurrency.processutils [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:26:50 np0005601226 podman[262450]: 2026-01-29 17:26:50.874907591 +0000 UTC m=+0.041517212 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 29 12:26:50 np0005601226 podman[262451]: 2026-01-29 17:26:50.929089301 +0000 UTC m=+0.095365423 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 29 12:26:51 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:26:51 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4061624483' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:26:51 np0005601226 nova_compute[239456]: 2026-01-29 17:26:51.142 239460 DEBUG oslo_concurrency.processutils [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:26:51 np0005601226 nova_compute[239456]: 2026-01-29 17:26:51.177 239460 DEBUG nova.virt.libvirt.vif [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-29T17:26:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-instance-892882788',display_name='tempest-instance-892882788',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instance-892882788',id=14,image_ref='71879218-5462-43bb-aba6-6319695b24fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAHkN2iRP+g4jjJil8MXCTgLiHQht7Kr8GTkJER/vAu0Qvlo3hFW1+EBjRCA47cgRQxBRHZQms5dh3rJRoVscSuAtRwxGDdj7RqfPp4xxGzV/036ZyGtMvyD9f6jcqlTQg==',key_name='tempest-keypair-863984616',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b7e911c19f694429a9441fe3c0072af6',ramdisk_id='',reservation_id='r-jpof4br8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='71879218-5462-43bb-aba6-6319695b24fd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-624255791',owner_user_name='tempest-VolumesBackupsTest-624255791-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-29T17:26:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='00a09ccd681d42068127585c610c2bba',uuid=ff8fb3ad-e9ef-400e-9283-ac1884d5aa67,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3824d6e0-3bd9-401d-b8dc-efb864f7d883", "address": "fa:16:3e:6f:f6:93", "network": {"id": "b7a05385-9eda-4947-9882-d897c0742e88", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-408896947-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7e911c19f694429a9441fe3c0072af6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3824d6e0-3b", "ovs_interfaceid": "3824d6e0-3bd9-401d-b8dc-efb864f7d883", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 29 12:26:51 np0005601226 nova_compute[239456]: 2026-01-29 17:26:51.178 239460 DEBUG nova.network.os_vif_util [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] Converting VIF {"id": "3824d6e0-3bd9-401d-b8dc-efb864f7d883", "address": "fa:16:3e:6f:f6:93", "network": {"id": "b7a05385-9eda-4947-9882-d897c0742e88", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-408896947-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7e911c19f694429a9441fe3c0072af6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3824d6e0-3b", "ovs_interfaceid": "3824d6e0-3bd9-401d-b8dc-efb864f7d883", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 29 12:26:51 np0005601226 nova_compute[239456]: 2026-01-29 17:26:51.179 239460 DEBUG nova.network.os_vif_util [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6f:f6:93,bridge_name='br-int',has_traffic_filtering=True,id=3824d6e0-3bd9-401d-b8dc-efb864f7d883,network=Network(b7a05385-9eda-4947-9882-d897c0742e88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3824d6e0-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 29 12:26:51 np0005601226 nova_compute[239456]: 2026-01-29 17:26:51.181 239460 DEBUG nova.objects.instance [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] Lazy-loading 'pci_devices' on Instance uuid ff8fb3ad-e9ef-400e-9283-ac1884d5aa67 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:26:51 np0005601226 nova_compute[239456]: 2026-01-29 17:26:51.196 239460 DEBUG nova.virt.libvirt.driver [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] End _get_guest_xml xml=<domain type="kvm">
Jan 29 12:26:51 np0005601226 nova_compute[239456]:  <uuid>ff8fb3ad-e9ef-400e-9283-ac1884d5aa67</uuid>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:  <name>instance-0000000e</name>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:  <memory>131072</memory>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:  <vcpu>1</vcpu>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:  <metadata>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 29 12:26:51 np0005601226 nova_compute[239456]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:      <nova:name>tempest-instance-892882788</nova:name>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:      <nova:creationTime>2026-01-29 17:26:50</nova:creationTime>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:      <nova:flavor name="m1.nano">
Jan 29 12:26:51 np0005601226 nova_compute[239456]:        <nova:memory>128</nova:memory>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:        <nova:disk>1</nova:disk>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:        <nova:swap>0</nova:swap>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:        <nova:ephemeral>0</nova:ephemeral>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:        <nova:vcpus>1</nova:vcpus>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:      </nova:flavor>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:      <nova:owner>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:        <nova:user uuid="00a09ccd681d42068127585c610c2bba">tempest-VolumesBackupsTest-624255791-project-member</nova:user>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:        <nova:project uuid="b7e911c19f694429a9441fe3c0072af6">tempest-VolumesBackupsTest-624255791</nova:project>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:      </nova:owner>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:      <nova:root type="image" uuid="71879218-5462-43bb-aba6-6319695b24fd"/>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:      <nova:ports>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:        <nova:port uuid="3824d6e0-3bd9-401d-b8dc-efb864f7d883">
Jan 29 12:26:51 np0005601226 nova_compute[239456]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:        </nova:port>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:      </nova:ports>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:    </nova:instance>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:  </metadata>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:  <sysinfo type="smbios">
Jan 29 12:26:51 np0005601226 nova_compute[239456]:    <system>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:      <entry name="manufacturer">RDO</entry>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:      <entry name="product">OpenStack Compute</entry>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:      <entry name="serial">ff8fb3ad-e9ef-400e-9283-ac1884d5aa67</entry>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:      <entry name="uuid">ff8fb3ad-e9ef-400e-9283-ac1884d5aa67</entry>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:      <entry name="family">Virtual Machine</entry>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:    </system>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:  </sysinfo>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:  <os>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:    <boot dev="hd"/>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:    <smbios mode="sysinfo"/>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:  </os>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:  <features>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:    <acpi/>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:    <apic/>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:    <vmcoreinfo/>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:  </features>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:  <clock offset="utc">
Jan 29 12:26:51 np0005601226 nova_compute[239456]:    <timer name="pit" tickpolicy="delay"/>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:    <timer name="hpet" present="no"/>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:  </clock>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:  <cpu mode="host-model" match="exact">
Jan 29 12:26:51 np0005601226 nova_compute[239456]:    <topology sockets="1" cores="1" threads="1"/>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:  </cpu>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:  <devices>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:    <disk type="network" device="disk">
Jan 29 12:26:51 np0005601226 nova_compute[239456]:      <driver type="raw" cache="none"/>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:      <source protocol="rbd" name="vms/ff8fb3ad-e9ef-400e-9283-ac1884d5aa67_disk">
Jan 29 12:26:51 np0005601226 nova_compute[239456]:        <host name="192.168.122.100" port="6789"/>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:      </source>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:      <auth username="openstack">
Jan 29 12:26:51 np0005601226 nova_compute[239456]:        <secret type="ceph" uuid="cc5c72e3-31e0-58b9-8731-456117d38f4a"/>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:      </auth>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:      <target dev="vda" bus="virtio"/>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:    </disk>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:    <disk type="network" device="cdrom">
Jan 29 12:26:51 np0005601226 nova_compute[239456]:      <driver type="raw" cache="none"/>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:      <source protocol="rbd" name="vms/ff8fb3ad-e9ef-400e-9283-ac1884d5aa67_disk.config">
Jan 29 12:26:51 np0005601226 nova_compute[239456]:        <host name="192.168.122.100" port="6789"/>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:      </source>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:      <auth username="openstack">
Jan 29 12:26:51 np0005601226 nova_compute[239456]:        <secret type="ceph" uuid="cc5c72e3-31e0-58b9-8731-456117d38f4a"/>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:      </auth>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:      <target dev="sda" bus="sata"/>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:    </disk>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:    <disk type="network" device="disk">
Jan 29 12:26:51 np0005601226 nova_compute[239456]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:      <source protocol="rbd" name="volumes/volume-0671502e-7b1c-4ff5-b298-52c42bac4b3d">
Jan 29 12:26:51 np0005601226 nova_compute[239456]:        <host name="192.168.122.100" port="6789"/>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:      </source>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:      <auth username="openstack">
Jan 29 12:26:51 np0005601226 nova_compute[239456]:        <secret type="ceph" uuid="cc5c72e3-31e0-58b9-8731-456117d38f4a"/>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:      </auth>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:      <target dev="vdb" bus="virtio"/>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:      <serial>0671502e-7b1c-4ff5-b298-52c42bac4b3d</serial>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:    </disk>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:    <interface type="ethernet">
Jan 29 12:26:51 np0005601226 nova_compute[239456]:      <mac address="fa:16:3e:6f:f6:93"/>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:      <model type="virtio"/>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:      <driver name="vhost" rx_queue_size="512"/>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:      <mtu size="1442"/>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:      <target dev="tap3824d6e0-3b"/>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:    </interface>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:    <serial type="pty">
Jan 29 12:26:51 np0005601226 nova_compute[239456]:      <log file="/var/lib/nova/instances/ff8fb3ad-e9ef-400e-9283-ac1884d5aa67/console.log" append="off"/>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:    </serial>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:    <video>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:      <model type="virtio"/>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:    </video>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:    <input type="tablet" bus="usb"/>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:    <rng model="virtio">
Jan 29 12:26:51 np0005601226 nova_compute[239456]:      <backend model="random">/dev/urandom</backend>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:    </rng>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root"/>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:    <controller type="usb" index="0"/>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:    <memballoon model="virtio">
Jan 29 12:26:51 np0005601226 nova_compute[239456]:      <stats period="10"/>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:    </memballoon>
Jan 29 12:26:51 np0005601226 nova_compute[239456]:  </devices>
Jan 29 12:26:51 np0005601226 nova_compute[239456]: </domain>
Jan 29 12:26:51 np0005601226 nova_compute[239456]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 29 12:26:51 np0005601226 nova_compute[239456]: 2026-01-29 17:26:51.197 239460 DEBUG nova.compute.manager [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] Preparing to wait for external event network-vif-plugged-3824d6e0-3bd9-401d-b8dc-efb864f7d883 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 29 12:26:51 np0005601226 nova_compute[239456]: 2026-01-29 17:26:51.197 239460 DEBUG oslo_concurrency.lockutils [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] Acquiring lock "ff8fb3ad-e9ef-400e-9283-ac1884d5aa67-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:26:51 np0005601226 nova_compute[239456]: 2026-01-29 17:26:51.197 239460 DEBUG oslo_concurrency.lockutils [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] Lock "ff8fb3ad-e9ef-400e-9283-ac1884d5aa67-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:26:51 np0005601226 nova_compute[239456]: 2026-01-29 17:26:51.197 239460 DEBUG oslo_concurrency.lockutils [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] Lock "ff8fb3ad-e9ef-400e-9283-ac1884d5aa67-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:26:51 np0005601226 nova_compute[239456]: 2026-01-29 17:26:51.198 239460 DEBUG nova.virt.libvirt.vif [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-29T17:26:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-instance-892882788',display_name='tempest-instance-892882788',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instance-892882788',id=14,image_ref='71879218-5462-43bb-aba6-6319695b24fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAHkN2iRP+g4jjJil8MXCTgLiHQht7Kr8GTkJER/vAu0Qvlo3hFW1+EBjRCA47cgRQxBRHZQms5dh3rJRoVscSuAtRwxGDdj7RqfPp4xxGzV/036ZyGtMvyD9f6jcqlTQg==',key_name='tempest-keypair-863984616',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b7e911c19f694429a9441fe3c0072af6',ramdisk_id='',reservation_id='r-jpof4br8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='71879218-5462-43bb-aba6-6319695b24fd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-624255791',owner_user_name='tempest-VolumesBackupsTest-624255791-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-29T17:26:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='00a09ccd681d42068127585c610c2bba',uuid=ff8fb3ad-e9ef-400e-9283-ac1884d5aa67,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3824d6e0-3bd9-401d-b8dc-efb864f7d883", "address": "fa:16:3e:6f:f6:93", "network": {"id": "b7a05385-9eda-4947-9882-d897c0742e88", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-408896947-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7e911c19f694429a9441fe3c0072af6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3824d6e0-3b", "ovs_interfaceid": "3824d6e0-3bd9-401d-b8dc-efb864f7d883", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 29 12:26:51 np0005601226 nova_compute[239456]: 2026-01-29 17:26:51.199 239460 DEBUG nova.network.os_vif_util [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] Converting VIF {"id": "3824d6e0-3bd9-401d-b8dc-efb864f7d883", "address": "fa:16:3e:6f:f6:93", "network": {"id": "b7a05385-9eda-4947-9882-d897c0742e88", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-408896947-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7e911c19f694429a9441fe3c0072af6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3824d6e0-3b", "ovs_interfaceid": "3824d6e0-3bd9-401d-b8dc-efb864f7d883", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 29 12:26:51 np0005601226 nova_compute[239456]: 2026-01-29 17:26:51.199 239460 DEBUG nova.network.os_vif_util [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6f:f6:93,bridge_name='br-int',has_traffic_filtering=True,id=3824d6e0-3bd9-401d-b8dc-efb864f7d883,network=Network(b7a05385-9eda-4947-9882-d897c0742e88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3824d6e0-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 29 12:26:51 np0005601226 nova_compute[239456]: 2026-01-29 17:26:51.200 239460 DEBUG os_vif [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6f:f6:93,bridge_name='br-int',has_traffic_filtering=True,id=3824d6e0-3bd9-401d-b8dc-efb864f7d883,network=Network(b7a05385-9eda-4947-9882-d897c0742e88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3824d6e0-3b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 29 12:26:51 np0005601226 nova_compute[239456]: 2026-01-29 17:26:51.200 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:26:51 np0005601226 nova_compute[239456]: 2026-01-29 17:26:51.201 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:26:51 np0005601226 nova_compute[239456]: 2026-01-29 17:26:51.201 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 29 12:26:51 np0005601226 nova_compute[239456]: 2026-01-29 17:26:51.206 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:26:51 np0005601226 nova_compute[239456]: 2026-01-29 17:26:51.207 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:26:51 np0005601226 nova_compute[239456]: 2026-01-29 17:26:51.207 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3824d6e0-3b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:26:51 np0005601226 nova_compute[239456]: 2026-01-29 17:26:51.208 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3824d6e0-3b, col_values=(('external_ids', {'iface-id': '3824d6e0-3bd9-401d-b8dc-efb864f7d883', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6f:f6:93', 'vm-uuid': 'ff8fb3ad-e9ef-400e-9283-ac1884d5aa67'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:26:51 np0005601226 nova_compute[239456]: 2026-01-29 17:26:51.210 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 29 12:26:51 np0005601226 NetworkManager[49020]: <info>  [1769707611.2118] manager: (tap3824d6e0-3b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/75)
Jan 29 12:26:51 np0005601226 nova_compute[239456]: 2026-01-29 17:26:51.214 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:26:51 np0005601226 nova_compute[239456]: 2026-01-29 17:26:51.215 239460 INFO os_vif [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6f:f6:93,bridge_name='br-int',has_traffic_filtering=True,id=3824d6e0-3bd9-401d-b8dc-efb864f7d883,network=Network(b7a05385-9eda-4947-9882-d897c0742e88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3824d6e0-3b')#033[00m
Jan 29 12:26:51 np0005601226 nova_compute[239456]: 2026-01-29 17:26:51.271 239460 DEBUG nova.virt.libvirt.driver [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:26:51 np0005601226 nova_compute[239456]: 2026-01-29 17:26:51.271 239460 DEBUG nova.virt.libvirt.driver [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:26:51 np0005601226 nova_compute[239456]: 2026-01-29 17:26:51.271 239460 DEBUG nova.virt.libvirt.driver [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:26:51 np0005601226 nova_compute[239456]: 2026-01-29 17:26:51.271 239460 DEBUG nova.virt.libvirt.driver [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] No VIF found with MAC fa:16:3e:6f:f6:93, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 29 12:26:51 np0005601226 nova_compute[239456]: 2026-01-29 17:26:51.272 239460 INFO nova.virt.libvirt.driver [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] Using config drive#033[00m
Jan 29 12:26:51 np0005601226 nova_compute[239456]: 2026-01-29 17:26:51.291 239460 DEBUG nova.storage.rbd_utils [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] rbd image ff8fb3ad-e9ef-400e-9283-ac1884d5aa67_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:26:51 np0005601226 nova_compute[239456]: 2026-01-29 17:26:51.387 239460 DEBUG nova.network.neutron [req-2debbe21-5a06-4b96-8dcc-ed66fb394449 req-cb2218a3-c9ff-4b0e-9a4d-0a2e43f8dc08 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] Updated VIF entry in instance network info cache for port 3824d6e0-3bd9-401d-b8dc-efb864f7d883. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 29 12:26:51 np0005601226 nova_compute[239456]: 2026-01-29 17:26:51.388 239460 DEBUG nova.network.neutron [req-2debbe21-5a06-4b96-8dcc-ed66fb394449 req-cb2218a3-c9ff-4b0e-9a4d-0a2e43f8dc08 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] Updating instance_info_cache with network_info: [{"id": "3824d6e0-3bd9-401d-b8dc-efb864f7d883", "address": "fa:16:3e:6f:f6:93", "network": {"id": "b7a05385-9eda-4947-9882-d897c0742e88", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-408896947-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7e911c19f694429a9441fe3c0072af6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3824d6e0-3b", "ovs_interfaceid": "3824d6e0-3bd9-401d-b8dc-efb864f7d883", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:26:51 np0005601226 nova_compute[239456]: 2026-01-29 17:26:51.402 239460 DEBUG oslo_concurrency.lockutils [req-2debbe21-5a06-4b96-8dcc-ed66fb394449 req-cb2218a3-c9ff-4b0e-9a4d-0a2e43f8dc08 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Releasing lock "refresh_cache-ff8fb3ad-e9ef-400e-9283-ac1884d5aa67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:26:51 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e348 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:26:51 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e348 do_prune osdmap full prune enabled
Jan 29 12:26:51 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e349 e349: 3 total, 3 up, 3 in
Jan 29 12:26:51 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e349: 3 total, 3 up, 3 in
Jan 29 12:26:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 12:26:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:26:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 12:26:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:26:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0002598894475345819 of space, bias 1.0, pg target 0.07796683426037457 quantized to 32 (current 32)
Jan 29 12:26:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:26:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.034044627310545514 of space, bias 1.0, pg target 10.213388193163654 quantized to 32 (current 32)
Jan 29 12:26:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:26:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.00034716474425399135 of space, bias 1.0, pg target 0.10067777583365749 quantized to 32 (current 32)
Jan 29 12:26:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:26:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006671455787201137 of space, bias 1.0, pg target 0.19347221782883298 quantized to 32 (current 32)
Jan 29 12:26:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:26:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4696657141414543e-06 of space, bias 4.0, pg target 0.001704812228404087 quantized to 16 (current 16)
Jan 29 12:26:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:26:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:26:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:26:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011064783160773588 quantized to 32 (current 32)
Jan 29 12:26:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:26:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012171261476850949 quantized to 32 (current 32)
Jan 29 12:26:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:26:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:26:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:26:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00014753044214364783 quantized to 32 (current 32)
Jan 29 12:26:51 np0005601226 nova_compute[239456]: 2026-01-29 17:26:51.591 239460 INFO nova.virt.libvirt.driver [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] Creating config drive at /var/lib/nova/instances/ff8fb3ad-e9ef-400e-9283-ac1884d5aa67/disk.config#033[00m
Jan 29 12:26:51 np0005601226 nova_compute[239456]: 2026-01-29 17:26:51.595 239460 DEBUG oslo_concurrency.processutils [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ff8fb3ad-e9ef-400e-9283-ac1884d5aa67/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqh2tzxmk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:26:51 np0005601226 nova_compute[239456]: 2026-01-29 17:26:51.715 239460 DEBUG oslo_concurrency.processutils [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ff8fb3ad-e9ef-400e-9283-ac1884d5aa67/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqh2tzxmk" returned: 0 in 0.120s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:26:51 np0005601226 nova_compute[239456]: 2026-01-29 17:26:51.740 239460 DEBUG nova.storage.rbd_utils [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] rbd image ff8fb3ad-e9ef-400e-9283-ac1884d5aa67_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:26:51 np0005601226 nova_compute[239456]: 2026-01-29 17:26:51.744 239460 DEBUG oslo_concurrency.processutils [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ff8fb3ad-e9ef-400e-9283-ac1884d5aa67/disk.config ff8fb3ad-e9ef-400e-9283-ac1884d5aa67_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:26:51 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1456: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 155 KiB/s rd, 1.7 MiB/s wr, 215 op/s
Jan 29 12:26:51 np0005601226 nova_compute[239456]: 2026-01-29 17:26:51.840 239460 DEBUG oslo_concurrency.processutils [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ff8fb3ad-e9ef-400e-9283-ac1884d5aa67/disk.config ff8fb3ad-e9ef-400e-9283-ac1884d5aa67_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:26:51 np0005601226 nova_compute[239456]: 2026-01-29 17:26:51.841 239460 INFO nova.virt.libvirt.driver [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] Deleting local config drive /var/lib/nova/instances/ff8fb3ad-e9ef-400e-9283-ac1884d5aa67/disk.config because it was imported into RBD.#033[00m
Jan 29 12:26:51 np0005601226 NetworkManager[49020]: <info>  [1769707611.8821] manager: (tap3824d6e0-3b): new Tun device (/org/freedesktop/NetworkManager/Devices/76)
Jan 29 12:26:51 np0005601226 kernel: tap3824d6e0-3b: entered promiscuous mode
Jan 29 12:26:51 np0005601226 systemd-udevd[262350]: Network interface NamePolicy= disabled on kernel command line.
Jan 29 12:26:51 np0005601226 ovn_controller[145556]: 2026-01-29T17:26:51Z|00137|binding|INFO|Claiming lport 3824d6e0-3bd9-401d-b8dc-efb864f7d883 for this chassis.
Jan 29 12:26:51 np0005601226 ovn_controller[145556]: 2026-01-29T17:26:51Z|00138|binding|INFO|3824d6e0-3bd9-401d-b8dc-efb864f7d883: Claiming fa:16:3e:6f:f6:93 10.100.0.4
Jan 29 12:26:51 np0005601226 nova_compute[239456]: 2026-01-29 17:26:51.884 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:26:51 np0005601226 nova_compute[239456]: 2026-01-29 17:26:51.887 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:26:51 np0005601226 nova_compute[239456]: 2026-01-29 17:26:51.891 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:26:51 np0005601226 NetworkManager[49020]: <info>  [1769707611.8972] device (tap3824d6e0-3b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 29 12:26:51 np0005601226 NetworkManager[49020]: <info>  [1769707611.8985] device (tap3824d6e0-3b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 29 12:26:51 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:26:51.897 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:f6:93 10.100.0.4'], port_security=['fa:16:3e:6f:f6:93 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'ff8fb3ad-e9ef-400e-9283-ac1884d5aa67', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b7a05385-9eda-4947-9882-d897c0742e88', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b7e911c19f694429a9441fe3c0072af6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '09948a85-bec7-4ae0-b35e-b1e8d7e00825', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2a2f3ab6-9316-43c8-ba3d-949e97002105, chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], logical_port=3824d6e0-3bd9-401d-b8dc-efb864f7d883) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:26:51 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:26:51.898 155625 INFO neutron.agent.ovn.metadata.agent [-] Port 3824d6e0-3bd9-401d-b8dc-efb864f7d883 in datapath b7a05385-9eda-4947-9882-d897c0742e88 bound to our chassis#033[00m
Jan 29 12:26:51 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:26:51.900 155625 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b7a05385-9eda-4947-9882-d897c0742e88#033[00m
Jan 29 12:26:51 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:26:51.908 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[aba39690-80cf-4c8d-ae3d-c84b1e164d09]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:26:51 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:26:51.909 155625 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb7a05385-91 in ovnmeta-b7a05385-9eda-4947-9882-d897c0742e88 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 29 12:26:51 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:26:51.910 246354 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb7a05385-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 29 12:26:51 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:26:51.910 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[20bcdc45-ee91-47ee-9fc4-528834390c70]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:26:51 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:26:51.910 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[368ecab2-cc28-4c3b-853c-db7dfb6a090a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:26:51 np0005601226 systemd-machined[207561]: New machine qemu-14-instance-0000000e.
Jan 29 12:26:51 np0005601226 systemd[1]: Started Virtual Machine qemu-14-instance-0000000e.
Jan 29 12:26:51 np0005601226 nova_compute[239456]: 2026-01-29 17:26:51.918 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:26:51 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:26:51.919 156164 DEBUG oslo.privsep.daemon [-] privsep: reply[1a1a5396-ab87-42e3-8013-8c88e4e97888]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:26:51 np0005601226 ovn_controller[145556]: 2026-01-29T17:26:51Z|00139|binding|INFO|Setting lport 3824d6e0-3bd9-401d-b8dc-efb864f7d883 ovn-installed in OVS
Jan 29 12:26:51 np0005601226 ovn_controller[145556]: 2026-01-29T17:26:51Z|00140|binding|INFO|Setting lport 3824d6e0-3bd9-401d-b8dc-efb864f7d883 up in Southbound
Jan 29 12:26:51 np0005601226 nova_compute[239456]: 2026-01-29 17:26:51.922 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:26:51 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:26:51.928 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[c7ce1f32-cbd5-46dc-8115-cdf9ad6dd184]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:26:51 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:26:51.947 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[2b9853e2-7f29-4e84-95c3-3fc4f30ed5d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:26:51 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:26:51.952 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[97c6d91f-f2bf-4599-998d-7455d07466e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:26:51 np0005601226 NetworkManager[49020]: <info>  [1769707611.9546] manager: (tapb7a05385-90): new Veth device (/org/freedesktop/NetworkManager/Devices/77)
Jan 29 12:26:51 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:26:51.976 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[662a9c95-35f8-400c-ab5a-8ce430d61c0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:26:51 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:26:51.978 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[07c76e52-7451-48d6-af84-c0472c1b1597]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:26:51 np0005601226 NetworkManager[49020]: <info>  [1769707611.9982] device (tapb7a05385-90): carrier: link connected
Jan 29 12:26:52 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:26:52.001 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[6e7dda0a-dffd-4d6c-ac9d-95873d74cc01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:26:52 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:26:52.012 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[4ffe0e96-520c-4e3f-90b7-967b49268c0e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb7a05385-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:24:aa:22'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 48], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 496550, 'reachable_time': 32290, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 262601, 'error': None, 'target': 'ovnmeta-b7a05385-9eda-4947-9882-d897c0742e88', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:26:52 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:26:52.020 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[e9893ecd-3d1c-467e-98b2-e92827b25aa5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe24:aa22'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 496550, 'tstamp': 496550}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 262602, 'error': None, 'target': 'ovnmeta-b7a05385-9eda-4947-9882-d897c0742e88', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:26:52 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:26:52.030 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[4074c93d-f6bc-46d0-8e15-2877cbba89e7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb7a05385-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:24:aa:22'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 48], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 496550, 'reachable_time': 32290, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 262603, 'error': None, 'target': 'ovnmeta-b7a05385-9eda-4947-9882-d897c0742e88', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:26:52 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:26:52.045 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[6c354a3e-d3d0-4566-b38a-d42ff90224d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:26:52 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:26:52.076 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[c60aefda-9cb9-4073-9c68-0f2578fe63fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:26:52 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:26:52.078 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb7a05385-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:26:52 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:26:52.078 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 29 12:26:52 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:26:52.079 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb7a05385-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:26:52 np0005601226 nova_compute[239456]: 2026-01-29 17:26:52.080 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:26:52 np0005601226 NetworkManager[49020]: <info>  [1769707612.0811] manager: (tapb7a05385-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/78)
Jan 29 12:26:52 np0005601226 kernel: tapb7a05385-90: entered promiscuous mode
Jan 29 12:26:52 np0005601226 nova_compute[239456]: 2026-01-29 17:26:52.084 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:26:52 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:26:52.085 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb7a05385-90, col_values=(('external_ids', {'iface-id': '89ac3ba8-bf4c-4fd5-a3a0-22b192c468ed'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:26:52 np0005601226 nova_compute[239456]: 2026-01-29 17:26:52.086 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:26:52 np0005601226 ovn_controller[145556]: 2026-01-29T17:26:52Z|00141|binding|INFO|Releasing lport 89ac3ba8-bf4c-4fd5-a3a0-22b192c468ed from this chassis (sb_readonly=0)
Jan 29 12:26:52 np0005601226 nova_compute[239456]: 2026-01-29 17:26:52.093 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:26:52 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:26:52.094 155625 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b7a05385-9eda-4947-9882-d897c0742e88.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b7a05385-9eda-4947-9882-d897c0742e88.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 29 12:26:52 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:26:52.094 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[1a143156-b159-4df2-a01e-0762c5c7cac0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:26:52 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:26:52.095 155625 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 29 12:26:52 np0005601226 ovn_metadata_agent[155620]: global
Jan 29 12:26:52 np0005601226 ovn_metadata_agent[155620]:    log         /dev/log local0 debug
Jan 29 12:26:52 np0005601226 ovn_metadata_agent[155620]:    log-tag     haproxy-metadata-proxy-b7a05385-9eda-4947-9882-d897c0742e88
Jan 29 12:26:52 np0005601226 ovn_metadata_agent[155620]:    user        root
Jan 29 12:26:52 np0005601226 ovn_metadata_agent[155620]:    group       root
Jan 29 12:26:52 np0005601226 ovn_metadata_agent[155620]:    maxconn     1024
Jan 29 12:26:52 np0005601226 ovn_metadata_agent[155620]:    pidfile     /var/lib/neutron/external/pids/b7a05385-9eda-4947-9882-d897c0742e88.pid.haproxy
Jan 29 12:26:52 np0005601226 ovn_metadata_agent[155620]:    daemon
Jan 29 12:26:52 np0005601226 ovn_metadata_agent[155620]: 
Jan 29 12:26:52 np0005601226 ovn_metadata_agent[155620]: defaults
Jan 29 12:26:52 np0005601226 ovn_metadata_agent[155620]:    log global
Jan 29 12:26:52 np0005601226 ovn_metadata_agent[155620]:    mode http
Jan 29 12:26:52 np0005601226 ovn_metadata_agent[155620]:    option httplog
Jan 29 12:26:52 np0005601226 ovn_metadata_agent[155620]:    option dontlognull
Jan 29 12:26:52 np0005601226 ovn_metadata_agent[155620]:    option http-server-close
Jan 29 12:26:52 np0005601226 ovn_metadata_agent[155620]:    option forwardfor
Jan 29 12:26:52 np0005601226 ovn_metadata_agent[155620]:    retries                 3
Jan 29 12:26:52 np0005601226 ovn_metadata_agent[155620]:    timeout http-request    30s
Jan 29 12:26:52 np0005601226 ovn_metadata_agent[155620]:    timeout connect         30s
Jan 29 12:26:52 np0005601226 ovn_metadata_agent[155620]:    timeout client          32s
Jan 29 12:26:52 np0005601226 ovn_metadata_agent[155620]:    timeout server          32s
Jan 29 12:26:52 np0005601226 ovn_metadata_agent[155620]:    timeout http-keep-alive 30s
Jan 29 12:26:52 np0005601226 ovn_metadata_agent[155620]: 
Jan 29 12:26:52 np0005601226 ovn_metadata_agent[155620]: 
Jan 29 12:26:52 np0005601226 ovn_metadata_agent[155620]: listen listener
Jan 29 12:26:52 np0005601226 ovn_metadata_agent[155620]:    bind 169.254.169.254:80
Jan 29 12:26:52 np0005601226 ovn_metadata_agent[155620]:    server metadata /var/lib/neutron/metadata_proxy
Jan 29 12:26:52 np0005601226 ovn_metadata_agent[155620]:    http-request add-header X-OVN-Network-ID b7a05385-9eda-4947-9882-d897c0742e88
Jan 29 12:26:52 np0005601226 ovn_metadata_agent[155620]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 29 12:26:52 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:26:52.095 155625 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b7a05385-9eda-4947-9882-d897c0742e88', 'env', 'PROCESS_TAG=haproxy-b7a05385-9eda-4947-9882-d897c0742e88', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b7a05385-9eda-4947-9882-d897c0742e88.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 29 12:26:52 np0005601226 nova_compute[239456]: 2026-01-29 17:26:52.181 239460 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769707597.1797142, da7dfea4-c6b4-4092-833b-3fcb8168ecce => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:26:52 np0005601226 nova_compute[239456]: 2026-01-29 17:26:52.181 239460 INFO nova.compute.manager [-] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] VM Stopped (Lifecycle Event)#033[00m
Jan 29 12:26:52 np0005601226 nova_compute[239456]: 2026-01-29 17:26:52.199 239460 DEBUG nova.compute.manager [req-4e3d8722-e715-45e4-b15e-bc0fd360d791 req-815525ec-ca0a-4dcd-9bb6-9fe7975119e5 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] Received event network-vif-plugged-3824d6e0-3bd9-401d-b8dc-efb864f7d883 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:26:52 np0005601226 nova_compute[239456]: 2026-01-29 17:26:52.200 239460 DEBUG oslo_concurrency.lockutils [req-4e3d8722-e715-45e4-b15e-bc0fd360d791 req-815525ec-ca0a-4dcd-9bb6-9fe7975119e5 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "ff8fb3ad-e9ef-400e-9283-ac1884d5aa67-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:26:52 np0005601226 nova_compute[239456]: 2026-01-29 17:26:52.200 239460 DEBUG oslo_concurrency.lockutils [req-4e3d8722-e715-45e4-b15e-bc0fd360d791 req-815525ec-ca0a-4dcd-9bb6-9fe7975119e5 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "ff8fb3ad-e9ef-400e-9283-ac1884d5aa67-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:26:52 np0005601226 nova_compute[239456]: 2026-01-29 17:26:52.200 239460 DEBUG oslo_concurrency.lockutils [req-4e3d8722-e715-45e4-b15e-bc0fd360d791 req-815525ec-ca0a-4dcd-9bb6-9fe7975119e5 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "ff8fb3ad-e9ef-400e-9283-ac1884d5aa67-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:26:52 np0005601226 nova_compute[239456]: 2026-01-29 17:26:52.203 239460 DEBUG nova.compute.manager [req-4e3d8722-e715-45e4-b15e-bc0fd360d791 req-815525ec-ca0a-4dcd-9bb6-9fe7975119e5 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] Processing event network-vif-plugged-3824d6e0-3bd9-401d-b8dc-efb864f7d883 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 29 12:26:52 np0005601226 nova_compute[239456]: 2026-01-29 17:26:52.204 239460 DEBUG nova.compute.manager [None req-8edcb040-fbe3-4cd0-8ee9-07d474d91939 - - - - - -] [instance: da7dfea4-c6b4-4092-833b-3fcb8168ecce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:26:52 np0005601226 nova_compute[239456]: 2026-01-29 17:26:52.294 239460 DEBUG nova.compute.manager [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 29 12:26:52 np0005601226 nova_compute[239456]: 2026-01-29 17:26:52.295 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769707612.2948375, ff8fb3ad-e9ef-400e-9283-ac1884d5aa67 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:26:52 np0005601226 nova_compute[239456]: 2026-01-29 17:26:52.296 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] VM Started (Lifecycle Event)#033[00m
Jan 29 12:26:52 np0005601226 nova_compute[239456]: 2026-01-29 17:26:52.298 239460 DEBUG nova.virt.libvirt.driver [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 29 12:26:52 np0005601226 nova_compute[239456]: 2026-01-29 17:26:52.300 239460 INFO nova.virt.libvirt.driver [-] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] Instance spawned successfully.#033[00m
Jan 29 12:26:52 np0005601226 nova_compute[239456]: 2026-01-29 17:26:52.301 239460 DEBUG nova.virt.libvirt.driver [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 29 12:26:52 np0005601226 nova_compute[239456]: 2026-01-29 17:26:52.315 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:26:52 np0005601226 nova_compute[239456]: 2026-01-29 17:26:52.319 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 29 12:26:52 np0005601226 nova_compute[239456]: 2026-01-29 17:26:52.327 239460 DEBUG nova.virt.libvirt.driver [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:26:52 np0005601226 nova_compute[239456]: 2026-01-29 17:26:52.328 239460 DEBUG nova.virt.libvirt.driver [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:26:52 np0005601226 nova_compute[239456]: 2026-01-29 17:26:52.329 239460 DEBUG nova.virt.libvirt.driver [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:26:52 np0005601226 nova_compute[239456]: 2026-01-29 17:26:52.329 239460 DEBUG nova.virt.libvirt.driver [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:26:52 np0005601226 nova_compute[239456]: 2026-01-29 17:26:52.330 239460 DEBUG nova.virt.libvirt.driver [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:26:52 np0005601226 nova_compute[239456]: 2026-01-29 17:26:52.330 239460 DEBUG nova.virt.libvirt.driver [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:26:52 np0005601226 nova_compute[239456]: 2026-01-29 17:26:52.337 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 29 12:26:52 np0005601226 nova_compute[239456]: 2026-01-29 17:26:52.338 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769707612.294917, ff8fb3ad-e9ef-400e-9283-ac1884d5aa67 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:26:52 np0005601226 nova_compute[239456]: 2026-01-29 17:26:52.338 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] VM Paused (Lifecycle Event)#033[00m
Jan 29 12:26:52 np0005601226 nova_compute[239456]: 2026-01-29 17:26:52.376 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:26:52 np0005601226 nova_compute[239456]: 2026-01-29 17:26:52.379 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769707612.2964492, ff8fb3ad-e9ef-400e-9283-ac1884d5aa67 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:26:52 np0005601226 nova_compute[239456]: 2026-01-29 17:26:52.379 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] VM Resumed (Lifecycle Event)#033[00m
Jan 29 12:26:52 np0005601226 podman[262695]: 2026-01-29 17:26:52.400375132 +0000 UTC m=+0.040755313 container create 1bdb6a9eb5cefda3c89e126a4ed7f27840853fac6aadc9842a06175a7f14c98a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b7a05385-9eda-4947-9882-d897c0742e88, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 29 12:26:52 np0005601226 nova_compute[239456]: 2026-01-29 17:26:52.403 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:26:52 np0005601226 nova_compute[239456]: 2026-01-29 17:26:52.411 239460 INFO nova.compute.manager [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] Took 3.83 seconds to spawn the instance on the hypervisor.#033[00m
Jan 29 12:26:52 np0005601226 nova_compute[239456]: 2026-01-29 17:26:52.412 239460 DEBUG nova.compute.manager [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:26:52 np0005601226 nova_compute[239456]: 2026-01-29 17:26:52.415 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 29 12:26:52 np0005601226 nova_compute[239456]: 2026-01-29 17:26:52.442 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 29 12:26:52 np0005601226 systemd[1]: Started libpod-conmon-1bdb6a9eb5cefda3c89e126a4ed7f27840853fac6aadc9842a06175a7f14c98a.scope.
Jan 29 12:26:52 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:26:52 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72b01f94bbe251feb096538ea3f24492ae8766c3a19c9aaf599327ee4a95fe17/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 29 12:26:52 np0005601226 podman[262695]: 2026-01-29 17:26:52.379718399 +0000 UTC m=+0.020098590 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 29 12:26:52 np0005601226 podman[262695]: 2026-01-29 17:26:52.477634479 +0000 UTC m=+0.118014680 container init 1bdb6a9eb5cefda3c89e126a4ed7f27840853fac6aadc9842a06175a7f14c98a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b7a05385-9eda-4947-9882-d897c0742e88, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 29 12:26:52 np0005601226 nova_compute[239456]: 2026-01-29 17:26:52.479 239460 INFO nova.compute.manager [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] Took 6.29 seconds to build instance.#033[00m
Jan 29 12:26:52 np0005601226 podman[262695]: 2026-01-29 17:26:52.482883489 +0000 UTC m=+0.123263670 container start 1bdb6a9eb5cefda3c89e126a4ed7f27840853fac6aadc9842a06175a7f14c98a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b7a05385-9eda-4947-9882-d897c0742e88, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 29 12:26:52 np0005601226 nova_compute[239456]: 2026-01-29 17:26:52.497 239460 DEBUG oslo_concurrency.lockutils [None req-cac83b62-7489-465a-a6cf-6735db41a86e 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] Lock "ff8fb3ad-e9ef-400e-9283-ac1884d5aa67" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.391s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:26:52 np0005601226 neutron-haproxy-ovnmeta-b7a05385-9eda-4947-9882-d897c0742e88[262710]: [NOTICE]   (262714) : New worker (262716) forked
Jan 29 12:26:52 np0005601226 neutron-haproxy-ovnmeta-b7a05385-9eda-4947-9882-d897c0742e88[262710]: [NOTICE]   (262714) : Loading success.
Jan 29 12:26:53 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1457: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 824 KiB/s rd, 2.1 MiB/s wr, 184 op/s
Jan 29 12:26:54 np0005601226 nova_compute[239456]: 2026-01-29 17:26:54.446 239460 DEBUG nova.compute.manager [req-ef3617f8-2755-45bb-a556-e19c6179365d req-65800dd5-80d2-450a-9074-1f3f8ab83c0d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] Received event network-vif-plugged-3824d6e0-3bd9-401d-b8dc-efb864f7d883 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:26:54 np0005601226 nova_compute[239456]: 2026-01-29 17:26:54.447 239460 DEBUG oslo_concurrency.lockutils [req-ef3617f8-2755-45bb-a556-e19c6179365d req-65800dd5-80d2-450a-9074-1f3f8ab83c0d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "ff8fb3ad-e9ef-400e-9283-ac1884d5aa67-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:26:54 np0005601226 nova_compute[239456]: 2026-01-29 17:26:54.448 239460 DEBUG oslo_concurrency.lockutils [req-ef3617f8-2755-45bb-a556-e19c6179365d req-65800dd5-80d2-450a-9074-1f3f8ab83c0d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "ff8fb3ad-e9ef-400e-9283-ac1884d5aa67-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:26:54 np0005601226 nova_compute[239456]: 2026-01-29 17:26:54.448 239460 DEBUG oslo_concurrency.lockutils [req-ef3617f8-2755-45bb-a556-e19c6179365d req-65800dd5-80d2-450a-9074-1f3f8ab83c0d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "ff8fb3ad-e9ef-400e-9283-ac1884d5aa67-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:26:54 np0005601226 nova_compute[239456]: 2026-01-29 17:26:54.448 239460 DEBUG nova.compute.manager [req-ef3617f8-2755-45bb-a556-e19c6179365d req-65800dd5-80d2-450a-9074-1f3f8ab83c0d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] No waiting events found dispatching network-vif-plugged-3824d6e0-3bd9-401d-b8dc-efb864f7d883 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 29 12:26:54 np0005601226 nova_compute[239456]: 2026-01-29 17:26:54.448 239460 WARNING nova.compute.manager [req-ef3617f8-2755-45bb-a556-e19c6179365d req-65800dd5-80d2-450a-9074-1f3f8ab83c0d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] Received unexpected event network-vif-plugged-3824d6e0-3bd9-401d-b8dc-efb864f7d883 for instance with vm_state active and task_state None.#033[00m
Jan 29 12:26:54 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e349 do_prune osdmap full prune enabled
Jan 29 12:26:54 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e350 e350: 3 total, 3 up, 3 in
Jan 29 12:26:54 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e350: 3 total, 3 up, 3 in
Jan 29 12:26:55 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1459: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.7 MiB/s wr, 244 op/s
Jan 29 12:26:56 np0005601226 nova_compute[239456]: 2026-01-29 17:26:56.207 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:26:56 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:26:56 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e350 do_prune osdmap full prune enabled
Jan 29 12:26:56 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e351 e351: 3 total, 3 up, 3 in
Jan 29 12:26:56 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e351: 3 total, 3 up, 3 in
Jan 29 12:26:56 np0005601226 ovn_controller[145556]: 2026-01-29T17:26:56Z|00142|binding|INFO|Releasing lport 89ac3ba8-bf4c-4fd5-a3a0-22b192c468ed from this chassis (sb_readonly=0)
Jan 29 12:26:56 np0005601226 NetworkManager[49020]: <info>  [1769707616.7897] manager: (patch-br-int-to-provnet-87abb107-3ab5-4304-ac4c-e4e3c79e221f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/79)
Jan 29 12:26:56 np0005601226 NetworkManager[49020]: <info>  [1769707616.7904] manager: (patch-provnet-87abb107-3ab5-4304-ac4c-e4e3c79e221f-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/80)
Jan 29 12:26:56 np0005601226 nova_compute[239456]: 2026-01-29 17:26:56.790 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:26:56 np0005601226 nova_compute[239456]: 2026-01-29 17:26:56.804 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:26:56 np0005601226 ovn_controller[145556]: 2026-01-29T17:26:56Z|00143|binding|INFO|Releasing lport 89ac3ba8-bf4c-4fd5-a3a0-22b192c468ed from this chassis (sb_readonly=0)
Jan 29 12:26:57 np0005601226 nova_compute[239456]: 2026-01-29 17:26:57.336 239460 DEBUG nova.compute.manager [req-0240afe8-5350-42ed-82d0-3ddeba8e945f req-bcc63391-60eb-403a-bd0b-f22dac60563a 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] Received event network-changed-3824d6e0-3bd9-401d-b8dc-efb864f7d883 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:26:57 np0005601226 nova_compute[239456]: 2026-01-29 17:26:57.337 239460 DEBUG nova.compute.manager [req-0240afe8-5350-42ed-82d0-3ddeba8e945f req-bcc63391-60eb-403a-bd0b-f22dac60563a 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] Refreshing instance network info cache due to event network-changed-3824d6e0-3bd9-401d-b8dc-efb864f7d883. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 29 12:26:57 np0005601226 nova_compute[239456]: 2026-01-29 17:26:57.337 239460 DEBUG oslo_concurrency.lockutils [req-0240afe8-5350-42ed-82d0-3ddeba8e945f req-bcc63391-60eb-403a-bd0b-f22dac60563a 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "refresh_cache-ff8fb3ad-e9ef-400e-9283-ac1884d5aa67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:26:57 np0005601226 nova_compute[239456]: 2026-01-29 17:26:57.338 239460 DEBUG oslo_concurrency.lockutils [req-0240afe8-5350-42ed-82d0-3ddeba8e945f req-bcc63391-60eb-403a-bd0b-f22dac60563a 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquired lock "refresh_cache-ff8fb3ad-e9ef-400e-9283-ac1884d5aa67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:26:57 np0005601226 nova_compute[239456]: 2026-01-29 17:26:57.338 239460 DEBUG nova.network.neutron [req-0240afe8-5350-42ed-82d0-3ddeba8e945f req-bcc63391-60eb-403a-bd0b-f22dac60563a 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] Refreshing network info cache for port 3824d6e0-3bd9-401d-b8dc-efb864f7d883 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 29 12:26:57 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:26:57 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2394295093' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:26:57 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:26:57 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2394295093' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:26:57 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1461: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 3.7 MiB/s rd, 1.2 MiB/s wr, 190 op/s
Jan 29 12:26:58 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:26:58 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1126498583' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:26:58 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:26:58 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1126498583' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:26:58 np0005601226 nova_compute[239456]: 2026-01-29 17:26:58.787 239460 DEBUG nova.network.neutron [req-0240afe8-5350-42ed-82d0-3ddeba8e945f req-bcc63391-60eb-403a-bd0b-f22dac60563a 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] Updated VIF entry in instance network info cache for port 3824d6e0-3bd9-401d-b8dc-efb864f7d883. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 29 12:26:58 np0005601226 nova_compute[239456]: 2026-01-29 17:26:58.789 239460 DEBUG nova.network.neutron [req-0240afe8-5350-42ed-82d0-3ddeba8e945f req-bcc63391-60eb-403a-bd0b-f22dac60563a 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] Updating instance_info_cache with network_info: [{"id": "3824d6e0-3bd9-401d-b8dc-efb864f7d883", "address": "fa:16:3e:6f:f6:93", "network": {"id": "b7a05385-9eda-4947-9882-d897c0742e88", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-408896947-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7e911c19f694429a9441fe3c0072af6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3824d6e0-3b", "ovs_interfaceid": "3824d6e0-3bd9-401d-b8dc-efb864f7d883", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:26:58 np0005601226 nova_compute[239456]: 2026-01-29 17:26:58.808 239460 DEBUG oslo_concurrency.lockutils [req-0240afe8-5350-42ed-82d0-3ddeba8e945f req-bcc63391-60eb-403a-bd0b-f22dac60563a 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Releasing lock "refresh_cache-ff8fb3ad-e9ef-400e-9283-ac1884d5aa67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:26:59 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1462: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 2.9 MiB/s rd, 967 KiB/s wr, 158 op/s
Jan 29 12:27:00 np0005601226 nova_compute[239456]: 2026-01-29 17:27:00.390 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:27:01 np0005601226 nova_compute[239456]: 2026-01-29 17:27:01.209 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:27:01 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e351 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:27:01 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1463: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 2.1 MiB/s rd, 285 KiB/s wr, 158 op/s
Jan 29 12:27:03 np0005601226 nova_compute[239456]: 2026-01-29 17:27:03.768 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:27:03 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1464: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.1 MiB/s wr, 129 op/s
Jan 29 12:27:04 np0005601226 ovn_controller[145556]: 2026-01-29T17:27:04Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6f:f6:93 10.100.0.4
Jan 29 12:27:04 np0005601226 ovn_controller[145556]: 2026-01-29T17:27:04Z|00027|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6f:f6:93 10.100.0.4
Jan 29 12:27:05 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1465: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 420 KiB/s rd, 2.5 MiB/s wr, 123 op/s
Jan 29 12:27:05 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:27:05.843 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '7a:bb:ca', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ee:2a:91:86:08:da'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:27:05 np0005601226 nova_compute[239456]: 2026-01-29 17:27:05.843 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:27:05 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:27:05.845 155625 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 29 12:27:06 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:27:06 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2777677543' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:27:06 np0005601226 nova_compute[239456]: 2026-01-29 17:27:06.212 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:27:06 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e351 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:27:06 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e351 do_prune osdmap full prune enabled
Jan 29 12:27:06 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e352 e352: 3 total, 3 up, 3 in
Jan 29 12:27:06 np0005601226 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 29 12:27:06 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e352: 3 total, 3 up, 3 in
Jan 29 12:27:07 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e352 do_prune osdmap full prune enabled
Jan 29 12:27:07 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e353 e353: 3 total, 3 up, 3 in
Jan 29 12:27:07 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e353: 3 total, 3 up, 3 in
Jan 29 12:27:07 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1468: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 517 KiB/s rd, 3.1 MiB/s wr, 146 op/s
Jan 29 12:27:07 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:27:07.848 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ea6bcc65-2563-4fe6-9039-bca7261f4cf7, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:27:07 np0005601226 nova_compute[239456]: 2026-01-29 17:27:07.970 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:27:09 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e353 do_prune osdmap full prune enabled
Jan 29 12:27:09 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e354 e354: 3 total, 3 up, 3 in
Jan 29 12:27:09 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e354: 3 total, 3 up, 3 in
Jan 29 12:27:09 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1470: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 524 KiB/s rd, 2.6 MiB/s wr, 149 op/s
Jan 29 12:27:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:27:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:27:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:27:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:27:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:27:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:27:11 np0005601226 nova_compute[239456]: 2026-01-29 17:27:11.213 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:27:11 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:27:11 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e354 do_prune osdmap full prune enabled
Jan 29 12:27:11 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e355 e355: 3 total, 3 up, 3 in
Jan 29 12:27:11 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e355: 3 total, 3 up, 3 in
Jan 29 12:27:11 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1472: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 195 KiB/s rd, 136 KiB/s wr, 84 op/s
Jan 29 12:27:12 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:27:12 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2484304046' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:27:12 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:27:12 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2484304046' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:27:12 np0005601226 nova_compute[239456]: 2026-01-29 17:27:12.367 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:27:13 np0005601226 nova_compute[239456]: 2026-01-29 17:27:13.499 239460 DEBUG oslo_concurrency.lockutils [None req-eb090cce-c13f-415a-a916-f10072e11ab3 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] Acquiring lock "ff8fb3ad-e9ef-400e-9283-ac1884d5aa67" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:27:13 np0005601226 nova_compute[239456]: 2026-01-29 17:27:13.499 239460 DEBUG oslo_concurrency.lockutils [None req-eb090cce-c13f-415a-a916-f10072e11ab3 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] Lock "ff8fb3ad-e9ef-400e-9283-ac1884d5aa67" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:27:13 np0005601226 nova_compute[239456]: 2026-01-29 17:27:13.500 239460 DEBUG oslo_concurrency.lockutils [None req-eb090cce-c13f-415a-a916-f10072e11ab3 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] Acquiring lock "ff8fb3ad-e9ef-400e-9283-ac1884d5aa67-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:27:13 np0005601226 nova_compute[239456]: 2026-01-29 17:27:13.500 239460 DEBUG oslo_concurrency.lockutils [None req-eb090cce-c13f-415a-a916-f10072e11ab3 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] Lock "ff8fb3ad-e9ef-400e-9283-ac1884d5aa67-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:27:13 np0005601226 nova_compute[239456]: 2026-01-29 17:27:13.500 239460 DEBUG oslo_concurrency.lockutils [None req-eb090cce-c13f-415a-a916-f10072e11ab3 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] Lock "ff8fb3ad-e9ef-400e-9283-ac1884d5aa67-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:27:13 np0005601226 nova_compute[239456]: 2026-01-29 17:27:13.502 239460 INFO nova.compute.manager [None req-eb090cce-c13f-415a-a916-f10072e11ab3 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] Terminating instance#033[00m
Jan 29 12:27:13 np0005601226 nova_compute[239456]: 2026-01-29 17:27:13.504 239460 DEBUG nova.compute.manager [None req-eb090cce-c13f-415a-a916-f10072e11ab3 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 29 12:27:13 np0005601226 kernel: tap3824d6e0-3b (unregistering): left promiscuous mode
Jan 29 12:27:13 np0005601226 NetworkManager[49020]: <info>  [1769707633.5494] device (tap3824d6e0-3b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 29 12:27:13 np0005601226 nova_compute[239456]: 2026-01-29 17:27:13.557 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:27:13 np0005601226 ovn_controller[145556]: 2026-01-29T17:27:13Z|00144|binding|INFO|Releasing lport 3824d6e0-3bd9-401d-b8dc-efb864f7d883 from this chassis (sb_readonly=0)
Jan 29 12:27:13 np0005601226 ovn_controller[145556]: 2026-01-29T17:27:13Z|00145|binding|INFO|Setting lport 3824d6e0-3bd9-401d-b8dc-efb864f7d883 down in Southbound
Jan 29 12:27:13 np0005601226 ovn_controller[145556]: 2026-01-29T17:27:13Z|00146|binding|INFO|Removing iface tap3824d6e0-3b ovn-installed in OVS
Jan 29 12:27:13 np0005601226 nova_compute[239456]: 2026-01-29 17:27:13.562 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:27:13 np0005601226 nova_compute[239456]: 2026-01-29 17:27:13.570 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:27:13 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:27:13.574 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:f6:93 10.100.0.4'], port_security=['fa:16:3e:6f:f6:93 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'ff8fb3ad-e9ef-400e-9283-ac1884d5aa67', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b7a05385-9eda-4947-9882-d897c0742e88', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b7e911c19f694429a9441fe3c0072af6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '09948a85-bec7-4ae0-b35e-b1e8d7e00825', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.181'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2a2f3ab6-9316-43c8-ba3d-949e97002105, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], logical_port=3824d6e0-3bd9-401d-b8dc-efb864f7d883) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:27:13 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:27:13.575 155625 INFO neutron.agent.ovn.metadata.agent [-] Port 3824d6e0-3bd9-401d-b8dc-efb864f7d883 in datapath b7a05385-9eda-4947-9882-d897c0742e88 unbound from our chassis#033[00m
Jan 29 12:27:13 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:27:13.576 155625 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b7a05385-9eda-4947-9882-d897c0742e88, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 29 12:27:13 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:27:13.577 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[04f541d7-1d29-4552-aaac-9e2e7fa03e84]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:27:13 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:27:13.578 155625 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b7a05385-9eda-4947-9882-d897c0742e88 namespace which is not needed anymore#033[00m
Jan 29 12:27:13 np0005601226 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000e.scope: Deactivated successfully.
Jan 29 12:27:13 np0005601226 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000e.scope: Consumed 12.442s CPU time.
Jan 29 12:27:13 np0005601226 systemd-machined[207561]: Machine qemu-14-instance-0000000e terminated.
Jan 29 12:27:13 np0005601226 neutron-haproxy-ovnmeta-b7a05385-9eda-4947-9882-d897c0742e88[262710]: [NOTICE]   (262714) : haproxy version is 2.8.14-c23fe91
Jan 29 12:27:13 np0005601226 neutron-haproxy-ovnmeta-b7a05385-9eda-4947-9882-d897c0742e88[262710]: [NOTICE]   (262714) : path to executable is /usr/sbin/haproxy
Jan 29 12:27:13 np0005601226 neutron-haproxy-ovnmeta-b7a05385-9eda-4947-9882-d897c0742e88[262710]: [WARNING]  (262714) : Exiting Master process...
Jan 29 12:27:13 np0005601226 neutron-haproxy-ovnmeta-b7a05385-9eda-4947-9882-d897c0742e88[262710]: [ALERT]    (262714) : Current worker (262716) exited with code 143 (Terminated)
Jan 29 12:27:13 np0005601226 neutron-haproxy-ovnmeta-b7a05385-9eda-4947-9882-d897c0742e88[262710]: [WARNING]  (262714) : All workers exited. Exiting... (0)
Jan 29 12:27:13 np0005601226 systemd[1]: libpod-1bdb6a9eb5cefda3c89e126a4ed7f27840853fac6aadc9842a06175a7f14c98a.scope: Deactivated successfully.
Jan 29 12:27:13 np0005601226 podman[262752]: 2026-01-29 17:27:13.71259659 +0000 UTC m=+0.042332223 container died 1bdb6a9eb5cefda3c89e126a4ed7f27840853fac6aadc9842a06175a7f14c98a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b7a05385-9eda-4947-9882-d897c0742e88, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 29 12:27:13 np0005601226 nova_compute[239456]: 2026-01-29 17:27:13.726 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:27:13 np0005601226 nova_compute[239456]: 2026-01-29 17:27:13.729 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:27:13 np0005601226 nova_compute[239456]: 2026-01-29 17:27:13.739 239460 INFO nova.virt.libvirt.driver [-] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] Instance destroyed successfully.#033[00m
Jan 29 12:27:13 np0005601226 nova_compute[239456]: 2026-01-29 17:27:13.740 239460 DEBUG nova.objects.instance [None req-eb090cce-c13f-415a-a916-f10072e11ab3 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] Lazy-loading 'resources' on Instance uuid ff8fb3ad-e9ef-400e-9283-ac1884d5aa67 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:27:13 np0005601226 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1bdb6a9eb5cefda3c89e126a4ed7f27840853fac6aadc9842a06175a7f14c98a-userdata-shm.mount: Deactivated successfully.
Jan 29 12:27:13 np0005601226 systemd[1]: var-lib-containers-storage-overlay-72b01f94bbe251feb096538ea3f24492ae8766c3a19c9aaf599327ee4a95fe17-merged.mount: Deactivated successfully.
Jan 29 12:27:13 np0005601226 podman[262752]: 2026-01-29 17:27:13.763813961 +0000 UTC m=+0.093549584 container cleanup 1bdb6a9eb5cefda3c89e126a4ed7f27840853fac6aadc9842a06175a7f14c98a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b7a05385-9eda-4947-9882-d897c0742e88, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 29 12:27:13 np0005601226 systemd[1]: libpod-conmon-1bdb6a9eb5cefda3c89e126a4ed7f27840853fac6aadc9842a06175a7f14c98a.scope: Deactivated successfully.
Jan 29 12:27:13 np0005601226 nova_compute[239456]: 2026-01-29 17:27:13.788 239460 DEBUG nova.virt.libvirt.vif [None req-eb090cce-c13f-415a-a916-f10072e11ab3 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-29T17:26:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-instance-892882788',display_name='tempest-instance-892882788',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instance-892882788',id=14,image_ref='71879218-5462-43bb-aba6-6319695b24fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAHkN2iRP+g4jjJil8MXCTgLiHQht7Kr8GTkJER/vAu0Qvlo3hFW1+EBjRCA47cgRQxBRHZQms5dh3rJRoVscSuAtRwxGDdj7RqfPp4xxGzV/036ZyGtMvyD9f6jcqlTQg==',key_name='tempest-keypair-863984616',keypairs=<?>,launch_index=0,launched_at=2026-01-29T17:26:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b7e911c19f694429a9441fe3c0072af6',ramdisk_id='',reservation_id='r-jpof4br8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='71879218-5462-43bb-aba6-6319695b24fd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesBackupsTest-624255791',owner_user_name='tempest-VolumesBackupsTest-624255791-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-29T17:26:52Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='00a09ccd681d42068127585c610c2bba',uuid=ff8fb3ad-e9ef-400e-9283-ac1884d5aa67,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3824d6e0-3bd9-401d-b8dc-efb864f7d883", "address": "fa:16:3e:6f:f6:93", "network": {"id": "b7a05385-9eda-4947-9882-d897c0742e88", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-408896947-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7e911c19f694429a9441fe3c0072af6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3824d6e0-3b", "ovs_interfaceid": "3824d6e0-3bd9-401d-b8dc-efb864f7d883", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 29 12:27:13 np0005601226 nova_compute[239456]: 2026-01-29 17:27:13.789 239460 DEBUG nova.network.os_vif_util [None req-eb090cce-c13f-415a-a916-f10072e11ab3 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] Converting VIF {"id": "3824d6e0-3bd9-401d-b8dc-efb864f7d883", "address": "fa:16:3e:6f:f6:93", "network": {"id": "b7a05385-9eda-4947-9882-d897c0742e88", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-408896947-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7e911c19f694429a9441fe3c0072af6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3824d6e0-3b", "ovs_interfaceid": "3824d6e0-3bd9-401d-b8dc-efb864f7d883", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 29 12:27:13 np0005601226 nova_compute[239456]: 2026-01-29 17:27:13.790 239460 DEBUG nova.network.os_vif_util [None req-eb090cce-c13f-415a-a916-f10072e11ab3 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6f:f6:93,bridge_name='br-int',has_traffic_filtering=True,id=3824d6e0-3bd9-401d-b8dc-efb864f7d883,network=Network(b7a05385-9eda-4947-9882-d897c0742e88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3824d6e0-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 29 12:27:13 np0005601226 nova_compute[239456]: 2026-01-29 17:27:13.790 239460 DEBUG os_vif [None req-eb090cce-c13f-415a-a916-f10072e11ab3 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6f:f6:93,bridge_name='br-int',has_traffic_filtering=True,id=3824d6e0-3bd9-401d-b8dc-efb864f7d883,network=Network(b7a05385-9eda-4947-9882-d897c0742e88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3824d6e0-3b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 29 12:27:13 np0005601226 nova_compute[239456]: 2026-01-29 17:27:13.791 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:27:13 np0005601226 nova_compute[239456]: 2026-01-29 17:27:13.792 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3824d6e0-3b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:27:13 np0005601226 nova_compute[239456]: 2026-01-29 17:27:13.793 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:27:13 np0005601226 nova_compute[239456]: 2026-01-29 17:27:13.796 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 29 12:27:13 np0005601226 nova_compute[239456]: 2026-01-29 17:27:13.797 239460 INFO os_vif [None req-eb090cce-c13f-415a-a916-f10072e11ab3 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6f:f6:93,bridge_name='br-int',has_traffic_filtering=True,id=3824d6e0-3bd9-401d-b8dc-efb864f7d883,network=Network(b7a05385-9eda-4947-9882-d897c0742e88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3824d6e0-3b')#033[00m
Jan 29 12:27:13 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1473: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 190 KiB/s rd, 117 KiB/s wr, 101 op/s
Jan 29 12:27:13 np0005601226 podman[262790]: 2026-01-29 17:27:13.830400422 +0000 UTC m=+0.043787412 container remove 1bdb6a9eb5cefda3c89e126a4ed7f27840853fac6aadc9842a06175a7f14c98a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b7a05385-9eda-4947-9882-d897c0742e88, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Jan 29 12:27:13 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:27:13.835 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[901ada40-a76d-4e56-b6f2-4ad2c62af707]: (4, ('Thu Jan 29 05:27:13 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-b7a05385-9eda-4947-9882-d897c0742e88 (1bdb6a9eb5cefda3c89e126a4ed7f27840853fac6aadc9842a06175a7f14c98a)\n1bdb6a9eb5cefda3c89e126a4ed7f27840853fac6aadc9842a06175a7f14c98a\nThu Jan 29 05:27:13 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-b7a05385-9eda-4947-9882-d897c0742e88 (1bdb6a9eb5cefda3c89e126a4ed7f27840853fac6aadc9842a06175a7f14c98a)\n1bdb6a9eb5cefda3c89e126a4ed7f27840853fac6aadc9842a06175a7f14c98a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:27:13 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:27:13.836 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[5f44fd80-4a69-4939-8012-a8426a9bee68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:27:13 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:27:13.837 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb7a05385-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:27:13 np0005601226 kernel: tapb7a05385-90: left promiscuous mode
Jan 29 12:27:13 np0005601226 nova_compute[239456]: 2026-01-29 17:27:13.839 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:27:13 np0005601226 nova_compute[239456]: 2026-01-29 17:27:13.848 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:27:13 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:27:13.850 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[c58399b7-58e0-4db5-b64f-04bedfdfe4ff]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:27:13 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:27:13.863 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[34ad2f32-cc61-43b4-a2dc-fba7f928dcd8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:27:13 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:27:13.864 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[17ee2d68-d14f-4cd4-aede-e597c4a00582]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:27:13 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:27:13.877 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[a4900809-216b-4028-b2c7-702b77d79a59]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 496544, 'reachable_time': 16866, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 262823, 'error': None, 'target': 'ovnmeta-b7a05385-9eda-4947-9882-d897c0742e88', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:27:13 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:27:13.879 156164 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b7a05385-9eda-4947-9882-d897c0742e88 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 29 12:27:13 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:27:13.879 156164 DEBUG oslo.privsep.daemon [-] privsep: reply[a92027a2-9b99-41a0-8c42-03b34e899fa4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:27:13 np0005601226 systemd[1]: run-netns-ovnmeta\x2db7a05385\x2d9eda\x2d4947\x2d9882\x2dd897c0742e88.mount: Deactivated successfully.
Jan 29 12:27:14 np0005601226 nova_compute[239456]: 2026-01-29 17:27:14.062 239460 INFO nova.virt.libvirt.driver [None req-eb090cce-c13f-415a-a916-f10072e11ab3 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] Deleting instance files /var/lib/nova/instances/ff8fb3ad-e9ef-400e-9283-ac1884d5aa67_del#033[00m
Jan 29 12:27:14 np0005601226 nova_compute[239456]: 2026-01-29 17:27:14.063 239460 INFO nova.virt.libvirt.driver [None req-eb090cce-c13f-415a-a916-f10072e11ab3 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] Deletion of /var/lib/nova/instances/ff8fb3ad-e9ef-400e-9283-ac1884d5aa67_del complete#033[00m
Jan 29 12:27:14 np0005601226 nova_compute[239456]: 2026-01-29 17:27:14.236 239460 INFO nova.compute.manager [None req-eb090cce-c13f-415a-a916-f10072e11ab3 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] Took 0.73 seconds to destroy the instance on the hypervisor.#033[00m
Jan 29 12:27:14 np0005601226 nova_compute[239456]: 2026-01-29 17:27:14.237 239460 DEBUG oslo.service.loopingcall [None req-eb090cce-c13f-415a-a916-f10072e11ab3 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 29 12:27:14 np0005601226 nova_compute[239456]: 2026-01-29 17:27:14.237 239460 DEBUG nova.compute.manager [-] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 29 12:27:14 np0005601226 nova_compute[239456]: 2026-01-29 17:27:14.237 239460 DEBUG nova.network.neutron [-] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 29 12:27:15 np0005601226 nova_compute[239456]: 2026-01-29 17:27:15.768 239460 DEBUG nova.network.neutron [-] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:27:15 np0005601226 nova_compute[239456]: 2026-01-29 17:27:15.791 239460 INFO nova.compute.manager [-] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] Took 1.55 seconds to deallocate network for instance.#033[00m
Jan 29 12:27:15 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1474: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 184 KiB/s rd, 110 KiB/s wr, 128 op/s
Jan 29 12:27:15 np0005601226 nova_compute[239456]: 2026-01-29 17:27:15.955 239460 DEBUG nova.compute.manager [req-c7b5c1c2-c5ab-46a7-97ff-c734a6e00466 req-fc4e637b-3b88-4045-a742-3fecb5a2130b 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] Received event network-vif-deleted-3824d6e0-3bd9-401d-b8dc-efb864f7d883 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:27:16 np0005601226 nova_compute[239456]: 2026-01-29 17:27:16.105 239460 INFO nova.compute.manager [None req-eb090cce-c13f-415a-a916-f10072e11ab3 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] Took 0.31 seconds to detach 1 volumes for instance.#033[00m
Jan 29 12:27:16 np0005601226 nova_compute[239456]: 2026-01-29 17:27:16.141 239460 DEBUG oslo_concurrency.lockutils [None req-eb090cce-c13f-415a-a916-f10072e11ab3 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:27:16 np0005601226 nova_compute[239456]: 2026-01-29 17:27:16.142 239460 DEBUG oslo_concurrency.lockutils [None req-eb090cce-c13f-415a-a916-f10072e11ab3 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:27:16 np0005601226 nova_compute[239456]: 2026-01-29 17:27:16.188 239460 DEBUG oslo_concurrency.processutils [None req-eb090cce-c13f-415a-a916-f10072e11ab3 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:27:16 np0005601226 nova_compute[239456]: 2026-01-29 17:27:16.215 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:27:16 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e355 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:27:16 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:27:16 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1951187363' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:27:16 np0005601226 nova_compute[239456]: 2026-01-29 17:27:16.699 239460 DEBUG oslo_concurrency.processutils [None req-eb090cce-c13f-415a-a916-f10072e11ab3 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:27:16 np0005601226 nova_compute[239456]: 2026-01-29 17:27:16.704 239460 DEBUG nova.compute.provider_tree [None req-eb090cce-c13f-415a-a916-f10072e11ab3 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:27:16 np0005601226 nova_compute[239456]: 2026-01-29 17:27:16.719 239460 DEBUG nova.scheduler.client.report [None req-eb090cce-c13f-415a-a916-f10072e11ab3 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:27:16 np0005601226 nova_compute[239456]: 2026-01-29 17:27:16.737 239460 DEBUG oslo_concurrency.lockutils [None req-eb090cce-c13f-415a-a916-f10072e11ab3 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.596s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:27:16 np0005601226 nova_compute[239456]: 2026-01-29 17:27:16.763 239460 INFO nova.scheduler.client.report [None req-eb090cce-c13f-415a-a916-f10072e11ab3 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] Deleted allocations for instance ff8fb3ad-e9ef-400e-9283-ac1884d5aa67#033[00m
Jan 29 12:27:16 np0005601226 nova_compute[239456]: 2026-01-29 17:27:16.814 239460 DEBUG oslo_concurrency.lockutils [None req-eb090cce-c13f-415a-a916-f10072e11ab3 00a09ccd681d42068127585c610c2bba b7e911c19f694429a9441fe3c0072af6 - - default default] Lock "ff8fb3ad-e9ef-400e-9283-ac1884d5aa67" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.315s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:27:17 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1475: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 66 KiB/s rd, 22 KiB/s wr, 90 op/s
Jan 29 12:27:18 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:27:18 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3537831546' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:27:18 np0005601226 nova_compute[239456]: 2026-01-29 17:27:18.795 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:27:18 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e355 do_prune osdmap full prune enabled
Jan 29 12:27:18 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e356 e356: 3 total, 3 up, 3 in
Jan 29 12:27:18 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e356: 3 total, 3 up, 3 in
Jan 29 12:27:19 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1477: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 72 KiB/s rd, 21 KiB/s wr, 95 op/s
Jan 29 12:27:19 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e356 do_prune osdmap full prune enabled
Jan 29 12:27:19 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e357 e357: 3 total, 3 up, 3 in
Jan 29 12:27:19 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e357: 3 total, 3 up, 3 in
Jan 29 12:27:20 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:27:20 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2142819935' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:27:20 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e357 do_prune osdmap full prune enabled
Jan 29 12:27:20 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e358 e358: 3 total, 3 up, 3 in
Jan 29 12:27:21 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e358: 3 total, 3 up, 3 in
Jan 29 12:27:21 np0005601226 nova_compute[239456]: 2026-01-29 17:27:21.218 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:27:21 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:27:21 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e358 do_prune osdmap full prune enabled
Jan 29 12:27:21 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e359 e359: 3 total, 3 up, 3 in
Jan 29 12:27:21 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e359: 3 total, 3 up, 3 in
Jan 29 12:27:21 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1481: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 3.1 MiB/s rd, 2.2 MiB/s wr, 95 op/s
Jan 29 12:27:21 np0005601226 podman[262847]: 2026-01-29 17:27:21.877706662 +0000 UTC m=+0.045100918 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 29 12:27:21 np0005601226 podman[262848]: 2026-01-29 17:27:21.911003083 +0000 UTC m=+0.077374652 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 29 12:27:23 np0005601226 nova_compute[239456]: 2026-01-29 17:27:23.798 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:27:23 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1482: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 4.6 MiB/s rd, 3.7 MiB/s wr, 87 op/s
Jan 29 12:27:24 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e359 do_prune osdmap full prune enabled
Jan 29 12:27:24 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e360 e360: 3 total, 3 up, 3 in
Jan 29 12:27:24 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e360: 3 total, 3 up, 3 in
Jan 29 12:27:24 np0005601226 nova_compute[239456]: 2026-01-29 17:27:24.599 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:27:25 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:27:25 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3620351936' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:27:25 np0005601226 nova_compute[239456]: 2026-01-29 17:27:25.603 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:27:25 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1484: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 4.5 MiB/s rd, 4.4 MiB/s wr, 204 op/s
Jan 29 12:27:26 np0005601226 nova_compute[239456]: 2026-01-29 17:27:26.220 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:27:26 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:27:26 np0005601226 nova_compute[239456]: 2026-01-29 17:27:26.510 239460 DEBUG oslo_concurrency.lockutils [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Acquiring lock "12438fc6-4f98-42dc-a5df-a9d18dd066b7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:27:26 np0005601226 nova_compute[239456]: 2026-01-29 17:27:26.511 239460 DEBUG oslo_concurrency.lockutils [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "12438fc6-4f98-42dc-a5df-a9d18dd066b7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:27:26 np0005601226 nova_compute[239456]: 2026-01-29 17:27:26.534 239460 DEBUG nova.compute.manager [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 29 12:27:26 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e360 do_prune osdmap full prune enabled
Jan 29 12:27:26 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e361 e361: 3 total, 3 up, 3 in
Jan 29 12:27:26 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e361: 3 total, 3 up, 3 in
Jan 29 12:27:26 np0005601226 nova_compute[239456]: 2026-01-29 17:27:26.616 239460 DEBUG oslo_concurrency.lockutils [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:27:26 np0005601226 nova_compute[239456]: 2026-01-29 17:27:26.616 239460 DEBUG oslo_concurrency.lockutils [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:27:26 np0005601226 nova_compute[239456]: 2026-01-29 17:27:26.631 239460 DEBUG nova.virt.hardware [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 29 12:27:26 np0005601226 nova_compute[239456]: 2026-01-29 17:27:26.632 239460 INFO nova.compute.claims [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 29 12:27:26 np0005601226 nova_compute[239456]: 2026-01-29 17:27:26.735 239460 DEBUG oslo_concurrency.processutils [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:27:27 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:27:27 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1233788296' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:27:27 np0005601226 nova_compute[239456]: 2026-01-29 17:27:27.248 239460 DEBUG oslo_concurrency.processutils [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:27:27 np0005601226 nova_compute[239456]: 2026-01-29 17:27:27.253 239460 DEBUG nova.compute.provider_tree [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:27:27 np0005601226 nova_compute[239456]: 2026-01-29 17:27:27.269 239460 DEBUG nova.scheduler.client.report [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:27:27 np0005601226 nova_compute[239456]: 2026-01-29 17:27:27.341 239460 DEBUG oslo_concurrency.lockutils [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.725s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:27:27 np0005601226 nova_compute[239456]: 2026-01-29 17:27:27.342 239460 DEBUG nova.compute.manager [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 29 12:27:27 np0005601226 nova_compute[239456]: 2026-01-29 17:27:27.392 239460 DEBUG nova.compute.manager [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 29 12:27:27 np0005601226 nova_compute[239456]: 2026-01-29 17:27:27.392 239460 DEBUG nova.network.neutron [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 29 12:27:27 np0005601226 nova_compute[239456]: 2026-01-29 17:27:27.415 239460 INFO nova.virt.libvirt.driver [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 29 12:27:27 np0005601226 nova_compute[239456]: 2026-01-29 17:27:27.433 239460 DEBUG nova.compute.manager [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 29 12:27:27 np0005601226 nova_compute[239456]: 2026-01-29 17:27:27.480 239460 INFO nova.virt.block_device [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] Booting with volume eb13b481-d6d6-4ca2-b09d-4589f76d6297 at /dev/vda#033[00m
Jan 29 12:27:27 np0005601226 nova_compute[239456]: 2026-01-29 17:27:27.535 239460 DEBUG nova.policy [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3901089a059c4bdb8d0497398873d2f1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '420f46ae230d4c529afe366a1b780921', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 29 12:27:27 np0005601226 nova_compute[239456]: 2026-01-29 17:27:27.602 239460 DEBUG os_brick.utils [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 29 12:27:27 np0005601226 nova_compute[239456]: 2026-01-29 17:27:27.603 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:27:27 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e361 do_prune osdmap full prune enabled
Jan 29 12:27:27 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e362 e362: 3 total, 3 up, 3 in
Jan 29 12:27:27 np0005601226 nova_compute[239456]: 2026-01-29 17:27:27.615 249612 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:27:27 np0005601226 nova_compute[239456]: 2026-01-29 17:27:27.615 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[fc0bd4f1-4e65-46cc-801d-ad607dba0c37]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:27:27 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e362: 3 total, 3 up, 3 in
Jan 29 12:27:27 np0005601226 nova_compute[239456]: 2026-01-29 17:27:27.617 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:27:27 np0005601226 nova_compute[239456]: 2026-01-29 17:27:27.626 249612 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:27:27 np0005601226 nova_compute[239456]: 2026-01-29 17:27:27.626 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[c9c7862d-39ed-4430-8f53-cefd6a726e8f]: (4, ('InitiatorName=iqn.1994-05.com.redhat:29fa340538f', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:27:27 np0005601226 nova_compute[239456]: 2026-01-29 17:27:27.629 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:27:27 np0005601226 nova_compute[239456]: 2026-01-29 17:27:27.639 249612 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:27:27 np0005601226 nova_compute[239456]: 2026-01-29 17:27:27.640 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[06e7291f-503f-4613-be20-ff1b5b2b1747]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:27:27 np0005601226 nova_compute[239456]: 2026-01-29 17:27:27.642 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[1fe16684-8fa0-4e42-be70-c15565ffc688]: (4, '3d58286e-1b14-486e-8cad-0bdb2d2969c4') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:27:27 np0005601226 nova_compute[239456]: 2026-01-29 17:27:27.643 239460 DEBUG oslo_concurrency.processutils [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:27:27 np0005601226 nova_compute[239456]: 2026-01-29 17:27:27.667 239460 DEBUG oslo_concurrency.processutils [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] CMD "nvme version" returned: 0 in 0.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:27:27 np0005601226 nova_compute[239456]: 2026-01-29 17:27:27.670 239460 DEBUG os_brick.initiator.connectors.lightos [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 29 12:27:27 np0005601226 nova_compute[239456]: 2026-01-29 17:27:27.671 239460 DEBUG os_brick.initiator.connectors.lightos [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 29 12:27:27 np0005601226 nova_compute[239456]: 2026-01-29 17:27:27.671 239460 DEBUG os_brick.initiator.connectors.lightos [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 29 12:27:27 np0005601226 nova_compute[239456]: 2026-01-29 17:27:27.672 239460 DEBUG os_brick.utils [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] <== get_connector_properties: return (69ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:29fa340538f', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '3d58286e-1b14-486e-8cad-0bdb2d2969c4', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 29 12:27:27 np0005601226 nova_compute[239456]: 2026-01-29 17:27:27.673 239460 DEBUG nova.virt.block_device [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] Updating existing volume attachment record: 18256251-d243-4da9-a693-529a495ac357 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 29 12:27:27 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1487: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.8 MiB/s wr, 169 op/s
Jan 29 12:27:28 np0005601226 nova_compute[239456]: 2026-01-29 17:27:28.200 239460 DEBUG nova.network.neutron [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] Successfully created port: 745e2a89-d4b3-4291-892f-274bfa197449 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 29 12:27:28 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:27:28 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4206625357' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:27:28 np0005601226 nova_compute[239456]: 2026-01-29 17:27:28.739 239460 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769707633.737578, ff8fb3ad-e9ef-400e-9283-ac1884d5aa67 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:27:28 np0005601226 nova_compute[239456]: 2026-01-29 17:27:28.739 239460 INFO nova.compute.manager [-] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] VM Stopped (Lifecycle Event)#033[00m
Jan 29 12:27:28 np0005601226 nova_compute[239456]: 2026-01-29 17:27:28.764 239460 DEBUG nova.compute.manager [None req-763474ce-d8ae-45bc-bf24-660eef3c5868 - - - - - -] [instance: ff8fb3ad-e9ef-400e-9283-ac1884d5aa67] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:27:28 np0005601226 nova_compute[239456]: 2026-01-29 17:27:28.801 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:27:28 np0005601226 nova_compute[239456]: 2026-01-29 17:27:28.910 239460 DEBUG nova.network.neutron [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] Successfully updated port: 745e2a89-d4b3-4291-892f-274bfa197449 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 29 12:27:28 np0005601226 nova_compute[239456]: 2026-01-29 17:27:28.927 239460 DEBUG oslo_concurrency.lockutils [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Acquiring lock "refresh_cache-12438fc6-4f98-42dc-a5df-a9d18dd066b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:27:28 np0005601226 nova_compute[239456]: 2026-01-29 17:27:28.927 239460 DEBUG oslo_concurrency.lockutils [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Acquired lock "refresh_cache-12438fc6-4f98-42dc-a5df-a9d18dd066b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:27:28 np0005601226 nova_compute[239456]: 2026-01-29 17:27:28.928 239460 DEBUG nova.network.neutron [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 29 12:27:29 np0005601226 nova_compute[239456]: 2026-01-29 17:27:29.013 239460 DEBUG nova.compute.manager [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 29 12:27:29 np0005601226 nova_compute[239456]: 2026-01-29 17:27:29.016 239460 DEBUG nova.virt.libvirt.driver [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 29 12:27:29 np0005601226 nova_compute[239456]: 2026-01-29 17:27:29.016 239460 INFO nova.virt.libvirt.driver [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] Creating image(s)#033[00m
Jan 29 12:27:29 np0005601226 nova_compute[239456]: 2026-01-29 17:27:29.017 239460 DEBUG nova.virt.libvirt.driver [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Jan 29 12:27:29 np0005601226 nova_compute[239456]: 2026-01-29 17:27:29.017 239460 DEBUG nova.virt.libvirt.driver [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] Ensure instance console log exists: /var/lib/nova/instances/12438fc6-4f98-42dc-a5df-a9d18dd066b7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 29 12:27:29 np0005601226 nova_compute[239456]: 2026-01-29 17:27:29.018 239460 DEBUG oslo_concurrency.lockutils [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:27:29 np0005601226 nova_compute[239456]: 2026-01-29 17:27:29.019 239460 DEBUG oslo_concurrency.lockutils [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:27:29 np0005601226 nova_compute[239456]: 2026-01-29 17:27:29.019 239460 DEBUG oslo_concurrency.lockutils [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:27:29 np0005601226 nova_compute[239456]: 2026-01-29 17:27:29.037 239460 DEBUG nova.compute.manager [req-8b7ce7bb-5834-451a-aace-a242d8311f65 req-6d9eb997-0f20-46d9-883f-48cd184743d0 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] Received event network-changed-745e2a89-d4b3-4291-892f-274bfa197449 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:27:29 np0005601226 nova_compute[239456]: 2026-01-29 17:27:29.037 239460 DEBUG nova.compute.manager [req-8b7ce7bb-5834-451a-aace-a242d8311f65 req-6d9eb997-0f20-46d9-883f-48cd184743d0 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] Refreshing instance network info cache due to event network-changed-745e2a89-d4b3-4291-892f-274bfa197449. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 29 12:27:29 np0005601226 nova_compute[239456]: 2026-01-29 17:27:29.037 239460 DEBUG oslo_concurrency.lockutils [req-8b7ce7bb-5834-451a-aace-a242d8311f65 req-6d9eb997-0f20-46d9-883f-48cd184743d0 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "refresh_cache-12438fc6-4f98-42dc-a5df-a9d18dd066b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:27:29 np0005601226 nova_compute[239456]: 2026-01-29 17:27:29.080 239460 DEBUG nova.network.neutron [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 29 12:27:29 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:27:29 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1657838492' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:27:29 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e362 do_prune osdmap full prune enabled
Jan 29 12:27:29 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e363 e363: 3 total, 3 up, 3 in
Jan 29 12:27:29 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e363: 3 total, 3 up, 3 in
Jan 29 12:27:29 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1489: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 68 KiB/s rd, 7.3 KiB/s wr, 102 op/s
Jan 29 12:27:30 np0005601226 nova_compute[239456]: 2026-01-29 17:27:30.182 239460 DEBUG nova.network.neutron [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] Updating instance_info_cache with network_info: [{"id": "745e2a89-d4b3-4291-892f-274bfa197449", "address": "fa:16:3e:7b:cd:83", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap745e2a89-d4", "ovs_interfaceid": "745e2a89-d4b3-4291-892f-274bfa197449", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:27:30 np0005601226 nova_compute[239456]: 2026-01-29 17:27:30.204 239460 DEBUG oslo_concurrency.lockutils [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Releasing lock "refresh_cache-12438fc6-4f98-42dc-a5df-a9d18dd066b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:27:30 np0005601226 nova_compute[239456]: 2026-01-29 17:27:30.204 239460 DEBUG nova.compute.manager [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] Instance network_info: |[{"id": "745e2a89-d4b3-4291-892f-274bfa197449", "address": "fa:16:3e:7b:cd:83", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap745e2a89-d4", "ovs_interfaceid": "745e2a89-d4b3-4291-892f-274bfa197449", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 29 12:27:30 np0005601226 nova_compute[239456]: 2026-01-29 17:27:30.205 239460 DEBUG oslo_concurrency.lockutils [req-8b7ce7bb-5834-451a-aace-a242d8311f65 req-6d9eb997-0f20-46d9-883f-48cd184743d0 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquired lock "refresh_cache-12438fc6-4f98-42dc-a5df-a9d18dd066b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:27:30 np0005601226 nova_compute[239456]: 2026-01-29 17:27:30.205 239460 DEBUG nova.network.neutron [req-8b7ce7bb-5834-451a-aace-a242d8311f65 req-6d9eb997-0f20-46d9-883f-48cd184743d0 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] Refreshing network info cache for port 745e2a89-d4b3-4291-892f-274bfa197449 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 29 12:27:30 np0005601226 nova_compute[239456]: 2026-01-29 17:27:30.210 239460 DEBUG nova.virt.libvirt.driver [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] Start _get_guest_xml network_info=[{"id": "745e2a89-d4b3-4291-892f-274bfa197449", "address": "fa:16:3e:7b:cd:83", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap745e2a89-d4", "ovs_interfaceid": "745e2a89-d4b3-4291-892f-274bfa197449", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'guest_format': None, 'mount_device': '/dev/vda', 'device_type': 'disk', 'disk_bus': 'virtio', 'attachment_id': '18256251-d243-4da9-a693-529a495ac357', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-eb13b481-d6d6-4ca2-b09d-4589f76d6297', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'eb13b481-d6d6-4ca2-b09d-4589f76d6297', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '12438fc6-4f98-42dc-a5df-a9d18dd066b7', 'attached_at': '', 'detached_at': '', 'volume_id': 'eb13b481-d6d6-4ca2-b09d-4589f76d6297', 'serial': 'eb13b481-d6d6-4ca2-b09d-4589f76d6297'}, 'delete_on_termination': False, 'boot_index': 0, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 29 12:27:30 np0005601226 nova_compute[239456]: 2026-01-29 17:27:30.215 239460 WARNING nova.virt.libvirt.driver [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 29 12:27:30 np0005601226 nova_compute[239456]: 2026-01-29 17:27:30.220 239460 DEBUG nova.virt.libvirt.host [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 29 12:27:30 np0005601226 nova_compute[239456]: 2026-01-29 17:27:30.220 239460 DEBUG nova.virt.libvirt.host [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 29 12:27:30 np0005601226 nova_compute[239456]: 2026-01-29 17:27:30.224 239460 DEBUG nova.virt.libvirt.host [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 29 12:27:30 np0005601226 nova_compute[239456]: 2026-01-29 17:27:30.224 239460 DEBUG nova.virt.libvirt.host [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 29 12:27:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:27:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3074868182' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:27:30 np0005601226 nova_compute[239456]: 2026-01-29 17:27:30.225 239460 DEBUG nova.virt.libvirt.driver [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 29 12:27:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:27:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3074868182' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:27:30 np0005601226 nova_compute[239456]: 2026-01-29 17:27:30.225 239460 DEBUG nova.virt.hardware [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-29T17:13:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='e367d44e-e23e-4b8e-90d1-56d09c8403b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 29 12:27:30 np0005601226 nova_compute[239456]: 2026-01-29 17:27:30.226 239460 DEBUG nova.virt.hardware [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 29 12:27:30 np0005601226 nova_compute[239456]: 2026-01-29 17:27:30.226 239460 DEBUG nova.virt.hardware [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 29 12:27:30 np0005601226 nova_compute[239456]: 2026-01-29 17:27:30.227 239460 DEBUG nova.virt.hardware [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 29 12:27:30 np0005601226 nova_compute[239456]: 2026-01-29 17:27:30.227 239460 DEBUG nova.virt.hardware [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 29 12:27:30 np0005601226 nova_compute[239456]: 2026-01-29 17:27:30.227 239460 DEBUG nova.virt.hardware [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 29 12:27:30 np0005601226 nova_compute[239456]: 2026-01-29 17:27:30.228 239460 DEBUG nova.virt.hardware [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 29 12:27:30 np0005601226 nova_compute[239456]: 2026-01-29 17:27:30.228 239460 DEBUG nova.virt.hardware [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 29 12:27:30 np0005601226 nova_compute[239456]: 2026-01-29 17:27:30.228 239460 DEBUG nova.virt.hardware [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 29 12:27:30 np0005601226 nova_compute[239456]: 2026-01-29 17:27:30.229 239460 DEBUG nova.virt.hardware [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 29 12:27:30 np0005601226 nova_compute[239456]: 2026-01-29 17:27:30.229 239460 DEBUG nova.virt.hardware [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 29 12:27:30 np0005601226 nova_compute[239456]: 2026-01-29 17:27:30.433 239460 DEBUG nova.storage.rbd_utils [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] rbd image 12438fc6-4f98-42dc-a5df-a9d18dd066b7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:27:30 np0005601226 nova_compute[239456]: 2026-01-29 17:27:30.438 239460 DEBUG oslo_concurrency.processutils [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:27:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:27:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3385151959' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:27:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:27:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3385151959' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:27:30 np0005601226 nova_compute[239456]: 2026-01-29 17:27:30.619 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:27:30 np0005601226 nova_compute[239456]: 2026-01-29 17:27:30.620 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:27:30 np0005601226 nova_compute[239456]: 2026-01-29 17:27:30.620 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 29 12:27:30 np0005601226 nova_compute[239456]: 2026-01-29 17:27:30.620 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:27:30 np0005601226 nova_compute[239456]: 2026-01-29 17:27:30.654 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:27:30 np0005601226 nova_compute[239456]: 2026-01-29 17:27:30.655 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:27:30 np0005601226 nova_compute[239456]: 2026-01-29 17:27:30.655 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:27:30 np0005601226 nova_compute[239456]: 2026-01-29 17:27:30.655 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 29 12:27:30 np0005601226 nova_compute[239456]: 2026-01-29 17:27:30.656 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:27:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:27:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2755772543' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:27:31 np0005601226 nova_compute[239456]: 2026-01-29 17:27:31.007 239460 DEBUG oslo_concurrency.processutils [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:27:31 np0005601226 nova_compute[239456]: 2026-01-29 17:27:31.138 239460 DEBUG os_brick.encryptors [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Using volume encryption metadata '{'encryption_key_id': '0479e55a-f1cd-42f9-b9b2-acc06f7af9c8', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-eb13b481-d6d6-4ca2-b09d-4589f76d6297', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'eb13b481-d6d6-4ca2-b09d-4589f76d6297', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '12438fc6-4f98-42dc-a5df-a9d18dd066b7', 'attached_at': '', 'detached_at': '', 'volume_id': 'eb13b481-d6d6-4ca2-b09d-4589f76d6297', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Jan 29 12:27:31 np0005601226 nova_compute[239456]: 2026-01-29 17:27:31.142 239460 DEBUG barbicanclient.client [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163#033[00m
Jan 29 12:27:31 np0005601226 nova_compute[239456]: 2026-01-29 17:27:31.162 239460 DEBUG barbicanclient.v1.secrets [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/0479e55a-f1cd-42f9-b9b2-acc06f7af9c8 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514#033[00m
Jan 29 12:27:31 np0005601226 nova_compute[239456]: 2026-01-29 17:27:31.163 239460 INFO barbicanclient.base [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Calculated Secrets uuid ref: secrets/0479e55a-f1cd-42f9-b9b2-acc06f7af9c8#033[00m
Jan 29 12:27:31 np0005601226 nova_compute[239456]: 2026-01-29 17:27:31.205 239460 DEBUG barbicanclient.client [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:27:31 np0005601226 nova_compute[239456]: 2026-01-29 17:27:31.207 239460 INFO barbicanclient.base [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Calculated Secrets uuid ref: secrets/0479e55a-f1cd-42f9-b9b2-acc06f7af9c8#033[00m
Jan 29 12:27:31 np0005601226 nova_compute[239456]: 2026-01-29 17:27:31.223 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:27:31 np0005601226 nova_compute[239456]: 2026-01-29 17:27:31.231 239460 DEBUG barbicanclient.client [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:27:31 np0005601226 nova_compute[239456]: 2026-01-29 17:27:31.232 239460 INFO barbicanclient.base [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Calculated Secrets uuid ref: secrets/0479e55a-f1cd-42f9-b9b2-acc06f7af9c8#033[00m
Jan 29 12:27:31 np0005601226 nova_compute[239456]: 2026-01-29 17:27:31.251 239460 DEBUG barbicanclient.client [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:27:31 np0005601226 nova_compute[239456]: 2026-01-29 17:27:31.252 239460 INFO barbicanclient.base [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Calculated Secrets uuid ref: secrets/0479e55a-f1cd-42f9-b9b2-acc06f7af9c8#033[00m
Jan 29 12:27:31 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:27:31 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2070036548' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:27:31 np0005601226 nova_compute[239456]: 2026-01-29 17:27:31.272 239460 DEBUG barbicanclient.client [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:27:31 np0005601226 nova_compute[239456]: 2026-01-29 17:27:31.273 239460 INFO barbicanclient.base [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Calculated Secrets uuid ref: secrets/0479e55a-f1cd-42f9-b9b2-acc06f7af9c8#033[00m
Jan 29 12:27:31 np0005601226 nova_compute[239456]: 2026-01-29 17:27:31.288 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.633s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:27:31 np0005601226 nova_compute[239456]: 2026-01-29 17:27:31.290 239460 DEBUG barbicanclient.client [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:27:31 np0005601226 nova_compute[239456]: 2026-01-29 17:27:31.291 239460 INFO barbicanclient.base [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Calculated Secrets uuid ref: secrets/0479e55a-f1cd-42f9-b9b2-acc06f7af9c8#033[00m
Jan 29 12:27:31 np0005601226 nova_compute[239456]: 2026-01-29 17:27:31.309 239460 DEBUG barbicanclient.client [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:27:31 np0005601226 nova_compute[239456]: 2026-01-29 17:27:31.309 239460 INFO barbicanclient.base [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Calculated Secrets uuid ref: secrets/0479e55a-f1cd-42f9-b9b2-acc06f7af9c8#033[00m
Jan 29 12:27:31 np0005601226 nova_compute[239456]: 2026-01-29 17:27:31.327 239460 DEBUG barbicanclient.client [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:27:31 np0005601226 nova_compute[239456]: 2026-01-29 17:27:31.327 239460 INFO barbicanclient.base [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Calculated Secrets uuid ref: secrets/0479e55a-f1cd-42f9-b9b2-acc06f7af9c8#033[00m
Jan 29 12:27:31 np0005601226 nova_compute[239456]: 2026-01-29 17:27:31.348 239460 DEBUG barbicanclient.client [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:27:31 np0005601226 nova_compute[239456]: 2026-01-29 17:27:31.349 239460 INFO barbicanclient.base [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Calculated Secrets uuid ref: secrets/0479e55a-f1cd-42f9-b9b2-acc06f7af9c8#033[00m
Jan 29 12:27:31 np0005601226 nova_compute[239456]: 2026-01-29 17:27:31.385 239460 DEBUG barbicanclient.client [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:27:31 np0005601226 nova_compute[239456]: 2026-01-29 17:27:31.385 239460 INFO barbicanclient.base [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Calculated Secrets uuid ref: secrets/0479e55a-f1cd-42f9-b9b2-acc06f7af9c8#033[00m
Jan 29 12:27:31 np0005601226 nova_compute[239456]: 2026-01-29 17:27:31.419 239460 DEBUG barbicanclient.client [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:27:31 np0005601226 nova_compute[239456]: 2026-01-29 17:27:31.419 239460 INFO barbicanclient.base [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Calculated Secrets uuid ref: secrets/0479e55a-f1cd-42f9-b9b2-acc06f7af9c8#033[00m
Jan 29 12:27:31 np0005601226 nova_compute[239456]: 2026-01-29 17:27:31.438 239460 DEBUG barbicanclient.client [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:27:31 np0005601226 nova_compute[239456]: 2026-01-29 17:27:31.438 239460 INFO barbicanclient.base [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Calculated Secrets uuid ref: secrets/0479e55a-f1cd-42f9-b9b2-acc06f7af9c8#033[00m
Jan 29 12:27:31 np0005601226 nova_compute[239456]: 2026-01-29 17:27:31.454 239460 WARNING nova.virt.libvirt.driver [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 29 12:27:31 np0005601226 nova_compute[239456]: 2026-01-29 17:27:31.454 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4409MB free_disk=59.98823523428291GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 29 12:27:31 np0005601226 nova_compute[239456]: 2026-01-29 17:27:31.455 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:27:31 np0005601226 nova_compute[239456]: 2026-01-29 17:27:31.455 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:27:31 np0005601226 nova_compute[239456]: 2026-01-29 17:27:31.456 239460 DEBUG barbicanclient.client [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:27:31 np0005601226 nova_compute[239456]: 2026-01-29 17:27:31.456 239460 INFO barbicanclient.base [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Calculated Secrets uuid ref: secrets/0479e55a-f1cd-42f9-b9b2-acc06f7af9c8#033[00m
Jan 29 12:27:31 np0005601226 nova_compute[239456]: 2026-01-29 17:27:31.473 239460 DEBUG barbicanclient.client [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:27:31 np0005601226 nova_compute[239456]: 2026-01-29 17:27:31.474 239460 INFO barbicanclient.base [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Calculated Secrets uuid ref: secrets/0479e55a-f1cd-42f9-b9b2-acc06f7af9c8#033[00m
Jan 29 12:27:31 np0005601226 nova_compute[239456]: 2026-01-29 17:27:31.492 239460 DEBUG barbicanclient.client [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:27:31 np0005601226 nova_compute[239456]: 2026-01-29 17:27:31.492 239460 INFO barbicanclient.base [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Calculated Secrets uuid ref: secrets/0479e55a-f1cd-42f9-b9b2-acc06f7af9c8#033[00m
Jan 29 12:27:31 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:27:31 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e363 do_prune osdmap full prune enabled
Jan 29 12:27:31 np0005601226 nova_compute[239456]: 2026-01-29 17:27:31.511 239460 DEBUG barbicanclient.client [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:27:31 np0005601226 nova_compute[239456]: 2026-01-29 17:27:31.512 239460 DEBUG nova.virt.libvirt.host [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Secret XML: <secret ephemeral="no" private="no">
Jan 29 12:27:31 np0005601226 nova_compute[239456]:  <usage type="volume">
Jan 29 12:27:31 np0005601226 nova_compute[239456]:    <volume>eb13b481-d6d6-4ca2-b09d-4589f76d6297</volume>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:  </usage>
Jan 29 12:27:31 np0005601226 nova_compute[239456]: </secret>
Jan 29 12:27:31 np0005601226 nova_compute[239456]: create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131#033[00m
Jan 29 12:27:31 np0005601226 nova_compute[239456]: 2026-01-29 17:27:31.520 239460 DEBUG nova.network.neutron [req-8b7ce7bb-5834-451a-aace-a242d8311f65 req-6d9eb997-0f20-46d9-883f-48cd184743d0 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] Updated VIF entry in instance network info cache for port 745e2a89-d4b3-4291-892f-274bfa197449. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 29 12:27:31 np0005601226 nova_compute[239456]: 2026-01-29 17:27:31.521 239460 DEBUG nova.network.neutron [req-8b7ce7bb-5834-451a-aace-a242d8311f65 req-6d9eb997-0f20-46d9-883f-48cd184743d0 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] Updating instance_info_cache with network_info: [{"id": "745e2a89-d4b3-4291-892f-274bfa197449", "address": "fa:16:3e:7b:cd:83", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap745e2a89-d4", "ovs_interfaceid": "745e2a89-d4b3-4291-892f-274bfa197449", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:27:31 np0005601226 nova_compute[239456]: 2026-01-29 17:27:31.540 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Instance 12438fc6-4f98-42dc-a5df-a9d18dd066b7 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 29 12:27:31 np0005601226 nova_compute[239456]: 2026-01-29 17:27:31.540 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 29 12:27:31 np0005601226 nova_compute[239456]: 2026-01-29 17:27:31.541 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 29 12:27:31 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e364 e364: 3 total, 3 up, 3 in
Jan 29 12:27:31 np0005601226 nova_compute[239456]: 2026-01-29 17:27:31.570 239460 DEBUG oslo_concurrency.lockutils [req-8b7ce7bb-5834-451a-aace-a242d8311f65 req-6d9eb997-0f20-46d9-883f-48cd184743d0 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Releasing lock "refresh_cache-12438fc6-4f98-42dc-a5df-a9d18dd066b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:27:31 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e364: 3 total, 3 up, 3 in
Jan 29 12:27:31 np0005601226 nova_compute[239456]: 2026-01-29 17:27:31.601 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:27:31 np0005601226 nova_compute[239456]: 2026-01-29 17:27:31.652 239460 DEBUG nova.virt.libvirt.vif [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-29T17:27:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-248434708',display_name='tempest-TestVolumeBootPattern-server-248434708',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-248434708',id=15,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='420f46ae230d4c529afe366a1b780921',ramdisk_id='',reservation_id='r-nr3o4xph',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1871389491',owner_user_name='tempest-TestVolumeBootPattern-1871389491-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-29T17:27:27Z,user_data=None,user_id='3901089a059c4bdb8d0497398873d2f1',uuid=12438fc6-4f98-42dc-a5df-a9d18dd066b7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "745e2a89-d4b3-4291-892f-274bfa197449", "address": "fa:16:3e:7b:cd:83", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap745e2a89-d4", "ovs_interfaceid": "745e2a89-d4b3-4291-892f-274bfa197449", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 29 12:27:31 np0005601226 nova_compute[239456]: 2026-01-29 17:27:31.653 239460 DEBUG nova.network.os_vif_util [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Converting VIF {"id": "745e2a89-d4b3-4291-892f-274bfa197449", "address": "fa:16:3e:7b:cd:83", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap745e2a89-d4", "ovs_interfaceid": "745e2a89-d4b3-4291-892f-274bfa197449", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 29 12:27:31 np0005601226 nova_compute[239456]: 2026-01-29 17:27:31.654 239460 DEBUG nova.network.os_vif_util [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7b:cd:83,bridge_name='br-int',has_traffic_filtering=True,id=745e2a89-d4b3-4291-892f-274bfa197449,network=Network(3c08c304-2b32-4b44-ac2b-279bb8b2403b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap745e2a89-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 29 12:27:31 np0005601226 nova_compute[239456]: 2026-01-29 17:27:31.655 239460 DEBUG nova.objects.instance [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lazy-loading 'pci_devices' on Instance uuid 12438fc6-4f98-42dc-a5df-a9d18dd066b7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:27:31 np0005601226 nova_compute[239456]: 2026-01-29 17:27:31.704 239460 DEBUG nova.virt.libvirt.driver [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] End _get_guest_xml xml=<domain type="kvm">
Jan 29 12:27:31 np0005601226 nova_compute[239456]:  <uuid>12438fc6-4f98-42dc-a5df-a9d18dd066b7</uuid>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:  <name>instance-0000000f</name>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:  <memory>131072</memory>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:  <vcpu>1</vcpu>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:  <metadata>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 29 12:27:31 np0005601226 nova_compute[239456]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:      <nova:name>tempest-TestVolumeBootPattern-server-248434708</nova:name>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:      <nova:creationTime>2026-01-29 17:27:30</nova:creationTime>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:      <nova:flavor name="m1.nano">
Jan 29 12:27:31 np0005601226 nova_compute[239456]:        <nova:memory>128</nova:memory>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:        <nova:disk>1</nova:disk>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:        <nova:swap>0</nova:swap>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:        <nova:ephemeral>0</nova:ephemeral>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:        <nova:vcpus>1</nova:vcpus>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:      </nova:flavor>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:      <nova:owner>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:        <nova:user uuid="3901089a059c4bdb8d0497398873d2f1">tempest-TestVolumeBootPattern-1871389491-project-member</nova:user>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:        <nova:project uuid="420f46ae230d4c529afe366a1b780921">tempest-TestVolumeBootPattern-1871389491</nova:project>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:      </nova:owner>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:      <nova:ports>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:        <nova:port uuid="745e2a89-d4b3-4291-892f-274bfa197449">
Jan 29 12:27:31 np0005601226 nova_compute[239456]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:        </nova:port>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:      </nova:ports>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:    </nova:instance>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:  </metadata>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:  <sysinfo type="smbios">
Jan 29 12:27:31 np0005601226 nova_compute[239456]:    <system>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:      <entry name="manufacturer">RDO</entry>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:      <entry name="product">OpenStack Compute</entry>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:      <entry name="serial">12438fc6-4f98-42dc-a5df-a9d18dd066b7</entry>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:      <entry name="uuid">12438fc6-4f98-42dc-a5df-a9d18dd066b7</entry>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:      <entry name="family">Virtual Machine</entry>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:    </system>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:  </sysinfo>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:  <os>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:    <boot dev="hd"/>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:    <smbios mode="sysinfo"/>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:  </os>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:  <features>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:    <acpi/>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:    <apic/>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:    <vmcoreinfo/>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:  </features>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:  <clock offset="utc">
Jan 29 12:27:31 np0005601226 nova_compute[239456]:    <timer name="pit" tickpolicy="delay"/>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:    <timer name="hpet" present="no"/>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:  </clock>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:  <cpu mode="host-model" match="exact">
Jan 29 12:27:31 np0005601226 nova_compute[239456]:    <topology sockets="1" cores="1" threads="1"/>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:  </cpu>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:  <devices>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:    <disk type="network" device="cdrom">
Jan 29 12:27:31 np0005601226 nova_compute[239456]:      <driver type="raw" cache="none"/>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:      <source protocol="rbd" name="vms/12438fc6-4f98-42dc-a5df-a9d18dd066b7_disk.config">
Jan 29 12:27:31 np0005601226 nova_compute[239456]:        <host name="192.168.122.100" port="6789"/>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:      </source>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:      <auth username="openstack">
Jan 29 12:27:31 np0005601226 nova_compute[239456]:        <secret type="ceph" uuid="cc5c72e3-31e0-58b9-8731-456117d38f4a"/>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:      </auth>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:      <target dev="sda" bus="sata"/>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:    </disk>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:    <disk type="network" device="disk">
Jan 29 12:27:31 np0005601226 nova_compute[239456]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:      <source protocol="rbd" name="volumes/volume-eb13b481-d6d6-4ca2-b09d-4589f76d6297">
Jan 29 12:27:31 np0005601226 nova_compute[239456]:        <host name="192.168.122.100" port="6789"/>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:      </source>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:      <auth username="openstack">
Jan 29 12:27:31 np0005601226 nova_compute[239456]:        <secret type="ceph" uuid="cc5c72e3-31e0-58b9-8731-456117d38f4a"/>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:      </auth>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:      <target dev="vda" bus="virtio"/>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:      <serial>eb13b481-d6d6-4ca2-b09d-4589f76d6297</serial>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:      <encryption format="luks">
Jan 29 12:27:31 np0005601226 nova_compute[239456]:        <secret type="passphrase" uuid="354a4e16-473d-468e-ae5a-1122735a3e02"/>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:      </encryption>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:    </disk>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:    <interface type="ethernet">
Jan 29 12:27:31 np0005601226 nova_compute[239456]:      <mac address="fa:16:3e:7b:cd:83"/>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:      <model type="virtio"/>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:      <driver name="vhost" rx_queue_size="512"/>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:      <mtu size="1442"/>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:      <target dev="tap745e2a89-d4"/>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:    </interface>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:    <serial type="pty">
Jan 29 12:27:31 np0005601226 nova_compute[239456]:      <log file="/var/lib/nova/instances/12438fc6-4f98-42dc-a5df-a9d18dd066b7/console.log" append="off"/>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:    </serial>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:    <video>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:      <model type="virtio"/>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:    </video>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:    <input type="tablet" bus="usb"/>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:    <rng model="virtio">
Jan 29 12:27:31 np0005601226 nova_compute[239456]:      <backend model="random">/dev/urandom</backend>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:    </rng>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root"/>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:    <controller type="usb" index="0"/>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:    <memballoon model="virtio">
Jan 29 12:27:31 np0005601226 nova_compute[239456]:      <stats period="10"/>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:    </memballoon>
Jan 29 12:27:31 np0005601226 nova_compute[239456]:  </devices>
Jan 29 12:27:31 np0005601226 nova_compute[239456]: </domain>
Jan 29 12:27:31 np0005601226 nova_compute[239456]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 29 12:27:31 np0005601226 nova_compute[239456]: 2026-01-29 17:27:31.705 239460 DEBUG nova.compute.manager [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] Preparing to wait for external event network-vif-plugged-745e2a89-d4b3-4291-892f-274bfa197449 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 29 12:27:31 np0005601226 nova_compute[239456]: 2026-01-29 17:27:31.705 239460 DEBUG oslo_concurrency.lockutils [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Acquiring lock "12438fc6-4f98-42dc-a5df-a9d18dd066b7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:27:31 np0005601226 nova_compute[239456]: 2026-01-29 17:27:31.705 239460 DEBUG oslo_concurrency.lockutils [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "12438fc6-4f98-42dc-a5df-a9d18dd066b7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:27:31 np0005601226 nova_compute[239456]: 2026-01-29 17:27:31.706 239460 DEBUG oslo_concurrency.lockutils [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "12438fc6-4f98-42dc-a5df-a9d18dd066b7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:27:31 np0005601226 nova_compute[239456]: 2026-01-29 17:27:31.706 239460 DEBUG nova.virt.libvirt.vif [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-29T17:27:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-248434708',display_name='tempest-TestVolumeBootPattern-server-248434708',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-248434708',id=15,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='420f46ae230d4c529afe366a1b780921',ramdisk_id='',reservation_id='r-nr3o4xph',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1871389491',owner_user_name='tempest-TestVolumeBootPattern-1871389491-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-29T17:27:27Z,user_data=None,user_id='3901089a059c4bdb8d0497398873d2f1',uuid=12438fc6-4f98-42dc-a5df-a9d18dd066b7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "745e2a89-d4b3-4291-892f-274bfa197449", "address": "fa:16:3e:7b:cd:83", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap745e2a89-d4", "ovs_interfaceid": "745e2a89-d4b3-4291-892f-274bfa197449", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 29 12:27:31 np0005601226 nova_compute[239456]: 2026-01-29 17:27:31.706 239460 DEBUG nova.network.os_vif_util [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Converting VIF {"id": "745e2a89-d4b3-4291-892f-274bfa197449", "address": "fa:16:3e:7b:cd:83", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap745e2a89-d4", "ovs_interfaceid": "745e2a89-d4b3-4291-892f-274bfa197449", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 29 12:27:31 np0005601226 nova_compute[239456]: 2026-01-29 17:27:31.707 239460 DEBUG nova.network.os_vif_util [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7b:cd:83,bridge_name='br-int',has_traffic_filtering=True,id=745e2a89-d4b3-4291-892f-274bfa197449,network=Network(3c08c304-2b32-4b44-ac2b-279bb8b2403b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap745e2a89-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 29 12:27:31 np0005601226 nova_compute[239456]: 2026-01-29 17:27:31.707 239460 DEBUG os_vif [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:cd:83,bridge_name='br-int',has_traffic_filtering=True,id=745e2a89-d4b3-4291-892f-274bfa197449,network=Network(3c08c304-2b32-4b44-ac2b-279bb8b2403b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap745e2a89-d4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 29 12:27:31 np0005601226 nova_compute[239456]: 2026-01-29 17:27:31.708 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:27:31 np0005601226 nova_compute[239456]: 2026-01-29 17:27:31.708 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:27:31 np0005601226 nova_compute[239456]: 2026-01-29 17:27:31.709 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 29 12:27:31 np0005601226 nova_compute[239456]: 2026-01-29 17:27:31.711 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:27:31 np0005601226 nova_compute[239456]: 2026-01-29 17:27:31.711 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap745e2a89-d4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:27:31 np0005601226 nova_compute[239456]: 2026-01-29 17:27:31.712 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap745e2a89-d4, col_values=(('external_ids', {'iface-id': '745e2a89-d4b3-4291-892f-274bfa197449', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7b:cd:83', 'vm-uuid': '12438fc6-4f98-42dc-a5df-a9d18dd066b7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:27:31 np0005601226 nova_compute[239456]: 2026-01-29 17:27:31.713 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:27:31 np0005601226 NetworkManager[49020]: <info>  [1769707651.7144] manager: (tap745e2a89-d4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/81)
Jan 29 12:27:31 np0005601226 nova_compute[239456]: 2026-01-29 17:27:31.716 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 29 12:27:31 np0005601226 nova_compute[239456]: 2026-01-29 17:27:31.717 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:27:31 np0005601226 nova_compute[239456]: 2026-01-29 17:27:31.718 239460 INFO os_vif [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:cd:83,bridge_name='br-int',has_traffic_filtering=True,id=745e2a89-d4b3-4291-892f-274bfa197449,network=Network(3c08c304-2b32-4b44-ac2b-279bb8b2403b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap745e2a89-d4')#033[00m
Jan 29 12:27:31 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1491: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 101 KiB/s rd, 5.4 MiB/s wr, 156 op/s
Jan 29 12:27:31 np0005601226 nova_compute[239456]: 2026-01-29 17:27:31.838 239460 DEBUG nova.virt.libvirt.driver [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:27:31 np0005601226 nova_compute[239456]: 2026-01-29 17:27:31.840 239460 DEBUG nova.virt.libvirt.driver [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:27:31 np0005601226 nova_compute[239456]: 2026-01-29 17:27:31.840 239460 DEBUG nova.virt.libvirt.driver [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] No VIF found with MAC fa:16:3e:7b:cd:83, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 29 12:27:31 np0005601226 nova_compute[239456]: 2026-01-29 17:27:31.842 239460 INFO nova.virt.libvirt.driver [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] Using config drive#033[00m
Jan 29 12:27:31 np0005601226 nova_compute[239456]: 2026-01-29 17:27:31.877 239460 DEBUG nova.storage.rbd_utils [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] rbd image 12438fc6-4f98-42dc-a5df-a9d18dd066b7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:27:32 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:27:32 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/197885823' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:27:32 np0005601226 nova_compute[239456]: 2026-01-29 17:27:32.188 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:27:32 np0005601226 nova_compute[239456]: 2026-01-29 17:27:32.193 239460 DEBUG nova.compute.provider_tree [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 29 12:27:32 np0005601226 nova_compute[239456]: 2026-01-29 17:27:32.263 239460 DEBUG nova.scheduler.client.report [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 29 12:27:32 np0005601226 nova_compute[239456]: 2026-01-29 17:27:32.328 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 29 12:27:32 np0005601226 nova_compute[239456]: 2026-01-29 17:27:32.328 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.873s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 12:27:32 np0005601226 nova_compute[239456]: 2026-01-29 17:27:32.575 239460 INFO nova.virt.libvirt.driver [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] Creating config drive at /var/lib/nova/instances/12438fc6-4f98-42dc-a5df-a9d18dd066b7/disk.config
Jan 29 12:27:32 np0005601226 nova_compute[239456]: 2026-01-29 17:27:32.578 239460 DEBUG oslo_concurrency.processutils [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/12438fc6-4f98-42dc-a5df-a9d18dd066b7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqh2_9nxy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 29 12:27:32 np0005601226 nova_compute[239456]: 2026-01-29 17:27:32.698 239460 DEBUG oslo_concurrency.processutils [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/12438fc6-4f98-42dc-a5df-a9d18dd066b7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqh2_9nxy" returned: 0 in 0.120s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 29 12:27:32 np0005601226 nova_compute[239456]: 2026-01-29 17:27:32.717 239460 DEBUG nova.storage.rbd_utils [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] rbd image 12438fc6-4f98-42dc-a5df-a9d18dd066b7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 29 12:27:32 np0005601226 nova_compute[239456]: 2026-01-29 17:27:32.720 239460 DEBUG oslo_concurrency.processutils [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/12438fc6-4f98-42dc-a5df-a9d18dd066b7/disk.config 12438fc6-4f98-42dc-a5df-a9d18dd066b7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 29 12:27:33 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:27:33 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2980358564' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:27:33 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1492: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 108 KiB/s rd, 19 MiB/s wr, 164 op/s
Jan 29 12:27:33 np0005601226 nova_compute[239456]: 2026-01-29 17:27:33.922 239460 DEBUG oslo_concurrency.processutils [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/12438fc6-4f98-42dc-a5df-a9d18dd066b7/disk.config 12438fc6-4f98-42dc-a5df-a9d18dd066b7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.201s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 29 12:27:33 np0005601226 nova_compute[239456]: 2026-01-29 17:27:33.922 239460 INFO nova.virt.libvirt.driver [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] Deleting local config drive /var/lib/nova/instances/12438fc6-4f98-42dc-a5df-a9d18dd066b7/disk.config because it was imported into RBD.
Jan 29 12:27:33 np0005601226 kernel: tap745e2a89-d4: entered promiscuous mode
Jan 29 12:27:33 np0005601226 NetworkManager[49020]: <info>  [1769707653.9742] manager: (tap745e2a89-d4): new Tun device (/org/freedesktop/NetworkManager/Devices/82)
Jan 29 12:27:33 np0005601226 ovn_controller[145556]: 2026-01-29T17:27:33Z|00147|binding|INFO|Claiming lport 745e2a89-d4b3-4291-892f-274bfa197449 for this chassis.
Jan 29 12:27:33 np0005601226 ovn_controller[145556]: 2026-01-29T17:27:33Z|00148|binding|INFO|745e2a89-d4b3-4291-892f-274bfa197449: Claiming fa:16:3e:7b:cd:83 10.100.0.9
Jan 29 12:27:33 np0005601226 nova_compute[239456]: 2026-01-29 17:27:33.974 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:27:33 np0005601226 ovn_controller[145556]: 2026-01-29T17:27:33Z|00149|binding|INFO|Setting lport 745e2a89-d4b3-4291-892f-274bfa197449 ovn-installed in OVS
Jan 29 12:27:33 np0005601226 nova_compute[239456]: 2026-01-29 17:27:33.993 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:27:33 np0005601226 nova_compute[239456]: 2026-01-29 17:27:33.996 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:27:33 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:27:33.998 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:cd:83 10.100.0.9'], port_security=['fa:16:3e:7b:cd:83 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '12438fc6-4f98-42dc-a5df-a9d18dd066b7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3c08c304-2b32-4b44-ac2b-279bb8b2403b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '420f46ae230d4c529afe366a1b780921', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9be82e42-3d47-49cf-9a44-d003a5c81174', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=43b4da70-6867-4e05-b172-1e52c878ce1d, chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], logical_port=745e2a89-d4b3-4291-892f-274bfa197449) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 29 12:27:33 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:27:33.999 155625 INFO neutron.agent.ovn.metadata.agent [-] Port 745e2a89-d4b3-4291-892f-274bfa197449 in datapath 3c08c304-2b32-4b44-ac2b-279bb8b2403b bound to our chassis
Jan 29 12:27:34 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:27:34.000 155625 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3c08c304-2b32-4b44-ac2b-279bb8b2403b
Jan 29 12:27:34 np0005601226 systemd-udevd[263079]: Network interface NamePolicy= disabled on kernel command line.
Jan 29 12:27:34 np0005601226 ovn_controller[145556]: 2026-01-29T17:27:34Z|00150|binding|INFO|Setting lport 745e2a89-d4b3-4291-892f-274bfa197449 up in Southbound
Jan 29 12:27:34 np0005601226 NetworkManager[49020]: <info>  [1769707654.0137] device (tap745e2a89-d4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 29 12:27:34 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:27:34.011 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[ba1e2d43-7a34-4f9b-98fc-283a387db1a7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 29 12:27:34 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:27:34.011 155625 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3c08c304-21 in ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 29 12:27:34 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:27:34.014 246354 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3c08c304-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 29 12:27:34 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:27:34.014 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[33daaa67-1937-452c-97bc-da0022c880c6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 29 12:27:34 np0005601226 NetworkManager[49020]: <info>  [1769707654.0145] device (tap745e2a89-d4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 29 12:27:34 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:27:34.015 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[16ef21ec-c38d-4445-a67a-afe6aaa9d586]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 29 12:27:34 np0005601226 systemd-machined[207561]: New machine qemu-15-instance-0000000f.
Jan 29 12:27:34 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:27:34.028 156164 DEBUG oslo.privsep.daemon [-] privsep: reply[fbe7c461-9d42-498a-8c6f-c9ac9a02651a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 29 12:27:34 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:27:34.044 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[055f884e-ee7f-444f-857b-4069535840be]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 29 12:27:34 np0005601226 systemd[1]: Started Virtual Machine qemu-15-instance-0000000f.
Jan 29 12:27:34 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:27:34.074 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[e1b3c45c-d534-46b4-9de5-ee9dadd4e965]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 29 12:27:34 np0005601226 NetworkManager[49020]: <info>  [1769707654.0803] manager: (tap3c08c304-20): new Veth device (/org/freedesktop/NetworkManager/Devices/83)
Jan 29 12:27:34 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:27:34.081 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[dbe8b1ed-f919-4ad3-a004-18d58e2f36ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 29 12:27:34 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:27:34.113 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[c996218e-eebf-48c2-9fce-83fea5906eef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 29 12:27:34 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:27:34.117 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[3a90c701-319e-49e5-b310-1de5b0be25b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 29 12:27:34 np0005601226 NetworkManager[49020]: <info>  [1769707654.1384] device (tap3c08c304-20): carrier: link connected
Jan 29 12:27:34 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:27:34.144 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[86c3e889-8438-41ff-9afb-70ddaf441137]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 29 12:27:34 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:27:34.161 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[4d5f9805-5cd8-4033-abeb-c29ce36a4ea2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3c08c304-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:94:51:ac'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 51], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 500764, 'reachable_time': 19926, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263115, 'error': None, 'target': 'ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 29 12:27:34 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:27:34.174 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[f793fe0a-0d24-445d-9fea-02ddc7793990]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe94:51ac'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 500764, 'tstamp': 500764}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263116, 'error': None, 'target': 'ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 29 12:27:34 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:27:34.187 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[70305fb5-3642-4ce7-ab46-434cbfe5677c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3c08c304-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:94:51:ac'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 51], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 500764, 'reachable_time': 19926, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 263117, 'error': None, 'target': 'ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 29 12:27:34 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:27:34.219 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[6b5bba83-2da2-407a-ac9b-2993802523b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 29 12:27:34 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e364 do_prune osdmap full prune enabled
Jan 29 12:27:34 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:27:34.266 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[ff9c6ca2-ea6c-4f58-814e-9468ff66e718]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 29 12:27:34 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:27:34.267 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c08c304-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 29 12:27:34 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:27:34.267 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 29 12:27:34 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:27:34.268 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3c08c304-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 29 12:27:34 np0005601226 kernel: tap3c08c304-20: entered promiscuous mode
Jan 29 12:27:34 np0005601226 NetworkManager[49020]: <info>  [1769707654.2717] manager: (tap3c08c304-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/84)
Jan 29 12:27:34 np0005601226 nova_compute[239456]: 2026-01-29 17:27:34.272 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:27:34 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:27:34.275 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3c08c304-20, col_values=(('external_ids', {'iface-id': '4f9b16f1-6965-486d-bc02-ab1e4969963e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 29 12:27:34 np0005601226 nova_compute[239456]: 2026-01-29 17:27:34.276 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:27:34 np0005601226 ovn_controller[145556]: 2026-01-29T17:27:34Z|00151|binding|INFO|Releasing lport 4f9b16f1-6965-486d-bc02-ab1e4969963e from this chassis (sb_readonly=0)
Jan 29 12:27:34 np0005601226 nova_compute[239456]: 2026-01-29 17:27:34.278 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:27:34 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:27:34.279 155625 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3c08c304-2b32-4b44-ac2b-279bb8b2403b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3c08c304-2b32-4b44-ac2b-279bb8b2403b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 29 12:27:34 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:27:34.280 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[5a6a4878-8628-4b50-a173-2a2fae467a51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 29 12:27:34 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:27:34.280 155625 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 29 12:27:34 np0005601226 ovn_metadata_agent[155620]: global
Jan 29 12:27:34 np0005601226 ovn_metadata_agent[155620]:    log         /dev/log local0 debug
Jan 29 12:27:34 np0005601226 ovn_metadata_agent[155620]:    log-tag     haproxy-metadata-proxy-3c08c304-2b32-4b44-ac2b-279bb8b2403b
Jan 29 12:27:34 np0005601226 ovn_metadata_agent[155620]:    user        root
Jan 29 12:27:34 np0005601226 ovn_metadata_agent[155620]:    group       root
Jan 29 12:27:34 np0005601226 ovn_metadata_agent[155620]:    maxconn     1024
Jan 29 12:27:34 np0005601226 ovn_metadata_agent[155620]:    pidfile     /var/lib/neutron/external/pids/3c08c304-2b32-4b44-ac2b-279bb8b2403b.pid.haproxy
Jan 29 12:27:34 np0005601226 ovn_metadata_agent[155620]:    daemon
Jan 29 12:27:34 np0005601226 ovn_metadata_agent[155620]: 
Jan 29 12:27:34 np0005601226 ovn_metadata_agent[155620]: defaults
Jan 29 12:27:34 np0005601226 ovn_metadata_agent[155620]:    log global
Jan 29 12:27:34 np0005601226 ovn_metadata_agent[155620]:    mode http
Jan 29 12:27:34 np0005601226 ovn_metadata_agent[155620]:    option httplog
Jan 29 12:27:34 np0005601226 ovn_metadata_agent[155620]:    option dontlognull
Jan 29 12:27:34 np0005601226 ovn_metadata_agent[155620]:    option http-server-close
Jan 29 12:27:34 np0005601226 ovn_metadata_agent[155620]:    option forwardfor
Jan 29 12:27:34 np0005601226 ovn_metadata_agent[155620]:    retries                 3
Jan 29 12:27:34 np0005601226 ovn_metadata_agent[155620]:    timeout http-request    30s
Jan 29 12:27:34 np0005601226 ovn_metadata_agent[155620]:    timeout connect         30s
Jan 29 12:27:34 np0005601226 ovn_metadata_agent[155620]:    timeout client          32s
Jan 29 12:27:34 np0005601226 ovn_metadata_agent[155620]:    timeout server          32s
Jan 29 12:27:34 np0005601226 ovn_metadata_agent[155620]:    timeout http-keep-alive 30s
Jan 29 12:27:34 np0005601226 ovn_metadata_agent[155620]: 
Jan 29 12:27:34 np0005601226 ovn_metadata_agent[155620]: 
Jan 29 12:27:34 np0005601226 ovn_metadata_agent[155620]: listen listener
Jan 29 12:27:34 np0005601226 ovn_metadata_agent[155620]:    bind 169.254.169.254:80
Jan 29 12:27:34 np0005601226 ovn_metadata_agent[155620]:    server metadata /var/lib/neutron/metadata_proxy
Jan 29 12:27:34 np0005601226 ovn_metadata_agent[155620]:    http-request add-header X-OVN-Network-ID 3c08c304-2b32-4b44-ac2b-279bb8b2403b
Jan 29 12:27:34 np0005601226 ovn_metadata_agent[155620]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 29 12:27:34 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:27:34.281 155625 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b', 'env', 'PROCESS_TAG=haproxy-3c08c304-2b32-4b44-ac2b-279bb8b2403b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3c08c304-2b32-4b44-ac2b-279bb8b2403b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 29 12:27:34 np0005601226 nova_compute[239456]: 2026-01-29 17:27:34.291 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:27:34 np0005601226 nova_compute[239456]: 2026-01-29 17:27:34.313 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 12:27:34 np0005601226 nova_compute[239456]: 2026-01-29 17:27:34.314 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 12:27:34 np0005601226 nova_compute[239456]: 2026-01-29 17:27:34.314 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 12:27:34 np0005601226 nova_compute[239456]: 2026-01-29 17:27:34.407 239460 DEBUG nova.compute.manager [req-689bc73c-0c43-4d7c-b2cc-9abe6b7c2d8f req-f500e3b6-fad2-4ecc-a8d3-698682176e97 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] Received event network-vif-plugged-745e2a89-d4b3-4291-892f-274bfa197449 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 29 12:27:34 np0005601226 nova_compute[239456]: 2026-01-29 17:27:34.408 239460 DEBUG oslo_concurrency.lockutils [req-689bc73c-0c43-4d7c-b2cc-9abe6b7c2d8f req-f500e3b6-fad2-4ecc-a8d3-698682176e97 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "12438fc6-4f98-42dc-a5df-a9d18dd066b7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 12:27:34 np0005601226 nova_compute[239456]: 2026-01-29 17:27:34.409 239460 DEBUG oslo_concurrency.lockutils [req-689bc73c-0c43-4d7c-b2cc-9abe6b7c2d8f req-f500e3b6-fad2-4ecc-a8d3-698682176e97 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "12438fc6-4f98-42dc-a5df-a9d18dd066b7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 12:27:34 np0005601226 nova_compute[239456]: 2026-01-29 17:27:34.409 239460 DEBUG oslo_concurrency.lockutils [req-689bc73c-0c43-4d7c-b2cc-9abe6b7c2d8f req-f500e3b6-fad2-4ecc-a8d3-698682176e97 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "12438fc6-4f98-42dc-a5df-a9d18dd066b7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 12:27:34 np0005601226 nova_compute[239456]: 2026-01-29 17:27:34.410 239460 DEBUG nova.compute.manager [req-689bc73c-0c43-4d7c-b2cc-9abe6b7c2d8f req-f500e3b6-fad2-4ecc-a8d3-698682176e97 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] Processing event network-vif-plugged-745e2a89-d4b3-4291-892f-274bfa197449 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 29 12:27:34 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e365 e365: 3 total, 3 up, 3 in
Jan 29 12:27:34 np0005601226 nova_compute[239456]: 2026-01-29 17:27:34.604 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:27:34 np0005601226 nova_compute[239456]: 2026-01-29 17:27:34.605 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 29 12:27:34 np0005601226 nova_compute[239456]: 2026-01-29 17:27:34.605 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 29 12:27:34 np0005601226 nova_compute[239456]: 2026-01-29 17:27:34.622 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 29 12:27:34 np0005601226 nova_compute[239456]: 2026-01-29 17:27:34.622 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 29 12:27:34 np0005601226 podman[263166]: 2026-01-29 17:27:34.668508015 +0000 UTC m=+0.020409488 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 29 12:27:34 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e365: 3 total, 3 up, 3 in
Jan 29 12:27:35 np0005601226 podman[263166]: 2026-01-29 17:27:35.317575584 +0000 UTC m=+0.669477037 container create 29f5555c5507254254d20a897e0959b355614df5f91963b42f926065327939bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 29 12:27:35 np0005601226 systemd[1]: Started libpod-conmon-29f5555c5507254254d20a897e0959b355614df5f91963b42f926065327939bb.scope.
Jan 29 12:27:35 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:27:35 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18993ae7616406d57f9bcf5c654856b685460d320bc4ffa9a073286b2f932647/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 29 12:27:35 np0005601226 podman[263166]: 2026-01-29 17:27:35.420643131 +0000 UTC m=+0.772544614 container init 29f5555c5507254254d20a897e0959b355614df5f91963b42f926065327939bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 29 12:27:35 np0005601226 podman[263166]: 2026-01-29 17:27:35.427986247 +0000 UTC m=+0.779887710 container start 29f5555c5507254254d20a897e0959b355614df5f91963b42f926065327939bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 29 12:27:35 np0005601226 neutron-haproxy-ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b[263199]: [NOTICE]   (263203) : New worker (263205) forked
Jan 29 12:27:35 np0005601226 neutron-haproxy-ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b[263199]: [NOTICE]   (263203) : Loading success.
Jan 29 12:27:35 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1494: 305 pgs: 305 active+clean; 2.6 GiB data, 2.8 GiB used, 57 GiB / 60 GiB avail; 189 KiB/s rd, 67 MiB/s wr, 296 op/s
Jan 29 12:27:36 np0005601226 nova_compute[239456]: 2026-01-29 17:27:36.232 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:27:36 np0005601226 nova_compute[239456]: 2026-01-29 17:27:36.479 239460 DEBUG nova.compute.manager [req-9d3dd06f-e808-4f7d-be89-b7db980216b3 req-f03e574f-7559-4494-a2e5-4de96716756f 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] Received event network-vif-plugged-745e2a89-d4b3-4291-892f-274bfa197449 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:27:36 np0005601226 nova_compute[239456]: 2026-01-29 17:27:36.479 239460 DEBUG oslo_concurrency.lockutils [req-9d3dd06f-e808-4f7d-be89-b7db980216b3 req-f03e574f-7559-4494-a2e5-4de96716756f 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "12438fc6-4f98-42dc-a5df-a9d18dd066b7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:27:36 np0005601226 nova_compute[239456]: 2026-01-29 17:27:36.480 239460 DEBUG oslo_concurrency.lockutils [req-9d3dd06f-e808-4f7d-be89-b7db980216b3 req-f03e574f-7559-4494-a2e5-4de96716756f 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "12438fc6-4f98-42dc-a5df-a9d18dd066b7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:27:36 np0005601226 nova_compute[239456]: 2026-01-29 17:27:36.480 239460 DEBUG oslo_concurrency.lockutils [req-9d3dd06f-e808-4f7d-be89-b7db980216b3 req-f03e574f-7559-4494-a2e5-4de96716756f 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "12438fc6-4f98-42dc-a5df-a9d18dd066b7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:27:36 np0005601226 nova_compute[239456]: 2026-01-29 17:27:36.480 239460 DEBUG nova.compute.manager [req-9d3dd06f-e808-4f7d-be89-b7db980216b3 req-f03e574f-7559-4494-a2e5-4de96716756f 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] No waiting events found dispatching network-vif-plugged-745e2a89-d4b3-4291-892f-274bfa197449 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 29 12:27:36 np0005601226 nova_compute[239456]: 2026-01-29 17:27:36.480 239460 WARNING nova.compute.manager [req-9d3dd06f-e808-4f7d-be89-b7db980216b3 req-f03e574f-7559-4494-a2e5-4de96716756f 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] Received unexpected event network-vif-plugged-745e2a89-d4b3-4291-892f-274bfa197449 for instance with vm_state building and task_state spawning.#033[00m
Jan 29 12:27:36 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e365 do_prune osdmap full prune enabled
Jan 29 12:27:36 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e366 e366: 3 total, 3 up, 3 in
Jan 29 12:27:36 np0005601226 nova_compute[239456]: 2026-01-29 17:27:36.713 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:27:36 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e366: 3 total, 3 up, 3 in
Jan 29 12:27:37 np0005601226 nova_compute[239456]: 2026-01-29 17:27:37.249 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769707657.249393, 12438fc6-4f98-42dc-a5df-a9d18dd066b7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:27:37 np0005601226 nova_compute[239456]: 2026-01-29 17:27:37.250 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] VM Started (Lifecycle Event)#033[00m
Jan 29 12:27:37 np0005601226 nova_compute[239456]: 2026-01-29 17:27:37.252 239460 DEBUG nova.compute.manager [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 29 12:27:37 np0005601226 nova_compute[239456]: 2026-01-29 17:27:37.255 239460 DEBUG nova.virt.libvirt.driver [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 29 12:27:37 np0005601226 nova_compute[239456]: 2026-01-29 17:27:37.259 239460 INFO nova.virt.libvirt.driver [-] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] Instance spawned successfully.#033[00m
Jan 29 12:27:37 np0005601226 nova_compute[239456]: 2026-01-29 17:27:37.259 239460 DEBUG nova.virt.libvirt.driver [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 29 12:27:37 np0005601226 nova_compute[239456]: 2026-01-29 17:27:37.275 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:27:37 np0005601226 nova_compute[239456]: 2026-01-29 17:27:37.281 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 29 12:27:37 np0005601226 nova_compute[239456]: 2026-01-29 17:27:37.284 239460 DEBUG nova.virt.libvirt.driver [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:27:37 np0005601226 nova_compute[239456]: 2026-01-29 17:27:37.284 239460 DEBUG nova.virt.libvirt.driver [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:27:37 np0005601226 nova_compute[239456]: 2026-01-29 17:27:37.285 239460 DEBUG nova.virt.libvirt.driver [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:27:37 np0005601226 nova_compute[239456]: 2026-01-29 17:27:37.285 239460 DEBUG nova.virt.libvirt.driver [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:27:37 np0005601226 nova_compute[239456]: 2026-01-29 17:27:37.285 239460 DEBUG nova.virt.libvirt.driver [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:27:37 np0005601226 nova_compute[239456]: 2026-01-29 17:27:37.286 239460 DEBUG nova.virt.libvirt.driver [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:27:37 np0005601226 nova_compute[239456]: 2026-01-29 17:27:37.315 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 29 12:27:37 np0005601226 nova_compute[239456]: 2026-01-29 17:27:37.315 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769707657.2515764, 12438fc6-4f98-42dc-a5df-a9d18dd066b7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:27:37 np0005601226 nova_compute[239456]: 2026-01-29 17:27:37.315 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] VM Paused (Lifecycle Event)#033[00m
Jan 29 12:27:37 np0005601226 nova_compute[239456]: 2026-01-29 17:27:37.351 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:27:37 np0005601226 nova_compute[239456]: 2026-01-29 17:27:37.354 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769707657.254842, 12438fc6-4f98-42dc-a5df-a9d18dd066b7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:27:37 np0005601226 nova_compute[239456]: 2026-01-29 17:27:37.354 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] VM Resumed (Lifecycle Event)#033[00m
Jan 29 12:27:37 np0005601226 nova_compute[239456]: 2026-01-29 17:27:37.371 239460 INFO nova.compute.manager [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] Took 8.36 seconds to spawn the instance on the hypervisor.#033[00m
Jan 29 12:27:37 np0005601226 nova_compute[239456]: 2026-01-29 17:27:37.371 239460 DEBUG nova.compute.manager [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:27:37 np0005601226 nova_compute[239456]: 2026-01-29 17:27:37.391 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:27:37 np0005601226 nova_compute[239456]: 2026-01-29 17:27:37.393 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 29 12:27:37 np0005601226 nova_compute[239456]: 2026-01-29 17:27:37.444 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 29 12:27:37 np0005601226 nova_compute[239456]: 2026-01-29 17:27:37.463 239460 INFO nova.compute.manager [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] Took 10.88 seconds to build instance.#033[00m
Jan 29 12:27:37 np0005601226 nova_compute[239456]: 2026-01-29 17:27:37.484 239460 DEBUG oslo_concurrency.lockutils [None req-7cd9ba30-850d-4c83-bef1-4fd42539cdef 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "12438fc6-4f98-42dc-a5df-a9d18dd066b7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.974s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:27:37 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1496: 305 pgs: 305 active+clean; 2.6 GiB data, 2.8 GiB used, 57 GiB / 60 GiB avail; 123 KiB/s rd, 61 MiB/s wr, 198 op/s
Jan 29 12:27:38 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e366 do_prune osdmap full prune enabled
Jan 29 12:27:38 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e367 e367: 3 total, 3 up, 3 in
Jan 29 12:27:38 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e367: 3 total, 3 up, 3 in
Jan 29 12:27:39 np0005601226 nova_compute[239456]: 2026-01-29 17:27:39.603 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:27:39 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1498: 305 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 298 active+clean; 2.8 GiB data, 3.0 GiB used, 57 GiB / 60 GiB avail; 116 KiB/s rd, 92 MiB/s wr, 202 op/s
Jan 29 12:27:40 np0005601226 nova_compute[239456]: 2026-01-29 17:27:40.207 239460 DEBUG oslo_concurrency.lockutils [None req-13bbcbee-3934-42e4-87fe-375bdd0edd63 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Acquiring lock "12438fc6-4f98-42dc-a5df-a9d18dd066b7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:27:40 np0005601226 nova_compute[239456]: 2026-01-29 17:27:40.208 239460 DEBUG oslo_concurrency.lockutils [None req-13bbcbee-3934-42e4-87fe-375bdd0edd63 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "12438fc6-4f98-42dc-a5df-a9d18dd066b7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:27:40 np0005601226 nova_compute[239456]: 2026-01-29 17:27:40.208 239460 DEBUG oslo_concurrency.lockutils [None req-13bbcbee-3934-42e4-87fe-375bdd0edd63 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Acquiring lock "12438fc6-4f98-42dc-a5df-a9d18dd066b7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:27:40 np0005601226 nova_compute[239456]: 2026-01-29 17:27:40.208 239460 DEBUG oslo_concurrency.lockutils [None req-13bbcbee-3934-42e4-87fe-375bdd0edd63 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "12438fc6-4f98-42dc-a5df-a9d18dd066b7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:27:40 np0005601226 nova_compute[239456]: 2026-01-29 17:27:40.209 239460 DEBUG oslo_concurrency.lockutils [None req-13bbcbee-3934-42e4-87fe-375bdd0edd63 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "12438fc6-4f98-42dc-a5df-a9d18dd066b7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:27:40 np0005601226 nova_compute[239456]: 2026-01-29 17:27:40.211 239460 INFO nova.compute.manager [None req-13bbcbee-3934-42e4-87fe-375bdd0edd63 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] Terminating instance#033[00m
Jan 29 12:27:40 np0005601226 nova_compute[239456]: 2026-01-29 17:27:40.213 239460 DEBUG nova.compute.manager [None req-13bbcbee-3934-42e4-87fe-375bdd0edd63 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 29 12:27:40 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:27:40 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3490970946' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:27:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:27:40.288 155625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:27:40 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:27:40 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3490970946' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:27:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:27:40.288 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:27:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:27:40.289 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:27:40 np0005601226 kernel: tap745e2a89-d4 (unregistering): left promiscuous mode
Jan 29 12:27:40 np0005601226 NetworkManager[49020]: <info>  [1769707660.3830] device (tap745e2a89-d4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 29 12:27:40 np0005601226 nova_compute[239456]: 2026-01-29 17:27:40.383 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:27:40 np0005601226 ovn_controller[145556]: 2026-01-29T17:27:40Z|00152|binding|INFO|Releasing lport 745e2a89-d4b3-4291-892f-274bfa197449 from this chassis (sb_readonly=0)
Jan 29 12:27:40 np0005601226 ovn_controller[145556]: 2026-01-29T17:27:40Z|00153|binding|INFO|Setting lport 745e2a89-d4b3-4291-892f-274bfa197449 down in Southbound
Jan 29 12:27:40 np0005601226 nova_compute[239456]: 2026-01-29 17:27:40.393 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:27:40 np0005601226 ovn_controller[145556]: 2026-01-29T17:27:40Z|00154|binding|INFO|Removing iface tap745e2a89-d4 ovn-installed in OVS
Jan 29 12:27:40 np0005601226 nova_compute[239456]: 2026-01-29 17:27:40.398 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:27:40 np0005601226 nova_compute[239456]: 2026-01-29 17:27:40.409 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:27:40 np0005601226 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000f.scope: Deactivated successfully.
Jan 29 12:27:40 np0005601226 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000f.scope: Consumed 3.160s CPU time.
Jan 29 12:27:40 np0005601226 systemd-machined[207561]: Machine qemu-15-instance-0000000f terminated.
Jan 29 12:27:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:27:40.560 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:cd:83 10.100.0.9'], port_security=['fa:16:3e:7b:cd:83 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '12438fc6-4f98-42dc-a5df-a9d18dd066b7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3c08c304-2b32-4b44-ac2b-279bb8b2403b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '420f46ae230d4c529afe366a1b780921', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9be82e42-3d47-49cf-9a44-d003a5c81174', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=43b4da70-6867-4e05-b172-1e52c878ce1d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], logical_port=745e2a89-d4b3-4291-892f-274bfa197449) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:27:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:27:40.562 155625 INFO neutron.agent.ovn.metadata.agent [-] Port 745e2a89-d4b3-4291-892f-274bfa197449 in datapath 3c08c304-2b32-4b44-ac2b-279bb8b2403b unbound from our chassis#033[00m
Jan 29 12:27:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:27:40.564 155625 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3c08c304-2b32-4b44-ac2b-279bb8b2403b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 29 12:27:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:27:40.565 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[7c015713-b5a8-467d-8d26-47e84c743148]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:27:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:27:40.565 155625 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b namespace which is not needed anymore#033[00m
Jan 29 12:27:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:27:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:27:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:27:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:27:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:27:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:27:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2026-01-29_17:27:40
Jan 29 12:27:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 12:27:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] do_upmap
Jan 29 12:27:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.mgr', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.log', 'images', 'vms', 'backups', 'volumes', 'default.rgw.control']
Jan 29 12:27:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 12:27:40 np0005601226 nova_compute[239456]: 2026-01-29 17:27:40.631 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:27:40 np0005601226 nova_compute[239456]: 2026-01-29 17:27:40.634 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:27:40 np0005601226 nova_compute[239456]: 2026-01-29 17:27:40.649 239460 INFO nova.virt.libvirt.driver [-] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] Instance destroyed successfully.#033[00m
Jan 29 12:27:40 np0005601226 nova_compute[239456]: 2026-01-29 17:27:40.649 239460 DEBUG nova.objects.instance [None req-13bbcbee-3934-42e4-87fe-375bdd0edd63 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lazy-loading 'resources' on Instance uuid 12438fc6-4f98-42dc-a5df-a9d18dd066b7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:27:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 12:27:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 12:27:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:27:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:27:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:27:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:27:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:27:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:27:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:27:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:27:40 np0005601226 nova_compute[239456]: 2026-01-29 17:27:40.901 239460 DEBUG nova.virt.libvirt.vif [None req-13bbcbee-3934-42e4-87fe-375bdd0edd63 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-29T17:27:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-248434708',display_name='tempest-TestVolumeBootPattern-server-248434708',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-248434708',id=15,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-29T17:27:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='420f46ae230d4c529afe366a1b780921',ramdisk_id='',reservation_id='r-nr3o4xph',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestVolumeBootPattern-1871389491',owner_user_name='tempest-TestVolumeBootPattern-1871389491-project-
member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-29T17:27:37Z,user_data=None,user_id='3901089a059c4bdb8d0497398873d2f1',uuid=12438fc6-4f98-42dc-a5df-a9d18dd066b7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "745e2a89-d4b3-4291-892f-274bfa197449", "address": "fa:16:3e:7b:cd:83", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap745e2a89-d4", "ovs_interfaceid": "745e2a89-d4b3-4291-892f-274bfa197449", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 29 12:27:40 np0005601226 nova_compute[239456]: 2026-01-29 17:27:40.902 239460 DEBUG nova.network.os_vif_util [None req-13bbcbee-3934-42e4-87fe-375bdd0edd63 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Converting VIF {"id": "745e2a89-d4b3-4291-892f-274bfa197449", "address": "fa:16:3e:7b:cd:83", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap745e2a89-d4", "ovs_interfaceid": "745e2a89-d4b3-4291-892f-274bfa197449", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 29 12:27:40 np0005601226 nova_compute[239456]: 2026-01-29 17:27:40.903 239460 DEBUG nova.network.os_vif_util [None req-13bbcbee-3934-42e4-87fe-375bdd0edd63 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7b:cd:83,bridge_name='br-int',has_traffic_filtering=True,id=745e2a89-d4b3-4291-892f-274bfa197449,network=Network(3c08c304-2b32-4b44-ac2b-279bb8b2403b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap745e2a89-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 29 12:27:40 np0005601226 nova_compute[239456]: 2026-01-29 17:27:40.904 239460 DEBUG os_vif [None req-13bbcbee-3934-42e4-87fe-375bdd0edd63 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:cd:83,bridge_name='br-int',has_traffic_filtering=True,id=745e2a89-d4b3-4291-892f-274bfa197449,network=Network(3c08c304-2b32-4b44-ac2b-279bb8b2403b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap745e2a89-d4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 29 12:27:40 np0005601226 nova_compute[239456]: 2026-01-29 17:27:40.907 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:27:40 np0005601226 nova_compute[239456]: 2026-01-29 17:27:40.908 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap745e2a89-d4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:27:40 np0005601226 nova_compute[239456]: 2026-01-29 17:27:40.950 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:27:40 np0005601226 neutron-haproxy-ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b[263199]: [NOTICE]   (263203) : haproxy version is 2.8.14-c23fe91
Jan 29 12:27:40 np0005601226 neutron-haproxy-ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b[263199]: [NOTICE]   (263203) : path to executable is /usr/sbin/haproxy
Jan 29 12:27:40 np0005601226 neutron-haproxy-ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b[263199]: [WARNING]  (263203) : Exiting Master process...
Jan 29 12:27:40 np0005601226 neutron-haproxy-ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b[263199]: [ALERT]    (263203) : Current worker (263205) exited with code 143 (Terminated)
Jan 29 12:27:40 np0005601226 neutron-haproxy-ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b[263199]: [WARNING]  (263203) : All workers exited. Exiting... (0)
Jan 29 12:27:40 np0005601226 nova_compute[239456]: 2026-01-29 17:27:40.953 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 29 12:27:40 np0005601226 systemd[1]: libpod-29f5555c5507254254d20a897e0959b355614df5f91963b42f926065327939bb.scope: Deactivated successfully.
Jan 29 12:27:40 np0005601226 nova_compute[239456]: 2026-01-29 17:27:40.956 239460 INFO os_vif [None req-13bbcbee-3934-42e4-87fe-375bdd0edd63 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:cd:83,bridge_name='br-int',has_traffic_filtering=True,id=745e2a89-d4b3-4291-892f-274bfa197449,network=Network(3c08c304-2b32-4b44-ac2b-279bb8b2403b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap745e2a89-d4')#033[00m
Jan 29 12:27:40 np0005601226 podman[263257]: 2026-01-29 17:27:40.962923209 +0000 UTC m=+0.296071704 container died 29f5555c5507254254d20a897e0959b355614df5f91963b42f926065327939bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 29 12:27:41 np0005601226 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-29f5555c5507254254d20a897e0959b355614df5f91963b42f926065327939bb-userdata-shm.mount: Deactivated successfully.
Jan 29 12:27:41 np0005601226 systemd[1]: var-lib-containers-storage-overlay-18993ae7616406d57f9bcf5c654856b685460d320bc4ffa9a073286b2f932647-merged.mount: Deactivated successfully.
Jan 29 12:27:41 np0005601226 nova_compute[239456]: 2026-01-29 17:27:41.183 239460 DEBUG nova.compute.manager [req-c84de56e-507e-42da-a9e7-0847bf1f5202 req-b34abfbf-b386-40fe-a750-4e13d5865a93 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] Received event network-vif-unplugged-745e2a89-d4b3-4291-892f-274bfa197449 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:27:41 np0005601226 nova_compute[239456]: 2026-01-29 17:27:41.184 239460 DEBUG oslo_concurrency.lockutils [req-c84de56e-507e-42da-a9e7-0847bf1f5202 req-b34abfbf-b386-40fe-a750-4e13d5865a93 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "12438fc6-4f98-42dc-a5df-a9d18dd066b7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:27:41 np0005601226 nova_compute[239456]: 2026-01-29 17:27:41.184 239460 DEBUG oslo_concurrency.lockutils [req-c84de56e-507e-42da-a9e7-0847bf1f5202 req-b34abfbf-b386-40fe-a750-4e13d5865a93 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "12438fc6-4f98-42dc-a5df-a9d18dd066b7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:27:41 np0005601226 nova_compute[239456]: 2026-01-29 17:27:41.185 239460 DEBUG oslo_concurrency.lockutils [req-c84de56e-507e-42da-a9e7-0847bf1f5202 req-b34abfbf-b386-40fe-a750-4e13d5865a93 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "12438fc6-4f98-42dc-a5df-a9d18dd066b7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:27:41 np0005601226 nova_compute[239456]: 2026-01-29 17:27:41.185 239460 DEBUG nova.compute.manager [req-c84de56e-507e-42da-a9e7-0847bf1f5202 req-b34abfbf-b386-40fe-a750-4e13d5865a93 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] No waiting events found dispatching network-vif-unplugged-745e2a89-d4b3-4291-892f-274bfa197449 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 29 12:27:41 np0005601226 nova_compute[239456]: 2026-01-29 17:27:41.185 239460 DEBUG nova.compute.manager [req-c84de56e-507e-42da-a9e7-0847bf1f5202 req-b34abfbf-b386-40fe-a750-4e13d5865a93 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] Received event network-vif-unplugged-745e2a89-d4b3-4291-892f-274bfa197449 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 29 12:27:41 np0005601226 nova_compute[239456]: 2026-01-29 17:27:41.234 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:27:41 np0005601226 podman[263257]: 2026-01-29 17:27:41.327628098 +0000 UTC m=+0.660776593 container cleanup 29f5555c5507254254d20a897e0959b355614df5f91963b42f926065327939bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 29 12:27:41 np0005601226 systemd[1]: libpod-conmon-29f5555c5507254254d20a897e0959b355614df5f91963b42f926065327939bb.scope: Deactivated successfully.
Jan 29 12:27:41 np0005601226 podman[263303]: 2026-01-29 17:27:41.409680263 +0000 UTC m=+0.060673135 container remove 29f5555c5507254254d20a897e0959b355614df5f91963b42f926065327939bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 29 12:27:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:27:41.414 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[460a1654-9bd1-4871-add7-e873445488fb]: (4, ('Thu Jan 29 05:27:40 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b (29f5555c5507254254d20a897e0959b355614df5f91963b42f926065327939bb)\n29f5555c5507254254d20a897e0959b355614df5f91963b42f926065327939bb\nThu Jan 29 05:27:41 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b (29f5555c5507254254d20a897e0959b355614df5f91963b42f926065327939bb)\n29f5555c5507254254d20a897e0959b355614df5f91963b42f926065327939bb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:27:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:27:41.416 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[f57d3563-388c-41be-aa23-0b3f605278dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:27:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:27:41.416 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c08c304-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:27:41 np0005601226 nova_compute[239456]: 2026-01-29 17:27:41.418 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:27:41 np0005601226 kernel: tap3c08c304-20: left promiscuous mode
Jan 29 12:27:41 np0005601226 nova_compute[239456]: 2026-01-29 17:27:41.427 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:27:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:27:41.430 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[042084ea-016d-465f-b280-2b1ff8cc685b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:27:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:27:41.456 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[df57f507-9361-4acc-96ce-325910a231c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:27:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:27:41.457 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[e8e631d6-a3ee-44e2-bf12-e6efd845f692]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:27:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:27:41.469 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[aad7a930-30f5-44f0-89aa-2d1aeb69579b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 500756, 'reachable_time': 16450, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263318, 'error': None, 'target': 'ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:27:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:27:41.471 156164 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 29 12:27:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:27:41.471 156164 DEBUG oslo.privsep.daemon [-] privsep: reply[6470f6be-122d-4939-a08c-b745b694cc51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:27:41 np0005601226 systemd[1]: run-netns-ovnmeta\x2d3c08c304\x2d2b32\x2d4b44\x2dac2b\x2d279bb8b2403b.mount: Deactivated successfully.
Jan 29 12:27:41 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e367 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:27:41 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e367 do_prune osdmap full prune enabled
Jan 29 12:27:41 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e368 e368: 3 total, 3 up, 3 in
Jan 29 12:27:41 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e368: 3 total, 3 up, 3 in
Jan 29 12:27:41 np0005601226 nova_compute[239456]: 2026-01-29 17:27:41.638 239460 INFO nova.virt.libvirt.driver [None req-13bbcbee-3934-42e4-87fe-375bdd0edd63 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] Deleting instance files /var/lib/nova/instances/12438fc6-4f98-42dc-a5df-a9d18dd066b7_del#033[00m
Jan 29 12:27:41 np0005601226 nova_compute[239456]: 2026-01-29 17:27:41.639 239460 INFO nova.virt.libvirt.driver [None req-13bbcbee-3934-42e4-87fe-375bdd0edd63 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] Deletion of /var/lib/nova/instances/12438fc6-4f98-42dc-a5df-a9d18dd066b7_del complete#033[00m
Jan 29 12:27:41 np0005601226 nova_compute[239456]: 2026-01-29 17:27:41.715 239460 INFO nova.compute.manager [None req-13bbcbee-3934-42e4-87fe-375bdd0edd63 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] Took 1.50 seconds to destroy the instance on the hypervisor.#033[00m
Jan 29 12:27:41 np0005601226 nova_compute[239456]: 2026-01-29 17:27:41.716 239460 DEBUG oslo.service.loopingcall [None req-13bbcbee-3934-42e4-87fe-375bdd0edd63 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 29 12:27:41 np0005601226 nova_compute[239456]: 2026-01-29 17:27:41.716 239460 DEBUG nova.compute.manager [-] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 29 12:27:41 np0005601226 nova_compute[239456]: 2026-01-29 17:27:41.716 239460 DEBUG nova.network.neutron [-] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 29 12:27:41 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1500: 305 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 298 active+clean; 3.0 GiB data, 3.2 GiB used, 57 GiB / 60 GiB avail; 179 KiB/s rd, 72 MiB/s wr, 227 op/s
Jan 29 12:27:42 np0005601226 nova_compute[239456]: 2026-01-29 17:27:42.224 239460 DEBUG nova.network.neutron [-] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:27:42 np0005601226 nova_compute[239456]: 2026-01-29 17:27:42.241 239460 INFO nova.compute.manager [-] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] Took 0.52 seconds to deallocate network for instance.#033[00m
Jan 29 12:27:42 np0005601226 nova_compute[239456]: 2026-01-29 17:27:42.285 239460 DEBUG nova.compute.manager [req-ae8d0206-cd2b-42e7-968a-5a5aaa0855e2 req-b4bf2605-a77b-47a1-a7af-4efd6faebc58 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] Received event network-vif-deleted-745e2a89-d4b3-4291-892f-274bfa197449 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:27:42 np0005601226 nova_compute[239456]: 2026-01-29 17:27:42.369 239460 INFO nova.compute.manager [None req-13bbcbee-3934-42e4-87fe-375bdd0edd63 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] Took 0.13 seconds to detach 1 volumes for instance.#033[00m
Jan 29 12:27:42 np0005601226 nova_compute[239456]: 2026-01-29 17:27:42.413 239460 DEBUG oslo_concurrency.lockutils [None req-13bbcbee-3934-42e4-87fe-375bdd0edd63 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:27:42 np0005601226 nova_compute[239456]: 2026-01-29 17:27:42.414 239460 DEBUG oslo_concurrency.lockutils [None req-13bbcbee-3934-42e4-87fe-375bdd0edd63 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:27:42 np0005601226 nova_compute[239456]: 2026-01-29 17:27:42.476 239460 DEBUG oslo_concurrency.processutils [None req-13bbcbee-3934-42e4-87fe-375bdd0edd63 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:27:42 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e368 do_prune osdmap full prune enabled
Jan 29 12:27:42 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e369 e369: 3 total, 3 up, 3 in
Jan 29 12:27:42 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e369: 3 total, 3 up, 3 in
Jan 29 12:27:42 np0005601226 nova_compute[239456]: 2026-01-29 17:27:42.604 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:27:42 np0005601226 nova_compute[239456]: 2026-01-29 17:27:42.605 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 29 12:27:42 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:27:42 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2796481522' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:27:43 np0005601226 nova_compute[239456]: 2026-01-29 17:27:43.010 239460 DEBUG oslo_concurrency.processutils [None req-13bbcbee-3934-42e4-87fe-375bdd0edd63 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 29 12:27:43 np0005601226 nova_compute[239456]: 2026-01-29 17:27:43.016 239460 DEBUG nova.compute.provider_tree [None req-13bbcbee-3934-42e4-87fe-375bdd0edd63 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 29 12:27:43 np0005601226 nova_compute[239456]: 2026-01-29 17:27:43.031 239460 DEBUG nova.scheduler.client.report [None req-13bbcbee-3934-42e4-87fe-375bdd0edd63 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 29 12:27:43 np0005601226 nova_compute[239456]: 2026-01-29 17:27:43.060 239460 DEBUG oslo_concurrency.lockutils [None req-13bbcbee-3934-42e4-87fe-375bdd0edd63 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.646s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 12:27:43 np0005601226 nova_compute[239456]: 2026-01-29 17:27:43.089 239460 INFO nova.scheduler.client.report [None req-13bbcbee-3934-42e4-87fe-375bdd0edd63 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Deleted allocations for instance 12438fc6-4f98-42dc-a5df-a9d18dd066b7
Jan 29 12:27:43 np0005601226 nova_compute[239456]: 2026-01-29 17:27:43.151 239460 DEBUG oslo_concurrency.lockutils [None req-13bbcbee-3934-42e4-87fe-375bdd0edd63 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "12438fc6-4f98-42dc-a5df-a9d18dd066b7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.943s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 12:27:43 np0005601226 nova_compute[239456]: 2026-01-29 17:27:43.248 239460 DEBUG nova.compute.manager [req-144ded9f-75fc-4791-91fa-3c4f248c6920 req-3246828a-e844-4a1c-ac88-92199aad911b 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] Received event network-vif-plugged-745e2a89-d4b3-4291-892f-274bfa197449 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 29 12:27:43 np0005601226 nova_compute[239456]: 2026-01-29 17:27:43.249 239460 DEBUG oslo_concurrency.lockutils [req-144ded9f-75fc-4791-91fa-3c4f248c6920 req-3246828a-e844-4a1c-ac88-92199aad911b 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "12438fc6-4f98-42dc-a5df-a9d18dd066b7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 12:27:43 np0005601226 nova_compute[239456]: 2026-01-29 17:27:43.249 239460 DEBUG oslo_concurrency.lockutils [req-144ded9f-75fc-4791-91fa-3c4f248c6920 req-3246828a-e844-4a1c-ac88-92199aad911b 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "12438fc6-4f98-42dc-a5df-a9d18dd066b7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 12:27:43 np0005601226 nova_compute[239456]: 2026-01-29 17:27:43.249 239460 DEBUG oslo_concurrency.lockutils [req-144ded9f-75fc-4791-91fa-3c4f248c6920 req-3246828a-e844-4a1c-ac88-92199aad911b 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "12438fc6-4f98-42dc-a5df-a9d18dd066b7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 12:27:43 np0005601226 nova_compute[239456]: 2026-01-29 17:27:43.250 239460 DEBUG nova.compute.manager [req-144ded9f-75fc-4791-91fa-3c4f248c6920 req-3246828a-e844-4a1c-ac88-92199aad911b 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] No waiting events found dispatching network-vif-plugged-745e2a89-d4b3-4291-892f-274bfa197449 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 29 12:27:43 np0005601226 nova_compute[239456]: 2026-01-29 17:27:43.250 239460 WARNING nova.compute.manager [req-144ded9f-75fc-4791-91fa-3c4f248c6920 req-3246828a-e844-4a1c-ac88-92199aad911b 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] Received unexpected event network-vif-plugged-745e2a89-d4b3-4291-892f-274bfa197449 for instance with vm_state deleted and task_state None.
Jan 29 12:27:43 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e369 do_prune osdmap full prune enabled
Jan 29 12:27:43 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e370 e370: 3 total, 3 up, 3 in
Jan 29 12:27:43 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e370: 3 total, 3 up, 3 in
Jan 29 12:27:43 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1503: 305 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 298 active+clean; 2.8 GiB data, 3.0 GiB used, 57 GiB / 60 GiB avail; 257 KiB/s rd, 78 MiB/s wr, 342 op/s
Jan 29 12:27:44 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:27:44 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3832609113' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:27:44 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:27:44 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3832609113' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:27:44 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e370 do_prune osdmap full prune enabled
Jan 29 12:27:44 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e371 e371: 3 total, 3 up, 3 in
Jan 29 12:27:44 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e371: 3 total, 3 up, 3 in
Jan 29 12:27:44 np0005601226 nova_compute[239456]: 2026-01-29 17:27:44.626 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 12:27:44 np0005601226 nova_compute[239456]: 2026-01-29 17:27:44.626 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 12:27:44 np0005601226 nova_compute[239456]: 2026-01-29 17:27:44.626 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 29 12:27:44 np0005601226 nova_compute[239456]: 2026-01-29 17:27:44.640 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 29 12:27:45 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:27:45 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/612443349' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:27:45 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:27:45 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/612443349' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:27:45 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e371 do_prune osdmap full prune enabled
Jan 29 12:27:45 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e372 e372: 3 total, 3 up, 3 in
Jan 29 12:27:45 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e372: 3 total, 3 up, 3 in
Jan 29 12:27:45 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1506: 305 pgs: 2 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 293 active+clean; 2.2 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 284 KiB/s rd, 45 MiB/s wr, 463 op/s
Jan 29 12:27:45 np0005601226 nova_compute[239456]: 2026-01-29 17:27:45.951 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:27:45 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:27:45 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3696137309' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:27:45 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:27:45 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3696137309' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:27:46 np0005601226 nova_compute[239456]: 2026-01-29 17:27:46.235 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:27:46 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:27:46 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e372 do_prune osdmap full prune enabled
Jan 29 12:27:46 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e373 e373: 3 total, 3 up, 3 in
Jan 29 12:27:46 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e373: 3 total, 3 up, 3 in
Jan 29 12:27:47 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e373 do_prune osdmap full prune enabled
Jan 29 12:27:47 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e374 e374: 3 total, 3 up, 3 in
Jan 29 12:27:47 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e374: 3 total, 3 up, 3 in
Jan 29 12:27:47 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1509: 305 pgs: 2 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 293 active+clean; 2.2 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 210 KiB/s rd, 25 MiB/s wr, 339 op/s
Jan 29 12:27:48 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:27:48 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/39139745' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:27:48 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:27:48 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/39139745' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:27:49 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e374 do_prune osdmap full prune enabled
Jan 29 12:27:49 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e375 e375: 3 total, 3 up, 3 in
Jan 29 12:27:49 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e375: 3 total, 3 up, 3 in
Jan 29 12:27:49 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1511: 305 pgs: 2 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 293 active+clean; 2.1 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 5.1 MiB/s rd, 3.8 MiB/s wr, 398 op/s
Jan 29 12:27:50 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 12:27:50 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 12:27:50 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 12:27:50 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 12:27:50 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 12:27:50 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:27:50 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 12:27:50 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 12:27:50 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 12:27:50 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 12:27:50 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 12:27:50 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 12:27:50 np0005601226 podman[263486]: 2026-01-29 17:27:50.538876494 +0000 UTC m=+0.042322553 container create f44850b740287af6500c75b3e9540b58b68f4ff10ddd41b9b84fd8b322704945 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_noether, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:27:50 np0005601226 systemd[1]: Started libpod-conmon-f44850b740287af6500c75b3e9540b58b68f4ff10ddd41b9b84fd8b322704945.scope.
Jan 29 12:27:50 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:27:50 np0005601226 podman[263486]: 2026-01-29 17:27:50.611155687 +0000 UTC m=+0.114601766 container init f44850b740287af6500c75b3e9540b58b68f4ff10ddd41b9b84fd8b322704945 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_noether, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 29 12:27:50 np0005601226 podman[263486]: 2026-01-29 17:27:50.617719763 +0000 UTC m=+0.121165822 container start f44850b740287af6500c75b3e9540b58b68f4ff10ddd41b9b84fd8b322704945 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_noether, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 29 12:27:50 np0005601226 podman[263486]: 2026-01-29 17:27:50.523033169 +0000 UTC m=+0.026479258 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:27:50 np0005601226 podman[263486]: 2026-01-29 17:27:50.621790922 +0000 UTC m=+0.125237001 container attach f44850b740287af6500c75b3e9540b58b68f4ff10ddd41b9b84fd8b322704945 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_noether, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 12:27:50 np0005601226 dazzling_noether[263502]: 167 167
Jan 29 12:27:50 np0005601226 systemd[1]: libpod-f44850b740287af6500c75b3e9540b58b68f4ff10ddd41b9b84fd8b322704945.scope: Deactivated successfully.
Jan 29 12:27:50 np0005601226 podman[263486]: 2026-01-29 17:27:50.623043316 +0000 UTC m=+0.126489375 container died f44850b740287af6500c75b3e9540b58b68f4ff10ddd41b9b84fd8b322704945 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_noether, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:27:50 np0005601226 systemd[1]: var-lib-containers-storage-overlay-b8d1090e95707cdac68293b3692f39a93cc2003dac9a3eba70cb42533c10a116-merged.mount: Deactivated successfully.
Jan 29 12:27:50 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e375 do_prune osdmap full prune enabled
Jan 29 12:27:50 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e376 e376: 3 total, 3 up, 3 in
Jan 29 12:27:50 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 12:27:50 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:27:50 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 12:27:50 np0005601226 podman[263486]: 2026-01-29 17:27:50.704762062 +0000 UTC m=+0.208208131 container remove f44850b740287af6500c75b3e9540b58b68f4ff10ddd41b9b84fd8b322704945 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_noether, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 29 12:27:50 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e376: 3 total, 3 up, 3 in
Jan 29 12:27:50 np0005601226 systemd[1]: libpod-conmon-f44850b740287af6500c75b3e9540b58b68f4ff10ddd41b9b84fd8b322704945.scope: Deactivated successfully.
Jan 29 12:27:50 np0005601226 podman[263528]: 2026-01-29 17:27:50.834233757 +0000 UTC m=+0.040337260 container create ff6fb90030e44dd53481a58741a99027fd5ed5b3da4da6af95e8c8a4ee432f09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 29 12:27:50 np0005601226 systemd[1]: Started libpod-conmon-ff6fb90030e44dd53481a58741a99027fd5ed5b3da4da6af95e8c8a4ee432f09.scope.
Jan 29 12:27:50 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:27:50 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05493c8f4ac88fc1ffda5addebb5d1008e8c0f1df837e9213fe360fccc75447f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:27:50 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05493c8f4ac88fc1ffda5addebb5d1008e8c0f1df837e9213fe360fccc75447f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:27:50 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05493c8f4ac88fc1ffda5addebb5d1008e8c0f1df837e9213fe360fccc75447f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:27:50 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05493c8f4ac88fc1ffda5addebb5d1008e8c0f1df837e9213fe360fccc75447f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:27:50 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05493c8f4ac88fc1ffda5addebb5d1008e8c0f1df837e9213fe360fccc75447f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 12:27:50 np0005601226 podman[263528]: 2026-01-29 17:27:50.818322111 +0000 UTC m=+0.024425634 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:27:50 np0005601226 podman[263528]: 2026-01-29 17:27:50.921674247 +0000 UTC m=+0.127777780 container init ff6fb90030e44dd53481a58741a99027fd5ed5b3da4da6af95e8c8a4ee432f09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_dirac, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:27:50 np0005601226 podman[263528]: 2026-01-29 17:27:50.937172011 +0000 UTC m=+0.143275514 container start ff6fb90030e44dd53481a58741a99027fd5ed5b3da4da6af95e8c8a4ee432f09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 29 12:27:50 np0005601226 podman[263528]: 2026-01-29 17:27:50.945208637 +0000 UTC m=+0.151312140 container attach ff6fb90030e44dd53481a58741a99027fd5ed5b3da4da6af95e8c8a4ee432f09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 29 12:27:50 np0005601226 nova_compute[239456]: 2026-01-29 17:27:50.954 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:27:51 np0005601226 nova_compute[239456]: 2026-01-29 17:27:51.269 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:27:51 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:27:51 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4270459709' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:27:51 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:27:51 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4270459709' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:27:51 np0005601226 laughing_dirac[263545]: --> passed data devices: 0 physical, 3 LVM
Jan 29 12:27:51 np0005601226 laughing_dirac[263545]: --> All data devices are unavailable
Jan 29 12:27:51 np0005601226 systemd[1]: libpod-ff6fb90030e44dd53481a58741a99027fd5ed5b3da4da6af95e8c8a4ee432f09.scope: Deactivated successfully.
Jan 29 12:27:51 np0005601226 podman[263528]: 2026-01-29 17:27:51.451569657 +0000 UTC m=+0.657673150 container died ff6fb90030e44dd53481a58741a99027fd5ed5b3da4da6af95e8c8a4ee432f09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 29 12:27:51 np0005601226 systemd[1]: var-lib-containers-storage-overlay-05493c8f4ac88fc1ffda5addebb5d1008e8c0f1df837e9213fe360fccc75447f-merged.mount: Deactivated successfully.
Jan 29 12:27:51 np0005601226 podman[263528]: 2026-01-29 17:27:51.509127707 +0000 UTC m=+0.715231200 container remove ff6fb90030e44dd53481a58741a99027fd5ed5b3da4da6af95e8c8a4ee432f09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_dirac, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True)
Jan 29 12:27:51 np0005601226 systemd[1]: libpod-conmon-ff6fb90030e44dd53481a58741a99027fd5ed5b3da4da6af95e8c8a4ee432f09.scope: Deactivated successfully.
Jan 29 12:27:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 12:27:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:27:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 12:27:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:27:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 9.026589194584259e-07 of space, bias 1.0, pg target 0.0002707976758375278 quantized to 32 (current 32)
Jan 29 12:27:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:27:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.03399611885615793 of space, bias 1.0, pg target 10.198835656847379 quantized to 32 (current 32)
Jan 29 12:27:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:27:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.5887286561366176e-06 of space, bias 1.0, pg target 0.0007507313102796191 quantized to 32 (current 32)
Jan 29 12:27:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:27:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006671444298645524 of space, bias 1.0, pg target 0.1934718846607202 quantized to 32 (current 32)
Jan 29 12:27:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:27:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4728638796226629e-06 of space, bias 4.0, pg target 0.0017085221003622889 quantized to 16 (current 16)
Jan 29 12:27:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:27:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:27:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:27:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011064783160773588 quantized to 32 (current 32)
Jan 29 12:27:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:27:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012171261476850949 quantized to 32 (current 32)
Jan 29 12:27:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:27:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:27:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:27:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00014753044214364783 quantized to 32 (current 32)
Jan 29 12:27:51 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:27:51 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e376 do_prune osdmap full prune enabled
Jan 29 12:27:51 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e377 e377: 3 total, 3 up, 3 in
Jan 29 12:27:51 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e377: 3 total, 3 up, 3 in
Jan 29 12:27:51 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1514: 305 pgs: 2 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 295 active+clean; 2.1 GiB data, 2.3 GiB used, 58 GiB / 60 GiB avail; 5.0 MiB/s rd, 7.9 KiB/s wr, 218 op/s
Jan 29 12:27:51 np0005601226 podman[263639]: 2026-01-29 17:27:51.889229338 +0000 UTC m=+0.019452112 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:27:52 np0005601226 podman[263639]: 2026-01-29 17:27:52.268756414 +0000 UTC m=+0.398979188 container create 3e6df90f06c83ed47b52a4c20481d7f7f615840abc06a6da3b0c9d58754d50be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_diffie, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 29 12:27:52 np0005601226 systemd[1]: Started libpod-conmon-3e6df90f06c83ed47b52a4c20481d7f7f615840abc06a6da3b0c9d58754d50be.scope.
Jan 29 12:27:52 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:27:52 np0005601226 podman[263639]: 2026-01-29 17:27:52.327984019 +0000 UTC m=+0.458206803 container init 3e6df90f06c83ed47b52a4c20481d7f7f615840abc06a6da3b0c9d58754d50be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:27:52 np0005601226 podman[263639]: 2026-01-29 17:27:52.334162784 +0000 UTC m=+0.464385548 container start 3e6df90f06c83ed47b52a4c20481d7f7f615840abc06a6da3b0c9d58754d50be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_diffie, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 29 12:27:52 np0005601226 cool_diffie[263657]: 167 167
Jan 29 12:27:52 np0005601226 systemd[1]: libpod-3e6df90f06c83ed47b52a4c20481d7f7f615840abc06a6da3b0c9d58754d50be.scope: Deactivated successfully.
Jan 29 12:27:52 np0005601226 podman[263639]: 2026-01-29 17:27:52.340274497 +0000 UTC m=+0.470497551 container attach 3e6df90f06c83ed47b52a4c20481d7f7f615840abc06a6da3b0c9d58754d50be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_diffie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:27:52 np0005601226 podman[263639]: 2026-01-29 17:27:52.340575086 +0000 UTC m=+0.470797850 container died 3e6df90f06c83ed47b52a4c20481d7f7f615840abc06a6da3b0c9d58754d50be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_diffie, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:27:52 np0005601226 podman[263653]: 2026-01-29 17:27:52.350654116 +0000 UTC m=+0.051640083 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 29 12:27:52 np0005601226 systemd[1]: var-lib-containers-storage-overlay-afe9c571886b09e53e644771a869b6e3cb4406c5e7cb8004257d4af5b9e65eba-merged.mount: Deactivated successfully.
Jan 29 12:27:52 np0005601226 podman[263639]: 2026-01-29 17:27:52.375805819 +0000 UTC m=+0.506028583 container remove 3e6df90f06c83ed47b52a4c20481d7f7f615840abc06a6da3b0c9d58754d50be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:27:52 np0005601226 systemd[1]: libpod-conmon-3e6df90f06c83ed47b52a4c20481d7f7f615840abc06a6da3b0c9d58754d50be.scope: Deactivated successfully.
Jan 29 12:27:52 np0005601226 podman[263656]: 2026-01-29 17:27:52.398541907 +0000 UTC m=+0.099195896 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 29 12:27:52 np0005601226 podman[263723]: 2026-01-29 17:27:52.495534182 +0000 UTC m=+0.038318696 container create 3c4e661de95b96df1a7fd2d591ce5f3bad5309709df4029b61409ca634618aa4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_beaver, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:27:52 np0005601226 systemd[1]: Started libpod-conmon-3c4e661de95b96df1a7fd2d591ce5f3bad5309709df4029b61409ca634618aa4.scope.
Jan 29 12:27:52 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:27:52 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd1df298cdde0673951ed62de9960214a0657808e24fd056fbca937a96a4c667/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:27:52 np0005601226 podman[263723]: 2026-01-29 17:27:52.478047245 +0000 UTC m=+0.020831779 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:27:52 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd1df298cdde0673951ed62de9960214a0657808e24fd056fbca937a96a4c667/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:27:52 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd1df298cdde0673951ed62de9960214a0657808e24fd056fbca937a96a4c667/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:27:52 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd1df298cdde0673951ed62de9960214a0657808e24fd056fbca937a96a4c667/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:27:52 np0005601226 podman[263723]: 2026-01-29 17:27:52.59335206 +0000 UTC m=+0.136136584 container init 3c4e661de95b96df1a7fd2d591ce5f3bad5309709df4029b61409ca634618aa4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_beaver, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:27:52 np0005601226 podman[263723]: 2026-01-29 17:27:52.598148229 +0000 UTC m=+0.140932743 container start 3c4e661de95b96df1a7fd2d591ce5f3bad5309709df4029b61409ca634618aa4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_beaver, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 29 12:27:52 np0005601226 podman[263723]: 2026-01-29 17:27:52.618674557 +0000 UTC m=+0.161459111 container attach 3c4e661de95b96df1a7fd2d591ce5f3bad5309709df4029b61409ca634618aa4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_beaver, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:27:52 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e377 do_prune osdmap full prune enabled
Jan 29 12:27:52 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e378 e378: 3 total, 3 up, 3 in
Jan 29 12:27:52 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e378: 3 total, 3 up, 3 in
Jan 29 12:27:52 np0005601226 charming_beaver[263740]: {
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:    "0": [
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:        {
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:            "devices": [
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:                "/dev/loop3"
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:            ],
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:            "lv_name": "ceph_lv0",
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:            "lv_size": "21470642176",
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:            "lv_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:            "name": "ceph_lv0",
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:            "tags": {
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:                "ceph.block_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:                "ceph.cluster_name": "ceph",
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:                "ceph.crush_device_class": "",
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:                "ceph.encrypted": "0",
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:                "ceph.objectstore": "bluestore",
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:                "ceph.osd_fsid": "0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1",
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:                "ceph.osd_id": "0",
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:                "ceph.type": "block",
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:                "ceph.vdo": "0",
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:                "ceph.with_tpm": "0"
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:            },
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:            "type": "block",
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:            "vg_name": "ceph_vg0"
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:        }
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:    ],
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:    "1": [
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:        {
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:            "devices": [
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:                "/dev/loop4"
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:            ],
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:            "lv_name": "ceph_lv1",
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:            "lv_size": "21470642176",
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b59b9ee3-7bef-4274-a8bf-0f9cce011ae7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:            "lv_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:            "name": "ceph_lv1",
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:            "tags": {
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:                "ceph.block_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:                "ceph.cluster_name": "ceph",
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:                "ceph.crush_device_class": "",
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:                "ceph.encrypted": "0",
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:                "ceph.objectstore": "bluestore",
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:                "ceph.osd_fsid": "b59b9ee3-7bef-4274-a8bf-0f9cce011ae7",
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:                "ceph.osd_id": "1",
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:                "ceph.type": "block",
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:                "ceph.vdo": "0",
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:                "ceph.with_tpm": "0"
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:            },
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:            "type": "block",
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:            "vg_name": "ceph_vg1"
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:        }
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:    ],
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:    "2": [
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:        {
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:            "devices": [
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:                "/dev/loop5"
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:            ],
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:            "lv_name": "ceph_lv2",
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:            "lv_size": "21470642176",
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=791d3808-828d-4f85-a3de-28df49f6a6ef,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:            "lv_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:            "name": "ceph_lv2",
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:            "tags": {
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:                "ceph.block_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:                "ceph.cluster_name": "ceph",
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:                "ceph.crush_device_class": "",
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:                "ceph.encrypted": "0",
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:                "ceph.objectstore": "bluestore",
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:                "ceph.osd_fsid": "791d3808-828d-4f85-a3de-28df49f6a6ef",
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:                "ceph.osd_id": "2",
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:                "ceph.type": "block",
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:                "ceph.vdo": "0",
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:                "ceph.with_tpm": "0"
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:            },
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:            "type": "block",
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:            "vg_name": "ceph_vg2"
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:        }
Jan 29 12:27:52 np0005601226 charming_beaver[263740]:    ]
Jan 29 12:27:52 np0005601226 charming_beaver[263740]: }
Jan 29 12:27:52 np0005601226 systemd[1]: libpod-3c4e661de95b96df1a7fd2d591ce5f3bad5309709df4029b61409ca634618aa4.scope: Deactivated successfully.
Jan 29 12:27:52 np0005601226 podman[263723]: 2026-01-29 17:27:52.879785005 +0000 UTC m=+0.422569519 container died 3c4e661de95b96df1a7fd2d591ce5f3bad5309709df4029b61409ca634618aa4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_beaver, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:27:52 np0005601226 systemd[1]: var-lib-containers-storage-overlay-cd1df298cdde0673951ed62de9960214a0657808e24fd056fbca937a96a4c667-merged.mount: Deactivated successfully.
Jan 29 12:27:53 np0005601226 podman[263723]: 2026-01-29 17:27:53.01153316 +0000 UTC m=+0.554317674 container remove 3c4e661de95b96df1a7fd2d591ce5f3bad5309709df4029b61409ca634618aa4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_beaver, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 12:27:53 np0005601226 systemd[1]: libpod-conmon-3c4e661de95b96df1a7fd2d591ce5f3bad5309709df4029b61409ca634618aa4.scope: Deactivated successfully.
Jan 29 12:27:53 np0005601226 podman[263828]: 2026-01-29 17:27:53.448117323 +0000 UTC m=+0.042312844 container create 354b28fab9265276250c65d861f4f2515cf0d26b42ebcc32649e7fbb6615b3ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_lederberg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:27:53 np0005601226 systemd[1]: Started libpod-conmon-354b28fab9265276250c65d861f4f2515cf0d26b42ebcc32649e7fbb6615b3ff.scope.
Jan 29 12:27:53 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:27:53 np0005601226 podman[263828]: 2026-01-29 17:27:53.517779307 +0000 UTC m=+0.111974848 container init 354b28fab9265276250c65d861f4f2515cf0d26b42ebcc32649e7fbb6615b3ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_lederberg, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 29 12:27:53 np0005601226 podman[263828]: 2026-01-29 17:27:53.522648297 +0000 UTC m=+0.116843818 container start 354b28fab9265276250c65d861f4f2515cf0d26b42ebcc32649e7fbb6615b3ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_lederberg, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:27:53 np0005601226 systemd[1]: libpod-354b28fab9265276250c65d861f4f2515cf0d26b42ebcc32649e7fbb6615b3ff.scope: Deactivated successfully.
Jan 29 12:27:53 np0005601226 busy_lederberg[263844]: 167 167
Jan 29 12:27:53 np0005601226 conmon[263844]: conmon 354b28fab9265276250c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-354b28fab9265276250c65d861f4f2515cf0d26b42ebcc32649e7fbb6615b3ff.scope/container/memory.events
Jan 29 12:27:53 np0005601226 podman[263828]: 2026-01-29 17:27:53.432649049 +0000 UTC m=+0.026844600 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:27:53 np0005601226 podman[263828]: 2026-01-29 17:27:53.532984384 +0000 UTC m=+0.127179935 container attach 354b28fab9265276250c65d861f4f2515cf0d26b42ebcc32649e7fbb6615b3ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_lederberg, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:27:53 np0005601226 podman[263828]: 2026-01-29 17:27:53.533387024 +0000 UTC m=+0.127582545 container died 354b28fab9265276250c65d861f4f2515cf0d26b42ebcc32649e7fbb6615b3ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 29 12:27:53 np0005601226 systemd[1]: var-lib-containers-storage-overlay-486ad5244cde8046224fbd2a0634c837231526e4150d107b93bf8dcec7514b91-merged.mount: Deactivated successfully.
Jan 29 12:27:53 np0005601226 podman[263828]: 2026-01-29 17:27:53.569959464 +0000 UTC m=+0.164154985 container remove 354b28fab9265276250c65d861f4f2515cf0d26b42ebcc32649e7fbb6615b3ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_lederberg, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 29 12:27:53 np0005601226 systemd[1]: libpod-conmon-354b28fab9265276250c65d861f4f2515cf0d26b42ebcc32649e7fbb6615b3ff.scope: Deactivated successfully.
Jan 29 12:27:53 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:27:53 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1772666699' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:27:53 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:27:53 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1772666699' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:27:53 np0005601226 podman[263870]: 2026-01-29 17:27:53.702612902 +0000 UTC m=+0.048219360 container create 67d9db68005ebdc936cce37a686e0f875408cf7b477fdc3b8c452b3c7a202cfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_mahavira, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 29 12:27:53 np0005601226 systemd[1]: Started libpod-conmon-67d9db68005ebdc936cce37a686e0f875408cf7b477fdc3b8c452b3c7a202cfa.scope.
Jan 29 12:27:53 np0005601226 podman[263870]: 2026-01-29 17:27:53.679390092 +0000 UTC m=+0.024996650 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:27:53 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:27:53 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02d3fc686ad3a4b3aba2d74e52eb74f040ea79982ff3a978b7ec053f153f01b0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:27:53 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02d3fc686ad3a4b3aba2d74e52eb74f040ea79982ff3a978b7ec053f153f01b0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:27:53 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02d3fc686ad3a4b3aba2d74e52eb74f040ea79982ff3a978b7ec053f153f01b0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:27:53 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02d3fc686ad3a4b3aba2d74e52eb74f040ea79982ff3a978b7ec053f153f01b0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:27:53 np0005601226 podman[263870]: 2026-01-29 17:27:53.814414424 +0000 UTC m=+0.160020972 container init 67d9db68005ebdc936cce37a686e0f875408cf7b477fdc3b8c452b3c7a202cfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 29 12:27:53 np0005601226 podman[263870]: 2026-01-29 17:27:53.820407544 +0000 UTC m=+0.166014022 container start 67d9db68005ebdc936cce37a686e0f875408cf7b477fdc3b8c452b3c7a202cfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_mahavira, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 29 12:27:53 np0005601226 podman[263870]: 2026-01-29 17:27:53.823988641 +0000 UTC m=+0.169595169 container attach 67d9db68005ebdc936cce37a686e0f875408cf7b477fdc3b8c452b3c7a202cfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 29 12:27:53 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e378 do_prune osdmap full prune enabled
Jan 29 12:27:53 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e379 e379: 3 total, 3 up, 3 in
Jan 29 12:27:53 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1516: 305 pgs: 2 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 295 active+clean; 1.6 GiB data, 2.0 GiB used, 58 GiB / 60 GiB avail; 94 KiB/s rd, 877 KiB/s wr, 161 op/s
Jan 29 12:27:53 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e379: 3 total, 3 up, 3 in
Jan 29 12:27:54 np0005601226 lvm[263964]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 12:27:54 np0005601226 lvm[263965]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 12:27:54 np0005601226 lvm[263965]: VG ceph_vg1 finished
Jan 29 12:27:54 np0005601226 lvm[263964]: VG ceph_vg0 finished
Jan 29 12:27:54 np0005601226 lvm[263967]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 12:27:54 np0005601226 lvm[263967]: VG ceph_vg2 finished
Jan 29 12:27:54 np0005601226 upbeat_mahavira[263886]: {}
Jan 29 12:27:54 np0005601226 systemd[1]: libpod-67d9db68005ebdc936cce37a686e0f875408cf7b477fdc3b8c452b3c7a202cfa.scope: Deactivated successfully.
Jan 29 12:27:54 np0005601226 podman[263870]: 2026-01-29 17:27:54.558705971 +0000 UTC m=+0.904312439 container died 67d9db68005ebdc936cce37a686e0f875408cf7b477fdc3b8c452b3c7a202cfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True)
Jan 29 12:27:54 np0005601226 systemd[1]: libpod-67d9db68005ebdc936cce37a686e0f875408cf7b477fdc3b8c452b3c7a202cfa.scope: Consumed 1.049s CPU time.
Jan 29 12:27:54 np0005601226 systemd[1]: var-lib-containers-storage-overlay-02d3fc686ad3a4b3aba2d74e52eb74f040ea79982ff3a978b7ec053f153f01b0-merged.mount: Deactivated successfully.
Jan 29 12:27:54 np0005601226 podman[263870]: 2026-01-29 17:27:54.599702258 +0000 UTC m=+0.945308726 container remove 67d9db68005ebdc936cce37a686e0f875408cf7b477fdc3b8c452b3c7a202cfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_mahavira, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 29 12:27:54 np0005601226 systemd[1]: libpod-conmon-67d9db68005ebdc936cce37a686e0f875408cf7b477fdc3b8c452b3c7a202cfa.scope: Deactivated successfully.
Jan 29 12:27:54 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 12:27:54 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:27:54 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 12:27:54 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:27:54 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:27:54 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:27:55 np0005601226 nova_compute[239456]: 2026-01-29 17:27:55.473 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:27:55 np0005601226 nova_compute[239456]: 2026-01-29 17:27:55.566 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:27:55 np0005601226 nova_compute[239456]: 2026-01-29 17:27:55.648 239460 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769707660.6476097, 12438fc6-4f98-42dc-a5df-a9d18dd066b7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:27:55 np0005601226 nova_compute[239456]: 2026-01-29 17:27:55.649 239460 INFO nova.compute.manager [-] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] VM Stopped (Lifecycle Event)#033[00m
Jan 29 12:27:55 np0005601226 nova_compute[239456]: 2026-01-29 17:27:55.686 239460 DEBUG nova.compute.manager [None req-e81a115e-a25e-4068-a562-a790037b85ae - - - - - -] [instance: 12438fc6-4f98-42dc-a5df-a9d18dd066b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:27:55 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1518: 305 pgs: 305 active+clean; 134 MiB data, 778 MiB used, 59 GiB / 60 GiB avail; 137 KiB/s rd, 4.1 MiB/s wr, 298 op/s
Jan 29 12:27:55 np0005601226 nova_compute[239456]: 2026-01-29 17:27:55.957 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:27:56 np0005601226 nova_compute[239456]: 2026-01-29 17:27:56.270 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:27:56 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:27:56 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e379 do_prune osdmap full prune enabled
Jan 29 12:27:56 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e380 e380: 3 total, 3 up, 3 in
Jan 29 12:27:56 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e380: 3 total, 3 up, 3 in
Jan 29 12:27:57 np0005601226 nova_compute[239456]: 2026-01-29 17:27:57.682 239460 DEBUG oslo_concurrency.lockutils [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Acquiring lock "57fdc12b-f0f4-4f34-838a-f32c817ad266" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:27:57 np0005601226 nova_compute[239456]: 2026-01-29 17:27:57.682 239460 DEBUG oslo_concurrency.lockutils [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "57fdc12b-f0f4-4f34-838a-f32c817ad266" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:27:57 np0005601226 nova_compute[239456]: 2026-01-29 17:27:57.698 239460 DEBUG nova.compute.manager [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 29 12:27:57 np0005601226 nova_compute[239456]: 2026-01-29 17:27:57.775 239460 DEBUG oslo_concurrency.lockutils [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:27:57 np0005601226 nova_compute[239456]: 2026-01-29 17:27:57.776 239460 DEBUG oslo_concurrency.lockutils [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:27:57 np0005601226 nova_compute[239456]: 2026-01-29 17:27:57.781 239460 DEBUG nova.virt.hardware [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 29 12:27:57 np0005601226 nova_compute[239456]: 2026-01-29 17:27:57.781 239460 INFO nova.compute.claims [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 29 12:27:57 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1520: 305 pgs: 305 active+clean; 134 MiB data, 778 MiB used, 59 GiB / 60 GiB avail; 114 KiB/s rd, 3.5 MiB/s wr, 247 op/s
Jan 29 12:27:57 np0005601226 nova_compute[239456]: 2026-01-29 17:27:57.883 239460 DEBUG oslo_concurrency.processutils [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:27:58 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:27:58 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1290870729' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:27:58 np0005601226 nova_compute[239456]: 2026-01-29 17:27:58.406 239460 DEBUG oslo_concurrency.processutils [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:27:58 np0005601226 nova_compute[239456]: 2026-01-29 17:27:58.411 239460 DEBUG nova.compute.provider_tree [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:27:58 np0005601226 nova_compute[239456]: 2026-01-29 17:27:58.430 239460 DEBUG nova.scheduler.client.report [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:27:58 np0005601226 nova_compute[239456]: 2026-01-29 17:27:58.453 239460 DEBUG oslo_concurrency.lockutils [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.678s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:27:58 np0005601226 nova_compute[239456]: 2026-01-29 17:27:58.455 239460 DEBUG nova.compute.manager [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 29 12:27:58 np0005601226 nova_compute[239456]: 2026-01-29 17:27:58.509 239460 DEBUG nova.compute.manager [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 29 12:27:58 np0005601226 nova_compute[239456]: 2026-01-29 17:27:58.510 239460 DEBUG nova.network.neutron [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 29 12:27:58 np0005601226 nova_compute[239456]: 2026-01-29 17:27:58.527 239460 INFO nova.virt.libvirt.driver [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 29 12:27:58 np0005601226 nova_compute[239456]: 2026-01-29 17:27:58.545 239460 DEBUG nova.compute.manager [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 29 12:27:58 np0005601226 nova_compute[239456]: 2026-01-29 17:27:58.581 239460 INFO nova.virt.block_device [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] Booting with volume snapshot b6d7ea63-b2e9-4d60-b2bc-b9c147f14392 at /dev/vda#033[00m
Jan 29 12:27:59 np0005601226 nova_compute[239456]: 2026-01-29 17:27:59.068 239460 DEBUG nova.policy [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3901089a059c4bdb8d0497398873d2f1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '420f46ae230d4c529afe366a1b780921', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 29 12:27:59 np0005601226 nova_compute[239456]: 2026-01-29 17:27:59.561 239460 DEBUG nova.network.neutron [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] Successfully created port: b450c7c7-dbc1-4971-97b1-228160e7866c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 29 12:27:59 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1521: 305 pgs: 305 active+clean; 134 MiB data, 410 MiB used, 60 GiB / 60 GiB avail; 97 KiB/s rd, 3.0 MiB/s wr, 210 op/s
Jan 29 12:28:00 np0005601226 nova_compute[239456]: 2026-01-29 17:28:00.380 239460 DEBUG nova.network.neutron [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] Successfully updated port: b450c7c7-dbc1-4971-97b1-228160e7866c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 29 12:28:00 np0005601226 nova_compute[239456]: 2026-01-29 17:28:00.397 239460 DEBUG oslo_concurrency.lockutils [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Acquiring lock "refresh_cache-57fdc12b-f0f4-4f34-838a-f32c817ad266" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:28:00 np0005601226 nova_compute[239456]: 2026-01-29 17:28:00.398 239460 DEBUG oslo_concurrency.lockutils [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Acquired lock "refresh_cache-57fdc12b-f0f4-4f34-838a-f32c817ad266" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:28:00 np0005601226 nova_compute[239456]: 2026-01-29 17:28:00.398 239460 DEBUG nova.network.neutron [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 29 12:28:00 np0005601226 nova_compute[239456]: 2026-01-29 17:28:00.494 239460 DEBUG nova.compute.manager [req-ec4edf7b-55ac-4920-a86d-7ad11bd99ee0 req-56392a89-cb70-4751-98d1-bfad5143f04b 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] Received event network-changed-b450c7c7-dbc1-4971-97b1-228160e7866c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:28:00 np0005601226 nova_compute[239456]: 2026-01-29 17:28:00.494 239460 DEBUG nova.compute.manager [req-ec4edf7b-55ac-4920-a86d-7ad11bd99ee0 req-56392a89-cb70-4751-98d1-bfad5143f04b 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] Refreshing instance network info cache due to event network-changed-b450c7c7-dbc1-4971-97b1-228160e7866c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 29 12:28:00 np0005601226 nova_compute[239456]: 2026-01-29 17:28:00.494 239460 DEBUG oslo_concurrency.lockutils [req-ec4edf7b-55ac-4920-a86d-7ad11bd99ee0 req-56392a89-cb70-4751-98d1-bfad5143f04b 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "refresh_cache-57fdc12b-f0f4-4f34-838a-f32c817ad266" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:28:00 np0005601226 nova_compute[239456]: 2026-01-29 17:28:00.570 239460 DEBUG nova.network.neutron [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 29 12:28:00 np0005601226 nova_compute[239456]: 2026-01-29 17:28:00.959 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:28:01 np0005601226 nova_compute[239456]: 2026-01-29 17:28:01.274 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:28:01 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:28:01 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2847874250' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:28:01 np0005601226 nova_compute[239456]: 2026-01-29 17:28:01.364 239460 DEBUG nova.network.neutron [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] Updating instance_info_cache with network_info: [{"id": "b450c7c7-dbc1-4971-97b1-228160e7866c", "address": "fa:16:3e:27:aa:d4", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb450c7c7-db", "ovs_interfaceid": "b450c7c7-dbc1-4971-97b1-228160e7866c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:28:01 np0005601226 nova_compute[239456]: 2026-01-29 17:28:01.417 239460 DEBUG oslo_concurrency.lockutils [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Releasing lock "refresh_cache-57fdc12b-f0f4-4f34-838a-f32c817ad266" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:28:01 np0005601226 nova_compute[239456]: 2026-01-29 17:28:01.417 239460 DEBUG nova.compute.manager [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] Instance network_info: |[{"id": "b450c7c7-dbc1-4971-97b1-228160e7866c", "address": "fa:16:3e:27:aa:d4", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb450c7c7-db", "ovs_interfaceid": "b450c7c7-dbc1-4971-97b1-228160e7866c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 29 12:28:01 np0005601226 nova_compute[239456]: 2026-01-29 17:28:01.418 239460 DEBUG oslo_concurrency.lockutils [req-ec4edf7b-55ac-4920-a86d-7ad11bd99ee0 req-56392a89-cb70-4751-98d1-bfad5143f04b 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquired lock "refresh_cache-57fdc12b-f0f4-4f34-838a-f32c817ad266" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:28:01 np0005601226 nova_compute[239456]: 2026-01-29 17:28:01.418 239460 DEBUG nova.network.neutron [req-ec4edf7b-55ac-4920-a86d-7ad11bd99ee0 req-56392a89-cb70-4751-98d1-bfad5143f04b 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] Refreshing network info cache for port b450c7c7-dbc1-4971-97b1-228160e7866c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 29 12:28:01 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e380 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:28:01 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1522: 305 pgs: 305 active+clean; 134 MiB data, 370 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 2.2 MiB/s wr, 110 op/s
Jan 29 12:28:02 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e380 do_prune osdmap full prune enabled
Jan 29 12:28:02 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e381 e381: 3 total, 3 up, 3 in
Jan 29 12:28:02 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e381: 3 total, 3 up, 3 in
Jan 29 12:28:02 np0005601226 nova_compute[239456]: 2026-01-29 17:28:02.471 239460 DEBUG nova.network.neutron [req-ec4edf7b-55ac-4920-a86d-7ad11bd99ee0 req-56392a89-cb70-4751-98d1-bfad5143f04b 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] Updated VIF entry in instance network info cache for port b450c7c7-dbc1-4971-97b1-228160e7866c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 29 12:28:02 np0005601226 nova_compute[239456]: 2026-01-29 17:28:02.471 239460 DEBUG nova.network.neutron [req-ec4edf7b-55ac-4920-a86d-7ad11bd99ee0 req-56392a89-cb70-4751-98d1-bfad5143f04b 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] Updating instance_info_cache with network_info: [{"id": "b450c7c7-dbc1-4971-97b1-228160e7866c", "address": "fa:16:3e:27:aa:d4", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb450c7c7-db", "ovs_interfaceid": "b450c7c7-dbc1-4971-97b1-228160e7866c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:28:02 np0005601226 nova_compute[239456]: 2026-01-29 17:28:02.489 239460 DEBUG oslo_concurrency.lockutils [req-ec4edf7b-55ac-4920-a86d-7ad11bd99ee0 req-56392a89-cb70-4751-98d1-bfad5143f04b 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Releasing lock "refresh_cache-57fdc12b-f0f4-4f34-838a-f32c817ad266" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:28:03 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e381 do_prune osdmap full prune enabled
Jan 29 12:28:03 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e382 e382: 3 total, 3 up, 3 in
Jan 29 12:28:03 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e382: 3 total, 3 up, 3 in
Jan 29 12:28:03 np0005601226 nova_compute[239456]: 2026-01-29 17:28:03.254 239460 DEBUG os_brick.utils [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 29 12:28:03 np0005601226 nova_compute[239456]: 2026-01-29 17:28:03.255 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:28:03 np0005601226 nova_compute[239456]: 2026-01-29 17:28:03.268 249612 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:28:03 np0005601226 nova_compute[239456]: 2026-01-29 17:28:03.269 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[3f9248f5-6e9e-4780-9227-2aa33145e2ce]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:28:03 np0005601226 nova_compute[239456]: 2026-01-29 17:28:03.270 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:28:03 np0005601226 nova_compute[239456]: 2026-01-29 17:28:03.278 249612 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:28:03 np0005601226 nova_compute[239456]: 2026-01-29 17:28:03.278 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[43919c17-060c-479c-9801-12c5054cbaea]: (4, ('InitiatorName=iqn.1994-05.com.redhat:29fa340538f', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:28:03 np0005601226 nova_compute[239456]: 2026-01-29 17:28:03.279 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:28:03 np0005601226 nova_compute[239456]: 2026-01-29 17:28:03.287 249612 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:28:03 np0005601226 nova_compute[239456]: 2026-01-29 17:28:03.288 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[8fa160e4-6609-4247-8763-b294c8769f99]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:28:03 np0005601226 nova_compute[239456]: 2026-01-29 17:28:03.289 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[23c47538-771e-4676-b2cb-be129ec94ca9]: (4, '3d58286e-1b14-486e-8cad-0bdb2d2969c4') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:28:03 np0005601226 nova_compute[239456]: 2026-01-29 17:28:03.289 239460 DEBUG oslo_concurrency.processutils [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:28:03 np0005601226 nova_compute[239456]: 2026-01-29 17:28:03.311 239460 DEBUG oslo_concurrency.processutils [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] CMD "nvme version" returned: 0 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:28:03 np0005601226 nova_compute[239456]: 2026-01-29 17:28:03.313 239460 DEBUG os_brick.initiator.connectors.lightos [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 29 12:28:03 np0005601226 nova_compute[239456]: 2026-01-29 17:28:03.313 239460 DEBUG os_brick.initiator.connectors.lightos [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 29 12:28:03 np0005601226 nova_compute[239456]: 2026-01-29 17:28:03.313 239460 DEBUG os_brick.initiator.connectors.lightos [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 29 12:28:03 np0005601226 nova_compute[239456]: 2026-01-29 17:28:03.314 239460 DEBUG os_brick.utils [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] <== get_connector_properties: return (58ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:29fa340538f', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '3d58286e-1b14-486e-8cad-0bdb2d2969c4', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 29 12:28:03 np0005601226 nova_compute[239456]: 2026-01-29 17:28:03.314 239460 DEBUG nova.virt.block_device [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] Updating existing volume attachment record: c363acf2-aa68-46f1-822f-328cb449c62e _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 29 12:28:03 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1525: 305 pgs: 305 active+clean; 134 MiB data, 370 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 2.1 KiB/s wr, 23 op/s
Jan 29 12:28:03 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:28:03 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3185820961' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:28:04 np0005601226 nova_compute[239456]: 2026-01-29 17:28:04.424 239460 DEBUG nova.compute.manager [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 29 12:28:04 np0005601226 nova_compute[239456]: 2026-01-29 17:28:04.426 239460 DEBUG nova.virt.libvirt.driver [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 29 12:28:04 np0005601226 nova_compute[239456]: 2026-01-29 17:28:04.427 239460 INFO nova.virt.libvirt.driver [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] Creating image(s)#033[00m
Jan 29 12:28:04 np0005601226 nova_compute[239456]: 2026-01-29 17:28:04.428 239460 DEBUG nova.virt.libvirt.driver [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Jan 29 12:28:04 np0005601226 nova_compute[239456]: 2026-01-29 17:28:04.429 239460 DEBUG nova.virt.libvirt.driver [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] Ensure instance console log exists: /var/lib/nova/instances/57fdc12b-f0f4-4f34-838a-f32c817ad266/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 29 12:28:04 np0005601226 nova_compute[239456]: 2026-01-29 17:28:04.429 239460 DEBUG oslo_concurrency.lockutils [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:28:04 np0005601226 nova_compute[239456]: 2026-01-29 17:28:04.430 239460 DEBUG oslo_concurrency.lockutils [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:28:04 np0005601226 nova_compute[239456]: 2026-01-29 17:28:04.431 239460 DEBUG oslo_concurrency.lockutils [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:28:04 np0005601226 nova_compute[239456]: 2026-01-29 17:28:04.435 239460 DEBUG nova.virt.libvirt.driver [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] Start _get_guest_xml network_info=[{"id": "b450c7c7-dbc1-4971-97b1-228160e7866c", "address": "fa:16:3e:27:aa:d4", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb450c7c7-db", "ovs_interfaceid": "b450c7c7-dbc1-4971-97b1-228160e7866c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'guest_format': None, 'mount_device': '/dev/vda', 'device_type': 'disk', 'disk_bus': 'virtio', 'attachment_id': 'c363acf2-aa68-46f1-822f-328cb449c62e', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-bedc6c02-5528-41c8-b963-60b21ff40cee', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'bedc6c02-5528-41c8-b963-60b21ff40cee', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '57fdc12b-f0f4-4f34-838a-f32c817ad266', 'attached_at': '', 'detached_at': '', 'volume_id': 'bedc6c02-5528-41c8-b963-60b21ff40cee', 'serial': 'bedc6c02-5528-41c8-b963-60b21ff40cee'}, 'delete_on_termination': True, 'boot_index': 0, 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 29 12:28:04 np0005601226 nova_compute[239456]: 2026-01-29 17:28:04.441 239460 WARNING nova.virt.libvirt.driver [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 29 12:28:04 np0005601226 nova_compute[239456]: 2026-01-29 17:28:04.446 239460 DEBUG nova.virt.libvirt.host [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 29 12:28:04 np0005601226 nova_compute[239456]: 2026-01-29 17:28:04.446 239460 DEBUG nova.virt.libvirt.host [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 29 12:28:04 np0005601226 nova_compute[239456]: 2026-01-29 17:28:04.449 239460 DEBUG nova.virt.libvirt.host [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 29 12:28:04 np0005601226 nova_compute[239456]: 2026-01-29 17:28:04.449 239460 DEBUG nova.virt.libvirt.host [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 29 12:28:04 np0005601226 nova_compute[239456]: 2026-01-29 17:28:04.450 239460 DEBUG nova.virt.libvirt.driver [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 29 12:28:04 np0005601226 nova_compute[239456]: 2026-01-29 17:28:04.450 239460 DEBUG nova.virt.hardware [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-29T17:13:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='e367d44e-e23e-4b8e-90d1-56d09c8403b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 29 12:28:04 np0005601226 nova_compute[239456]: 2026-01-29 17:28:04.451 239460 DEBUG nova.virt.hardware [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 29 12:28:04 np0005601226 nova_compute[239456]: 2026-01-29 17:28:04.451 239460 DEBUG nova.virt.hardware [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 29 12:28:04 np0005601226 nova_compute[239456]: 2026-01-29 17:28:04.452 239460 DEBUG nova.virt.hardware [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 29 12:28:04 np0005601226 nova_compute[239456]: 2026-01-29 17:28:04.452 239460 DEBUG nova.virt.hardware [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 29 12:28:04 np0005601226 nova_compute[239456]: 2026-01-29 17:28:04.452 239460 DEBUG nova.virt.hardware [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 29 12:28:04 np0005601226 nova_compute[239456]: 2026-01-29 17:28:04.452 239460 DEBUG nova.virt.hardware [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 29 12:28:04 np0005601226 nova_compute[239456]: 2026-01-29 17:28:04.453 239460 DEBUG nova.virt.hardware [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 29 12:28:04 np0005601226 nova_compute[239456]: 2026-01-29 17:28:04.453 239460 DEBUG nova.virt.hardware [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 29 12:28:04 np0005601226 nova_compute[239456]: 2026-01-29 17:28:04.453 239460 DEBUG nova.virt.hardware [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 29 12:28:04 np0005601226 nova_compute[239456]: 2026-01-29 17:28:04.454 239460 DEBUG nova.virt.hardware [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 29 12:28:04 np0005601226 nova_compute[239456]: 2026-01-29 17:28:04.479 239460 DEBUG nova.storage.rbd_utils [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] rbd image 57fdc12b-f0f4-4f34-838a-f32c817ad266_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 29 12:28:04 np0005601226 nova_compute[239456]: 2026-01-29 17:28:04.483 239460 DEBUG oslo_concurrency.processutils [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 29 12:28:05 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:28:05 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2842090461' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:28:05 np0005601226 nova_compute[239456]: 2026-01-29 17:28:05.051 239460 DEBUG oslo_concurrency.processutils [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 29 12:28:05 np0005601226 nova_compute[239456]: 2026-01-29 17:28:05.074 239460 DEBUG nova.virt.libvirt.vif [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-29T17:27:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1479068012',display_name='tempest-TestVolumeBootPattern-server-1479068012',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1479068012',id=16,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='420f46ae230d4c529afe366a1b780921',ramdisk_id='',reservation_id='r-ktbzd4he',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1871389491',owner_user_name='tempest-TestVolumeBootPattern-1871389491-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-29T17:27:58Z,user_data=None,user_id='3901089a059c4bdb8d0497398873d2f1',uuid=57fdc12b-f0f4-4f34-838a-f32c817ad266,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b450c7c7-dbc1-4971-97b1-228160e7866c", "address": "fa:16:3e:27:aa:d4", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb450c7c7-db", "ovs_interfaceid": "b450c7c7-dbc1-4971-97b1-228160e7866c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 29 12:28:05 np0005601226 nova_compute[239456]: 2026-01-29 17:28:05.075 239460 DEBUG nova.network.os_vif_util [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Converting VIF {"id": "b450c7c7-dbc1-4971-97b1-228160e7866c", "address": "fa:16:3e:27:aa:d4", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb450c7c7-db", "ovs_interfaceid": "b450c7c7-dbc1-4971-97b1-228160e7866c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 29 12:28:05 np0005601226 nova_compute[239456]: 2026-01-29 17:28:05.076 239460 DEBUG nova.network.os_vif_util [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:27:aa:d4,bridge_name='br-int',has_traffic_filtering=True,id=b450c7c7-dbc1-4971-97b1-228160e7866c,network=Network(3c08c304-2b32-4b44-ac2b-279bb8b2403b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb450c7c7-db') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 29 12:28:05 np0005601226 nova_compute[239456]: 2026-01-29 17:28:05.077 239460 DEBUG nova.objects.instance [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lazy-loading 'pci_devices' on Instance uuid 57fdc12b-f0f4-4f34-838a-f32c817ad266 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 29 12:28:05 np0005601226 nova_compute[239456]: 2026-01-29 17:28:05.092 239460 DEBUG nova.virt.libvirt.driver [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] End _get_guest_xml xml=<domain type="kvm">
Jan 29 12:28:05 np0005601226 nova_compute[239456]:  <uuid>57fdc12b-f0f4-4f34-838a-f32c817ad266</uuid>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:  <name>instance-00000010</name>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:  <memory>131072</memory>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:  <vcpu>1</vcpu>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:  <metadata>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 29 12:28:05 np0005601226 nova_compute[239456]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:      <nova:name>tempest-TestVolumeBootPattern-server-1479068012</nova:name>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:      <nova:creationTime>2026-01-29 17:28:04</nova:creationTime>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:      <nova:flavor name="m1.nano">
Jan 29 12:28:05 np0005601226 nova_compute[239456]:        <nova:memory>128</nova:memory>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:        <nova:disk>1</nova:disk>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:        <nova:swap>0</nova:swap>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:        <nova:ephemeral>0</nova:ephemeral>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:        <nova:vcpus>1</nova:vcpus>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:      </nova:flavor>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:      <nova:owner>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:        <nova:user uuid="3901089a059c4bdb8d0497398873d2f1">tempest-TestVolumeBootPattern-1871389491-project-member</nova:user>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:        <nova:project uuid="420f46ae230d4c529afe366a1b780921">tempest-TestVolumeBootPattern-1871389491</nova:project>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:      </nova:owner>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:      <nova:ports>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:        <nova:port uuid="b450c7c7-dbc1-4971-97b1-228160e7866c">
Jan 29 12:28:05 np0005601226 nova_compute[239456]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:        </nova:port>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:      </nova:ports>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:    </nova:instance>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:  </metadata>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:  <sysinfo type="smbios">
Jan 29 12:28:05 np0005601226 nova_compute[239456]:    <system>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:      <entry name="manufacturer">RDO</entry>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:      <entry name="product">OpenStack Compute</entry>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:      <entry name="serial">57fdc12b-f0f4-4f34-838a-f32c817ad266</entry>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:      <entry name="uuid">57fdc12b-f0f4-4f34-838a-f32c817ad266</entry>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:      <entry name="family">Virtual Machine</entry>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:    </system>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:  </sysinfo>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:  <os>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:    <boot dev="hd"/>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:    <smbios mode="sysinfo"/>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:  </os>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:  <features>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:    <acpi/>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:    <apic/>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:    <vmcoreinfo/>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:  </features>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:  <clock offset="utc">
Jan 29 12:28:05 np0005601226 nova_compute[239456]:    <timer name="pit" tickpolicy="delay"/>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:    <timer name="hpet" present="no"/>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:  </clock>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:  <cpu mode="host-model" match="exact">
Jan 29 12:28:05 np0005601226 nova_compute[239456]:    <topology sockets="1" cores="1" threads="1"/>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:  </cpu>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:  <devices>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:    <disk type="network" device="cdrom">
Jan 29 12:28:05 np0005601226 nova_compute[239456]:      <driver type="raw" cache="none"/>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:      <source protocol="rbd" name="vms/57fdc12b-f0f4-4f34-838a-f32c817ad266_disk.config">
Jan 29 12:28:05 np0005601226 nova_compute[239456]:        <host name="192.168.122.100" port="6789"/>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:      </source>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:      <auth username="openstack">
Jan 29 12:28:05 np0005601226 nova_compute[239456]:        <secret type="ceph" uuid="cc5c72e3-31e0-58b9-8731-456117d38f4a"/>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:      </auth>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:      <target dev="sda" bus="sata"/>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:    </disk>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:    <disk type="network" device="disk">
Jan 29 12:28:05 np0005601226 nova_compute[239456]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:      <source protocol="rbd" name="volumes/volume-bedc6c02-5528-41c8-b963-60b21ff40cee">
Jan 29 12:28:05 np0005601226 nova_compute[239456]:        <host name="192.168.122.100" port="6789"/>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:      </source>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:      <auth username="openstack">
Jan 29 12:28:05 np0005601226 nova_compute[239456]:        <secret type="ceph" uuid="cc5c72e3-31e0-58b9-8731-456117d38f4a"/>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:      </auth>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:      <target dev="vda" bus="virtio"/>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:      <serial>bedc6c02-5528-41c8-b963-60b21ff40cee</serial>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:    </disk>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:    <interface type="ethernet">
Jan 29 12:28:05 np0005601226 nova_compute[239456]:      <mac address="fa:16:3e:27:aa:d4"/>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:      <model type="virtio"/>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:      <driver name="vhost" rx_queue_size="512"/>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:      <mtu size="1442"/>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:      <target dev="tapb450c7c7-db"/>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:    </interface>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:    <serial type="pty">
Jan 29 12:28:05 np0005601226 nova_compute[239456]:      <log file="/var/lib/nova/instances/57fdc12b-f0f4-4f34-838a-f32c817ad266/console.log" append="off"/>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:    </serial>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:    <video>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:      <model type="virtio"/>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:    </video>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:    <input type="tablet" bus="usb"/>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:    <rng model="virtio">
Jan 29 12:28:05 np0005601226 nova_compute[239456]:      <backend model="random">/dev/urandom</backend>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:    </rng>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root"/>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:    <controller type="usb" index="0"/>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:    <memballoon model="virtio">
Jan 29 12:28:05 np0005601226 nova_compute[239456]:      <stats period="10"/>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:    </memballoon>
Jan 29 12:28:05 np0005601226 nova_compute[239456]:  </devices>
Jan 29 12:28:05 np0005601226 nova_compute[239456]: </domain>
Jan 29 12:28:05 np0005601226 nova_compute[239456]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 29 12:28:05 np0005601226 nova_compute[239456]: 2026-01-29 17:28:05.093 239460 DEBUG nova.compute.manager [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] Preparing to wait for external event network-vif-plugged-b450c7c7-dbc1-4971-97b1-228160e7866c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 29 12:28:05 np0005601226 nova_compute[239456]: 2026-01-29 17:28:05.093 239460 DEBUG oslo_concurrency.lockutils [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Acquiring lock "57fdc12b-f0f4-4f34-838a-f32c817ad266-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 12:28:05 np0005601226 nova_compute[239456]: 2026-01-29 17:28:05.093 239460 DEBUG oslo_concurrency.lockutils [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "57fdc12b-f0f4-4f34-838a-f32c817ad266-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 12:28:05 np0005601226 nova_compute[239456]: 2026-01-29 17:28:05.094 239460 DEBUG oslo_concurrency.lockutils [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "57fdc12b-f0f4-4f34-838a-f32c817ad266-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 12:28:05 np0005601226 nova_compute[239456]: 2026-01-29 17:28:05.095 239460 DEBUG nova.virt.libvirt.vif [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-29T17:27:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1479068012',display_name='tempest-TestVolumeBootPattern-server-1479068012',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1479068012',id=16,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='420f46ae230d4c529afe366a1b780921',ramdisk_id='',reservation_id='r-ktbzd4he',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1871389491',owner_user_name='tempest-TestVolumeBootPattern-1871389491-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-29T17:27:58Z,user_data=None,user_id='3901089a059c4bdb8d0497398873d2f1',uuid=57fdc12b-f0f4-4f34-838a-f32c817ad266,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b450c7c7-dbc1-4971-97b1-228160e7866c", "address": "fa:16:3e:27:aa:d4", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb450c7c7-db", "ovs_interfaceid": "b450c7c7-dbc1-4971-97b1-228160e7866c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 29 12:28:05 np0005601226 nova_compute[239456]: 2026-01-29 17:28:05.095 239460 DEBUG nova.network.os_vif_util [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Converting VIF {"id": "b450c7c7-dbc1-4971-97b1-228160e7866c", "address": "fa:16:3e:27:aa:d4", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb450c7c7-db", "ovs_interfaceid": "b450c7c7-dbc1-4971-97b1-228160e7866c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 29 12:28:05 np0005601226 nova_compute[239456]: 2026-01-29 17:28:05.096 239460 DEBUG nova.network.os_vif_util [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:27:aa:d4,bridge_name='br-int',has_traffic_filtering=True,id=b450c7c7-dbc1-4971-97b1-228160e7866c,network=Network(3c08c304-2b32-4b44-ac2b-279bb8b2403b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb450c7c7-db') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 29 12:28:05 np0005601226 nova_compute[239456]: 2026-01-29 17:28:05.096 239460 DEBUG os_vif [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:27:aa:d4,bridge_name='br-int',has_traffic_filtering=True,id=b450c7c7-dbc1-4971-97b1-228160e7866c,network=Network(3c08c304-2b32-4b44-ac2b-279bb8b2403b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb450c7c7-db') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 29 12:28:05 np0005601226 nova_compute[239456]: 2026-01-29 17:28:05.097 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:28:05 np0005601226 nova_compute[239456]: 2026-01-29 17:28:05.097 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 29 12:28:05 np0005601226 nova_compute[239456]: 2026-01-29 17:28:05.098 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 29 12:28:05 np0005601226 nova_compute[239456]: 2026-01-29 17:28:05.101 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:28:05 np0005601226 nova_compute[239456]: 2026-01-29 17:28:05.101 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb450c7c7-db, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 29 12:28:05 np0005601226 nova_compute[239456]: 2026-01-29 17:28:05.101 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb450c7c7-db, col_values=(('external_ids', {'iface-id': 'b450c7c7-dbc1-4971-97b1-228160e7866c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:27:aa:d4', 'vm-uuid': '57fdc12b-f0f4-4f34-838a-f32c817ad266'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 29 12:28:05 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e382 do_prune osdmap full prune enabled
Jan 29 12:28:05 np0005601226 nova_compute[239456]: 2026-01-29 17:28:05.103 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:28:05 np0005601226 NetworkManager[49020]: <info>  [1769707685.1048] manager: (tapb450c7c7-db): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/85)
Jan 29 12:28:05 np0005601226 nova_compute[239456]: 2026-01-29 17:28:05.105 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 29 12:28:05 np0005601226 nova_compute[239456]: 2026-01-29 17:28:05.109 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:28:05 np0005601226 nova_compute[239456]: 2026-01-29 17:28:05.110 239460 INFO os_vif [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:27:aa:d4,bridge_name='br-int',has_traffic_filtering=True,id=b450c7c7-dbc1-4971-97b1-228160e7866c,network=Network(3c08c304-2b32-4b44-ac2b-279bb8b2403b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb450c7c7-db')
Jan 29 12:28:05 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e383 e383: 3 total, 3 up, 3 in
Jan 29 12:28:05 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e383: 3 total, 3 up, 3 in
Jan 29 12:28:05 np0005601226 nova_compute[239456]: 2026-01-29 17:28:05.164 239460 DEBUG nova.virt.libvirt.driver [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 29 12:28:05 np0005601226 nova_compute[239456]: 2026-01-29 17:28:05.165 239460 DEBUG nova.virt.libvirt.driver [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 29 12:28:05 np0005601226 nova_compute[239456]: 2026-01-29 17:28:05.166 239460 DEBUG nova.virt.libvirt.driver [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] No VIF found with MAC fa:16:3e:27:aa:d4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 29 12:28:05 np0005601226 nova_compute[239456]: 2026-01-29 17:28:05.166 239460 INFO nova.virt.libvirt.driver [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] Using config drive
Jan 29 12:28:05 np0005601226 nova_compute[239456]: 2026-01-29 17:28:05.190 239460 DEBUG nova.storage.rbd_utils [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] rbd image 57fdc12b-f0f4-4f34-838a-f32c817ad266_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 29 12:28:05 np0005601226 nova_compute[239456]: 2026-01-29 17:28:05.669 239460 INFO nova.virt.libvirt.driver [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] Creating config drive at /var/lib/nova/instances/57fdc12b-f0f4-4f34-838a-f32c817ad266/disk.config
Jan 29 12:28:05 np0005601226 nova_compute[239456]: 2026-01-29 17:28:05.672 239460 DEBUG oslo_concurrency.processutils [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/57fdc12b-f0f4-4f34-838a-f32c817ad266/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvjv68ijb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 29 12:28:05 np0005601226 nova_compute[239456]: 2026-01-29 17:28:05.798 239460 DEBUG oslo_concurrency.processutils [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/57fdc12b-f0f4-4f34-838a-f32c817ad266/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvjv68ijb" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 29 12:28:05 np0005601226 nova_compute[239456]: 2026-01-29 17:28:05.822 239460 DEBUG nova.storage.rbd_utils [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] rbd image 57fdc12b-f0f4-4f34-838a-f32c817ad266_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:28:05 np0005601226 nova_compute[239456]: 2026-01-29 17:28:05.824 239460 DEBUG oslo_concurrency.processutils [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/57fdc12b-f0f4-4f34-838a-f32c817ad266/disk.config 57fdc12b-f0f4-4f34-838a-f32c817ad266_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:28:05 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1527: 305 pgs: 305 active+clean; 134 MiB data, 370 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 5.0 KiB/s wr, 57 op/s
Jan 29 12:28:05 np0005601226 nova_compute[239456]: 2026-01-29 17:28:05.936 239460 DEBUG oslo_concurrency.processutils [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/57fdc12b-f0f4-4f34-838a-f32c817ad266/disk.config 57fdc12b-f0f4-4f34-838a-f32c817ad266_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.111s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:28:05 np0005601226 nova_compute[239456]: 2026-01-29 17:28:05.937 239460 INFO nova.virt.libvirt.driver [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] Deleting local config drive /var/lib/nova/instances/57fdc12b-f0f4-4f34-838a-f32c817ad266/disk.config because it was imported into RBD.#033[00m
Jan 29 12:28:05 np0005601226 kernel: tapb450c7c7-db: entered promiscuous mode
Jan 29 12:28:05 np0005601226 NetworkManager[49020]: <info>  [1769707685.9793] manager: (tapb450c7c7-db): new Tun device (/org/freedesktop/NetworkManager/Devices/86)
Jan 29 12:28:05 np0005601226 ovn_controller[145556]: 2026-01-29T17:28:05Z|00155|binding|INFO|Claiming lport b450c7c7-dbc1-4971-97b1-228160e7866c for this chassis.
Jan 29 12:28:05 np0005601226 ovn_controller[145556]: 2026-01-29T17:28:05Z|00156|binding|INFO|b450c7c7-dbc1-4971-97b1-228160e7866c: Claiming fa:16:3e:27:aa:d4 10.100.0.11
Jan 29 12:28:05 np0005601226 nova_compute[239456]: 2026-01-29 17:28:05.981 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:28:05 np0005601226 nova_compute[239456]: 2026-01-29 17:28:05.983 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:28:05 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:28:05.994 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:27:aa:d4 10.100.0.11'], port_security=['fa:16:3e:27:aa:d4 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '57fdc12b-f0f4-4f34-838a-f32c817ad266', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3c08c304-2b32-4b44-ac2b-279bb8b2403b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '420f46ae230d4c529afe366a1b780921', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9be82e42-3d47-49cf-9a44-d003a5c81174', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=43b4da70-6867-4e05-b172-1e52c878ce1d, chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], logical_port=b450c7c7-dbc1-4971-97b1-228160e7866c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:28:05 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:28:05.997 155625 INFO neutron.agent.ovn.metadata.agent [-] Port b450c7c7-dbc1-4971-97b1-228160e7866c in datapath 3c08c304-2b32-4b44-ac2b-279bb8b2403b bound to our chassis#033[00m
Jan 29 12:28:06 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:28:06.001 155625 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3c08c304-2b32-4b44-ac2b-279bb8b2403b#033[00m
Jan 29 12:28:06 np0005601226 nova_compute[239456]: 2026-01-29 17:28:06.009 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:28:06 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:28:06.010 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[f3998972-096e-4df2-8605-af9671a55871]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:28:06 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:28:06.011 155625 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3c08c304-21 in ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 29 12:28:06 np0005601226 systemd-machined[207561]: New machine qemu-16-instance-00000010.
Jan 29 12:28:06 np0005601226 ovn_controller[145556]: 2026-01-29T17:28:06Z|00157|binding|INFO|Setting lport b450c7c7-dbc1-4971-97b1-228160e7866c ovn-installed in OVS
Jan 29 12:28:06 np0005601226 ovn_controller[145556]: 2026-01-29T17:28:06Z|00158|binding|INFO|Setting lport b450c7c7-dbc1-4971-97b1-228160e7866c up in Southbound
Jan 29 12:28:06 np0005601226 nova_compute[239456]: 2026-01-29 17:28:06.014 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:28:06 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:28:06.014 246354 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3c08c304-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 29 12:28:06 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:28:06.014 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[6dc1e757-cbdf-45ed-bf80-943325bde25e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:28:06 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:28:06.017 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[10ee864c-ff12-4c10-8c87-746d2bc80192]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:28:06 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:28:06.024 156164 DEBUG oslo.privsep.daemon [-] privsep: reply[ea8c6fc8-74cf-4943-9e3b-9dd304d40d8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:28:06 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:28:06.033 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[8e4d15f9-c85f-49ca-8405-48eae2f9756f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:28:06 np0005601226 systemd[1]: Started Virtual Machine qemu-16-instance-00000010.
Jan 29 12:28:06 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:28:06.056 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[575536bb-4906-4df5-acc2-32d1d6b6371b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:28:06 np0005601226 systemd-udevd[264156]: Network interface NamePolicy= disabled on kernel command line.
Jan 29 12:28:06 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:28:06.060 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[ca9f51ad-5b0c-4dab-bbf5-9ef0a91b6bf3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:28:06 np0005601226 NetworkManager[49020]: <info>  [1769707686.0612] manager: (tap3c08c304-20): new Veth device (/org/freedesktop/NetworkManager/Devices/87)
Jan 29 12:28:06 np0005601226 systemd-udevd[264161]: Network interface NamePolicy= disabled on kernel command line.
Jan 29 12:28:06 np0005601226 NetworkManager[49020]: <info>  [1769707686.0711] device (tapb450c7c7-db): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 29 12:28:06 np0005601226 NetworkManager[49020]: <info>  [1769707686.0717] device (tapb450c7c7-db): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 29 12:28:06 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:28:06.086 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[2583ee4b-2ce8-4686-8a26-7d36f2dd37c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:28:06 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:28:06.089 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[4f3df997-0dd7-4646-939b-ef586ae60f05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:28:06 np0005601226 NetworkManager[49020]: <info>  [1769707686.1044] device (tap3c08c304-20): carrier: link connected
Jan 29 12:28:06 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:28:06.107 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[962cea6a-e685-45e9-b4e9-db66d22749d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:28:06 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:28:06.119 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[8f5f69e0-771d-465e-ab23-dfd2e1b5f347]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3c08c304-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:94:51:ac'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 54], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 503960, 'reachable_time': 35117, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264183, 'error': None, 'target': 'ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:28:06 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:28:06.132 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[e6472e3f-2963-430d-b20c-e909d96ff3cd]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe94:51ac'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 503960, 'tstamp': 503960}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264184, 'error': None, 'target': 'ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:28:06 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e383 do_prune osdmap full prune enabled
Jan 29 12:28:06 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:28:06.149 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[dcc6b1f7-2c90-411a-be7e-f24e584fa78a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3c08c304-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:94:51:ac'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 54], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 503960, 'reachable_time': 35117, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 264185, 'error': None, 'target': 'ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:28:06 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e384 e384: 3 total, 3 up, 3 in
Jan 29 12:28:06 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:28:06.170 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[2398b0c1-6cd8-4e4a-bd5c-fdb32bdfd6e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:28:06 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e384: 3 total, 3 up, 3 in
Jan 29 12:28:06 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:28:06.216 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[e129043f-b414-4c22-b316-1b8f609653ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:28:06 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:28:06.217 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c08c304-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:28:06 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:28:06.218 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 29 12:28:06 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:28:06.218 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3c08c304-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:28:06 np0005601226 kernel: tap3c08c304-20: entered promiscuous mode
Jan 29 12:28:06 np0005601226 NetworkManager[49020]: <info>  [1769707686.2211] manager: (tap3c08c304-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/88)
Jan 29 12:28:06 np0005601226 nova_compute[239456]: 2026-01-29 17:28:06.220 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:28:06 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:28:06.227 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3c08c304-20, col_values=(('external_ids', {'iface-id': '4f9b16f1-6965-486d-bc02-ab1e4969963e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:28:06 np0005601226 nova_compute[239456]: 2026-01-29 17:28:06.228 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:28:06 np0005601226 ovn_controller[145556]: 2026-01-29T17:28:06Z|00159|binding|INFO|Releasing lport 4f9b16f1-6965-486d-bc02-ab1e4969963e from this chassis (sb_readonly=0)
Jan 29 12:28:06 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:28:06.234 155625 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3c08c304-2b32-4b44-ac2b-279bb8b2403b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3c08c304-2b32-4b44-ac2b-279bb8b2403b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 29 12:28:06 np0005601226 nova_compute[239456]: 2026-01-29 17:28:06.235 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:28:06 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:28:06.236 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[f1467c34-0a9e-498e-98d7-92b2eec773e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:28:06 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:28:06.237 155625 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 29 12:28:06 np0005601226 ovn_metadata_agent[155620]: global
Jan 29 12:28:06 np0005601226 ovn_metadata_agent[155620]:    log         /dev/log local0 debug
Jan 29 12:28:06 np0005601226 ovn_metadata_agent[155620]:    log-tag     haproxy-metadata-proxy-3c08c304-2b32-4b44-ac2b-279bb8b2403b
Jan 29 12:28:06 np0005601226 ovn_metadata_agent[155620]:    user        root
Jan 29 12:28:06 np0005601226 ovn_metadata_agent[155620]:    group       root
Jan 29 12:28:06 np0005601226 ovn_metadata_agent[155620]:    maxconn     1024
Jan 29 12:28:06 np0005601226 ovn_metadata_agent[155620]:    pidfile     /var/lib/neutron/external/pids/3c08c304-2b32-4b44-ac2b-279bb8b2403b.pid.haproxy
Jan 29 12:28:06 np0005601226 ovn_metadata_agent[155620]:    daemon
Jan 29 12:28:06 np0005601226 ovn_metadata_agent[155620]: 
Jan 29 12:28:06 np0005601226 ovn_metadata_agent[155620]: defaults
Jan 29 12:28:06 np0005601226 ovn_metadata_agent[155620]:    log global
Jan 29 12:28:06 np0005601226 ovn_metadata_agent[155620]:    mode http
Jan 29 12:28:06 np0005601226 ovn_metadata_agent[155620]:    option httplog
Jan 29 12:28:06 np0005601226 ovn_metadata_agent[155620]:    option dontlognull
Jan 29 12:28:06 np0005601226 ovn_metadata_agent[155620]:    option http-server-close
Jan 29 12:28:06 np0005601226 ovn_metadata_agent[155620]:    option forwardfor
Jan 29 12:28:06 np0005601226 ovn_metadata_agent[155620]:    retries                 3
Jan 29 12:28:06 np0005601226 ovn_metadata_agent[155620]:    timeout http-request    30s
Jan 29 12:28:06 np0005601226 ovn_metadata_agent[155620]:    timeout connect         30s
Jan 29 12:28:06 np0005601226 ovn_metadata_agent[155620]:    timeout client          32s
Jan 29 12:28:06 np0005601226 ovn_metadata_agent[155620]:    timeout server          32s
Jan 29 12:28:06 np0005601226 ovn_metadata_agent[155620]:    timeout http-keep-alive 30s
Jan 29 12:28:06 np0005601226 ovn_metadata_agent[155620]: 
Jan 29 12:28:06 np0005601226 ovn_metadata_agent[155620]: 
Jan 29 12:28:06 np0005601226 ovn_metadata_agent[155620]: listen listener
Jan 29 12:28:06 np0005601226 ovn_metadata_agent[155620]:    bind 169.254.169.254:80
Jan 29 12:28:06 np0005601226 ovn_metadata_agent[155620]:    server metadata /var/lib/neutron/metadata_proxy
Jan 29 12:28:06 np0005601226 ovn_metadata_agent[155620]:    http-request add-header X-OVN-Network-ID 3c08c304-2b32-4b44-ac2b-279bb8b2403b
Jan 29 12:28:06 np0005601226 ovn_metadata_agent[155620]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 29 12:28:06 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:28:06.237 155625 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b', 'env', 'PROCESS_TAG=haproxy-3c08c304-2b32-4b44-ac2b-279bb8b2403b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3c08c304-2b32-4b44-ac2b-279bb8b2403b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 29 12:28:06 np0005601226 nova_compute[239456]: 2026-01-29 17:28:06.275 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:28:06 np0005601226 nova_compute[239456]: 2026-01-29 17:28:06.345 239460 DEBUG nova.compute.manager [req-d2ec38e5-0ebb-43c2-b7f6-cc6953974c03 req-03f4fa9d-2f8b-4ca0-9547-e393774c98cc 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] Received event network-vif-plugged-b450c7c7-dbc1-4971-97b1-228160e7866c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:28:06 np0005601226 nova_compute[239456]: 2026-01-29 17:28:06.345 239460 DEBUG oslo_concurrency.lockutils [req-d2ec38e5-0ebb-43c2-b7f6-cc6953974c03 req-03f4fa9d-2f8b-4ca0-9547-e393774c98cc 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "57fdc12b-f0f4-4f34-838a-f32c817ad266-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:28:06 np0005601226 nova_compute[239456]: 2026-01-29 17:28:06.346 239460 DEBUG oslo_concurrency.lockutils [req-d2ec38e5-0ebb-43c2-b7f6-cc6953974c03 req-03f4fa9d-2f8b-4ca0-9547-e393774c98cc 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "57fdc12b-f0f4-4f34-838a-f32c817ad266-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:28:06 np0005601226 nova_compute[239456]: 2026-01-29 17:28:06.346 239460 DEBUG oslo_concurrency.lockutils [req-d2ec38e5-0ebb-43c2-b7f6-cc6953974c03 req-03f4fa9d-2f8b-4ca0-9547-e393774c98cc 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "57fdc12b-f0f4-4f34-838a-f32c817ad266-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:28:06 np0005601226 nova_compute[239456]: 2026-01-29 17:28:06.346 239460 DEBUG nova.compute.manager [req-d2ec38e5-0ebb-43c2-b7f6-cc6953974c03 req-03f4fa9d-2f8b-4ca0-9547-e393774c98cc 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] Processing event network-vif-plugged-b450c7c7-dbc1-4971-97b1-228160e7866c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 29 12:28:06 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:28:06.394 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '7a:bb:ca', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ee:2a:91:86:08:da'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:28:06 np0005601226 nova_compute[239456]: 2026-01-29 17:28:06.394 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:28:06 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:28:06 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2987050801' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:28:06 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:28:06 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2987050801' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:28:06 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:28:06 np0005601226 podman[264255]: 2026-01-29 17:28:06.660447623 +0000 UTC m=+0.039864717 container create 22db0818c25179ef4cdf5c9677510c1e6d853ccd9e5af65c4e1a570982053261 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 29 12:28:06 np0005601226 systemd[1]: Started libpod-conmon-22db0818c25179ef4cdf5c9677510c1e6d853ccd9e5af65c4e1a570982053261.scope.
Jan 29 12:28:06 np0005601226 nova_compute[239456]: 2026-01-29 17:28:06.701 239460 DEBUG nova.compute.manager [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 29 12:28:06 np0005601226 nova_compute[239456]: 2026-01-29 17:28:06.703 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769707686.7019005, 57fdc12b-f0f4-4f34-838a-f32c817ad266 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:28:06 np0005601226 nova_compute[239456]: 2026-01-29 17:28:06.703 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] VM Started (Lifecycle Event)#033[00m
Jan 29 12:28:06 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:28:06 np0005601226 nova_compute[239456]: 2026-01-29 17:28:06.706 239460 DEBUG nova.virt.libvirt.driver [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 29 12:28:06 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b69b32831732b9de47516b069fa129c94bcaee060f588fdac40f4fdb90a86d1e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 29 12:28:06 np0005601226 nova_compute[239456]: 2026-01-29 17:28:06.709 239460 INFO nova.virt.libvirt.driver [-] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] Instance spawned successfully.#033[00m
Jan 29 12:28:06 np0005601226 nova_compute[239456]: 2026-01-29 17:28:06.709 239460 DEBUG nova.virt.libvirt.driver [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 29 12:28:06 np0005601226 podman[264255]: 2026-01-29 17:28:06.719131414 +0000 UTC m=+0.098548508 container init 22db0818c25179ef4cdf5c9677510c1e6d853ccd9e5af65c4e1a570982053261 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 29 12:28:06 np0005601226 podman[264255]: 2026-01-29 17:28:06.723931152 +0000 UTC m=+0.103348246 container start 22db0818c25179ef4cdf5c9677510c1e6d853ccd9e5af65c4e1a570982053261 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 29 12:28:06 np0005601226 nova_compute[239456]: 2026-01-29 17:28:06.726 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:28:06 np0005601226 podman[264255]: 2026-01-29 17:28:06.638791054 +0000 UTC m=+0.018208168 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 29 12:28:06 np0005601226 nova_compute[239456]: 2026-01-29 17:28:06.734 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 29 12:28:06 np0005601226 nova_compute[239456]: 2026-01-29 17:28:06.738 239460 DEBUG nova.virt.libvirt.driver [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:28:06 np0005601226 nova_compute[239456]: 2026-01-29 17:28:06.739 239460 DEBUG nova.virt.libvirt.driver [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:28:06 np0005601226 nova_compute[239456]: 2026-01-29 17:28:06.739 239460 DEBUG nova.virt.libvirt.driver [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:28:06 np0005601226 nova_compute[239456]: 2026-01-29 17:28:06.740 239460 DEBUG nova.virt.libvirt.driver [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:28:06 np0005601226 nova_compute[239456]: 2026-01-29 17:28:06.740 239460 DEBUG nova.virt.libvirt.driver [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:28:06 np0005601226 neutron-haproxy-ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b[264272]: [NOTICE]   (264276) : New worker (264278) forked
Jan 29 12:28:06 np0005601226 nova_compute[239456]: 2026-01-29 17:28:06.741 239460 DEBUG nova.virt.libvirt.driver [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:28:06 np0005601226 neutron-haproxy-ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b[264272]: [NOTICE]   (264276) : Loading success.
Jan 29 12:28:06 np0005601226 nova_compute[239456]: 2026-01-29 17:28:06.770 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 29 12:28:06 np0005601226 nova_compute[239456]: 2026-01-29 17:28:06.770 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769707686.702418, 57fdc12b-f0f4-4f34-838a-f32c817ad266 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:28:06 np0005601226 nova_compute[239456]: 2026-01-29 17:28:06.770 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] VM Paused (Lifecycle Event)#033[00m
Jan 29 12:28:06 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:28:06.771 155625 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 29 12:28:06 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:28:06.772 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ea6bcc65-2563-4fe6-9039-bca7261f4cf7, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:28:06 np0005601226 nova_compute[239456]: 2026-01-29 17:28:06.797 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:28:06 np0005601226 nova_compute[239456]: 2026-01-29 17:28:06.800 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769707686.7060814, 57fdc12b-f0f4-4f34-838a-f32c817ad266 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:28:06 np0005601226 nova_compute[239456]: 2026-01-29 17:28:06.800 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] VM Resumed (Lifecycle Event)#033[00m
Jan 29 12:28:06 np0005601226 nova_compute[239456]: 2026-01-29 17:28:06.806 239460 INFO nova.compute.manager [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] Took 2.38 seconds to spawn the instance on the hypervisor.#033[00m
Jan 29 12:28:06 np0005601226 nova_compute[239456]: 2026-01-29 17:28:06.806 239460 DEBUG nova.compute.manager [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:28:06 np0005601226 nova_compute[239456]: 2026-01-29 17:28:06.815 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:28:06 np0005601226 nova_compute[239456]: 2026-01-29 17:28:06.817 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 29 12:28:06 np0005601226 nova_compute[239456]: 2026-01-29 17:28:06.848 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 29 12:28:06 np0005601226 nova_compute[239456]: 2026-01-29 17:28:06.869 239460 INFO nova.compute.manager [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] Took 9.11 seconds to build instance.#033[00m
Jan 29 12:28:06 np0005601226 nova_compute[239456]: 2026-01-29 17:28:06.883 239460 DEBUG oslo_concurrency.lockutils [None req-2f276c8d-43aa-4585-9dbb-3a6aedc46a0b 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "57fdc12b-f0f4-4f34-838a-f32c817ad266" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.200s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:28:07 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1529: 305 pgs: 305 active+clean; 134 MiB data, 370 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 4.8 KiB/s wr, 58 op/s
Jan 29 12:28:08 np0005601226 nova_compute[239456]: 2026-01-29 17:28:08.463 239460 DEBUG nova.compute.manager [req-3161ae70-f438-48e5-9301-1671aa559d4a req-4cc2a0c9-ff7c-4ed8-9bbe-52baddd497ff 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] Received event network-vif-plugged-b450c7c7-dbc1-4971-97b1-228160e7866c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:28:08 np0005601226 nova_compute[239456]: 2026-01-29 17:28:08.463 239460 DEBUG oslo_concurrency.lockutils [req-3161ae70-f438-48e5-9301-1671aa559d4a req-4cc2a0c9-ff7c-4ed8-9bbe-52baddd497ff 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "57fdc12b-f0f4-4f34-838a-f32c817ad266-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:28:08 np0005601226 nova_compute[239456]: 2026-01-29 17:28:08.463 239460 DEBUG oslo_concurrency.lockutils [req-3161ae70-f438-48e5-9301-1671aa559d4a req-4cc2a0c9-ff7c-4ed8-9bbe-52baddd497ff 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "57fdc12b-f0f4-4f34-838a-f32c817ad266-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:28:08 np0005601226 nova_compute[239456]: 2026-01-29 17:28:08.464 239460 DEBUG oslo_concurrency.lockutils [req-3161ae70-f438-48e5-9301-1671aa559d4a req-4cc2a0c9-ff7c-4ed8-9bbe-52baddd497ff 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "57fdc12b-f0f4-4f34-838a-f32c817ad266-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:28:08 np0005601226 nova_compute[239456]: 2026-01-29 17:28:08.464 239460 DEBUG nova.compute.manager [req-3161ae70-f438-48e5-9301-1671aa559d4a req-4cc2a0c9-ff7c-4ed8-9bbe-52baddd497ff 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] No waiting events found dispatching network-vif-plugged-b450c7c7-dbc1-4971-97b1-228160e7866c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 29 12:28:08 np0005601226 nova_compute[239456]: 2026-01-29 17:28:08.464 239460 WARNING nova.compute.manager [req-3161ae70-f438-48e5-9301-1671aa559d4a req-4cc2a0c9-ff7c-4ed8-9bbe-52baddd497ff 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] Received unexpected event network-vif-plugged-b450c7c7-dbc1-4971-97b1-228160e7866c for instance with vm_state active and task_state None.#033[00m
Jan 29 12:28:09 np0005601226 nova_compute[239456]: 2026-01-29 17:28:09.781 239460 DEBUG oslo_concurrency.lockutils [None req-d859f223-c3dc-44bc-8fca-ff7c5ca942bc 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Acquiring lock "57fdc12b-f0f4-4f34-838a-f32c817ad266" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:28:09 np0005601226 nova_compute[239456]: 2026-01-29 17:28:09.781 239460 DEBUG oslo_concurrency.lockutils [None req-d859f223-c3dc-44bc-8fca-ff7c5ca942bc 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "57fdc12b-f0f4-4f34-838a-f32c817ad266" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:28:09 np0005601226 nova_compute[239456]: 2026-01-29 17:28:09.781 239460 DEBUG oslo_concurrency.lockutils [None req-d859f223-c3dc-44bc-8fca-ff7c5ca942bc 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Acquiring lock "57fdc12b-f0f4-4f34-838a-f32c817ad266-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:28:09 np0005601226 nova_compute[239456]: 2026-01-29 17:28:09.781 239460 DEBUG oslo_concurrency.lockutils [None req-d859f223-c3dc-44bc-8fca-ff7c5ca942bc 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "57fdc12b-f0f4-4f34-838a-f32c817ad266-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:28:09 np0005601226 nova_compute[239456]: 2026-01-29 17:28:09.782 239460 DEBUG oslo_concurrency.lockutils [None req-d859f223-c3dc-44bc-8fca-ff7c5ca942bc 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "57fdc12b-f0f4-4f34-838a-f32c817ad266-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:28:09 np0005601226 nova_compute[239456]: 2026-01-29 17:28:09.783 239460 INFO nova.compute.manager [None req-d859f223-c3dc-44bc-8fca-ff7c5ca942bc 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] Terminating instance#033[00m
Jan 29 12:28:09 np0005601226 nova_compute[239456]: 2026-01-29 17:28:09.783 239460 DEBUG nova.compute.manager [None req-d859f223-c3dc-44bc-8fca-ff7c5ca942bc 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 29 12:28:09 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1530: 305 pgs: 305 active+clean; 134 MiB data, 370 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 27 KiB/s wr, 174 op/s
Jan 29 12:28:10 np0005601226 nova_compute[239456]: 2026-01-29 17:28:10.103 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:28:10 np0005601226 kernel: tapb450c7c7-db (unregistering): left promiscuous mode
Jan 29 12:28:10 np0005601226 NetworkManager[49020]: <info>  [1769707690.1436] device (tapb450c7c7-db): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 29 12:28:10 np0005601226 ovn_controller[145556]: 2026-01-29T17:28:10Z|00160|binding|INFO|Releasing lport b450c7c7-dbc1-4971-97b1-228160e7866c from this chassis (sb_readonly=0)
Jan 29 12:28:10 np0005601226 ovn_controller[145556]: 2026-01-29T17:28:10Z|00161|binding|INFO|Setting lport b450c7c7-dbc1-4971-97b1-228160e7866c down in Southbound
Jan 29 12:28:10 np0005601226 ovn_controller[145556]: 2026-01-29T17:28:10Z|00162|binding|INFO|Removing iface tapb450c7c7-db ovn-installed in OVS
Jan 29 12:28:10 np0005601226 nova_compute[239456]: 2026-01-29 17:28:10.151 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:28:10 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:28:10.157 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:27:aa:d4 10.100.0.11'], port_security=['fa:16:3e:27:aa:d4 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '57fdc12b-f0f4-4f34-838a-f32c817ad266', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3c08c304-2b32-4b44-ac2b-279bb8b2403b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '420f46ae230d4c529afe366a1b780921', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9be82e42-3d47-49cf-9a44-d003a5c81174', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=43b4da70-6867-4e05-b172-1e52c878ce1d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], logical_port=b450c7c7-dbc1-4971-97b1-228160e7866c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:28:10 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:28:10.158 155625 INFO neutron.agent.ovn.metadata.agent [-] Port b450c7c7-dbc1-4971-97b1-228160e7866c in datapath 3c08c304-2b32-4b44-ac2b-279bb8b2403b unbound from our chassis#033[00m
Jan 29 12:28:10 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:28:10.160 155625 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3c08c304-2b32-4b44-ac2b-279bb8b2403b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 29 12:28:10 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:28:10.161 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[0a1cf1c4-92e0-433e-92d2-0d1ab57bef6d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:28:10 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:28:10.161 155625 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b namespace which is not needed anymore#033[00m
Jan 29 12:28:10 np0005601226 nova_compute[239456]: 2026-01-29 17:28:10.169 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:28:10 np0005601226 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d00000010.scope: Deactivated successfully.
Jan 29 12:28:10 np0005601226 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d00000010.scope: Consumed 3.847s CPU time.
Jan 29 12:28:10 np0005601226 systemd-machined[207561]: Machine qemu-16-instance-00000010 terminated.
Jan 29 12:28:10 np0005601226 neutron-haproxy-ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b[264272]: [NOTICE]   (264276) : haproxy version is 2.8.14-c23fe91
Jan 29 12:28:10 np0005601226 neutron-haproxy-ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b[264272]: [NOTICE]   (264276) : path to executable is /usr/sbin/haproxy
Jan 29 12:28:10 np0005601226 neutron-haproxy-ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b[264272]: [WARNING]  (264276) : Exiting Master process...
Jan 29 12:28:10 np0005601226 neutron-haproxy-ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b[264272]: [ALERT]    (264276) : Current worker (264278) exited with code 143 (Terminated)
Jan 29 12:28:10 np0005601226 neutron-haproxy-ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b[264272]: [WARNING]  (264276) : All workers exited. Exiting... (0)
Jan 29 12:28:10 np0005601226 systemd[1]: libpod-22db0818c25179ef4cdf5c9677510c1e6d853ccd9e5af65c4e1a570982053261.scope: Deactivated successfully.
Jan 29 12:28:10 np0005601226 podman[264312]: 2026-01-29 17:28:10.291466976 +0000 UTC m=+0.050755069 container died 22db0818c25179ef4cdf5c9677510c1e6d853ccd9e5af65c4e1a570982053261 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 29 12:28:10 np0005601226 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-22db0818c25179ef4cdf5c9677510c1e6d853ccd9e5af65c4e1a570982053261-userdata-shm.mount: Deactivated successfully.
Jan 29 12:28:10 np0005601226 systemd[1]: var-lib-containers-storage-overlay-b69b32831732b9de47516b069fa129c94bcaee060f588fdac40f4fdb90a86d1e-merged.mount: Deactivated successfully.
Jan 29 12:28:10 np0005601226 podman[264312]: 2026-01-29 17:28:10.330881671 +0000 UTC m=+0.090169784 container cleanup 22db0818c25179ef4cdf5c9677510c1e6d853ccd9e5af65c4e1a570982053261 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 29 12:28:10 np0005601226 systemd[1]: libpod-conmon-22db0818c25179ef4cdf5c9677510c1e6d853ccd9e5af65c4e1a570982053261.scope: Deactivated successfully.
Jan 29 12:28:10 np0005601226 podman[264343]: 2026-01-29 17:28:10.381097164 +0000 UTC m=+0.036947429 container remove 22db0818c25179ef4cdf5c9677510c1e6d853ccd9e5af65c4e1a570982053261 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 29 12:28:10 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:28:10.386 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[bf97c52a-e9af-48af-a447-e9533ad18456]: (4, ('Thu Jan 29 05:28:10 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b (22db0818c25179ef4cdf5c9677510c1e6d853ccd9e5af65c4e1a570982053261)\n22db0818c25179ef4cdf5c9677510c1e6d853ccd9e5af65c4e1a570982053261\nThu Jan 29 05:28:10 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b (22db0818c25179ef4cdf5c9677510c1e6d853ccd9e5af65c4e1a570982053261)\n22db0818c25179ef4cdf5c9677510c1e6d853ccd9e5af65c4e1a570982053261\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:28:10 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:28:10.387 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[ee25647f-cd95-4128-ba51-e222b654c8c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:28:10 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:28:10.388 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c08c304-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:28:10 np0005601226 nova_compute[239456]: 2026-01-29 17:28:10.389 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:28:10 np0005601226 kernel: tap3c08c304-20: left promiscuous mode
Jan 29 12:28:10 np0005601226 NetworkManager[49020]: <info>  [1769707690.3988] manager: (tapb450c7c7-db): new Tun device (/org/freedesktop/NetworkManager/Devices/89)
Jan 29 12:28:10 np0005601226 nova_compute[239456]: 2026-01-29 17:28:10.399 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:28:10 np0005601226 nova_compute[239456]: 2026-01-29 17:28:10.400 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:28:10 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:28:10.402 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[a24cc82e-5f3a-4cba-97cd-cdc841ddc93f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:28:10 np0005601226 nova_compute[239456]: 2026-01-29 17:28:10.411 239460 INFO nova.virt.libvirt.driver [-] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] Instance destroyed successfully.#033[00m
Jan 29 12:28:10 np0005601226 nova_compute[239456]: 2026-01-29 17:28:10.411 239460 DEBUG nova.objects.instance [None req-d859f223-c3dc-44bc-8fca-ff7c5ca942bc 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lazy-loading 'resources' on Instance uuid 57fdc12b-f0f4-4f34-838a-f32c817ad266 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:28:10 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:28:10.423 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[072ae898-efd4-49d5-bfd5-ca4caa5d84f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:28:10 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:28:10.425 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[7f566276-a46c-4374-b0fc-14b65e405b76]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:28:10 np0005601226 nova_compute[239456]: 2026-01-29 17:28:10.429 239460 DEBUG nova.virt.libvirt.vif [None req-d859f223-c3dc-44bc-8fca-ff7c5ca942bc 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-29T17:27:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1479068012',display_name='tempest-TestVolumeBootPattern-server-1479068012',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1479068012',id=16,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-29T17:28:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='420f46ae230d4c529afe366a1b780921',ramdisk_id='',reservation_id='r-ktbzd4he',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-1871389491',ow
ner_user_name='tempest-TestVolumeBootPattern-1871389491-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-29T17:28:06Z,user_data=None,user_id='3901089a059c4bdb8d0497398873d2f1',uuid=57fdc12b-f0f4-4f34-838a-f32c817ad266,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b450c7c7-dbc1-4971-97b1-228160e7866c", "address": "fa:16:3e:27:aa:d4", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb450c7c7-db", "ovs_interfaceid": "b450c7c7-dbc1-4971-97b1-228160e7866c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 29 12:28:10 np0005601226 nova_compute[239456]: 2026-01-29 17:28:10.430 239460 DEBUG nova.network.os_vif_util [None req-d859f223-c3dc-44bc-8fca-ff7c5ca942bc 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Converting VIF {"id": "b450c7c7-dbc1-4971-97b1-228160e7866c", "address": "fa:16:3e:27:aa:d4", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb450c7c7-db", "ovs_interfaceid": "b450c7c7-dbc1-4971-97b1-228160e7866c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 29 12:28:10 np0005601226 nova_compute[239456]: 2026-01-29 17:28:10.431 239460 DEBUG nova.network.os_vif_util [None req-d859f223-c3dc-44bc-8fca-ff7c5ca942bc 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:27:aa:d4,bridge_name='br-int',has_traffic_filtering=True,id=b450c7c7-dbc1-4971-97b1-228160e7866c,network=Network(3c08c304-2b32-4b44-ac2b-279bb8b2403b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb450c7c7-db') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 29 12:28:10 np0005601226 nova_compute[239456]: 2026-01-29 17:28:10.431 239460 DEBUG os_vif [None req-d859f223-c3dc-44bc-8fca-ff7c5ca942bc 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:27:aa:d4,bridge_name='br-int',has_traffic_filtering=True,id=b450c7c7-dbc1-4971-97b1-228160e7866c,network=Network(3c08c304-2b32-4b44-ac2b-279bb8b2403b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb450c7c7-db') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 29 12:28:10 np0005601226 nova_compute[239456]: 2026-01-29 17:28:10.433 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:28:10 np0005601226 nova_compute[239456]: 2026-01-29 17:28:10.433 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb450c7c7-db, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:28:10 np0005601226 nova_compute[239456]: 2026-01-29 17:28:10.436 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:28:10 np0005601226 nova_compute[239456]: 2026-01-29 17:28:10.438 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 29 12:28:10 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:28:10.438 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[a67b32a8-b79d-45fa-8d3c-75a752de23e2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 503955, 'reachable_time': 25613, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264371, 'error': None, 'target': 'ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:28:10 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:28:10.440 156164 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 29 12:28:10 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:28:10.440 156164 DEBUG oslo.privsep.daemon [-] privsep: reply[22c7299c-7ba0-4bfb-bea1-0f335b39f369]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:28:10 np0005601226 systemd[1]: run-netns-ovnmeta\x2d3c08c304\x2d2b32\x2d4b44\x2dac2b\x2d279bb8b2403b.mount: Deactivated successfully.
Jan 29 12:28:10 np0005601226 nova_compute[239456]: 2026-01-29 17:28:10.440 239460 INFO os_vif [None req-d859f223-c3dc-44bc-8fca-ff7c5ca942bc 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:27:aa:d4,bridge_name='br-int',has_traffic_filtering=True,id=b450c7c7-dbc1-4971-97b1-228160e7866c,network=Network(3c08c304-2b32-4b44-ac2b-279bb8b2403b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb450c7c7-db')#033[00m
Jan 29 12:28:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:28:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:28:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:28:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:28:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:28:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:28:10 np0005601226 nova_compute[239456]: 2026-01-29 17:28:10.589 239460 INFO nova.virt.libvirt.driver [None req-d859f223-c3dc-44bc-8fca-ff7c5ca942bc 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] Deleting instance files /var/lib/nova/instances/57fdc12b-f0f4-4f34-838a-f32c817ad266_del#033[00m
Jan 29 12:28:10 np0005601226 nova_compute[239456]: 2026-01-29 17:28:10.590 239460 INFO nova.virt.libvirt.driver [None req-d859f223-c3dc-44bc-8fca-ff7c5ca942bc 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] Deletion of /var/lib/nova/instances/57fdc12b-f0f4-4f34-838a-f32c817ad266_del complete#033[00m
Jan 29 12:28:10 np0005601226 nova_compute[239456]: 2026-01-29 17:28:10.647 239460 INFO nova.compute.manager [None req-d859f223-c3dc-44bc-8fca-ff7c5ca942bc 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] Took 0.86 seconds to destroy the instance on the hypervisor.#033[00m
Jan 29 12:28:10 np0005601226 nova_compute[239456]: 2026-01-29 17:28:10.648 239460 DEBUG oslo.service.loopingcall [None req-d859f223-c3dc-44bc-8fca-ff7c5ca942bc 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 29 12:28:10 np0005601226 nova_compute[239456]: 2026-01-29 17:28:10.649 239460 DEBUG nova.compute.manager [-] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 29 12:28:10 np0005601226 nova_compute[239456]: 2026-01-29 17:28:10.649 239460 DEBUG nova.network.neutron [-] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 29 12:28:10 np0005601226 nova_compute[239456]: 2026-01-29 17:28:10.698 239460 DEBUG nova.compute.manager [req-01eb1204-6cdb-4e7b-a3d7-66a817c2e90a req-32fd8f0d-cf69-430a-907a-addc7fa1afbb 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] Received event network-vif-unplugged-b450c7c7-dbc1-4971-97b1-228160e7866c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:28:10 np0005601226 nova_compute[239456]: 2026-01-29 17:28:10.698 239460 DEBUG oslo_concurrency.lockutils [req-01eb1204-6cdb-4e7b-a3d7-66a817c2e90a req-32fd8f0d-cf69-430a-907a-addc7fa1afbb 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "57fdc12b-f0f4-4f34-838a-f32c817ad266-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:28:10 np0005601226 nova_compute[239456]: 2026-01-29 17:28:10.698 239460 DEBUG oslo_concurrency.lockutils [req-01eb1204-6cdb-4e7b-a3d7-66a817c2e90a req-32fd8f0d-cf69-430a-907a-addc7fa1afbb 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "57fdc12b-f0f4-4f34-838a-f32c817ad266-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:28:10 np0005601226 nova_compute[239456]: 2026-01-29 17:28:10.698 239460 DEBUG oslo_concurrency.lockutils [req-01eb1204-6cdb-4e7b-a3d7-66a817c2e90a req-32fd8f0d-cf69-430a-907a-addc7fa1afbb 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "57fdc12b-f0f4-4f34-838a-f32c817ad266-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:28:10 np0005601226 nova_compute[239456]: 2026-01-29 17:28:10.699 239460 DEBUG nova.compute.manager [req-01eb1204-6cdb-4e7b-a3d7-66a817c2e90a req-32fd8f0d-cf69-430a-907a-addc7fa1afbb 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] No waiting events found dispatching network-vif-unplugged-b450c7c7-dbc1-4971-97b1-228160e7866c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 29 12:28:10 np0005601226 nova_compute[239456]: 2026-01-29 17:28:10.699 239460 DEBUG nova.compute.manager [req-01eb1204-6cdb-4e7b-a3d7-66a817c2e90a req-32fd8f0d-cf69-430a-907a-addc7fa1afbb 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] Received event network-vif-unplugged-b450c7c7-dbc1-4971-97b1-228160e7866c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 29 12:28:10 np0005601226 nova_compute[239456]: 2026-01-29 17:28:10.699 239460 DEBUG nova.compute.manager [req-01eb1204-6cdb-4e7b-a3d7-66a817c2e90a req-32fd8f0d-cf69-430a-907a-addc7fa1afbb 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] Received event network-vif-plugged-b450c7c7-dbc1-4971-97b1-228160e7866c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:28:10 np0005601226 nova_compute[239456]: 2026-01-29 17:28:10.699 239460 DEBUG oslo_concurrency.lockutils [req-01eb1204-6cdb-4e7b-a3d7-66a817c2e90a req-32fd8f0d-cf69-430a-907a-addc7fa1afbb 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "57fdc12b-f0f4-4f34-838a-f32c817ad266-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:28:10 np0005601226 nova_compute[239456]: 2026-01-29 17:28:10.699 239460 DEBUG oslo_concurrency.lockutils [req-01eb1204-6cdb-4e7b-a3d7-66a817c2e90a req-32fd8f0d-cf69-430a-907a-addc7fa1afbb 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "57fdc12b-f0f4-4f34-838a-f32c817ad266-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:28:10 np0005601226 nova_compute[239456]: 2026-01-29 17:28:10.699 239460 DEBUG oslo_concurrency.lockutils [req-01eb1204-6cdb-4e7b-a3d7-66a817c2e90a req-32fd8f0d-cf69-430a-907a-addc7fa1afbb 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "57fdc12b-f0f4-4f34-838a-f32c817ad266-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:28:10 np0005601226 nova_compute[239456]: 2026-01-29 17:28:10.700 239460 DEBUG nova.compute.manager [req-01eb1204-6cdb-4e7b-a3d7-66a817c2e90a req-32fd8f0d-cf69-430a-907a-addc7fa1afbb 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] No waiting events found dispatching network-vif-plugged-b450c7c7-dbc1-4971-97b1-228160e7866c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 29 12:28:10 np0005601226 nova_compute[239456]: 2026-01-29 17:28:10.700 239460 WARNING nova.compute.manager [req-01eb1204-6cdb-4e7b-a3d7-66a817c2e90a req-32fd8f0d-cf69-430a-907a-addc7fa1afbb 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] Received unexpected event network-vif-plugged-b450c7c7-dbc1-4971-97b1-228160e7866c for instance with vm_state active and task_state deleting.#033[00m
Jan 29 12:28:11 np0005601226 nova_compute[239456]: 2026-01-29 17:28:11.277 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:28:11 np0005601226 nova_compute[239456]: 2026-01-29 17:28:11.502 239460 DEBUG nova.network.neutron [-] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:28:11 np0005601226 nova_compute[239456]: 2026-01-29 17:28:11.528 239460 INFO nova.compute.manager [-] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] Took 0.88 seconds to deallocate network for instance.#033[00m
Jan 29 12:28:11 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:28:11 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e384 do_prune osdmap full prune enabled
Jan 29 12:28:11 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1531: 305 pgs: 305 active+clean; 134 MiB data, 370 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 23 KiB/s wr, 195 op/s
Jan 29 12:28:11 np0005601226 nova_compute[239456]: 2026-01-29 17:28:11.861 239460 INFO nova.compute.manager [None req-d859f223-c3dc-44bc-8fca-ff7c5ca942bc 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] Took 0.33 seconds to detach 1 volumes for instance.#033[00m
Jan 29 12:28:11 np0005601226 nova_compute[239456]: 2026-01-29 17:28:11.863 239460 DEBUG nova.compute.manager [None req-d859f223-c3dc-44bc-8fca-ff7c5ca942bc 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] Deleting volume: bedc6c02-5528-41c8-b963-60b21ff40cee _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217#033[00m
Jan 29 12:28:11 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e385 e385: 3 total, 3 up, 3 in
Jan 29 12:28:11 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e385: 3 total, 3 up, 3 in
Jan 29 12:28:12 np0005601226 nova_compute[239456]: 2026-01-29 17:28:12.038 239460 DEBUG oslo_concurrency.lockutils [None req-d859f223-c3dc-44bc-8fca-ff7c5ca942bc 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:28:12 np0005601226 nova_compute[239456]: 2026-01-29 17:28:12.038 239460 DEBUG oslo_concurrency.lockutils [None req-d859f223-c3dc-44bc-8fca-ff7c5ca942bc 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:28:12 np0005601226 nova_compute[239456]: 2026-01-29 17:28:12.084 239460 DEBUG oslo_concurrency.processutils [None req-d859f223-c3dc-44bc-8fca-ff7c5ca942bc 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:28:12 np0005601226 nova_compute[239456]: 2026-01-29 17:28:12.822 239460 DEBUG nova.compute.manager [req-d5bc4d0b-807c-425c-adc4-03923777f004 req-3033a9ac-237a-45f5-9ff5-a7a11f9c147d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] Received event network-vif-deleted-b450c7c7-dbc1-4971-97b1-228160e7866c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:28:12 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:28:12 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3260724223' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:28:12 np0005601226 nova_compute[239456]: 2026-01-29 17:28:12.881 239460 DEBUG oslo_concurrency.processutils [None req-d859f223-c3dc-44bc-8fca-ff7c5ca942bc 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.797s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:28:12 np0005601226 nova_compute[239456]: 2026-01-29 17:28:12.888 239460 DEBUG nova.compute.provider_tree [None req-d859f223-c3dc-44bc-8fca-ff7c5ca942bc 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:28:12 np0005601226 nova_compute[239456]: 2026-01-29 17:28:12.907 239460 DEBUG nova.scheduler.client.report [None req-d859f223-c3dc-44bc-8fca-ff7c5ca942bc 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:28:12 np0005601226 nova_compute[239456]: 2026-01-29 17:28:12.935 239460 DEBUG oslo_concurrency.lockutils [None req-d859f223-c3dc-44bc-8fca-ff7c5ca942bc 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.896s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:28:12 np0005601226 nova_compute[239456]: 2026-01-29 17:28:12.960 239460 INFO nova.scheduler.client.report [None req-d859f223-c3dc-44bc-8fca-ff7c5ca942bc 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Deleted allocations for instance 57fdc12b-f0f4-4f34-838a-f32c817ad266#033[00m
Jan 29 12:28:13 np0005601226 nova_compute[239456]: 2026-01-29 17:28:13.039 239460 DEBUG oslo_concurrency.lockutils [None req-d859f223-c3dc-44bc-8fca-ff7c5ca942bc 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "57fdc12b-f0f4-4f34-838a-f32c817ad266" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.258s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:28:13 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1533: 305 pgs: 305 active+clean; 134 MiB data, 370 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 21 KiB/s wr, 181 op/s
Jan 29 12:28:14 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:28:14 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3027071839' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:28:15 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:28:15 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1288134332' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:28:15 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:28:15 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1288134332' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:28:15 np0005601226 nova_compute[239456]: 2026-01-29 17:28:15.435 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:28:15 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1534: 305 pgs: 305 active+clean; 134 MiB data, 370 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 18 KiB/s wr, 167 op/s
Jan 29 12:28:16 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e385 do_prune osdmap full prune enabled
Jan 29 12:28:16 np0005601226 nova_compute[239456]: 2026-01-29 17:28:16.279 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:28:16 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e386 e386: 3 total, 3 up, 3 in
Jan 29 12:28:16 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e386: 3 total, 3 up, 3 in
Jan 29 12:28:16 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:28:16 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e386 do_prune osdmap full prune enabled
Jan 29 12:28:16 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e387 e387: 3 total, 3 up, 3 in
Jan 29 12:28:17 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e387: 3 total, 3 up, 3 in
Jan 29 12:28:17 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1537: 305 pgs: 305 active+clean; 134 MiB data, 370 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Jan 29 12:28:17 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:28:17 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1183352443' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:28:17 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:28:17 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1183352443' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:28:17 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e387 do_prune osdmap full prune enabled
Jan 29 12:28:17 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e388 e388: 3 total, 3 up, 3 in
Jan 29 12:28:17 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e388: 3 total, 3 up, 3 in
Jan 29 12:28:19 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:28:19 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2135449779' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:28:19 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1539: 305 pgs: 305 active+clean; 134 MiB data, 370 MiB used, 60 GiB / 60 GiB avail; 107 KiB/s rd, 6.3 KiB/s wr, 142 op/s
Jan 29 12:28:20 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:28:20 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2913992604' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:28:20 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:28:20 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2913992604' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:28:20 np0005601226 nova_compute[239456]: 2026-01-29 17:28:20.438 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:28:20 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:28:20 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/374221644' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:28:20 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:28:20 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/374221644' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:28:20 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e388 do_prune osdmap full prune enabled
Jan 29 12:28:20 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e389 e389: 3 total, 3 up, 3 in
Jan 29 12:28:20 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e389: 3 total, 3 up, 3 in
Jan 29 12:28:21 np0005601226 nova_compute[239456]: 2026-01-29 17:28:21.281 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:28:21 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e389 do_prune osdmap full prune enabled
Jan 29 12:28:21 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e390 e390: 3 total, 3 up, 3 in
Jan 29 12:28:21 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e390: 3 total, 3 up, 3 in
Jan 29 12:28:21 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:28:21 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1542: 305 pgs: 305 active+clean; 118 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 128 KiB/s rd, 8.0 KiB/s wr, 173 op/s
Jan 29 12:28:22 np0005601226 podman[264413]: 2026-01-29 17:28:22.875781993 +0000 UTC m=+0.047289227 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Jan 29 12:28:22 np0005601226 podman[264414]: 2026-01-29 17:28:22.909008132 +0000 UTC m=+0.080234598 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 29 12:28:23 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e390 do_prune osdmap full prune enabled
Jan 29 12:28:23 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e391 e391: 3 total, 3 up, 3 in
Jan 29 12:28:23 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e391: 3 total, 3 up, 3 in
Jan 29 12:28:23 np0005601226 ceph-osd[87958]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Jan 29 12:28:23 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1544: 305 pgs: 305 active+clean; 110 MiB data, 355 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 7.6 KiB/s wr, 151 op/s
Jan 29 12:28:25 np0005601226 nova_compute[239456]: 2026-01-29 17:28:25.410 239460 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769707690.4085424, 57fdc12b-f0f4-4f34-838a-f32c817ad266 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:28:25 np0005601226 nova_compute[239456]: 2026-01-29 17:28:25.410 239460 INFO nova.compute.manager [-] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] VM Stopped (Lifecycle Event)#033[00m
Jan 29 12:28:25 np0005601226 nova_compute[239456]: 2026-01-29 17:28:25.439 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:28:25 np0005601226 nova_compute[239456]: 2026-01-29 17:28:25.448 239460 DEBUG nova.compute.manager [None req-749d420c-f0f3-4a45-a533-3b342dd53047 - - - - - -] [instance: 57fdc12b-f0f4-4f34-838a-f32c817ad266] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:28:25 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:28:25 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1174046386' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:28:25 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1545: 305 pgs: 305 active+clean; 88 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 6.3 KiB/s wr, 126 op/s
Jan 29 12:28:26 np0005601226 nova_compute[239456]: 2026-01-29 17:28:26.281 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:28:26 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e391 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:28:26 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e391 do_prune osdmap full prune enabled
Jan 29 12:28:26 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e392 e392: 3 total, 3 up, 3 in
Jan 29 12:28:26 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e392: 3 total, 3 up, 3 in
Jan 29 12:28:27 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1547: 305 pgs: 305 active+clean; 88 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 4.5 KiB/s wr, 98 op/s
Jan 29 12:28:28 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e392 do_prune osdmap full prune enabled
Jan 29 12:28:28 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e393 e393: 3 total, 3 up, 3 in
Jan 29 12:28:28 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e393: 3 total, 3 up, 3 in
Jan 29 12:28:29 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1549: 305 pgs: 305 active+clean; 107 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 96 KiB/s rd, 1.3 MiB/s wr, 137 op/s
Jan 29 12:28:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e393 do_prune osdmap full prune enabled
Jan 29 12:28:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e394 e394: 3 total, 3 up, 3 in
Jan 29 12:28:30 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e394: 3 total, 3 up, 3 in
Jan 29 12:28:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:28:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1755894564' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:28:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:28:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1755894564' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:28:30 np0005601226 nova_compute[239456]: 2026-01-29 17:28:30.439 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:28:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:28:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/907516011' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:28:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:28:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/907516011' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:28:30 np0005601226 nova_compute[239456]: 2026-01-29 17:28:30.618 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:28:30 np0005601226 nova_compute[239456]: 2026-01-29 17:28:30.618 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 29 12:28:31 np0005601226 nova_compute[239456]: 2026-01-29 17:28:31.326 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:28:31 np0005601226 nova_compute[239456]: 2026-01-29 17:28:31.603 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:28:31 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e394 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:28:31 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e394 do_prune osdmap full prune enabled
Jan 29 12:28:31 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1551: 305 pgs: 305 active+clean; 134 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 86 KiB/s rd, 3.5 MiB/s wr, 126 op/s
Jan 29 12:28:31 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e395 e395: 3 total, 3 up, 3 in
Jan 29 12:28:31 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e395: 3 total, 3 up, 3 in
Jan 29 12:28:32 np0005601226 nova_compute[239456]: 2026-01-29 17:28:32.603 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:28:32 np0005601226 nova_compute[239456]: 2026-01-29 17:28:32.631 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:28:32 np0005601226 nova_compute[239456]: 2026-01-29 17:28:32.631 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:28:32 np0005601226 nova_compute[239456]: 2026-01-29 17:28:32.631 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:28:32 np0005601226 nova_compute[239456]: 2026-01-29 17:28:32.631 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 29 12:28:32 np0005601226 nova_compute[239456]: 2026-01-29 17:28:32.631 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:28:32 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:28:32 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1419954862' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:28:33 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:28:33 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/273864794' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:28:33 np0005601226 nova_compute[239456]: 2026-01-29 17:28:33.153 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:28:33 np0005601226 nova_compute[239456]: 2026-01-29 17:28:33.273 239460 WARNING nova.virt.libvirt.driver [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 29 12:28:33 np0005601226 nova_compute[239456]: 2026-01-29 17:28:33.274 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4457MB free_disk=59.988225592300296GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 29 12:28:33 np0005601226 nova_compute[239456]: 2026-01-29 17:28:33.274 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:28:33 np0005601226 nova_compute[239456]: 2026-01-29 17:28:33.275 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:28:33 np0005601226 nova_compute[239456]: 2026-01-29 17:28:33.426 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 29 12:28:33 np0005601226 nova_compute[239456]: 2026-01-29 17:28:33.427 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 29 12:28:33 np0005601226 nova_compute[239456]: 2026-01-29 17:28:33.490 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:28:33 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1553: 305 pgs: 305 active+clean; 134 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 114 KiB/s rd, 3.6 MiB/s wr, 162 op/s
Jan 29 12:28:33 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e395 do_prune osdmap full prune enabled
Jan 29 12:28:33 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e396 e396: 3 total, 3 up, 3 in
Jan 29 12:28:33 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e396: 3 total, 3 up, 3 in
Jan 29 12:28:33 np0005601226 nova_compute[239456]: 2026-01-29 17:28:33.888 239460 DEBUG oslo_concurrency.lockutils [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Acquiring lock "60a233ad-302a-45ea-a78c-31ff4f06919e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:28:33 np0005601226 nova_compute[239456]: 2026-01-29 17:28:33.889 239460 DEBUG oslo_concurrency.lockutils [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "60a233ad-302a-45ea-a78c-31ff4f06919e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:28:33 np0005601226 nova_compute[239456]: 2026-01-29 17:28:33.909 239460 DEBUG nova.compute.manager [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 29 12:28:34 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:28:34 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1851694494' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:28:34 np0005601226 nova_compute[239456]: 2026-01-29 17:28:34.040 239460 DEBUG oslo_concurrency.lockutils [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:28:34 np0005601226 nova_compute[239456]: 2026-01-29 17:28:34.043 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:28:34 np0005601226 nova_compute[239456]: 2026-01-29 17:28:34.047 239460 DEBUG nova.compute.provider_tree [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:28:34 np0005601226 nova_compute[239456]: 2026-01-29 17:28:34.061 239460 DEBUG nova.scheduler.client.report [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:28:34 np0005601226 nova_compute[239456]: 2026-01-29 17:28:34.084 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 29 12:28:34 np0005601226 nova_compute[239456]: 2026-01-29 17:28:34.084 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.810s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:28:34 np0005601226 nova_compute[239456]: 2026-01-29 17:28:34.085 239460 DEBUG oslo_concurrency.lockutils [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.046s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:28:34 np0005601226 nova_compute[239456]: 2026-01-29 17:28:34.092 239460 DEBUG nova.virt.hardware [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 29 12:28:34 np0005601226 nova_compute[239456]: 2026-01-29 17:28:34.093 239460 INFO nova.compute.claims [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 29 12:28:34 np0005601226 nova_compute[239456]: 2026-01-29 17:28:34.204 239460 DEBUG oslo_concurrency.processutils [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:28:34 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:28:34 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/181804320' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:28:34 np0005601226 nova_compute[239456]: 2026-01-29 17:28:34.753 239460 DEBUG oslo_concurrency.processutils [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:28:34 np0005601226 nova_compute[239456]: 2026-01-29 17:28:34.759 239460 DEBUG nova.compute.provider_tree [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:28:34 np0005601226 nova_compute[239456]: 2026-01-29 17:28:34.782 239460 DEBUG nova.scheduler.client.report [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:28:34 np0005601226 nova_compute[239456]: 2026-01-29 17:28:34.818 239460 DEBUG oslo_concurrency.lockutils [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.733s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:28:34 np0005601226 nova_compute[239456]: 2026-01-29 17:28:34.819 239460 DEBUG nova.compute.manager [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 29 12:28:34 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e396 do_prune osdmap full prune enabled
Jan 29 12:28:34 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e397 e397: 3 total, 3 up, 3 in
Jan 29 12:28:34 np0005601226 nova_compute[239456]: 2026-01-29 17:28:34.881 239460 DEBUG nova.compute.manager [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 29 12:28:34 np0005601226 nova_compute[239456]: 2026-01-29 17:28:34.882 239460 DEBUG nova.network.neutron [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 29 12:28:34 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e397: 3 total, 3 up, 3 in
Jan 29 12:28:34 np0005601226 nova_compute[239456]: 2026-01-29 17:28:34.904 239460 INFO nova.virt.libvirt.driver [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 29 12:28:34 np0005601226 nova_compute[239456]: 2026-01-29 17:28:34.922 239460 DEBUG nova.compute.manager [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 29 12:28:35 np0005601226 nova_compute[239456]: 2026-01-29 17:28:35.086 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:28:35 np0005601226 nova_compute[239456]: 2026-01-29 17:28:35.086 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 29 12:28:35 np0005601226 nova_compute[239456]: 2026-01-29 17:28:35.087 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 29 12:28:35 np0005601226 nova_compute[239456]: 2026-01-29 17:28:35.163 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 29 12:28:35 np0005601226 nova_compute[239456]: 2026-01-29 17:28:35.163 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 29 12:28:35 np0005601226 nova_compute[239456]: 2026-01-29 17:28:35.163 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:28:35 np0005601226 nova_compute[239456]: 2026-01-29 17:28:35.164 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:28:35 np0005601226 nova_compute[239456]: 2026-01-29 17:28:35.166 239460 INFO nova.virt.block_device [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] Booting with volume c7d61ea6-ae5a-4894-8166-55238a1d384e at /dev/vda#033[00m
Jan 29 12:28:35 np0005601226 nova_compute[239456]: 2026-01-29 17:28:35.271 239460 DEBUG nova.policy [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3901089a059c4bdb8d0497398873d2f1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '420f46ae230d4c529afe366a1b780921', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 29 12:28:35 np0005601226 nova_compute[239456]: 2026-01-29 17:28:35.348 239460 DEBUG os_brick.utils [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 29 12:28:35 np0005601226 nova_compute[239456]: 2026-01-29 17:28:35.349 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:28:35 np0005601226 nova_compute[239456]: 2026-01-29 17:28:35.359 249612 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:28:35 np0005601226 nova_compute[239456]: 2026-01-29 17:28:35.359 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[252e4cc5-d9c2-4847-848d-b8542857bd86]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:28:35 np0005601226 nova_compute[239456]: 2026-01-29 17:28:35.361 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:28:35 np0005601226 nova_compute[239456]: 2026-01-29 17:28:35.367 249612 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:28:35 np0005601226 nova_compute[239456]: 2026-01-29 17:28:35.367 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[88c10460-e547-4d26-acf1-645104523809]: (4, ('InitiatorName=iqn.1994-05.com.redhat:29fa340538f', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:28:35 np0005601226 nova_compute[239456]: 2026-01-29 17:28:35.369 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:28:35 np0005601226 nova_compute[239456]: 2026-01-29 17:28:35.376 249612 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:28:35 np0005601226 nova_compute[239456]: 2026-01-29 17:28:35.376 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[a084d861-f9ea-4da8-90e6-a53c3921aec6]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:28:35 np0005601226 nova_compute[239456]: 2026-01-29 17:28:35.377 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[41bed86a-195d-4b6e-a7e0-05c73ced2ad0]: (4, '3d58286e-1b14-486e-8cad-0bdb2d2969c4') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:28:35 np0005601226 nova_compute[239456]: 2026-01-29 17:28:35.378 239460 DEBUG oslo_concurrency.processutils [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:28:35 np0005601226 nova_compute[239456]: 2026-01-29 17:28:35.397 239460 DEBUG oslo_concurrency.processutils [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] CMD "nvme version" returned: 0 in 0.019s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:28:35 np0005601226 nova_compute[239456]: 2026-01-29 17:28:35.400 239460 DEBUG os_brick.initiator.connectors.lightos [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 29 12:28:35 np0005601226 nova_compute[239456]: 2026-01-29 17:28:35.401 239460 DEBUG os_brick.initiator.connectors.lightos [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 29 12:28:35 np0005601226 nova_compute[239456]: 2026-01-29 17:28:35.401 239460 DEBUG os_brick.initiator.connectors.lightos [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 29 12:28:35 np0005601226 nova_compute[239456]: 2026-01-29 17:28:35.402 239460 DEBUG os_brick.utils [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] <== get_connector_properties: return (53ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:29fa340538f', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '3d58286e-1b14-486e-8cad-0bdb2d2969c4', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 29 12:28:35 np0005601226 nova_compute[239456]: 2026-01-29 17:28:35.402 239460 DEBUG nova.virt.block_device [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] Updating existing volume attachment record: bdf9ed78-74f7-41d3-9314-2e28f76e9321 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 29 12:28:35 np0005601226 nova_compute[239456]: 2026-01-29 17:28:35.441 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:28:35 np0005601226 nova_compute[239456]: 2026-01-29 17:28:35.676 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:28:35 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1556: 305 pgs: 305 active+clean; 134 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 97 KiB/s rd, 2.3 MiB/s wr, 139 op/s
Jan 29 12:28:36 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:28:36 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1652189531' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:28:36 np0005601226 nova_compute[239456]: 2026-01-29 17:28:36.321 239460 DEBUG nova.network.neutron [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] Successfully created port: e169cad0-27e6-4099-aed1-80994ec6b573 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 29 12:28:36 np0005601226 nova_compute[239456]: 2026-01-29 17:28:36.374 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:28:36 np0005601226 nova_compute[239456]: 2026-01-29 17:28:36.543 239460 DEBUG nova.compute.manager [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 29 12:28:36 np0005601226 nova_compute[239456]: 2026-01-29 17:28:36.545 239460 DEBUG nova.virt.libvirt.driver [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 29 12:28:36 np0005601226 nova_compute[239456]: 2026-01-29 17:28:36.546 239460 INFO nova.virt.libvirt.driver [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] Creating image(s)#033[00m
Jan 29 12:28:36 np0005601226 nova_compute[239456]: 2026-01-29 17:28:36.546 239460 DEBUG nova.virt.libvirt.driver [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Jan 29 12:28:36 np0005601226 nova_compute[239456]: 2026-01-29 17:28:36.547 239460 DEBUG nova.virt.libvirt.driver [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] Ensure instance console log exists: /var/lib/nova/instances/60a233ad-302a-45ea-a78c-31ff4f06919e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 29 12:28:36 np0005601226 nova_compute[239456]: 2026-01-29 17:28:36.547 239460 DEBUG oslo_concurrency.lockutils [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:28:36 np0005601226 nova_compute[239456]: 2026-01-29 17:28:36.547 239460 DEBUG oslo_concurrency.lockutils [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:28:36 np0005601226 nova_compute[239456]: 2026-01-29 17:28:36.548 239460 DEBUG oslo_concurrency.lockutils [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:28:36 np0005601226 nova_compute[239456]: 2026-01-29 17:28:36.845 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:28:36 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e397 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:28:36 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e397 do_prune osdmap full prune enabled
Jan 29 12:28:36 np0005601226 nova_compute[239456]: 2026-01-29 17:28:36.864 239460 WARNING nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] While synchronizing instance power states, found 1 instances in the database and 0 instances on the hypervisor.#033[00m
Jan 29 12:28:36 np0005601226 nova_compute[239456]: 2026-01-29 17:28:36.864 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Triggering sync for uuid 60a233ad-302a-45ea-a78c-31ff4f06919e _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Jan 29 12:28:36 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e398 e398: 3 total, 3 up, 3 in
Jan 29 12:28:36 np0005601226 nova_compute[239456]: 2026-01-29 17:28:36.865 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquiring lock "60a233ad-302a-45ea-a78c-31ff4f06919e" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:28:36 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e398: 3 total, 3 up, 3 in
Jan 29 12:28:37 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:28:37 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2267461157' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:28:37 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:28:37 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2267461157' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:28:37 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1558: 305 pgs: 305 active+clean; 134 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 2.2 KiB/s wr, 54 op/s
Jan 29 12:28:37 np0005601226 nova_compute[239456]: 2026-01-29 17:28:37.972 239460 DEBUG nova.network.neutron [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] Successfully updated port: e169cad0-27e6-4099-aed1-80994ec6b573 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 29 12:28:37 np0005601226 nova_compute[239456]: 2026-01-29 17:28:37.988 239460 DEBUG oslo_concurrency.lockutils [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Acquiring lock "refresh_cache-60a233ad-302a-45ea-a78c-31ff4f06919e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:28:37 np0005601226 nova_compute[239456]: 2026-01-29 17:28:37.988 239460 DEBUG oslo_concurrency.lockutils [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Acquired lock "refresh_cache-60a233ad-302a-45ea-a78c-31ff4f06919e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:28:37 np0005601226 nova_compute[239456]: 2026-01-29 17:28:37.988 239460 DEBUG nova.network.neutron [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 29 12:28:38 np0005601226 nova_compute[239456]: 2026-01-29 17:28:38.114 239460 DEBUG nova.compute.manager [req-fcd0f9ed-cece-44cf-bde1-e8887e89311f req-29271673-c7f7-4f58-bae3-26dff7e34338 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] Received event network-changed-e169cad0-27e6-4099-aed1-80994ec6b573 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:28:38 np0005601226 nova_compute[239456]: 2026-01-29 17:28:38.114 239460 DEBUG nova.compute.manager [req-fcd0f9ed-cece-44cf-bde1-e8887e89311f req-29271673-c7f7-4f58-bae3-26dff7e34338 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] Refreshing instance network info cache due to event network-changed-e169cad0-27e6-4099-aed1-80994ec6b573. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 29 12:28:38 np0005601226 nova_compute[239456]: 2026-01-29 17:28:38.114 239460 DEBUG oslo_concurrency.lockutils [req-fcd0f9ed-cece-44cf-bde1-e8887e89311f req-29271673-c7f7-4f58-bae3-26dff7e34338 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "refresh_cache-60a233ad-302a-45ea-a78c-31ff4f06919e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:28:38 np0005601226 nova_compute[239456]: 2026-01-29 17:28:38.208 239460 DEBUG nova.network.neutron [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 29 12:28:39 np0005601226 nova_compute[239456]: 2026-01-29 17:28:39.120 239460 DEBUG nova.network.neutron [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] Updating instance_info_cache with network_info: [{"id": "e169cad0-27e6-4099-aed1-80994ec6b573", "address": "fa:16:3e:eb:63:33", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape169cad0-27", "ovs_interfaceid": "e169cad0-27e6-4099-aed1-80994ec6b573", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:28:39 np0005601226 nova_compute[239456]: 2026-01-29 17:28:39.142 239460 DEBUG oslo_concurrency.lockutils [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Releasing lock "refresh_cache-60a233ad-302a-45ea-a78c-31ff4f06919e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:28:39 np0005601226 nova_compute[239456]: 2026-01-29 17:28:39.143 239460 DEBUG nova.compute.manager [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] Instance network_info: |[{"id": "e169cad0-27e6-4099-aed1-80994ec6b573", "address": "fa:16:3e:eb:63:33", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape169cad0-27", "ovs_interfaceid": "e169cad0-27e6-4099-aed1-80994ec6b573", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 29 12:28:39 np0005601226 nova_compute[239456]: 2026-01-29 17:28:39.143 239460 DEBUG oslo_concurrency.lockutils [req-fcd0f9ed-cece-44cf-bde1-e8887e89311f req-29271673-c7f7-4f58-bae3-26dff7e34338 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquired lock "refresh_cache-60a233ad-302a-45ea-a78c-31ff4f06919e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:28:39 np0005601226 nova_compute[239456]: 2026-01-29 17:28:39.144 239460 DEBUG nova.network.neutron [req-fcd0f9ed-cece-44cf-bde1-e8887e89311f req-29271673-c7f7-4f58-bae3-26dff7e34338 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] Refreshing network info cache for port e169cad0-27e6-4099-aed1-80994ec6b573 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 29 12:28:39 np0005601226 nova_compute[239456]: 2026-01-29 17:28:39.146 239460 DEBUG nova.virt.libvirt.driver [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] Start _get_guest_xml network_info=[{"id": "e169cad0-27e6-4099-aed1-80994ec6b573", "address": "fa:16:3e:eb:63:33", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape169cad0-27", "ovs_interfaceid": "e169cad0-27e6-4099-aed1-80994ec6b573", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'guest_format': None, 'mount_device': '/dev/vda', 'device_type': 'disk', 'disk_bus': 'virtio', 'attachment_id': 'bdf9ed78-74f7-41d3-9314-2e28f76e9321', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-c7d61ea6-ae5a-4894-8166-55238a1d384e', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'c7d61ea6-ae5a-4894-8166-55238a1d384e', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '60a233ad-302a-45ea-a78c-31ff4f06919e', 'attached_at': '', 'detached_at': '', 'volume_id': 'c7d61ea6-ae5a-4894-8166-55238a1d384e', 'serial': 'c7d61ea6-ae5a-4894-8166-55238a1d384e'}, 'delete_on_termination': True, 'boot_index': 0, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 29 12:28:39 np0005601226 nova_compute[239456]: 2026-01-29 17:28:39.150 239460 WARNING nova.virt.libvirt.driver [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 29 12:28:39 np0005601226 nova_compute[239456]: 2026-01-29 17:28:39.154 239460 DEBUG nova.virt.libvirt.host [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 29 12:28:39 np0005601226 nova_compute[239456]: 2026-01-29 17:28:39.154 239460 DEBUG nova.virt.libvirt.host [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 29 12:28:39 np0005601226 nova_compute[239456]: 2026-01-29 17:28:39.160 239460 DEBUG nova.virt.libvirt.host [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 29 12:28:39 np0005601226 nova_compute[239456]: 2026-01-29 17:28:39.161 239460 DEBUG nova.virt.libvirt.host [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 29 12:28:39 np0005601226 nova_compute[239456]: 2026-01-29 17:28:39.161 239460 DEBUG nova.virt.libvirt.driver [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 29 12:28:39 np0005601226 nova_compute[239456]: 2026-01-29 17:28:39.161 239460 DEBUG nova.virt.hardware [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-29T17:13:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='e367d44e-e23e-4b8e-90d1-56d09c8403b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 29 12:28:39 np0005601226 nova_compute[239456]: 2026-01-29 17:28:39.162 239460 DEBUG nova.virt.hardware [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 29 12:28:39 np0005601226 nova_compute[239456]: 2026-01-29 17:28:39.162 239460 DEBUG nova.virt.hardware [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 29 12:28:39 np0005601226 nova_compute[239456]: 2026-01-29 17:28:39.162 239460 DEBUG nova.virt.hardware [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 29 12:28:39 np0005601226 nova_compute[239456]: 2026-01-29 17:28:39.162 239460 DEBUG nova.virt.hardware [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 29 12:28:39 np0005601226 nova_compute[239456]: 2026-01-29 17:28:39.162 239460 DEBUG nova.virt.hardware [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 29 12:28:39 np0005601226 nova_compute[239456]: 2026-01-29 17:28:39.163 239460 DEBUG nova.virt.hardware [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 29 12:28:39 np0005601226 nova_compute[239456]: 2026-01-29 17:28:39.163 239460 DEBUG nova.virt.hardware [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 29 12:28:39 np0005601226 nova_compute[239456]: 2026-01-29 17:28:39.163 239460 DEBUG nova.virt.hardware [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 29 12:28:39 np0005601226 nova_compute[239456]: 2026-01-29 17:28:39.163 239460 DEBUG nova.virt.hardware [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 29 12:28:39 np0005601226 nova_compute[239456]: 2026-01-29 17:28:39.163 239460 DEBUG nova.virt.hardware [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 29 12:28:39 np0005601226 nova_compute[239456]: 2026-01-29 17:28:39.185 239460 DEBUG nova.storage.rbd_utils [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] rbd image 60a233ad-302a-45ea-a78c-31ff4f06919e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:28:39 np0005601226 nova_compute[239456]: 2026-01-29 17:28:39.188 239460 DEBUG oslo_concurrency.processutils [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:28:39 np0005601226 nova_compute[239456]: 2026-01-29 17:28:39.624 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:28:39 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:28:39 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/118484057' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:28:39 np0005601226 nova_compute[239456]: 2026-01-29 17:28:39.678 239460 DEBUG oslo_concurrency.processutils [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:28:39 np0005601226 nova_compute[239456]: 2026-01-29 17:28:39.703 239460 DEBUG nova.virt.libvirt.vif [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-29T17:28:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-volume-backed-server-1518611244',display_name='tempest-TestVolumeBootPattern-volume-backed-server-1518611244',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-volume-backed-server-1518611244',id=17,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKySA50EaxzQB5p6K+5RoO+1u58vRcRzzkaFVlh7AgCu5iz7hwJw5cRUXS90xOqapy/lUThdOxCeLtsZuFMFUACxxtFu0BK2G+J6wGByeMurwKrEgC8uCS+2N5LgLkKS8Q==',key_name='tempest-keypair-2116352229',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='420f46ae230d4c529afe366a1b780921',ramdisk_id='',reservation_id='r-mga8rw6u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1871389491',owner_user_name='tempest-TestVolumeBootPattern-1871389491-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-29T17:28:35Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3901089a059c4bdb8d0497398873d2f1',uuid=60a233ad-302a-45ea-a78c-31ff4f06919e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e169cad0-27e6-4099-aed1-80994ec6b573", "address": "fa:16:3e:eb:63:33", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], 
"routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape169cad0-27", "ovs_interfaceid": "e169cad0-27e6-4099-aed1-80994ec6b573", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 29 12:28:39 np0005601226 nova_compute[239456]: 2026-01-29 17:28:39.704 239460 DEBUG nova.network.os_vif_util [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Converting VIF {"id": "e169cad0-27e6-4099-aed1-80994ec6b573", "address": "fa:16:3e:eb:63:33", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape169cad0-27", "ovs_interfaceid": "e169cad0-27e6-4099-aed1-80994ec6b573", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 29 12:28:39 np0005601226 nova_compute[239456]: 2026-01-29 17:28:39.704 239460 DEBUG nova.network.os_vif_util [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:63:33,bridge_name='br-int',has_traffic_filtering=True,id=e169cad0-27e6-4099-aed1-80994ec6b573,network=Network(3c08c304-2b32-4b44-ac2b-279bb8b2403b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape169cad0-27') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 29 12:28:39 np0005601226 nova_compute[239456]: 2026-01-29 17:28:39.705 239460 DEBUG nova.objects.instance [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lazy-loading 'pci_devices' on Instance uuid 60a233ad-302a-45ea-a78c-31ff4f06919e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:28:39 np0005601226 nova_compute[239456]: 2026-01-29 17:28:39.723 239460 DEBUG nova.virt.libvirt.driver [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] End _get_guest_xml xml=<domain type="kvm">
Jan 29 12:28:39 np0005601226 nova_compute[239456]:  <uuid>60a233ad-302a-45ea-a78c-31ff4f06919e</uuid>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:  <name>instance-00000011</name>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:  <memory>131072</memory>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:  <vcpu>1</vcpu>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:  <metadata>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 29 12:28:39 np0005601226 nova_compute[239456]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:      <nova:name>tempest-TestVolumeBootPattern-volume-backed-server-1518611244</nova:name>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:      <nova:creationTime>2026-01-29 17:28:39</nova:creationTime>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:      <nova:flavor name="m1.nano">
Jan 29 12:28:39 np0005601226 nova_compute[239456]:        <nova:memory>128</nova:memory>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:        <nova:disk>1</nova:disk>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:        <nova:swap>0</nova:swap>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:        <nova:ephemeral>0</nova:ephemeral>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:        <nova:vcpus>1</nova:vcpus>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:      </nova:flavor>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:      <nova:owner>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:        <nova:user uuid="3901089a059c4bdb8d0497398873d2f1">tempest-TestVolumeBootPattern-1871389491-project-member</nova:user>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:        <nova:project uuid="420f46ae230d4c529afe366a1b780921">tempest-TestVolumeBootPattern-1871389491</nova:project>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:      </nova:owner>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:      <nova:ports>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:        <nova:port uuid="e169cad0-27e6-4099-aed1-80994ec6b573">
Jan 29 12:28:39 np0005601226 nova_compute[239456]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:        </nova:port>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:      </nova:ports>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:    </nova:instance>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:  </metadata>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:  <sysinfo type="smbios">
Jan 29 12:28:39 np0005601226 nova_compute[239456]:    <system>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:      <entry name="manufacturer">RDO</entry>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:      <entry name="product">OpenStack Compute</entry>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:      <entry name="serial">60a233ad-302a-45ea-a78c-31ff4f06919e</entry>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:      <entry name="uuid">60a233ad-302a-45ea-a78c-31ff4f06919e</entry>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:      <entry name="family">Virtual Machine</entry>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:    </system>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:  </sysinfo>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:  <os>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:    <boot dev="hd"/>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:    <smbios mode="sysinfo"/>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:  </os>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:  <features>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:    <acpi/>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:    <apic/>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:    <vmcoreinfo/>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:  </features>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:  <clock offset="utc">
Jan 29 12:28:39 np0005601226 nova_compute[239456]:    <timer name="pit" tickpolicy="delay"/>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:    <timer name="hpet" present="no"/>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:  </clock>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:  <cpu mode="host-model" match="exact">
Jan 29 12:28:39 np0005601226 nova_compute[239456]:    <topology sockets="1" cores="1" threads="1"/>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:  </cpu>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:  <devices>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:    <disk type="network" device="cdrom">
Jan 29 12:28:39 np0005601226 nova_compute[239456]:      <driver type="raw" cache="none"/>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:      <source protocol="rbd" name="vms/60a233ad-302a-45ea-a78c-31ff4f06919e_disk.config">
Jan 29 12:28:39 np0005601226 nova_compute[239456]:        <host name="192.168.122.100" port="6789"/>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:      </source>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:      <auth username="openstack">
Jan 29 12:28:39 np0005601226 nova_compute[239456]:        <secret type="ceph" uuid="cc5c72e3-31e0-58b9-8731-456117d38f4a"/>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:      </auth>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:      <target dev="sda" bus="sata"/>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:    </disk>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:    <disk type="network" device="disk">
Jan 29 12:28:39 np0005601226 nova_compute[239456]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:      <source protocol="rbd" name="volumes/volume-c7d61ea6-ae5a-4894-8166-55238a1d384e">
Jan 29 12:28:39 np0005601226 nova_compute[239456]:        <host name="192.168.122.100" port="6789"/>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:      </source>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:      <auth username="openstack">
Jan 29 12:28:39 np0005601226 nova_compute[239456]:        <secret type="ceph" uuid="cc5c72e3-31e0-58b9-8731-456117d38f4a"/>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:      </auth>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:      <target dev="vda" bus="virtio"/>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:      <serial>c7d61ea6-ae5a-4894-8166-55238a1d384e</serial>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:    </disk>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:    <interface type="ethernet">
Jan 29 12:28:39 np0005601226 nova_compute[239456]:      <mac address="fa:16:3e:eb:63:33"/>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:      <model type="virtio"/>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:      <driver name="vhost" rx_queue_size="512"/>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:      <mtu size="1442"/>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:      <target dev="tape169cad0-27"/>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:    </interface>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:    <serial type="pty">
Jan 29 12:28:39 np0005601226 nova_compute[239456]:      <log file="/var/lib/nova/instances/60a233ad-302a-45ea-a78c-31ff4f06919e/console.log" append="off"/>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:    </serial>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:    <video>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:      <model type="virtio"/>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:    </video>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:    <input type="tablet" bus="usb"/>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:    <rng model="virtio">
Jan 29 12:28:39 np0005601226 nova_compute[239456]:      <backend model="random">/dev/urandom</backend>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:    </rng>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root"/>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:    <controller type="usb" index="0"/>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:    <memballoon model="virtio">
Jan 29 12:28:39 np0005601226 nova_compute[239456]:      <stats period="10"/>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:    </memballoon>
Jan 29 12:28:39 np0005601226 nova_compute[239456]:  </devices>
Jan 29 12:28:39 np0005601226 nova_compute[239456]: </domain>
Jan 29 12:28:39 np0005601226 nova_compute[239456]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 29 12:28:39 np0005601226 nova_compute[239456]: 2026-01-29 17:28:39.723 239460 DEBUG nova.compute.manager [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] Preparing to wait for external event network-vif-plugged-e169cad0-27e6-4099-aed1-80994ec6b573 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 29 12:28:39 np0005601226 nova_compute[239456]: 2026-01-29 17:28:39.724 239460 DEBUG oslo_concurrency.lockutils [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Acquiring lock "60a233ad-302a-45ea-a78c-31ff4f06919e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:28:39 np0005601226 nova_compute[239456]: 2026-01-29 17:28:39.724 239460 DEBUG oslo_concurrency.lockutils [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "60a233ad-302a-45ea-a78c-31ff4f06919e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:28:39 np0005601226 nova_compute[239456]: 2026-01-29 17:28:39.724 239460 DEBUG oslo_concurrency.lockutils [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "60a233ad-302a-45ea-a78c-31ff4f06919e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:28:39 np0005601226 nova_compute[239456]: 2026-01-29 17:28:39.725 239460 DEBUG nova.virt.libvirt.vif [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-29T17:28:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-volume-backed-server-1518611244',display_name='tempest-TestVolumeBootPattern-volume-backed-server-1518611244',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-volume-backed-server-1518611244',id=17,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKySA50EaxzQB5p6K+5RoO+1u58vRcRzzkaFVlh7AgCu5iz7hwJw5cRUXS90xOqapy/lUThdOxCeLtsZuFMFUACxxtFu0BK2G+J6wGByeMurwKrEgC8uCS+2N5LgLkKS8Q==',key_name='tempest-keypair-2116352229',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='420f46ae230d4c529afe366a1b780921',ramdisk_id='',reservation_id='r-mga8rw6u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1871389491',owner_user_name='tempest-TestVolumeBootPattern-1871389491-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-29T17:28:35Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3901089a059c4bdb8d0497398873d2f1',uuid=60a233ad-302a-45ea-a78c-31ff4f06919e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e169cad0-27e6-4099-aed1-80994ec6b573", "address": "fa:16:3e:eb:63:33", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": 
[]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape169cad0-27", "ovs_interfaceid": "e169cad0-27e6-4099-aed1-80994ec6b573", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 29 12:28:39 np0005601226 nova_compute[239456]: 2026-01-29 17:28:39.725 239460 DEBUG nova.network.os_vif_util [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Converting VIF {"id": "e169cad0-27e6-4099-aed1-80994ec6b573", "address": "fa:16:3e:eb:63:33", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape169cad0-27", "ovs_interfaceid": "e169cad0-27e6-4099-aed1-80994ec6b573", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 29 12:28:39 np0005601226 nova_compute[239456]: 2026-01-29 17:28:39.726 239460 DEBUG nova.network.os_vif_util [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:63:33,bridge_name='br-int',has_traffic_filtering=True,id=e169cad0-27e6-4099-aed1-80994ec6b573,network=Network(3c08c304-2b32-4b44-ac2b-279bb8b2403b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape169cad0-27') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 29 12:28:39 np0005601226 nova_compute[239456]: 2026-01-29 17:28:39.726 239460 DEBUG os_vif [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:63:33,bridge_name='br-int',has_traffic_filtering=True,id=e169cad0-27e6-4099-aed1-80994ec6b573,network=Network(3c08c304-2b32-4b44-ac2b-279bb8b2403b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape169cad0-27') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 29 12:28:39 np0005601226 nova_compute[239456]: 2026-01-29 17:28:39.727 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:28:39 np0005601226 nova_compute[239456]: 2026-01-29 17:28:39.728 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:28:39 np0005601226 nova_compute[239456]: 2026-01-29 17:28:39.728 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 29 12:28:39 np0005601226 nova_compute[239456]: 2026-01-29 17:28:39.730 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:28:39 np0005601226 nova_compute[239456]: 2026-01-29 17:28:39.730 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape169cad0-27, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:28:39 np0005601226 nova_compute[239456]: 2026-01-29 17:28:39.731 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape169cad0-27, col_values=(('external_ids', {'iface-id': 'e169cad0-27e6-4099-aed1-80994ec6b573', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:eb:63:33', 'vm-uuid': '60a233ad-302a-45ea-a78c-31ff4f06919e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:28:39 np0005601226 nova_compute[239456]: 2026-01-29 17:28:39.732 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:28:39 np0005601226 NetworkManager[49020]: <info>  [1769707719.7343] manager: (tape169cad0-27): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/90)
Jan 29 12:28:39 np0005601226 nova_compute[239456]: 2026-01-29 17:28:39.735 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 29 12:28:39 np0005601226 nova_compute[239456]: 2026-01-29 17:28:39.737 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:28:39 np0005601226 nova_compute[239456]: 2026-01-29 17:28:39.737 239460 INFO os_vif [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:63:33,bridge_name='br-int',has_traffic_filtering=True,id=e169cad0-27e6-4099-aed1-80994ec6b573,network=Network(3c08c304-2b32-4b44-ac2b-279bb8b2403b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape169cad0-27')#033[00m
Jan 29 12:28:39 np0005601226 nova_compute[239456]: 2026-01-29 17:28:39.783 239460 DEBUG nova.virt.libvirt.driver [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:28:39 np0005601226 nova_compute[239456]: 2026-01-29 17:28:39.784 239460 DEBUG nova.virt.libvirt.driver [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:28:39 np0005601226 nova_compute[239456]: 2026-01-29 17:28:39.784 239460 DEBUG nova.virt.libvirt.driver [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] No VIF found with MAC fa:16:3e:eb:63:33, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 29 12:28:39 np0005601226 nova_compute[239456]: 2026-01-29 17:28:39.785 239460 INFO nova.virt.libvirt.driver [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] Using config drive#033[00m
Jan 29 12:28:39 np0005601226 nova_compute[239456]: 2026-01-29 17:28:39.802 239460 DEBUG nova.storage.rbd_utils [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] rbd image 60a233ad-302a-45ea-a78c-31ff4f06919e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:28:39 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1559: 305 pgs: 305 active+clean; 134 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 3.2 KiB/s wr, 74 op/s
Jan 29 12:28:40 np0005601226 nova_compute[239456]: 2026-01-29 17:28:40.245 239460 INFO nova.virt.libvirt.driver [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] Creating config drive at /var/lib/nova/instances/60a233ad-302a-45ea-a78c-31ff4f06919e/disk.config#033[00m
Jan 29 12:28:40 np0005601226 nova_compute[239456]: 2026-01-29 17:28:40.253 239460 DEBUG oslo_concurrency.processutils [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/60a233ad-302a-45ea-a78c-31ff4f06919e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmu4pepc1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:28:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:28:40.289 155625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:28:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:28:40.289 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:28:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:28:40.290 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:28:40 np0005601226 nova_compute[239456]: 2026-01-29 17:28:40.376 239460 DEBUG oslo_concurrency.processutils [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/60a233ad-302a-45ea-a78c-31ff4f06919e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmu4pepc1" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:28:40 np0005601226 nova_compute[239456]: 2026-01-29 17:28:40.396 239460 DEBUG nova.storage.rbd_utils [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] rbd image 60a233ad-302a-45ea-a78c-31ff4f06919e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:28:40 np0005601226 nova_compute[239456]: 2026-01-29 17:28:40.400 239460 DEBUG oslo_concurrency.processutils [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/60a233ad-302a-45ea-a78c-31ff4f06919e/disk.config 60a233ad-302a-45ea-a78c-31ff4f06919e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:28:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:28:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:28:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:28:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:28:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:28:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:28:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2026-01-29_17:28:40
Jan 29 12:28:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 12:28:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] do_upmap
Jan 29 12:28:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] pools ['backups', 'images', '.mgr', '.rgw.root', 'default.rgw.meta', 'default.rgw.log', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta', 'volumes', 'cephfs.cephfs.data']
Jan 29 12:28:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 12:28:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 12:28:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:28:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 12:28:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:28:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:28:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:28:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:28:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:28:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:28:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:28:40 np0005601226 nova_compute[239456]: 2026-01-29 17:28:40.846 239460 DEBUG oslo_concurrency.processutils [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/60a233ad-302a-45ea-a78c-31ff4f06919e/disk.config 60a233ad-302a-45ea-a78c-31ff4f06919e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:28:40 np0005601226 nova_compute[239456]: 2026-01-29 17:28:40.848 239460 INFO nova.virt.libvirt.driver [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] Deleting local config drive /var/lib/nova/instances/60a233ad-302a-45ea-a78c-31ff4f06919e/disk.config because it was imported into RBD.#033[00m
Jan 29 12:28:40 np0005601226 kernel: tape169cad0-27: entered promiscuous mode
Jan 29 12:28:40 np0005601226 NetworkManager[49020]: <info>  [1769707720.9020] manager: (tape169cad0-27): new Tun device (/org/freedesktop/NetworkManager/Devices/91)
Jan 29 12:28:40 np0005601226 nova_compute[239456]: 2026-01-29 17:28:40.903 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:28:40 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:28:40 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1752813008' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:28:40 np0005601226 ovn_controller[145556]: 2026-01-29T17:28:40Z|00163|binding|INFO|Claiming lport e169cad0-27e6-4099-aed1-80994ec6b573 for this chassis.
Jan 29 12:28:40 np0005601226 ovn_controller[145556]: 2026-01-29T17:28:40Z|00164|binding|INFO|e169cad0-27e6-4099-aed1-80994ec6b573: Claiming fa:16:3e:eb:63:33 10.100.0.5
Jan 29 12:28:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:28:40.911 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:eb:63:33 10.100.0.5'], port_security=['fa:16:3e:eb:63:33 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '60a233ad-302a-45ea-a78c-31ff4f06919e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3c08c304-2b32-4b44-ac2b-279bb8b2403b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '420f46ae230d4c529afe366a1b780921', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bfa2c706-6c22-44dc-83b9-263dd9f118c4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=43b4da70-6867-4e05-b172-1e52c878ce1d, chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], logical_port=e169cad0-27e6-4099-aed1-80994ec6b573) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:28:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:28:40.913 155625 INFO neutron.agent.ovn.metadata.agent [-] Port e169cad0-27e6-4099-aed1-80994ec6b573 in datapath 3c08c304-2b32-4b44-ac2b-279bb8b2403b bound to our chassis#033[00m
Jan 29 12:28:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:28:40.915 155625 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3c08c304-2b32-4b44-ac2b-279bb8b2403b#033[00m
Jan 29 12:28:40 np0005601226 nova_compute[239456]: 2026-01-29 17:28:40.917 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:28:40 np0005601226 ovn_controller[145556]: 2026-01-29T17:28:40Z|00165|binding|INFO|Setting lport e169cad0-27e6-4099-aed1-80994ec6b573 ovn-installed in OVS
Jan 29 12:28:40 np0005601226 ovn_controller[145556]: 2026-01-29T17:28:40Z|00166|binding|INFO|Setting lport e169cad0-27e6-4099-aed1-80994ec6b573 up in Southbound
Jan 29 12:28:40 np0005601226 nova_compute[239456]: 2026-01-29 17:28:40.921 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:28:40 np0005601226 nova_compute[239456]: 2026-01-29 17:28:40.924 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:28:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:28:40.928 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[320382a4-8c57-448e-bde7-9e1ebdeb47a7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:28:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:28:40.929 155625 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3c08c304-21 in ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 29 12:28:40 np0005601226 systemd-machined[207561]: New machine qemu-17-instance-00000011.
Jan 29 12:28:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:28:40.931 246354 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3c08c304-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 29 12:28:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:28:40.931 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[d4c18279-1c2d-4c42-93c3-3e0845622cf2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:28:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:28:40.933 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[69ab607f-935d-45f9-912a-5c2b68149b22]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:28:40 np0005601226 systemd[1]: Started Virtual Machine qemu-17-instance-00000011.
Jan 29 12:28:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:28:40.944 156164 DEBUG oslo.privsep.daemon [-] privsep: reply[af5fe5ad-6125-40ee-a834-69c59c3e82c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:28:40 np0005601226 systemd-udevd[264647]: Network interface NamePolicy= disabled on kernel command line.
Jan 29 12:28:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:28:40.967 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[8587e1dc-c78b-49a5-8d8e-abb878e489e4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:28:40 np0005601226 NetworkManager[49020]: <info>  [1769707720.9683] device (tape169cad0-27): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 29 12:28:40 np0005601226 NetworkManager[49020]: <info>  [1769707720.9693] device (tape169cad0-27): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 29 12:28:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:28:40.998 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[db6de366-9269-4587-9b86-37a440d2ce1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:28:41 np0005601226 NetworkManager[49020]: <info>  [1769707721.0044] manager: (tap3c08c304-20): new Veth device (/org/freedesktop/NetworkManager/Devices/92)
Jan 29 12:28:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:28:41.003 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[dc001ea5-edb0-4e49-8f44-f1ab444d5735]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:28:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:28:41.030 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[903af275-1763-478d-a7e1-56af4f81be71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:28:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:28:41.036 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[c114ab75-95a7-4830-81b7-b0dfac89109b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:28:41 np0005601226 NetworkManager[49020]: <info>  [1769707721.0532] device (tap3c08c304-20): carrier: link connected
Jan 29 12:28:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:28:41.060 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[ceba7dce-02d4-423e-9567-1d55ac841c72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:28:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:28:41.073 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[23385861-f41e-4f73-908f-b53956a73ae3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3c08c304-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:94:51:ac'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 57], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 507455, 'reachable_time': 27606, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264677, 'error': None, 'target': 'ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:28:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:28:41.087 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[8c74a526-6d4c-492f-b05b-a118b07b699b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe94:51ac'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 507455, 'tstamp': 507455}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264678, 'error': None, 'target': 'ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:28:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:28:41.100 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[a4d6da25-ece4-4638-bd09-9e77c3948cf8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3c08c304-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:94:51:ac'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 57], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 507455, 'reachable_time': 27606, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 264679, 'error': None, 'target': 'ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:28:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:28:41.121 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[2bfca87f-b87b-4332-aa6a-e6520c769ac6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:28:41 np0005601226 nova_compute[239456]: 2026-01-29 17:28:41.134 239460 DEBUG nova.network.neutron [req-fcd0f9ed-cece-44cf-bde1-e8887e89311f req-29271673-c7f7-4f58-bae3-26dff7e34338 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] Updated VIF entry in instance network info cache for port e169cad0-27e6-4099-aed1-80994ec6b573. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 29 12:28:41 np0005601226 nova_compute[239456]: 2026-01-29 17:28:41.135 239460 DEBUG nova.network.neutron [req-fcd0f9ed-cece-44cf-bde1-e8887e89311f req-29271673-c7f7-4f58-bae3-26dff7e34338 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] Updating instance_info_cache with network_info: [{"id": "e169cad0-27e6-4099-aed1-80994ec6b573", "address": "fa:16:3e:eb:63:33", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape169cad0-27", "ovs_interfaceid": "e169cad0-27e6-4099-aed1-80994ec6b573", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:28:41 np0005601226 nova_compute[239456]: 2026-01-29 17:28:41.155 239460 DEBUG oslo_concurrency.lockutils [req-fcd0f9ed-cece-44cf-bde1-e8887e89311f req-29271673-c7f7-4f58-bae3-26dff7e34338 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Releasing lock "refresh_cache-60a233ad-302a-45ea-a78c-31ff4f06919e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:28:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:28:41.178 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[9d91a41a-8a10-4893-9074-96e796f10a67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:28:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:28:41.180 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c08c304-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:28:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:28:41.180 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 29 12:28:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:28:41.181 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3c08c304-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:28:41 np0005601226 NetworkManager[49020]: <info>  [1769707721.1840] manager: (tap3c08c304-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/93)
Jan 29 12:28:41 np0005601226 kernel: tap3c08c304-20: entered promiscuous mode
Jan 29 12:28:41 np0005601226 nova_compute[239456]: 2026-01-29 17:28:41.182 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:28:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:28:41.185 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3c08c304-20, col_values=(('external_ids', {'iface-id': '4f9b16f1-6965-486d-bc02-ab1e4969963e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:28:41 np0005601226 ovn_controller[145556]: 2026-01-29T17:28:41Z|00167|binding|INFO|Releasing lport 4f9b16f1-6965-486d-bc02-ab1e4969963e from this chassis (sb_readonly=0)
Jan 29 12:28:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:28:41.188 155625 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3c08c304-2b32-4b44-ac2b-279bb8b2403b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3c08c304-2b32-4b44-ac2b-279bb8b2403b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 29 12:28:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:28:41.189 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[c3565251-fba6-4c35-90bf-8814098e4568]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:28:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:28:41.190 155625 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 29 12:28:41 np0005601226 ovn_metadata_agent[155620]: global
Jan 29 12:28:41 np0005601226 ovn_metadata_agent[155620]:    log         /dev/log local0 debug
Jan 29 12:28:41 np0005601226 ovn_metadata_agent[155620]:    log-tag     haproxy-metadata-proxy-3c08c304-2b32-4b44-ac2b-279bb8b2403b
Jan 29 12:28:41 np0005601226 ovn_metadata_agent[155620]:    user        root
Jan 29 12:28:41 np0005601226 ovn_metadata_agent[155620]:    group       root
Jan 29 12:28:41 np0005601226 ovn_metadata_agent[155620]:    maxconn     1024
Jan 29 12:28:41 np0005601226 ovn_metadata_agent[155620]:    pidfile     /var/lib/neutron/external/pids/3c08c304-2b32-4b44-ac2b-279bb8b2403b.pid.haproxy
Jan 29 12:28:41 np0005601226 ovn_metadata_agent[155620]:    daemon
Jan 29 12:28:41 np0005601226 ovn_metadata_agent[155620]: 
Jan 29 12:28:41 np0005601226 ovn_metadata_agent[155620]: defaults
Jan 29 12:28:41 np0005601226 ovn_metadata_agent[155620]:    log global
Jan 29 12:28:41 np0005601226 ovn_metadata_agent[155620]:    mode http
Jan 29 12:28:41 np0005601226 ovn_metadata_agent[155620]:    option httplog
Jan 29 12:28:41 np0005601226 ovn_metadata_agent[155620]:    option dontlognull
Jan 29 12:28:41 np0005601226 ovn_metadata_agent[155620]:    option http-server-close
Jan 29 12:28:41 np0005601226 ovn_metadata_agent[155620]:    option forwardfor
Jan 29 12:28:41 np0005601226 ovn_metadata_agent[155620]:    retries                 3
Jan 29 12:28:41 np0005601226 ovn_metadata_agent[155620]:    timeout http-request    30s
Jan 29 12:28:41 np0005601226 ovn_metadata_agent[155620]:    timeout connect         30s
Jan 29 12:28:41 np0005601226 ovn_metadata_agent[155620]:    timeout client          32s
Jan 29 12:28:41 np0005601226 ovn_metadata_agent[155620]:    timeout server          32s
Jan 29 12:28:41 np0005601226 ovn_metadata_agent[155620]:    timeout http-keep-alive 30s
Jan 29 12:28:41 np0005601226 ovn_metadata_agent[155620]: 
Jan 29 12:28:41 np0005601226 ovn_metadata_agent[155620]: 
Jan 29 12:28:41 np0005601226 ovn_metadata_agent[155620]: listen listener
Jan 29 12:28:41 np0005601226 ovn_metadata_agent[155620]:    bind 169.254.169.254:80
Jan 29 12:28:41 np0005601226 ovn_metadata_agent[155620]:    server metadata /var/lib/neutron/metadata_proxy
Jan 29 12:28:41 np0005601226 ovn_metadata_agent[155620]:    http-request add-header X-OVN-Network-ID 3c08c304-2b32-4b44-ac2b-279bb8b2403b
Jan 29 12:28:41 np0005601226 ovn_metadata_agent[155620]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 29 12:28:41 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:28:41.191 155625 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b', 'env', 'PROCESS_TAG=haproxy-3c08c304-2b32-4b44-ac2b-279bb8b2403b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3c08c304-2b32-4b44-ac2b-279bb8b2403b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 29 12:28:41 np0005601226 nova_compute[239456]: 2026-01-29 17:28:41.195 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:28:41 np0005601226 nova_compute[239456]: 2026-01-29 17:28:41.294 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769707721.2939658, 60a233ad-302a-45ea-a78c-31ff4f06919e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:28:41 np0005601226 nova_compute[239456]: 2026-01-29 17:28:41.295 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] VM Started (Lifecycle Event)#033[00m
Jan 29 12:28:41 np0005601226 nova_compute[239456]: 2026-01-29 17:28:41.321 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:28:41 np0005601226 nova_compute[239456]: 2026-01-29 17:28:41.326 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769707721.2941363, 60a233ad-302a-45ea-a78c-31ff4f06919e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:28:41 np0005601226 nova_compute[239456]: 2026-01-29 17:28:41.326 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] VM Paused (Lifecycle Event)#033[00m
Jan 29 12:28:41 np0005601226 nova_compute[239456]: 2026-01-29 17:28:41.342 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:28:41 np0005601226 nova_compute[239456]: 2026-01-29 17:28:41.345 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 29 12:28:41 np0005601226 nova_compute[239456]: 2026-01-29 17:28:41.362 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 29 12:28:41 np0005601226 nova_compute[239456]: 2026-01-29 17:28:41.410 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:28:41 np0005601226 podman[264753]: 2026-01-29 17:28:41.530313123 +0000 UTC m=+0.064768094 container create e10f16e657962d9bdea608b4b27be59e326409afcb19bb286be4901127da9e52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 29 12:28:41 np0005601226 systemd[1]: Started libpod-conmon-e10f16e657962d9bdea608b4b27be59e326409afcb19bb286be4901127da9e52.scope.
Jan 29 12:28:41 np0005601226 podman[264753]: 2026-01-29 17:28:41.481891007 +0000 UTC m=+0.016345958 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 29 12:28:41 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:28:41 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19575e101d8a86947f5b761f91dc5304ec1b058be6521e87bb89a5d24fa6d254/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 29 12:28:41 np0005601226 podman[264753]: 2026-01-29 17:28:41.608974408 +0000 UTC m=+0.143429369 container init e10f16e657962d9bdea608b4b27be59e326409afcb19bb286be4901127da9e52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Jan 29 12:28:41 np0005601226 podman[264753]: 2026-01-29 17:28:41.613087038 +0000 UTC m=+0.147541969 container start e10f16e657962d9bdea608b4b27be59e326409afcb19bb286be4901127da9e52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 29 12:28:41 np0005601226 neutron-haproxy-ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b[264769]: [NOTICE]   (264773) : New worker (264775) forked
Jan 29 12:28:41 np0005601226 neutron-haproxy-ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b[264769]: [NOTICE]   (264773) : Loading success.
Jan 29 12:28:41 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1560: 305 pgs: 305 active+clean; 134 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 3.1 KiB/s wr, 58 op/s
Jan 29 12:28:41 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e398 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:28:41 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e398 do_prune osdmap full prune enabled
Jan 29 12:28:41 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e399 e399: 3 total, 3 up, 3 in
Jan 29 12:28:41 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e399: 3 total, 3 up, 3 in
Jan 29 12:28:42 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e399 do_prune osdmap full prune enabled
Jan 29 12:28:42 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e400 e400: 3 total, 3 up, 3 in
Jan 29 12:28:42 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e400: 3 total, 3 up, 3 in
Jan 29 12:28:43 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1563: 305 pgs: 305 active+clean; 134 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 3.4 KiB/s wr, 65 op/s
Jan 29 12:28:43 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e400 do_prune osdmap full prune enabled
Jan 29 12:28:43 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e401 e401: 3 total, 3 up, 3 in
Jan 29 12:28:43 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e401: 3 total, 3 up, 3 in
Jan 29 12:28:44 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:28:44 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2893665678' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:28:44 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:28:44 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2893665678' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:28:44 np0005601226 nova_compute[239456]: 2026-01-29 17:28:44.604 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:28:44 np0005601226 nova_compute[239456]: 2026-01-29 17:28:44.639 239460 DEBUG nova.compute.manager [req-c2f5fc70-a5cd-414e-8a55-a4dec1e7bf4b req-a674705e-9c44-41f5-abab-5261ee29c53d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] Received event network-vif-plugged-e169cad0-27e6-4099-aed1-80994ec6b573 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:28:44 np0005601226 nova_compute[239456]: 2026-01-29 17:28:44.640 239460 DEBUG oslo_concurrency.lockutils [req-c2f5fc70-a5cd-414e-8a55-a4dec1e7bf4b req-a674705e-9c44-41f5-abab-5261ee29c53d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "60a233ad-302a-45ea-a78c-31ff4f06919e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:28:44 np0005601226 nova_compute[239456]: 2026-01-29 17:28:44.640 239460 DEBUG oslo_concurrency.lockutils [req-c2f5fc70-a5cd-414e-8a55-a4dec1e7bf4b req-a674705e-9c44-41f5-abab-5261ee29c53d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "60a233ad-302a-45ea-a78c-31ff4f06919e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:28:44 np0005601226 nova_compute[239456]: 2026-01-29 17:28:44.641 239460 DEBUG oslo_concurrency.lockutils [req-c2f5fc70-a5cd-414e-8a55-a4dec1e7bf4b req-a674705e-9c44-41f5-abab-5261ee29c53d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "60a233ad-302a-45ea-a78c-31ff4f06919e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:28:44 np0005601226 nova_compute[239456]: 2026-01-29 17:28:44.641 239460 DEBUG nova.compute.manager [req-c2f5fc70-a5cd-414e-8a55-a4dec1e7bf4b req-a674705e-9c44-41f5-abab-5261ee29c53d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] Processing event network-vif-plugged-e169cad0-27e6-4099-aed1-80994ec6b573 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 29 12:28:44 np0005601226 nova_compute[239456]: 2026-01-29 17:28:44.642 239460 DEBUG nova.compute.manager [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 29 12:28:44 np0005601226 nova_compute[239456]: 2026-01-29 17:28:44.685 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769707724.6823082, 60a233ad-302a-45ea-a78c-31ff4f06919e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:28:44 np0005601226 nova_compute[239456]: 2026-01-29 17:28:44.686 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] VM Resumed (Lifecycle Event)#033[00m
Jan 29 12:28:44 np0005601226 nova_compute[239456]: 2026-01-29 17:28:44.688 239460 DEBUG nova.virt.libvirt.driver [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 29 12:28:44 np0005601226 nova_compute[239456]: 2026-01-29 17:28:44.692 239460 INFO nova.virt.libvirt.driver [-] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] Instance spawned successfully.#033[00m
Jan 29 12:28:44 np0005601226 nova_compute[239456]: 2026-01-29 17:28:44.693 239460 DEBUG nova.virt.libvirt.driver [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 29 12:28:44 np0005601226 nova_compute[239456]: 2026-01-29 17:28:44.714 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:28:44 np0005601226 nova_compute[239456]: 2026-01-29 17:28:44.724 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 29 12:28:44 np0005601226 nova_compute[239456]: 2026-01-29 17:28:44.730 239460 DEBUG nova.virt.libvirt.driver [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:28:44 np0005601226 nova_compute[239456]: 2026-01-29 17:28:44.731 239460 DEBUG nova.virt.libvirt.driver [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:28:44 np0005601226 nova_compute[239456]: 2026-01-29 17:28:44.732 239460 DEBUG nova.virt.libvirt.driver [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:28:44 np0005601226 nova_compute[239456]: 2026-01-29 17:28:44.733 239460 DEBUG nova.virt.libvirt.driver [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:28:44 np0005601226 nova_compute[239456]: 2026-01-29 17:28:44.733 239460 DEBUG nova.virt.libvirt.driver [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:28:44 np0005601226 nova_compute[239456]: 2026-01-29 17:28:44.734 239460 DEBUG nova.virt.libvirt.driver [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:28:44 np0005601226 nova_compute[239456]: 2026-01-29 17:28:44.740 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:28:44 np0005601226 nova_compute[239456]: 2026-01-29 17:28:44.745 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 29 12:28:44 np0005601226 nova_compute[239456]: 2026-01-29 17:28:44.802 239460 INFO nova.compute.manager [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] Took 8.26 seconds to spawn the instance on the hypervisor.#033[00m
Jan 29 12:28:44 np0005601226 nova_compute[239456]: 2026-01-29 17:28:44.803 239460 DEBUG nova.compute.manager [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:28:44 np0005601226 nova_compute[239456]: 2026-01-29 17:28:44.881 239460 INFO nova.compute.manager [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] Took 10.92 seconds to build instance.#033[00m
Jan 29 12:28:44 np0005601226 nova_compute[239456]: 2026-01-29 17:28:44.905 239460 DEBUG oslo_concurrency.lockutils [None req-58244a5c-6509-442b-978d-e718f32e56a1 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "60a233ad-302a-45ea-a78c-31ff4f06919e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.017s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:28:44 np0005601226 nova_compute[239456]: 2026-01-29 17:28:44.906 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "60a233ad-302a-45ea-a78c-31ff4f06919e" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 8.040s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:28:44 np0005601226 nova_compute[239456]: 2026-01-29 17:28:44.906 239460 INFO nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 29 12:28:44 np0005601226 nova_compute[239456]: 2026-01-29 17:28:44.906 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "60a233ad-302a-45ea-a78c-31ff4f06919e" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:28:45 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1565: 305 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 296 active+clean; 134 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 32 KiB/s wr, 133 op/s
Jan 29 12:28:46 np0005601226 nova_compute[239456]: 2026-01-29 17:28:46.411 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:28:46 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:28:46 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2356898793' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:28:46 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e401 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:28:46 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e401 do_prune osdmap full prune enabled
Jan 29 12:28:46 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e402 e402: 3 total, 3 up, 3 in
Jan 29 12:28:46 np0005601226 nova_compute[239456]: 2026-01-29 17:28:46.991 239460 DEBUG nova.compute.manager [req-1f738eb3-a7dd-4da8-9464-e92758807063 req-1a11603a-6517-4f35-a1c1-52a7bf2b017c 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] Received event network-vif-plugged-e169cad0-27e6-4099-aed1-80994ec6b573 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:28:46 np0005601226 nova_compute[239456]: 2026-01-29 17:28:46.991 239460 DEBUG oslo_concurrency.lockutils [req-1f738eb3-a7dd-4da8-9464-e92758807063 req-1a11603a-6517-4f35-a1c1-52a7bf2b017c 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "60a233ad-302a-45ea-a78c-31ff4f06919e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:28:46 np0005601226 nova_compute[239456]: 2026-01-29 17:28:46.992 239460 DEBUG oslo_concurrency.lockutils [req-1f738eb3-a7dd-4da8-9464-e92758807063 req-1a11603a-6517-4f35-a1c1-52a7bf2b017c 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "60a233ad-302a-45ea-a78c-31ff4f06919e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:28:46 np0005601226 nova_compute[239456]: 2026-01-29 17:28:46.992 239460 DEBUG oslo_concurrency.lockutils [req-1f738eb3-a7dd-4da8-9464-e92758807063 req-1a11603a-6517-4f35-a1c1-52a7bf2b017c 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "60a233ad-302a-45ea-a78c-31ff4f06919e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:28:46 np0005601226 nova_compute[239456]: 2026-01-29 17:28:46.992 239460 DEBUG nova.compute.manager [req-1f738eb3-a7dd-4da8-9464-e92758807063 req-1a11603a-6517-4f35-a1c1-52a7bf2b017c 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] No waiting events found dispatching network-vif-plugged-e169cad0-27e6-4099-aed1-80994ec6b573 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 29 12:28:46 np0005601226 nova_compute[239456]: 2026-01-29 17:28:46.992 239460 WARNING nova.compute.manager [req-1f738eb3-a7dd-4da8-9464-e92758807063 req-1a11603a-6517-4f35-a1c1-52a7bf2b017c 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] Received unexpected event network-vif-plugged-e169cad0-27e6-4099-aed1-80994ec6b573 for instance with vm_state active and task_state None.#033[00m
Jan 29 12:28:47 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e402: 3 total, 3 up, 3 in
Jan 29 12:28:47 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1567: 305 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 296 active+clean; 134 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 32 KiB/s wr, 132 op/s
Jan 29 12:28:47 np0005601226 nova_compute[239456]: 2026-01-29 17:28:47.960 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:28:47 np0005601226 NetworkManager[49020]: <info>  [1769707727.9621] manager: (patch-provnet-87abb107-3ab5-4304-ac4c-e4e3c79e221f-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/94)
Jan 29 12:28:47 np0005601226 NetworkManager[49020]: <info>  [1769707727.9631] manager: (patch-br-int-to-provnet-87abb107-3ab5-4304-ac4c-e4e3c79e221f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/95)
Jan 29 12:28:48 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e402 do_prune osdmap full prune enabled
Jan 29 12:28:48 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e403 e403: 3 total, 3 up, 3 in
Jan 29 12:28:48 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e403: 3 total, 3 up, 3 in
Jan 29 12:28:48 np0005601226 nova_compute[239456]: 2026-01-29 17:28:48.029 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:28:48 np0005601226 ovn_controller[145556]: 2026-01-29T17:28:48Z|00168|binding|INFO|Releasing lport 4f9b16f1-6965-486d-bc02-ab1e4969963e from this chassis (sb_readonly=0)
Jan 29 12:28:48 np0005601226 nova_compute[239456]: 2026-01-29 17:28:48.044 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:28:49 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e403 do_prune osdmap full prune enabled
Jan 29 12:28:49 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e404 e404: 3 total, 3 up, 3 in
Jan 29 12:28:49 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e404: 3 total, 3 up, 3 in
Jan 29 12:28:49 np0005601226 nova_compute[239456]: 2026-01-29 17:28:49.068 239460 DEBUG nova.compute.manager [req-5a60c86a-6a2b-44ca-9ed9-8037c3eed56d req-0eebbd63-7ae4-4512-917d-47dca2532c0f 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] Received event network-changed-e169cad0-27e6-4099-aed1-80994ec6b573 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:28:49 np0005601226 nova_compute[239456]: 2026-01-29 17:28:49.068 239460 DEBUG nova.compute.manager [req-5a60c86a-6a2b-44ca-9ed9-8037c3eed56d req-0eebbd63-7ae4-4512-917d-47dca2532c0f 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] Refreshing instance network info cache due to event network-changed-e169cad0-27e6-4099-aed1-80994ec6b573. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 29 12:28:49 np0005601226 nova_compute[239456]: 2026-01-29 17:28:49.068 239460 DEBUG oslo_concurrency.lockutils [req-5a60c86a-6a2b-44ca-9ed9-8037c3eed56d req-0eebbd63-7ae4-4512-917d-47dca2532c0f 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "refresh_cache-60a233ad-302a-45ea-a78c-31ff4f06919e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:28:49 np0005601226 nova_compute[239456]: 2026-01-29 17:28:49.068 239460 DEBUG oslo_concurrency.lockutils [req-5a60c86a-6a2b-44ca-9ed9-8037c3eed56d req-0eebbd63-7ae4-4512-917d-47dca2532c0f 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquired lock "refresh_cache-60a233ad-302a-45ea-a78c-31ff4f06919e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:28:49 np0005601226 nova_compute[239456]: 2026-01-29 17:28:49.069 239460 DEBUG nova.network.neutron [req-5a60c86a-6a2b-44ca-9ed9-8037c3eed56d req-0eebbd63-7ae4-4512-917d-47dca2532c0f 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] Refreshing network info cache for port e169cad0-27e6-4099-aed1-80994ec6b573 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 29 12:28:49 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:28:49 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3686578589' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:28:49 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:28:49 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3686578589' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:28:49 np0005601226 nova_compute[239456]: 2026-01-29 17:28:49.740 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:28:49 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1570: 305 pgs: 305 active+clean; 134 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 33 KiB/s wr, 244 op/s
Jan 29 12:28:49 np0005601226 nova_compute[239456]: 2026-01-29 17:28:49.910 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:28:50 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e404 do_prune osdmap full prune enabled
Jan 29 12:28:50 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e405 e405: 3 total, 3 up, 3 in
Jan 29 12:28:50 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e405: 3 total, 3 up, 3 in
Jan 29 12:28:50 np0005601226 nova_compute[239456]: 2026-01-29 17:28:50.265 239460 DEBUG nova.network.neutron [req-5a60c86a-6a2b-44ca-9ed9-8037c3eed56d req-0eebbd63-7ae4-4512-917d-47dca2532c0f 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] Updated VIF entry in instance network info cache for port e169cad0-27e6-4099-aed1-80994ec6b573. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 29 12:28:50 np0005601226 nova_compute[239456]: 2026-01-29 17:28:50.265 239460 DEBUG nova.network.neutron [req-5a60c86a-6a2b-44ca-9ed9-8037c3eed56d req-0eebbd63-7ae4-4512-917d-47dca2532c0f 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] Updating instance_info_cache with network_info: [{"id": "e169cad0-27e6-4099-aed1-80994ec6b573", "address": "fa:16:3e:eb:63:33", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape169cad0-27", "ovs_interfaceid": "e169cad0-27e6-4099-aed1-80994ec6b573", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:28:50 np0005601226 nova_compute[239456]: 2026-01-29 17:28:50.282 239460 DEBUG oslo_concurrency.lockutils [req-5a60c86a-6a2b-44ca-9ed9-8037c3eed56d req-0eebbd63-7ae4-4512-917d-47dca2532c0f 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Releasing lock "refresh_cache-60a233ad-302a-45ea-a78c-31ff4f06919e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:28:51 np0005601226 nova_compute[239456]: 2026-01-29 17:28:51.413 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:28:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 12:28:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:28:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 12:28:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:28:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 3.7172618939741956e-06 of space, bias 1.0, pg target 0.0011151785681922587 quantized to 32 (current 32)
Jan 29 12:28:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:28:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0007068987756515376 of space, bias 1.0, pg target 0.2120696326954613 quantized to 32 (current 32)
Jan 29 12:28:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:28:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 3.924971874426094e-06 of space, bias 1.0, pg target 0.001177491562327828 quantized to 32 (current 32)
Jan 29 12:28:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:28:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006671290910903028 of space, bias 1.0, pg target 0.20013872732709082 quantized to 32 (current 32)
Jan 29 12:28:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:28:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4802382903196247e-06 of space, bias 4.0, pg target 0.0017762859483835497 quantized to 16 (current 16)
Jan 29 12:28:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:28:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:28:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:28:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 29 12:28:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:28:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 29 12:28:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:28:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:28:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:28:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 29 12:28:51 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1572: 305 pgs: 305 active+clean; 134 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.3 KiB/s wr, 193 op/s
Jan 29 12:28:51 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:28:51 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e405 do_prune osdmap full prune enabled
Jan 29 12:28:52 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e406 e406: 3 total, 3 up, 3 in
Jan 29 12:28:52 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e406: 3 total, 3 up, 3 in
Jan 29 12:28:53 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1574: 305 pgs: 305 active+clean; 134 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 4.8 KiB/s wr, 214 op/s
Jan 29 12:28:53 np0005601226 podman[264785]: 2026-01-29 17:28:53.899713979 +0000 UTC m=+0.061242571 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_id=ovn_metadata_agent)
Jan 29 12:28:53 np0005601226 podman[264786]: 2026-01-29 17:28:53.914027801 +0000 UTC m=+0.080380931 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 29 12:28:54 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e406 do_prune osdmap full prune enabled
Jan 29 12:28:54 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e407 e407: 3 total, 3 up, 3 in
Jan 29 12:28:54 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e407: 3 total, 3 up, 3 in
Jan 29 12:28:54 np0005601226 nova_compute[239456]: 2026-01-29 17:28:54.742 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:28:55 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 12:28:55 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 12:28:55 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 12:28:55 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 12:28:55 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 12:28:55 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:28:55 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 12:28:55 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 12:28:55 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 12:28:55 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 12:28:55 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 12:28:55 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 12:28:55 np0005601226 podman[264974]: 2026-01-29 17:28:55.76903565 +0000 UTC m=+0.046270389 container create 0ee604c34793a8d597a0929531ecd43fc0fb024e39fd845b0c38c9e06274b107 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_villani, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 29 12:28:55 np0005601226 systemd[1]: Started libpod-conmon-0ee604c34793a8d597a0929531ecd43fc0fb024e39fd845b0c38c9e06274b107.scope.
Jan 29 12:28:55 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:28:55 np0005601226 podman[264974]: 2026-01-29 17:28:55.749640491 +0000 UTC m=+0.026875320 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:28:55 np0005601226 podman[264974]: 2026-01-29 17:28:55.848541838 +0000 UTC m=+0.125776587 container init 0ee604c34793a8d597a0929531ecd43fc0fb024e39fd845b0c38c9e06274b107 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_villani, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:28:55 np0005601226 podman[264974]: 2026-01-29 17:28:55.854802025 +0000 UTC m=+0.132036764 container start 0ee604c34793a8d597a0929531ecd43fc0fb024e39fd845b0c38c9e06274b107 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_villani, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 29 12:28:55 np0005601226 podman[264974]: 2026-01-29 17:28:55.85833866 +0000 UTC m=+0.135573399 container attach 0ee604c34793a8d597a0929531ecd43fc0fb024e39fd845b0c38c9e06274b107 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_villani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 29 12:28:55 np0005601226 dreamy_villani[264991]: 167 167
Jan 29 12:28:55 np0005601226 systemd[1]: libpod-0ee604c34793a8d597a0929531ecd43fc0fb024e39fd845b0c38c9e06274b107.scope: Deactivated successfully.
Jan 29 12:28:55 np0005601226 conmon[264991]: conmon 0ee604c34793a8d597a0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0ee604c34793a8d597a0929531ecd43fc0fb024e39fd845b0c38c9e06274b107.scope/container/memory.events
Jan 29 12:28:55 np0005601226 podman[264974]: 2026-01-29 17:28:55.860392224 +0000 UTC m=+0.137626953 container died 0ee604c34793a8d597a0929531ecd43fc0fb024e39fd845b0c38c9e06274b107 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_villani, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 29 12:28:55 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1576: 305 pgs: 305 active+clean; 147 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 172 KiB/s rd, 2.3 MiB/s wr, 162 op/s
Jan 29 12:28:55 np0005601226 systemd[1]: var-lib-containers-storage-overlay-e45bf932604e56a61464f8a163d124cac48799617c7c2468514245c257998f39-merged.mount: Deactivated successfully.
Jan 29 12:28:55 np0005601226 podman[264974]: 2026-01-29 17:28:55.907271639 +0000 UTC m=+0.184506378 container remove 0ee604c34793a8d597a0929531ecd43fc0fb024e39fd845b0c38c9e06274b107 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_villani, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 29 12:28:55 np0005601226 systemd[1]: libpod-conmon-0ee604c34793a8d597a0929531ecd43fc0fb024e39fd845b0c38c9e06274b107.scope: Deactivated successfully.
Jan 29 12:28:56 np0005601226 podman[265015]: 2026-01-29 17:28:56.024964189 +0000 UTC m=+0.030980200 container create 4411ab7060445bb61447655c3d249a626dd8a77bdb5d1417079d8184a41bf725 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Jan 29 12:28:56 np0005601226 systemd[1]: Started libpod-conmon-4411ab7060445bb61447655c3d249a626dd8a77bdb5d1417079d8184a41bf725.scope.
Jan 29 12:28:56 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:28:56 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab040ea412013fa39ee4c6fbd2a85d39c82da676b3d33c8c99767e53c08f97c3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:28:56 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab040ea412013fa39ee4c6fbd2a85d39c82da676b3d33c8c99767e53c08f97c3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:28:56 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab040ea412013fa39ee4c6fbd2a85d39c82da676b3d33c8c99767e53c08f97c3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:28:56 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab040ea412013fa39ee4c6fbd2a85d39c82da676b3d33c8c99767e53c08f97c3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:28:56 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab040ea412013fa39ee4c6fbd2a85d39c82da676b3d33c8c99767e53c08f97c3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 12:28:56 np0005601226 podman[265015]: 2026-01-29 17:28:56.085698694 +0000 UTC m=+0.091714725 container init 4411ab7060445bb61447655c3d249a626dd8a77bdb5d1417079d8184a41bf725 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_cohen, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 29 12:28:56 np0005601226 podman[265015]: 2026-01-29 17:28:56.093747429 +0000 UTC m=+0.099763440 container start 4411ab7060445bb61447655c3d249a626dd8a77bdb5d1417079d8184a41bf725 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:28:56 np0005601226 podman[265015]: 2026-01-29 17:28:56.096909894 +0000 UTC m=+0.102925925 container attach 4411ab7060445bb61447655c3d249a626dd8a77bdb5d1417079d8184a41bf725 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_cohen, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 29 12:28:56 np0005601226 podman[265015]: 2026-01-29 17:28:56.011601751 +0000 UTC m=+0.017617872 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:28:56 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 12:28:56 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:28:56 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 12:28:56 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e407 do_prune osdmap full prune enabled
Jan 29 12:28:56 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e408 e408: 3 total, 3 up, 3 in
Jan 29 12:28:56 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e408: 3 total, 3 up, 3 in
Jan 29 12:28:56 np0005601226 nova_compute[239456]: 2026-01-29 17:28:56.415 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:28:56 np0005601226 gracious_cohen[265031]: --> passed data devices: 0 physical, 3 LVM
Jan 29 12:28:56 np0005601226 gracious_cohen[265031]: --> All data devices are unavailable
Jan 29 12:28:56 np0005601226 systemd[1]: libpod-4411ab7060445bb61447655c3d249a626dd8a77bdb5d1417079d8184a41bf725.scope: Deactivated successfully.
Jan 29 12:28:56 np0005601226 podman[265051]: 2026-01-29 17:28:56.530415644 +0000 UTC m=+0.023514570 container died 4411ab7060445bb61447655c3d249a626dd8a77bdb5d1417079d8184a41bf725 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_cohen, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 29 12:28:56 np0005601226 systemd[1]: var-lib-containers-storage-overlay-ab040ea412013fa39ee4c6fbd2a85d39c82da676b3d33c8c99767e53c08f97c3-merged.mount: Deactivated successfully.
Jan 29 12:28:56 np0005601226 podman[265051]: 2026-01-29 17:28:56.572902171 +0000 UTC m=+0.066001067 container remove 4411ab7060445bb61447655c3d249a626dd8a77bdb5d1417079d8184a41bf725 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 29 12:28:56 np0005601226 systemd[1]: libpod-conmon-4411ab7060445bb61447655c3d249a626dd8a77bdb5d1417079d8184a41bf725.scope: Deactivated successfully.
Jan 29 12:28:56 np0005601226 ovn_controller[145556]: 2026-01-29T17:28:56Z|00028|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:eb:63:33 10.100.0.5
Jan 29 12:28:56 np0005601226 ovn_controller[145556]: 2026-01-29T17:28:56Z|00029|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:eb:63:33 10.100.0.5
Jan 29 12:28:56 np0005601226 podman[265128]: 2026-01-29 17:28:56.952230661 +0000 UTC m=+0.033797875 container create d3deebb89c5ef8ea3ef285dffed68bb64aea8e366dc7e4ba3d953e67cb3d2933 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_kirch, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 29 12:28:56 np0005601226 systemd[1]: Started libpod-conmon-d3deebb89c5ef8ea3ef285dffed68bb64aea8e366dc7e4ba3d953e67cb3d2933.scope.
Jan 29 12:28:57 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:28:57 np0005601226 podman[265128]: 2026-01-29 17:28:57.017161558 +0000 UTC m=+0.098728792 container init d3deebb89c5ef8ea3ef285dffed68bb64aea8e366dc7e4ba3d953e67cb3d2933 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_kirch, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 29 12:28:57 np0005601226 podman[265128]: 2026-01-29 17:28:57.022238185 +0000 UTC m=+0.103805439 container start d3deebb89c5ef8ea3ef285dffed68bb64aea8e366dc7e4ba3d953e67cb3d2933 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 29 12:28:57 np0005601226 infallible_kirch[265144]: 167 167
Jan 29 12:28:57 np0005601226 systemd[1]: libpod-d3deebb89c5ef8ea3ef285dffed68bb64aea8e366dc7e4ba3d953e67cb3d2933.scope: Deactivated successfully.
Jan 29 12:28:57 np0005601226 podman[265128]: 2026-01-29 17:28:57.027078614 +0000 UTC m=+0.108645868 container attach d3deebb89c5ef8ea3ef285dffed68bb64aea8e366dc7e4ba3d953e67cb3d2933 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_kirch, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:28:57 np0005601226 podman[265128]: 2026-01-29 17:28:57.028829631 +0000 UTC m=+0.110396855 container died d3deebb89c5ef8ea3ef285dffed68bb64aea8e366dc7e4ba3d953e67cb3d2933 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_kirch, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:28:57 np0005601226 podman[265128]: 2026-01-29 17:28:56.938264898 +0000 UTC m=+0.019832172 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:28:57 np0005601226 systemd[1]: var-lib-containers-storage-overlay-fb6cca18bbeb5884f0dc97bcba9afc784b80ea8e164f1d5d856d8b95d88306cd-merged.mount: Deactivated successfully.
Jan 29 12:28:57 np0005601226 podman[265128]: 2026-01-29 17:28:57.064755232 +0000 UTC m=+0.146322456 container remove d3deebb89c5ef8ea3ef285dffed68bb64aea8e366dc7e4ba3d953e67cb3d2933 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_kirch, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 29 12:28:57 np0005601226 systemd[1]: libpod-conmon-d3deebb89c5ef8ea3ef285dffed68bb64aea8e366dc7e4ba3d953e67cb3d2933.scope: Deactivated successfully.
Jan 29 12:28:57 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e408 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:28:57 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e408 do_prune osdmap full prune enabled
Jan 29 12:28:57 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e409 e409: 3 total, 3 up, 3 in
Jan 29 12:28:57 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e409: 3 total, 3 up, 3 in
Jan 29 12:28:57 np0005601226 podman[265166]: 2026-01-29 17:28:57.194571936 +0000 UTC m=+0.029668755 container create 4f6ce4b1b1a3ffbab1d0c500c5f8c21c73fa53448f573daf937f9eeae30e9489 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_kapitsa, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:28:57 np0005601226 systemd[1]: Started libpod-conmon-4f6ce4b1b1a3ffbab1d0c500c5f8c21c73fa53448f573daf937f9eeae30e9489.scope.
Jan 29 12:28:57 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:28:57 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5178351514f1544876b1edb465e3cfc6fa9fa7743d615470a6a07ae31d62807/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:28:57 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5178351514f1544876b1edb465e3cfc6fa9fa7743d615470a6a07ae31d62807/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:28:57 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5178351514f1544876b1edb465e3cfc6fa9fa7743d615470a6a07ae31d62807/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:28:57 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5178351514f1544876b1edb465e3cfc6fa9fa7743d615470a6a07ae31d62807/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:28:57 np0005601226 podman[265166]: 2026-01-29 17:28:57.263917132 +0000 UTC m=+0.099014001 container init 4f6ce4b1b1a3ffbab1d0c500c5f8c21c73fa53448f573daf937f9eeae30e9489 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_kapitsa, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:28:57 np0005601226 podman[265166]: 2026-01-29 17:28:57.269610554 +0000 UTC m=+0.104707403 container start 4f6ce4b1b1a3ffbab1d0c500c5f8c21c73fa53448f573daf937f9eeae30e9489 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_kapitsa, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 29 12:28:57 np0005601226 podman[265166]: 2026-01-29 17:28:57.181977129 +0000 UTC m=+0.017073968 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:28:57 np0005601226 podman[265166]: 2026-01-29 17:28:57.280220288 +0000 UTC m=+0.115317147 container attach 4f6ce4b1b1a3ffbab1d0c500c5f8c21c73fa53448f573daf937f9eeae30e9489 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_kapitsa, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default)
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]: {
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:    "0": [
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:        {
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:            "devices": [
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:                "/dev/loop3"
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:            ],
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:            "lv_name": "ceph_lv0",
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:            "lv_size": "21470642176",
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:            "lv_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:            "name": "ceph_lv0",
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:            "tags": {
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:                "ceph.block_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:                "ceph.cluster_name": "ceph",
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:                "ceph.crush_device_class": "",
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:                "ceph.encrypted": "0",
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:                "ceph.objectstore": "bluestore",
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:                "ceph.osd_fsid": "0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1",
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:                "ceph.osd_id": "0",
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:                "ceph.type": "block",
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:                "ceph.vdo": "0",
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:                "ceph.with_tpm": "0"
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:            },
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:            "type": "block",
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:            "vg_name": "ceph_vg0"
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:        }
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:    ],
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:    "1": [
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:        {
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:            "devices": [
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:                "/dev/loop4"
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:            ],
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:            "lv_name": "ceph_lv1",
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:            "lv_size": "21470642176",
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b59b9ee3-7bef-4274-a8bf-0f9cce011ae7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:            "lv_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:            "name": "ceph_lv1",
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:            "tags": {
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:                "ceph.block_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:                "ceph.cluster_name": "ceph",
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:                "ceph.crush_device_class": "",
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:                "ceph.encrypted": "0",
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:                "ceph.objectstore": "bluestore",
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:                "ceph.osd_fsid": "b59b9ee3-7bef-4274-a8bf-0f9cce011ae7",
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:                "ceph.osd_id": "1",
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:                "ceph.type": "block",
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:                "ceph.vdo": "0",
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:                "ceph.with_tpm": "0"
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:            },
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:            "type": "block",
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:            "vg_name": "ceph_vg1"
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:        }
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:    ],
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:    "2": [
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:        {
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:            "devices": [
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:                "/dev/loop5"
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:            ],
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:            "lv_name": "ceph_lv2",
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:            "lv_size": "21470642176",
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=791d3808-828d-4f85-a3de-28df49f6a6ef,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:            "lv_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:            "name": "ceph_lv2",
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:            "tags": {
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:                "ceph.block_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:                "ceph.cluster_name": "ceph",
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:                "ceph.crush_device_class": "",
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:                "ceph.encrypted": "0",
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:                "ceph.objectstore": "bluestore",
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:                "ceph.osd_fsid": "791d3808-828d-4f85-a3de-28df49f6a6ef",
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:                "ceph.osd_id": "2",
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:                "ceph.type": "block",
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:                "ceph.vdo": "0",
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:                "ceph.with_tpm": "0"
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:            },
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:            "type": "block",
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:            "vg_name": "ceph_vg2"
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:        }
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]:    ]
Jan 29 12:28:57 np0005601226 admiring_kapitsa[265183]: }
Jan 29 12:28:57 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:28:57 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1581364262' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:28:57 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:28:57 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1581364262' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:28:57 np0005601226 systemd[1]: libpod-4f6ce4b1b1a3ffbab1d0c500c5f8c21c73fa53448f573daf937f9eeae30e9489.scope: Deactivated successfully.
Jan 29 12:28:57 np0005601226 podman[265166]: 2026-01-29 17:28:57.590053799 +0000 UTC m=+0.425150638 container died 4f6ce4b1b1a3ffbab1d0c500c5f8c21c73fa53448f573daf937f9eeae30e9489 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_kapitsa, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:28:57 np0005601226 systemd[1]: var-lib-containers-storage-overlay-a5178351514f1544876b1edb465e3cfc6fa9fa7743d615470a6a07ae31d62807-merged.mount: Deactivated successfully.
Jan 29 12:28:57 np0005601226 podman[265166]: 2026-01-29 17:28:57.630382278 +0000 UTC m=+0.465479097 container remove 4f6ce4b1b1a3ffbab1d0c500c5f8c21c73fa53448f573daf937f9eeae30e9489 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 29 12:28:57 np0005601226 systemd[1]: libpod-conmon-4f6ce4b1b1a3ffbab1d0c500c5f8c21c73fa53448f573daf937f9eeae30e9489.scope: Deactivated successfully.
Jan 29 12:28:57 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1579: 305 pgs: 305 active+clean; 147 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 154 KiB/s rd, 2.4 MiB/s wr, 138 op/s
Jan 29 12:28:58 np0005601226 podman[265265]: 2026-01-29 17:28:58.051162098 +0000 UTC m=+0.048463668 container create a8e110e2875a3ec3bb9fc5d3a7fdf5c58a7706852ce8239713f26de06315a3be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle)
Jan 29 12:28:58 np0005601226 systemd[1]: Started libpod-conmon-a8e110e2875a3ec3bb9fc5d3a7fdf5c58a7706852ce8239713f26de06315a3be.scope.
Jan 29 12:28:58 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:28:58 np0005601226 podman[265265]: 2026-01-29 17:28:58.03480026 +0000 UTC m=+0.032101860 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:28:58 np0005601226 podman[265265]: 2026-01-29 17:28:58.135183446 +0000 UTC m=+0.132485016 container init a8e110e2875a3ec3bb9fc5d3a7fdf5c58a7706852ce8239713f26de06315a3be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_maxwell, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:28:58 np0005601226 podman[265265]: 2026-01-29 17:28:58.139147613 +0000 UTC m=+0.136449173 container start a8e110e2875a3ec3bb9fc5d3a7fdf5c58a7706852ce8239713f26de06315a3be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_maxwell, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 29 12:28:58 np0005601226 podman[265265]: 2026-01-29 17:28:58.142310737 +0000 UTC m=+0.139612297 container attach a8e110e2875a3ec3bb9fc5d3a7fdf5c58a7706852ce8239713f26de06315a3be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_maxwell, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:28:58 np0005601226 exciting_maxwell[265281]: 167 167
Jan 29 12:28:58 np0005601226 systemd[1]: libpod-a8e110e2875a3ec3bb9fc5d3a7fdf5c58a7706852ce8239713f26de06315a3be.scope: Deactivated successfully.
Jan 29 12:28:58 np0005601226 podman[265265]: 2026-01-29 17:28:58.143590491 +0000 UTC m=+0.140892051 container died a8e110e2875a3ec3bb9fc5d3a7fdf5c58a7706852ce8239713f26de06315a3be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Jan 29 12:28:58 np0005601226 systemd[1]: var-lib-containers-storage-overlay-0afaf56072e3d0f056218d6999d6fcfe61727d7cf125d4c025e2f9128e94242e-merged.mount: Deactivated successfully.
Jan 29 12:28:58 np0005601226 podman[265265]: 2026-01-29 17:28:58.18127968 +0000 UTC m=+0.178581250 container remove a8e110e2875a3ec3bb9fc5d3a7fdf5c58a7706852ce8239713f26de06315a3be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 29 12:28:58 np0005601226 systemd[1]: libpod-conmon-a8e110e2875a3ec3bb9fc5d3a7fdf5c58a7706852ce8239713f26de06315a3be.scope: Deactivated successfully.
Jan 29 12:28:58 np0005601226 podman[265305]: 2026-01-29 17:28:58.307556979 +0000 UTC m=+0.033158169 container create 11246f7de013490ece3d293f0ec2de71236cb6945cbc65ce191c60fe208a6838 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_merkle, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 29 12:28:58 np0005601226 systemd[1]: Started libpod-conmon-11246f7de013490ece3d293f0ec2de71236cb6945cbc65ce191c60fe208a6838.scope.
Jan 29 12:28:58 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:28:58 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/849a16344429b5e05d094ed726ba4d10cc17bc2e72d3c1275c9fe1c31cce1669/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:28:58 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/849a16344429b5e05d094ed726ba4d10cc17bc2e72d3c1275c9fe1c31cce1669/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:28:58 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/849a16344429b5e05d094ed726ba4d10cc17bc2e72d3c1275c9fe1c31cce1669/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:28:58 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/849a16344429b5e05d094ed726ba4d10cc17bc2e72d3c1275c9fe1c31cce1669/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:28:58 np0005601226 podman[265305]: 2026-01-29 17:28:58.375453315 +0000 UTC m=+0.101054515 container init 11246f7de013490ece3d293f0ec2de71236cb6945cbc65ce191c60fe208a6838 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_merkle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 29 12:28:58 np0005601226 podman[265305]: 2026-01-29 17:28:58.380542632 +0000 UTC m=+0.106143822 container start 11246f7de013490ece3d293f0ec2de71236cb6945cbc65ce191c60fe208a6838 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:28:58 np0005601226 podman[265305]: 2026-01-29 17:28:58.383392318 +0000 UTC m=+0.108993508 container attach 11246f7de013490ece3d293f0ec2de71236cb6945cbc65ce191c60fe208a6838 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Jan 29 12:28:58 np0005601226 podman[265305]: 2026-01-29 17:28:58.292981859 +0000 UTC m=+0.018583069 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:28:58 np0005601226 lvm[265398]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 12:28:58 np0005601226 lvm[265398]: VG ceph_vg0 finished
Jan 29 12:28:58 np0005601226 lvm[265401]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 12:28:58 np0005601226 lvm[265401]: VG ceph_vg1 finished
Jan 29 12:28:59 np0005601226 lvm[265403]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 12:28:59 np0005601226 lvm[265403]: VG ceph_vg2 finished
Jan 29 12:28:59 np0005601226 condescending_merkle[265322]: {}
Jan 29 12:28:59 np0005601226 systemd[1]: libpod-11246f7de013490ece3d293f0ec2de71236cb6945cbc65ce191c60fe208a6838.scope: Deactivated successfully.
Jan 29 12:28:59 np0005601226 systemd[1]: libpod-11246f7de013490ece3d293f0ec2de71236cb6945cbc65ce191c60fe208a6838.scope: Consumed 1.059s CPU time.
Jan 29 12:28:59 np0005601226 podman[265305]: 2026-01-29 17:28:59.11331692 +0000 UTC m=+0.838918160 container died 11246f7de013490ece3d293f0ec2de71236cb6945cbc65ce191c60fe208a6838 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:28:59 np0005601226 systemd[1]: var-lib-containers-storage-overlay-849a16344429b5e05d094ed726ba4d10cc17bc2e72d3c1275c9fe1c31cce1669-merged.mount: Deactivated successfully.
Jan 29 12:28:59 np0005601226 podman[265305]: 2026-01-29 17:28:59.152806347 +0000 UTC m=+0.878407537 container remove 11246f7de013490ece3d293f0ec2de71236cb6945cbc65ce191c60fe208a6838 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 29 12:28:59 np0005601226 systemd[1]: libpod-conmon-11246f7de013490ece3d293f0ec2de71236cb6945cbc65ce191c60fe208a6838.scope: Deactivated successfully.
Jan 29 12:28:59 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 12:28:59 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:28:59 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 12:28:59 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:28:59 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:28:59 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:28:59 np0005601226 nova_compute[239456]: 2026-01-29 17:28:59.743 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:28:59 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1580: 305 pgs: 305 active+clean; 162 MiB data, 400 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 4.2 MiB/s wr, 235 op/s
Jan 29 12:29:01 np0005601226 nova_compute[239456]: 2026-01-29 17:29:01.418 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:29:01 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1581: 305 pgs: 305 active+clean; 167 MiB data, 414 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 3.3 MiB/s wr, 171 op/s
Jan 29 12:29:02 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e409 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:29:02 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e409 do_prune osdmap full prune enabled
Jan 29 12:29:02 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e410 e410: 3 total, 3 up, 3 in
Jan 29 12:29:02 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e410: 3 total, 3 up, 3 in
Jan 29 12:29:03 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1583: 305 pgs: 305 active+clean; 167 MiB data, 414 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 1.6 MiB/s wr, 147 op/s
Jan 29 12:29:04 np0005601226 nova_compute[239456]: 2026-01-29 17:29:04.745 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:29:04 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:29:04 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2197349242' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:29:05 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1584: 305 pgs: 305 active+clean; 167 MiB data, 414 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 1.4 MiB/s wr, 128 op/s
Jan 29 12:29:06 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e410 do_prune osdmap full prune enabled
Jan 29 12:29:06 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e411 e411: 3 total, 3 up, 3 in
Jan 29 12:29:06 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e411: 3 total, 3 up, 3 in
Jan 29 12:29:06 np0005601226 nova_compute[239456]: 2026-01-29 17:29:06.420 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:29:07 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:29:07 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e411 do_prune osdmap full prune enabled
Jan 29 12:29:07 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e412 e412: 3 total, 3 up, 3 in
Jan 29 12:29:07 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e412: 3 total, 3 up, 3 in
Jan 29 12:29:07 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1587: 305 pgs: 305 active+clean; 167 MiB data, 414 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 70 KiB/s wr, 13 op/s
Jan 29 12:29:08 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e412 do_prune osdmap full prune enabled
Jan 29 12:29:08 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e413 e413: 3 total, 3 up, 3 in
Jan 29 12:29:08 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e413: 3 total, 3 up, 3 in
Jan 29 12:29:08 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:29:08.864 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '7a:bb:ca', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ee:2a:91:86:08:da'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:29:08 np0005601226 nova_compute[239456]: 2026-01-29 17:29:08.865 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:29:08 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:29:08.866 155625 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 29 12:29:09 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e413 do_prune osdmap full prune enabled
Jan 29 12:29:09 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e414 e414: 3 total, 3 up, 3 in
Jan 29 12:29:09 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e414: 3 total, 3 up, 3 in
Jan 29 12:29:09 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:29:09 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1713466867' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:29:09 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:29:09 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1713466867' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:29:09 np0005601226 nova_compute[239456]: 2026-01-29 17:29:09.747 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:29:09 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1590: 305 pgs: 305 active+clean; 167 MiB data, 414 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 4.7 KiB/s wr, 80 op/s
Jan 29 12:29:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:29:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:29:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:29:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:29:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:29:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:29:11 np0005601226 nova_compute[239456]: 2026-01-29 17:29:11.423 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:29:11 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1591: 305 pgs: 305 active+clean; 167 MiB data, 414 MiB used, 60 GiB / 60 GiB avail; 85 KiB/s rd, 6.9 KiB/s wr, 114 op/s
Jan 29 12:29:12 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:29:12 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:29:12.869 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ea6bcc65-2563-4fe6-9039-bca7261f4cf7, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:29:12 np0005601226 nova_compute[239456]: 2026-01-29 17:29:12.973 239460 DEBUG oslo_concurrency.lockutils [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Acquiring lock "ff3dd15f-f585-4406-8c70-96be2a8945a4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:29:12 np0005601226 nova_compute[239456]: 2026-01-29 17:29:12.973 239460 DEBUG oslo_concurrency.lockutils [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "ff3dd15f-f585-4406-8c70-96be2a8945a4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:29:12 np0005601226 nova_compute[239456]: 2026-01-29 17:29:12.992 239460 DEBUG nova.compute.manager [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: ff3dd15f-f585-4406-8c70-96be2a8945a4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 29 12:29:13 np0005601226 nova_compute[239456]: 2026-01-29 17:29:13.086 239460 DEBUG oslo_concurrency.lockutils [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:29:13 np0005601226 nova_compute[239456]: 2026-01-29 17:29:13.087 239460 DEBUG oslo_concurrency.lockutils [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:29:13 np0005601226 nova_compute[239456]: 2026-01-29 17:29:13.096 239460 DEBUG nova.virt.hardware [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 29 12:29:13 np0005601226 nova_compute[239456]: 2026-01-29 17:29:13.097 239460 INFO nova.compute.claims [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: ff3dd15f-f585-4406-8c70-96be2a8945a4] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 29 12:29:13 np0005601226 nova_compute[239456]: 2026-01-29 17:29:13.241 239460 DEBUG oslo_concurrency.processutils [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:29:13 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:29:13 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2740246762' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:29:13 np0005601226 nova_compute[239456]: 2026-01-29 17:29:13.779 239460 DEBUG oslo_concurrency.processutils [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:29:13 np0005601226 nova_compute[239456]: 2026-01-29 17:29:13.784 239460 DEBUG nova.compute.provider_tree [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:29:13 np0005601226 nova_compute[239456]: 2026-01-29 17:29:13.802 239460 DEBUG nova.scheduler.client.report [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:29:13 np0005601226 nova_compute[239456]: 2026-01-29 17:29:13.825 239460 DEBUG oslo_concurrency.lockutils [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:29:13 np0005601226 nova_compute[239456]: 2026-01-29 17:29:13.826 239460 DEBUG nova.compute.manager [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: ff3dd15f-f585-4406-8c70-96be2a8945a4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 29 12:29:13 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1592: 305 pgs: 305 active+clean; 167 MiB data, 414 MiB used, 60 GiB / 60 GiB avail; 74 KiB/s rd, 5.8 KiB/s wr, 101 op/s
Jan 29 12:29:13 np0005601226 nova_compute[239456]: 2026-01-29 17:29:13.881 239460 INFO nova.virt.libvirt.driver [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: ff3dd15f-f585-4406-8c70-96be2a8945a4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 29 12:29:13 np0005601226 nova_compute[239456]: 2026-01-29 17:29:13.883 239460 DEBUG nova.compute.manager [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: ff3dd15f-f585-4406-8c70-96be2a8945a4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 29 12:29:13 np0005601226 nova_compute[239456]: 2026-01-29 17:29:13.883 239460 DEBUG nova.network.neutron [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: ff3dd15f-f585-4406-8c70-96be2a8945a4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 29 12:29:13 np0005601226 nova_compute[239456]: 2026-01-29 17:29:13.905 239460 DEBUG nova.compute.manager [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: ff3dd15f-f585-4406-8c70-96be2a8945a4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 29 12:29:13 np0005601226 nova_compute[239456]: 2026-01-29 17:29:13.950 239460 INFO nova.virt.block_device [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: ff3dd15f-f585-4406-8c70-96be2a8945a4] Booting with volume snapshot 29c9622a-16ab-4b44-ad23-6da15e80a5dc at /dev/vda#033[00m
Jan 29 12:29:14 np0005601226 nova_compute[239456]: 2026-01-29 17:29:14.241 239460 DEBUG nova.policy [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3901089a059c4bdb8d0497398873d2f1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '420f46ae230d4c529afe366a1b780921', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 29 12:29:14 np0005601226 nova_compute[239456]: 2026-01-29 17:29:14.747 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:29:14 np0005601226 nova_compute[239456]: 2026-01-29 17:29:14.826 239460 DEBUG nova.network.neutron [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: ff3dd15f-f585-4406-8c70-96be2a8945a4] Successfully created port: 3cc3f8fa-0d50-4ddf-b2f1-086d9c2e1f4c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 29 12:29:14 np0005601226 ceph-mon[75233]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 29 12:29:14 np0005601226 ceph-mon[75233]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.0 total, 600.0 interval#012Cumulative writes: 6915 writes, 31K keys, 6915 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s#012Cumulative WAL: 6915 writes, 6915 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2090 writes, 9574 keys, 2090 commit groups, 1.0 writes per commit group, ingest: 12.72 MB, 0.02 MB/s#012Interval WAL: 2090 writes, 2090 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     20.7      1.74              0.07        16    0.108       0      0       0.0       0.0#012  L6      1/0    9.35 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.5     51.7     42.6      2.93              0.26        15    0.195     74K   8442       0.0       0.0#012 Sum      1/0    9.35 MB   0.0      0.1     0.0      0.1       0.2      0.0       0.0   4.5     32.5     34.5      4.66              0.33        31    0.150     74K   8442       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.9     42.6     44.0      1.09              0.09         8    0.136     24K   2615       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     51.7     42.6      2.93              0.26        15    0.195     74K   8442       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     20.8      1.72              0.07        15    0.115       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      5.3      0.01              0.00         1    0.011       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 2400.0 total, 600.0 interval#012Flush(GB): cumulative 0.035, interval 0.010#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.16 GB write, 0.07 MB/s write, 0.15 GB read, 0.06 MB/s read, 4.7 seconds#012Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.05 GB read, 0.08 MB/s read, 1.1 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55d2b32758d0#2 capacity: 304.00 MB usage: 16.30 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000144 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(1037,15.68 MB,5.15937%) FilterBlock(32,213.55 KB,0.0685993%) IndexBlock(32,413.78 KB,0.132922%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 29 12:29:15 np0005601226 nova_compute[239456]: 2026-01-29 17:29:15.616 239460 DEBUG nova.network.neutron [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: ff3dd15f-f585-4406-8c70-96be2a8945a4] Successfully updated port: 3cc3f8fa-0d50-4ddf-b2f1-086d9c2e1f4c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 29 12:29:15 np0005601226 nova_compute[239456]: 2026-01-29 17:29:15.641 239460 DEBUG oslo_concurrency.lockutils [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Acquiring lock "refresh_cache-ff3dd15f-f585-4406-8c70-96be2a8945a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:29:15 np0005601226 nova_compute[239456]: 2026-01-29 17:29:15.641 239460 DEBUG oslo_concurrency.lockutils [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Acquired lock "refresh_cache-ff3dd15f-f585-4406-8c70-96be2a8945a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:29:15 np0005601226 nova_compute[239456]: 2026-01-29 17:29:15.641 239460 DEBUG nova.network.neutron [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: ff3dd15f-f585-4406-8c70-96be2a8945a4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 29 12:29:15 np0005601226 nova_compute[239456]: 2026-01-29 17:29:15.775 239460 DEBUG nova.compute.manager [req-fe829d5a-cfad-496d-a49e-d9818efbda79 req-79a40501-190a-44d7-8fbe-81f5036996f3 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ff3dd15f-f585-4406-8c70-96be2a8945a4] Received event network-changed-3cc3f8fa-0d50-4ddf-b2f1-086d9c2e1f4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:29:15 np0005601226 nova_compute[239456]: 2026-01-29 17:29:15.776 239460 DEBUG nova.compute.manager [req-fe829d5a-cfad-496d-a49e-d9818efbda79 req-79a40501-190a-44d7-8fbe-81f5036996f3 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ff3dd15f-f585-4406-8c70-96be2a8945a4] Refreshing instance network info cache due to event network-changed-3cc3f8fa-0d50-4ddf-b2f1-086d9c2e1f4c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 29 12:29:15 np0005601226 nova_compute[239456]: 2026-01-29 17:29:15.776 239460 DEBUG oslo_concurrency.lockutils [req-fe829d5a-cfad-496d-a49e-d9818efbda79 req-79a40501-190a-44d7-8fbe-81f5036996f3 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "refresh_cache-ff3dd15f-f585-4406-8c70-96be2a8945a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:29:15 np0005601226 nova_compute[239456]: 2026-01-29 17:29:15.849 239460 DEBUG nova.network.neutron [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: ff3dd15f-f585-4406-8c70-96be2a8945a4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 29 12:29:15 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1593: 305 pgs: 305 active+clean; 207 MiB data, 425 MiB used, 60 GiB / 60 GiB avail; 87 KiB/s rd, 4.8 MiB/s wr, 117 op/s
Jan 29 12:29:16 np0005601226 nova_compute[239456]: 2026-01-29 17:29:16.425 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:29:16 np0005601226 nova_compute[239456]: 2026-01-29 17:29:16.993 239460 DEBUG nova.network.neutron [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: ff3dd15f-f585-4406-8c70-96be2a8945a4] Updating instance_info_cache with network_info: [{"id": "3cc3f8fa-0d50-4ddf-b2f1-086d9c2e1f4c", "address": "fa:16:3e:5c:f0:e1", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cc3f8fa-0d", "ovs_interfaceid": "3cc3f8fa-0d50-4ddf-b2f1-086d9c2e1f4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:29:17 np0005601226 nova_compute[239456]: 2026-01-29 17:29:17.011 239460 DEBUG oslo_concurrency.lockutils [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Releasing lock "refresh_cache-ff3dd15f-f585-4406-8c70-96be2a8945a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:29:17 np0005601226 nova_compute[239456]: 2026-01-29 17:29:17.011 239460 DEBUG nova.compute.manager [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: ff3dd15f-f585-4406-8c70-96be2a8945a4] Instance network_info: |[{"id": "3cc3f8fa-0d50-4ddf-b2f1-086d9c2e1f4c", "address": "fa:16:3e:5c:f0:e1", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cc3f8fa-0d", "ovs_interfaceid": "3cc3f8fa-0d50-4ddf-b2f1-086d9c2e1f4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 29 12:29:17 np0005601226 nova_compute[239456]: 2026-01-29 17:29:17.012 239460 DEBUG oslo_concurrency.lockutils [req-fe829d5a-cfad-496d-a49e-d9818efbda79 req-79a40501-190a-44d7-8fbe-81f5036996f3 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquired lock "refresh_cache-ff3dd15f-f585-4406-8c70-96be2a8945a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:29:17 np0005601226 nova_compute[239456]: 2026-01-29 17:29:17.012 239460 DEBUG nova.network.neutron [req-fe829d5a-cfad-496d-a49e-d9818efbda79 req-79a40501-190a-44d7-8fbe-81f5036996f3 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ff3dd15f-f585-4406-8c70-96be2a8945a4] Refreshing network info cache for port 3cc3f8fa-0d50-4ddf-b2f1-086d9c2e1f4c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 29 12:29:17 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:29:17 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e414 do_prune osdmap full prune enabled
Jan 29 12:29:17 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e415 e415: 3 total, 3 up, 3 in
Jan 29 12:29:17 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e415: 3 total, 3 up, 3 in
Jan 29 12:29:17 np0005601226 ceph-mon[75233]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Jan 29 12:29:17 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:29:17.348897) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 29 12:29:17 np0005601226 ceph-mon[75233]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Jan 29 12:29:17 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769707757348967, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 2722, "num_deletes": 528, "total_data_size": 3582241, "memory_usage": 3647632, "flush_reason": "Manual Compaction"}
Jan 29 12:29:17 np0005601226 ceph-mon[75233]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Jan 29 12:29:17 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769707757371509, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 3499563, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 29092, "largest_seqno": 31813, "table_properties": {"data_size": 3487148, "index_size": 7821, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3525, "raw_key_size": 29065, "raw_average_key_size": 20, "raw_value_size": 3460224, "raw_average_value_size": 2461, "num_data_blocks": 337, "num_entries": 1406, "num_filter_entries": 1406, "num_deletions": 528, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769707605, "oldest_key_time": 1769707605, "file_creation_time": 1769707757, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "affa2982-d59d-4189-b5dd-817a80fada55", "db_session_id": "3LVBT2JQJ5HZ0LRVKGW6", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Jan 29 12:29:17 np0005601226 ceph-mon[75233]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 22659 microseconds, and 10080 cpu microseconds.
Jan 29 12:29:17 np0005601226 ceph-mon[75233]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 29 12:29:17 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:29:17.371563) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 3499563 bytes OK
Jan 29 12:29:17 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:29:17.371587) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Jan 29 12:29:17 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:29:17.375123) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Jan 29 12:29:17 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:29:17.375160) EVENT_LOG_v1 {"time_micros": 1769707757375152, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 29 12:29:17 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:29:17.375182) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 29 12:29:17 np0005601226 ceph-mon[75233]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 3569487, prev total WAL file size 3569487, number of live WAL files 2.
Jan 29 12:29:17 np0005601226 ceph-mon[75233]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 12:29:17 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:29:17.376242) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Jan 29 12:29:17 np0005601226 ceph-mon[75233]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 29 12:29:17 np0005601226 ceph-mon[75233]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(3417KB)], [62(9578KB)]
Jan 29 12:29:17 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769707757376308, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 13307664, "oldest_snapshot_seqno": -1}
Jan 29 12:29:17 np0005601226 ceph-mon[75233]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 6182 keys, 11343328 bytes, temperature: kUnknown
Jan 29 12:29:17 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769707757466048, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 11343328, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11294661, "index_size": 32109, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15493, "raw_key_size": 155876, "raw_average_key_size": 25, "raw_value_size": 11176104, "raw_average_value_size": 1807, "num_data_blocks": 1291, "num_entries": 6182, "num_filter_entries": 6182, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769705351, "oldest_key_time": 0, "file_creation_time": 1769707757, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "affa2982-d59d-4189-b5dd-817a80fada55", "db_session_id": "3LVBT2JQJ5HZ0LRVKGW6", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Jan 29 12:29:17 np0005601226 ceph-mon[75233]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 29 12:29:17 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:29:17.466429) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 11343328 bytes
Jan 29 12:29:17 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:29:17.468234) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 148.0 rd, 126.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 9.4 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(7.0) write-amplify(3.2) OK, records in: 7237, records dropped: 1055 output_compression: NoCompression
Jan 29 12:29:17 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:29:17.468263) EVENT_LOG_v1 {"time_micros": 1769707757468250, "job": 34, "event": "compaction_finished", "compaction_time_micros": 89926, "compaction_time_cpu_micros": 31326, "output_level": 6, "num_output_files": 1, "total_output_size": 11343328, "num_input_records": 7237, "num_output_records": 6182, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 29 12:29:17 np0005601226 ceph-mon[75233]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 12:29:17 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769707757469148, "job": 34, "event": "table_file_deletion", "file_number": 64}
Jan 29 12:29:17 np0005601226 ceph-mon[75233]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 12:29:17 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769707757470715, "job": 34, "event": "table_file_deletion", "file_number": 62}
Jan 29 12:29:17 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:29:17.376115) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:29:17 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:29:17.470836) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:29:17 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:29:17.470843) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:29:17 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:29:17.470846) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:29:17 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:29:17.470849) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:29:17 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:29:17.470853) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:29:17 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1595: 305 pgs: 305 active+clean; 207 MiB data, 425 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 4.5 MiB/s wr, 72 op/s
Jan 29 12:29:18 np0005601226 nova_compute[239456]: 2026-01-29 17:29:18.508 239460 DEBUG os_brick.utils [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 29 12:29:18 np0005601226 nova_compute[239456]: 2026-01-29 17:29:18.509 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:29:18 np0005601226 nova_compute[239456]: 2026-01-29 17:29:18.524 249612 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:29:18 np0005601226 nova_compute[239456]: 2026-01-29 17:29:18.524 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[a2be377e-94c8-4394-9087-88f8660aa3c0]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:29:18 np0005601226 nova_compute[239456]: 2026-01-29 17:29:18.525 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:29:18 np0005601226 nova_compute[239456]: 2026-01-29 17:29:18.534 249612 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:29:18 np0005601226 nova_compute[239456]: 2026-01-29 17:29:18.535 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[388ed16b-1ce4-4baa-8ffa-a1ad6a1b54c8]: (4, ('InitiatorName=iqn.1994-05.com.redhat:29fa340538f', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:29:18 np0005601226 nova_compute[239456]: 2026-01-29 17:29:18.536 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:29:18 np0005601226 nova_compute[239456]: 2026-01-29 17:29:18.544 249612 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:29:18 np0005601226 nova_compute[239456]: 2026-01-29 17:29:18.545 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[fc9bea28-cc94-4fbe-a4d6-63186ae97d51]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:29:18 np0005601226 nova_compute[239456]: 2026-01-29 17:29:18.546 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[3a80a3ac-1a63-44d3-a0cb-de1fd023a933]: (4, '3d58286e-1b14-486e-8cad-0bdb2d2969c4') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:29:18 np0005601226 nova_compute[239456]: 2026-01-29 17:29:18.546 239460 DEBUG oslo_concurrency.processutils [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:29:18 np0005601226 nova_compute[239456]: 2026-01-29 17:29:18.568 239460 DEBUG oslo_concurrency.processutils [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] CMD "nvme version" returned: 0 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:29:18 np0005601226 nova_compute[239456]: 2026-01-29 17:29:18.570 239460 DEBUG os_brick.initiator.connectors.lightos [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 29 12:29:18 np0005601226 nova_compute[239456]: 2026-01-29 17:29:18.570 239460 DEBUG os_brick.initiator.connectors.lightos [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 29 12:29:18 np0005601226 nova_compute[239456]: 2026-01-29 17:29:18.570 239460 DEBUG os_brick.initiator.connectors.lightos [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 29 12:29:18 np0005601226 nova_compute[239456]: 2026-01-29 17:29:18.571 239460 DEBUG os_brick.utils [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] <== get_connector_properties: return (61ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:29fa340538f', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '3d58286e-1b14-486e-8cad-0bdb2d2969c4', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 29 12:29:18 np0005601226 nova_compute[239456]: 2026-01-29 17:29:18.571 239460 DEBUG nova.virt.block_device [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: ff3dd15f-f585-4406-8c70-96be2a8945a4] Updating existing volume attachment record: 49e674d0-7bc2-492a-92f3-4c035f233ab5 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 29 12:29:19 np0005601226 nova_compute[239456]: 2026-01-29 17:29:19.204 239460 DEBUG nova.network.neutron [req-fe829d5a-cfad-496d-a49e-d9818efbda79 req-79a40501-190a-44d7-8fbe-81f5036996f3 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ff3dd15f-f585-4406-8c70-96be2a8945a4] Updated VIF entry in instance network info cache for port 3cc3f8fa-0d50-4ddf-b2f1-086d9c2e1f4c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 29 12:29:19 np0005601226 nova_compute[239456]: 2026-01-29 17:29:19.204 239460 DEBUG nova.network.neutron [req-fe829d5a-cfad-496d-a49e-d9818efbda79 req-79a40501-190a-44d7-8fbe-81f5036996f3 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ff3dd15f-f585-4406-8c70-96be2a8945a4] Updating instance_info_cache with network_info: [{"id": "3cc3f8fa-0d50-4ddf-b2f1-086d9c2e1f4c", "address": "fa:16:3e:5c:f0:e1", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cc3f8fa-0d", "ovs_interfaceid": "3cc3f8fa-0d50-4ddf-b2f1-086d9c2e1f4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:29:19 np0005601226 nova_compute[239456]: 2026-01-29 17:29:19.218 239460 DEBUG oslo_concurrency.lockutils [req-fe829d5a-cfad-496d-a49e-d9818efbda79 req-79a40501-190a-44d7-8fbe-81f5036996f3 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Releasing lock "refresh_cache-ff3dd15f-f585-4406-8c70-96be2a8945a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:29:19 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:29:19 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4236521454' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:29:19 np0005601226 nova_compute[239456]: 2026-01-29 17:29:19.749 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:29:19 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1596: 305 pgs: 305 active+clean; 277 MiB data, 494 MiB used, 60 GiB / 60 GiB avail; 79 KiB/s rd, 11 MiB/s wr, 112 op/s
Jan 29 12:29:20 np0005601226 nova_compute[239456]: 2026-01-29 17:29:20.010 239460 DEBUG nova.compute.manager [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: ff3dd15f-f585-4406-8c70-96be2a8945a4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 29 12:29:20 np0005601226 nova_compute[239456]: 2026-01-29 17:29:20.012 239460 DEBUG nova.virt.libvirt.driver [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: ff3dd15f-f585-4406-8c70-96be2a8945a4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 29 12:29:20 np0005601226 nova_compute[239456]: 2026-01-29 17:29:20.012 239460 INFO nova.virt.libvirt.driver [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: ff3dd15f-f585-4406-8c70-96be2a8945a4] Creating image(s)#033[00m
Jan 29 12:29:20 np0005601226 nova_compute[239456]: 2026-01-29 17:29:20.013 239460 DEBUG nova.virt.libvirt.driver [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: ff3dd15f-f585-4406-8c70-96be2a8945a4] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Jan 29 12:29:20 np0005601226 nova_compute[239456]: 2026-01-29 17:29:20.013 239460 DEBUG nova.virt.libvirt.driver [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: ff3dd15f-f585-4406-8c70-96be2a8945a4] Ensure instance console log exists: /var/lib/nova/instances/ff3dd15f-f585-4406-8c70-96be2a8945a4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 29 12:29:20 np0005601226 nova_compute[239456]: 2026-01-29 17:29:20.013 239460 DEBUG oslo_concurrency.lockutils [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:29:20 np0005601226 nova_compute[239456]: 2026-01-29 17:29:20.014 239460 DEBUG oslo_concurrency.lockutils [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:29:20 np0005601226 nova_compute[239456]: 2026-01-29 17:29:20.014 239460 DEBUG oslo_concurrency.lockutils [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:29:20 np0005601226 nova_compute[239456]: 2026-01-29 17:29:20.016 239460 DEBUG nova.virt.libvirt.driver [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: ff3dd15f-f585-4406-8c70-96be2a8945a4] Start _get_guest_xml network_info=[{"id": "3cc3f8fa-0d50-4ddf-b2f1-086d9c2e1f4c", "address": "fa:16:3e:5c:f0:e1", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cc3f8fa-0d", "ovs_interfaceid": "3cc3f8fa-0d50-4ddf-b2f1-086d9c2e1f4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='d41d8cd98f00b204e9800998ecf8427e',container_format='bare',created_at=2026-01-29T17:29:06Z,direct_url=<?>,disk_format='qcow2',id=5ed2cc36-4069-42f1-8890-957be31da276,min_disk=1,min_ram=0,name='tempest-TestVolumeBootPatternsnapshot-1311667507',owner='420f46ae230d4c529afe366a1b780921',properties=ImageMetaProps,protected=<?>,size=0,status='active',tags=<?>,updated_at=2026-01-29T17:29:07Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'guest_format': None, 'mount_device': '/dev/vda', 'device_type': 'disk', 'disk_bus': 'virtio', 'attachment_id': '49e674d0-7bc2-492a-92f3-4c035f233ab5', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-911bb8b4-c9d5-413d-b3b5-c545494020cb', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '911bb8b4-c9d5-413d-b3b5-c545494020cb', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'ff3dd15f-f585-4406-8c70-96be2a8945a4', 'attached_at': '', 'detached_at': '', 'volume_id': '911bb8b4-c9d5-413d-b3b5-c545494020cb', 'serial': '911bb8b4-c9d5-413d-b3b5-c545494020cb'}, 'delete_on_termination': True, 'boot_index': 0, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 29 12:29:20 np0005601226 nova_compute[239456]: 2026-01-29 17:29:20.021 239460 WARNING nova.virt.libvirt.driver [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 29 12:29:20 np0005601226 nova_compute[239456]: 2026-01-29 17:29:20.026 239460 DEBUG nova.virt.libvirt.host [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 29 12:29:20 np0005601226 nova_compute[239456]: 2026-01-29 17:29:20.027 239460 DEBUG nova.virt.libvirt.host [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 29 12:29:20 np0005601226 nova_compute[239456]: 2026-01-29 17:29:20.030 239460 DEBUG nova.virt.libvirt.host [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 29 12:29:20 np0005601226 nova_compute[239456]: 2026-01-29 17:29:20.031 239460 DEBUG nova.virt.libvirt.host [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 29 12:29:20 np0005601226 nova_compute[239456]: 2026-01-29 17:29:20.031 239460 DEBUG nova.virt.libvirt.driver [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 29 12:29:20 np0005601226 nova_compute[239456]: 2026-01-29 17:29:20.031 239460 DEBUG nova.virt.hardware [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-29T17:13:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='e367d44e-e23e-4b8e-90d1-56d09c8403b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='d41d8cd98f00b204e9800998ecf8427e',container_format='bare',created_at=2026-01-29T17:29:06Z,direct_url=<?>,disk_format='qcow2',id=5ed2cc36-4069-42f1-8890-957be31da276,min_disk=1,min_ram=0,name='tempest-TestVolumeBootPatternsnapshot-1311667507',owner='420f46ae230d4c529afe366a1b780921',properties=ImageMetaProps,protected=<?>,size=0,status='active',tags=<?>,updated_at=2026-01-29T17:29:07Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 29 12:29:20 np0005601226 nova_compute[239456]: 2026-01-29 17:29:20.031 239460 DEBUG nova.virt.hardware [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 29 12:29:20 np0005601226 nova_compute[239456]: 2026-01-29 17:29:20.032 239460 DEBUG nova.virt.hardware [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 29 12:29:20 np0005601226 nova_compute[239456]: 2026-01-29 17:29:20.032 239460 DEBUG nova.virt.hardware [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 29 12:29:20 np0005601226 nova_compute[239456]: 2026-01-29 17:29:20.032 239460 DEBUG nova.virt.hardware [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 29 12:29:20 np0005601226 nova_compute[239456]: 2026-01-29 17:29:20.032 239460 DEBUG nova.virt.hardware [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 29 12:29:20 np0005601226 nova_compute[239456]: 2026-01-29 17:29:20.032 239460 DEBUG nova.virt.hardware [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 29 12:29:20 np0005601226 nova_compute[239456]: 2026-01-29 17:29:20.032 239460 DEBUG nova.virt.hardware [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 29 12:29:20 np0005601226 nova_compute[239456]: 2026-01-29 17:29:20.033 239460 DEBUG nova.virt.hardware [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 29 12:29:20 np0005601226 nova_compute[239456]: 2026-01-29 17:29:20.033 239460 DEBUG nova.virt.hardware [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 29 12:29:20 np0005601226 nova_compute[239456]: 2026-01-29 17:29:20.033 239460 DEBUG nova.virt.hardware [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 29 12:29:20 np0005601226 nova_compute[239456]: 2026-01-29 17:29:20.051 239460 DEBUG nova.storage.rbd_utils [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] rbd image ff3dd15f-f585-4406-8c70-96be2a8945a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:29:20 np0005601226 nova_compute[239456]: 2026-01-29 17:29:20.055 239460 DEBUG oslo_concurrency.processutils [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:29:20 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:29:20 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3669411913' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:29:20 np0005601226 nova_compute[239456]: 2026-01-29 17:29:20.586 239460 DEBUG oslo_concurrency.processutils [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:29:20 np0005601226 nova_compute[239456]: 2026-01-29 17:29:20.618 239460 DEBUG nova.virt.libvirt.vif [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-29T17:29:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-image-snapshot-server-466521701',display_name='tempest-TestVolumeBootPattern-image-snapshot-server-466521701',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-image-snapshot-server-466521701',id=18,image_ref='5ed2cc36-4069-42f1-8890-957be31da276',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEEPdOikZIRZZlSCB3pnSN883u5KEGoU6HmBl+bK9lybUNCBqnUpu265pHjvtrct4Ekt10vMEtBjAsdbZhHoGNnbDJYET7KS1yYvUhbnG7IzKHQwBptejozI0K/USR0uWw==',key_name='tempest-keypair-1228012133',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='420f46ae230d4c529afe366a1b780921',ramdisk_id='',reservation_id='r-3vw2cmnp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_bdm_v2='True',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_project_name='tempest-TestVolumeBootPattern-1871389491',image_owner_user_name='tempest-TestVolumeBootPattern-1871389491-project-member',image_root_device_name='/dev/vda',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1871389491',owner_user_name='tempest-TestVolumeBootPattern-1871389491-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-29T17:29:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3901089a059c4bdb8d0497398873d2f1',uuid=ff3dd15f-f585-4406-8c70-96be2a8945a4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building'
) vif={"id": "3cc3f8fa-0d50-4ddf-b2f1-086d9c2e1f4c", "address": "fa:16:3e:5c:f0:e1", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cc3f8fa-0d", "ovs_interfaceid": "3cc3f8fa-0d50-4ddf-b2f1-086d9c2e1f4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 29 12:29:20 np0005601226 nova_compute[239456]: 2026-01-29 17:29:20.619 239460 DEBUG nova.network.os_vif_util [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Converting VIF {"id": "3cc3f8fa-0d50-4ddf-b2f1-086d9c2e1f4c", "address": "fa:16:3e:5c:f0:e1", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cc3f8fa-0d", "ovs_interfaceid": "3cc3f8fa-0d50-4ddf-b2f1-086d9c2e1f4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 29 12:29:20 np0005601226 nova_compute[239456]: 2026-01-29 17:29:20.619 239460 DEBUG nova.network.os_vif_util [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5c:f0:e1,bridge_name='br-int',has_traffic_filtering=True,id=3cc3f8fa-0d50-4ddf-b2f1-086d9c2e1f4c,network=Network(3c08c304-2b32-4b44-ac2b-279bb8b2403b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3cc3f8fa-0d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 29 12:29:20 np0005601226 nova_compute[239456]: 2026-01-29 17:29:20.620 239460 DEBUG nova.objects.instance [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lazy-loading 'pci_devices' on Instance uuid ff3dd15f-f585-4406-8c70-96be2a8945a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:29:20 np0005601226 nova_compute[239456]: 2026-01-29 17:29:20.633 239460 DEBUG nova.virt.libvirt.driver [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: ff3dd15f-f585-4406-8c70-96be2a8945a4] End _get_guest_xml xml=<domain type="kvm">
Jan 29 12:29:20 np0005601226 nova_compute[239456]:  <uuid>ff3dd15f-f585-4406-8c70-96be2a8945a4</uuid>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:  <name>instance-00000012</name>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:  <memory>131072</memory>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:  <vcpu>1</vcpu>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:  <metadata>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 29 12:29:20 np0005601226 nova_compute[239456]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:      <nova:name>tempest-TestVolumeBootPattern-image-snapshot-server-466521701</nova:name>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:      <nova:creationTime>2026-01-29 17:29:20</nova:creationTime>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:      <nova:flavor name="m1.nano">
Jan 29 12:29:20 np0005601226 nova_compute[239456]:        <nova:memory>128</nova:memory>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:        <nova:disk>1</nova:disk>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:        <nova:swap>0</nova:swap>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:        <nova:ephemeral>0</nova:ephemeral>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:        <nova:vcpus>1</nova:vcpus>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:      </nova:flavor>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:      <nova:owner>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:        <nova:user uuid="3901089a059c4bdb8d0497398873d2f1">tempest-TestVolumeBootPattern-1871389491-project-member</nova:user>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:        <nova:project uuid="420f46ae230d4c529afe366a1b780921">tempest-TestVolumeBootPattern-1871389491</nova:project>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:      </nova:owner>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:      <nova:root type="image" uuid="5ed2cc36-4069-42f1-8890-957be31da276"/>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:      <nova:ports>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:        <nova:port uuid="3cc3f8fa-0d50-4ddf-b2f1-086d9c2e1f4c">
Jan 29 12:29:20 np0005601226 nova_compute[239456]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:        </nova:port>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:      </nova:ports>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:    </nova:instance>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:  </metadata>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:  <sysinfo type="smbios">
Jan 29 12:29:20 np0005601226 nova_compute[239456]:    <system>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:      <entry name="manufacturer">RDO</entry>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:      <entry name="product">OpenStack Compute</entry>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:      <entry name="serial">ff3dd15f-f585-4406-8c70-96be2a8945a4</entry>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:      <entry name="uuid">ff3dd15f-f585-4406-8c70-96be2a8945a4</entry>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:      <entry name="family">Virtual Machine</entry>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:    </system>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:  </sysinfo>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:  <os>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:    <boot dev="hd"/>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:    <smbios mode="sysinfo"/>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:  </os>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:  <features>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:    <acpi/>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:    <apic/>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:    <vmcoreinfo/>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:  </features>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:  <clock offset="utc">
Jan 29 12:29:20 np0005601226 nova_compute[239456]:    <timer name="pit" tickpolicy="delay"/>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:    <timer name="hpet" present="no"/>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:  </clock>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:  <cpu mode="host-model" match="exact">
Jan 29 12:29:20 np0005601226 nova_compute[239456]:    <topology sockets="1" cores="1" threads="1"/>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:  </cpu>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:  <devices>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:    <disk type="network" device="cdrom">
Jan 29 12:29:20 np0005601226 nova_compute[239456]:      <driver type="raw" cache="none"/>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:      <source protocol="rbd" name="vms/ff3dd15f-f585-4406-8c70-96be2a8945a4_disk.config">
Jan 29 12:29:20 np0005601226 nova_compute[239456]:        <host name="192.168.122.100" port="6789"/>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:      </source>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:      <auth username="openstack">
Jan 29 12:29:20 np0005601226 nova_compute[239456]:        <secret type="ceph" uuid="cc5c72e3-31e0-58b9-8731-456117d38f4a"/>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:      </auth>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:      <target dev="sda" bus="sata"/>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:    </disk>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:    <disk type="network" device="disk">
Jan 29 12:29:20 np0005601226 nova_compute[239456]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:      <source protocol="rbd" name="volumes/volume-911bb8b4-c9d5-413d-b3b5-c545494020cb">
Jan 29 12:29:20 np0005601226 nova_compute[239456]:        <host name="192.168.122.100" port="6789"/>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:      </source>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:      <auth username="openstack">
Jan 29 12:29:20 np0005601226 nova_compute[239456]:        <secret type="ceph" uuid="cc5c72e3-31e0-58b9-8731-456117d38f4a"/>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:      </auth>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:      <target dev="vda" bus="virtio"/>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:      <serial>911bb8b4-c9d5-413d-b3b5-c545494020cb</serial>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:    </disk>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:    <interface type="ethernet">
Jan 29 12:29:20 np0005601226 nova_compute[239456]:      <mac address="fa:16:3e:5c:f0:e1"/>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:      <model type="virtio"/>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:      <driver name="vhost" rx_queue_size="512"/>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:      <mtu size="1442"/>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:      <target dev="tap3cc3f8fa-0d"/>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:    </interface>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:    <serial type="pty">
Jan 29 12:29:20 np0005601226 nova_compute[239456]:      <log file="/var/lib/nova/instances/ff3dd15f-f585-4406-8c70-96be2a8945a4/console.log" append="off"/>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:    </serial>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:    <video>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:      <model type="virtio"/>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:    </video>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:    <input type="tablet" bus="usb"/>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:    <input type="keyboard" bus="usb"/>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:    <rng model="virtio">
Jan 29 12:29:20 np0005601226 nova_compute[239456]:      <backend model="random">/dev/urandom</backend>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:    </rng>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root"/>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:    <controller type="usb" index="0"/>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:    <memballoon model="virtio">
Jan 29 12:29:20 np0005601226 nova_compute[239456]:      <stats period="10"/>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:    </memballoon>
Jan 29 12:29:20 np0005601226 nova_compute[239456]:  </devices>
Jan 29 12:29:20 np0005601226 nova_compute[239456]: </domain>
Jan 29 12:29:20 np0005601226 nova_compute[239456]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 29 12:29:20 np0005601226 nova_compute[239456]: 2026-01-29 17:29:20.633 239460 DEBUG nova.compute.manager [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: ff3dd15f-f585-4406-8c70-96be2a8945a4] Preparing to wait for external event network-vif-plugged-3cc3f8fa-0d50-4ddf-b2f1-086d9c2e1f4c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 29 12:29:20 np0005601226 nova_compute[239456]: 2026-01-29 17:29:20.633 239460 DEBUG oslo_concurrency.lockutils [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Acquiring lock "ff3dd15f-f585-4406-8c70-96be2a8945a4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:29:20 np0005601226 nova_compute[239456]: 2026-01-29 17:29:20.634 239460 DEBUG oslo_concurrency.lockutils [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "ff3dd15f-f585-4406-8c70-96be2a8945a4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:29:20 np0005601226 nova_compute[239456]: 2026-01-29 17:29:20.634 239460 DEBUG oslo_concurrency.lockutils [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "ff3dd15f-f585-4406-8c70-96be2a8945a4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:29:20 np0005601226 nova_compute[239456]: 2026-01-29 17:29:20.635 239460 DEBUG nova.virt.libvirt.vif [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-29T17:29:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-image-snapshot-server-466521701',display_name='tempest-TestVolumeBootPattern-image-snapshot-server-466521701',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-image-snapshot-server-466521701',id=18,image_ref='5ed2cc36-4069-42f1-8890-957be31da276',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEEPdOikZIRZZlSCB3pnSN883u5KEGoU6HmBl+bK9lybUNCBqnUpu265pHjvtrct4Ekt10vMEtBjAsdbZhHoGNnbDJYET7KS1yYvUhbnG7IzKHQwBptejozI0K/USR0uWw==',key_name='tempest-keypair-1228012133',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='420f46ae230d4c529afe366a1b780921',ramdisk_id='',reservation_id='r-3vw2cmnp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_bdm_v2='True',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_project_name='tempest-TestVolumeBootPattern-1871389491',image_owner_user_name='tempest-TestVolumeBootPattern-1871389491-project-member',image_root_device_name='/dev/vda',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1871389491',owner_user_name='tempest-TestVolumeBootPattern-1871389491-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-29T17:29:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3901089a059c4bdb8d0497398873d2f1',uuid=ff3dd15f-f585-4406-8c70-96be2a8945a4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state=
'building') vif={"id": "3cc3f8fa-0d50-4ddf-b2f1-086d9c2e1f4c", "address": "fa:16:3e:5c:f0:e1", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cc3f8fa-0d", "ovs_interfaceid": "3cc3f8fa-0d50-4ddf-b2f1-086d9c2e1f4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 29 12:29:20 np0005601226 nova_compute[239456]: 2026-01-29 17:29:20.635 239460 DEBUG nova.network.os_vif_util [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Converting VIF {"id": "3cc3f8fa-0d50-4ddf-b2f1-086d9c2e1f4c", "address": "fa:16:3e:5c:f0:e1", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cc3f8fa-0d", "ovs_interfaceid": "3cc3f8fa-0d50-4ddf-b2f1-086d9c2e1f4c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 29 12:29:20 np0005601226 nova_compute[239456]: 2026-01-29 17:29:20.636 239460 DEBUG nova.network.os_vif_util [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5c:f0:e1,bridge_name='br-int',has_traffic_filtering=True,id=3cc3f8fa-0d50-4ddf-b2f1-086d9c2e1f4c,network=Network(3c08c304-2b32-4b44-ac2b-279bb8b2403b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3cc3f8fa-0d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 29 12:29:20 np0005601226 nova_compute[239456]: 2026-01-29 17:29:20.636 239460 DEBUG os_vif [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5c:f0:e1,bridge_name='br-int',has_traffic_filtering=True,id=3cc3f8fa-0d50-4ddf-b2f1-086d9c2e1f4c,network=Network(3c08c304-2b32-4b44-ac2b-279bb8b2403b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3cc3f8fa-0d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 29 12:29:20 np0005601226 nova_compute[239456]: 2026-01-29 17:29:20.636 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:29:20 np0005601226 nova_compute[239456]: 2026-01-29 17:29:20.637 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:29:20 np0005601226 nova_compute[239456]: 2026-01-29 17:29:20.637 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 29 12:29:20 np0005601226 nova_compute[239456]: 2026-01-29 17:29:20.640 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:29:20 np0005601226 nova_compute[239456]: 2026-01-29 17:29:20.640 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3cc3f8fa-0d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:29:20 np0005601226 nova_compute[239456]: 2026-01-29 17:29:20.640 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3cc3f8fa-0d, col_values=(('external_ids', {'iface-id': '3cc3f8fa-0d50-4ddf-b2f1-086d9c2e1f4c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5c:f0:e1', 'vm-uuid': 'ff3dd15f-f585-4406-8c70-96be2a8945a4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:29:20 np0005601226 NetworkManager[49020]: <info>  [1769707760.7006] manager: (tap3cc3f8fa-0d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/96)
Jan 29 12:29:20 np0005601226 nova_compute[239456]: 2026-01-29 17:29:20.704 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:29:20 np0005601226 nova_compute[239456]: 2026-01-29 17:29:20.708 239460 DEBUG oslo_concurrency.lockutils [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Acquiring lock "d8a8daad-7d66-42a9-b701-b191ca68564e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:29:20 np0005601226 nova_compute[239456]: 2026-01-29 17:29:20.708 239460 DEBUG oslo_concurrency.lockutils [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Lock "d8a8daad-7d66-42a9-b701-b191ca68564e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:29:20 np0005601226 nova_compute[239456]: 2026-01-29 17:29:20.709 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:29:20 np0005601226 nova_compute[239456]: 2026-01-29 17:29:20.710 239460 INFO os_vif [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5c:f0:e1,bridge_name='br-int',has_traffic_filtering=True,id=3cc3f8fa-0d50-4ddf-b2f1-086d9c2e1f4c,network=Network(3c08c304-2b32-4b44-ac2b-279bb8b2403b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3cc3f8fa-0d')#033[00m
Jan 29 12:29:20 np0005601226 nova_compute[239456]: 2026-01-29 17:29:20.723 239460 DEBUG nova.compute.manager [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 29 12:29:20 np0005601226 nova_compute[239456]: 2026-01-29 17:29:20.819 239460 DEBUG oslo_concurrency.lockutils [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:29:20 np0005601226 nova_compute[239456]: 2026-01-29 17:29:20.819 239460 DEBUG oslo_concurrency.lockutils [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:29:20 np0005601226 nova_compute[239456]: 2026-01-29 17:29:20.825 239460 DEBUG nova.virt.hardware [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 29 12:29:20 np0005601226 nova_compute[239456]: 2026-01-29 17:29:20.825 239460 INFO nova.compute.claims [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 29 12:29:20 np0005601226 nova_compute[239456]: 2026-01-29 17:29:20.832 239460 DEBUG nova.virt.libvirt.driver [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:29:20 np0005601226 nova_compute[239456]: 2026-01-29 17:29:20.832 239460 DEBUG nova.virt.libvirt.driver [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:29:20 np0005601226 nova_compute[239456]: 2026-01-29 17:29:20.832 239460 DEBUG nova.virt.libvirt.driver [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] No VIF found with MAC fa:16:3e:5c:f0:e1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 29 12:29:20 np0005601226 nova_compute[239456]: 2026-01-29 17:29:20.833 239460 INFO nova.virt.libvirt.driver [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: ff3dd15f-f585-4406-8c70-96be2a8945a4] Using config drive#033[00m
Jan 29 12:29:20 np0005601226 nova_compute[239456]: 2026-01-29 17:29:20.850 239460 DEBUG nova.storage.rbd_utils [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] rbd image ff3dd15f-f585-4406-8c70-96be2a8945a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:29:20 np0005601226 nova_compute[239456]: 2026-01-29 17:29:20.988 239460 DEBUG oslo_concurrency.processutils [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:29:21 np0005601226 nova_compute[239456]: 2026-01-29 17:29:21.132 239460 INFO nova.virt.libvirt.driver [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: ff3dd15f-f585-4406-8c70-96be2a8945a4] Creating config drive at /var/lib/nova/instances/ff3dd15f-f585-4406-8c70-96be2a8945a4/disk.config#033[00m
Jan 29 12:29:21 np0005601226 nova_compute[239456]: 2026-01-29 17:29:21.140 239460 DEBUG oslo_concurrency.processutils [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ff3dd15f-f585-4406-8c70-96be2a8945a4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpd9fltl8w execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:29:21 np0005601226 nova_compute[239456]: 2026-01-29 17:29:21.273 239460 DEBUG oslo_concurrency.processutils [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ff3dd15f-f585-4406-8c70-96be2a8945a4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpd9fltl8w" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:29:21 np0005601226 nova_compute[239456]: 2026-01-29 17:29:21.301 239460 DEBUG nova.storage.rbd_utils [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] rbd image ff3dd15f-f585-4406-8c70-96be2a8945a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:29:21 np0005601226 nova_compute[239456]: 2026-01-29 17:29:21.305 239460 DEBUG oslo_concurrency.processutils [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ff3dd15f-f585-4406-8c70-96be2a8945a4/disk.config ff3dd15f-f585-4406-8c70-96be2a8945a4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:29:21 np0005601226 nova_compute[239456]: 2026-01-29 17:29:21.426 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:29:21 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:29:21 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/369362070' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:29:21 np0005601226 nova_compute[239456]: 2026-01-29 17:29:21.502 239460 DEBUG oslo_concurrency.processutils [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:29:21 np0005601226 nova_compute[239456]: 2026-01-29 17:29:21.508 239460 DEBUG nova.compute.provider_tree [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:29:21 np0005601226 nova_compute[239456]: 2026-01-29 17:29:21.531 239460 DEBUG nova.scheduler.client.report [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:29:21 np0005601226 nova_compute[239456]: 2026-01-29 17:29:21.547 239460 DEBUG oslo_concurrency.processutils [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ff3dd15f-f585-4406-8c70-96be2a8945a4/disk.config ff3dd15f-f585-4406-8c70-96be2a8945a4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.241s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:29:21 np0005601226 nova_compute[239456]: 2026-01-29 17:29:21.548 239460 INFO nova.virt.libvirt.driver [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: ff3dd15f-f585-4406-8c70-96be2a8945a4] Deleting local config drive /var/lib/nova/instances/ff3dd15f-f585-4406-8c70-96be2a8945a4/disk.config because it was imported into RBD.#033[00m
Jan 29 12:29:21 np0005601226 nova_compute[239456]: 2026-01-29 17:29:21.555 239460 DEBUG oslo_concurrency.lockutils [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.736s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:29:21 np0005601226 nova_compute[239456]: 2026-01-29 17:29:21.556 239460 DEBUG nova.compute.manager [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 29 12:29:21 np0005601226 kernel: tap3cc3f8fa-0d: entered promiscuous mode
Jan 29 12:29:21 np0005601226 ovn_controller[145556]: 2026-01-29T17:29:21Z|00169|binding|INFO|Claiming lport 3cc3f8fa-0d50-4ddf-b2f1-086d9c2e1f4c for this chassis.
Jan 29 12:29:21 np0005601226 ovn_controller[145556]: 2026-01-29T17:29:21Z|00170|binding|INFO|3cc3f8fa-0d50-4ddf-b2f1-086d9c2e1f4c: Claiming fa:16:3e:5c:f0:e1 10.100.0.13
Jan 29 12:29:21 np0005601226 NetworkManager[49020]: <info>  [1769707761.5996] manager: (tap3cc3f8fa-0d): new Tun device (/org/freedesktop/NetworkManager/Devices/97)
Jan 29 12:29:21 np0005601226 nova_compute[239456]: 2026-01-29 17:29:21.599 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:29:21 np0005601226 ovn_controller[145556]: 2026-01-29T17:29:21Z|00171|binding|INFO|Setting lport 3cc3f8fa-0d50-4ddf-b2f1-086d9c2e1f4c ovn-installed in OVS
Jan 29 12:29:21 np0005601226 nova_compute[239456]: 2026-01-29 17:29:21.604 239460 DEBUG nova.compute.manager [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 29 12:29:21 np0005601226 nova_compute[239456]: 2026-01-29 17:29:21.604 239460 DEBUG nova.network.neutron [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 29 12:29:21 np0005601226 ovn_controller[145556]: 2026-01-29T17:29:21Z|00172|binding|INFO|Setting lport 3cc3f8fa-0d50-4ddf-b2f1-086d9c2e1f4c up in Southbound
Jan 29 12:29:21 np0005601226 nova_compute[239456]: 2026-01-29 17:29:21.607 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:29:21 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:29:21.608 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5c:f0:e1 10.100.0.13'], port_security=['fa:16:3e:5c:f0:e1 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'ff3dd15f-f585-4406-8c70-96be2a8945a4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3c08c304-2b32-4b44-ac2b-279bb8b2403b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '420f46ae230d4c529afe366a1b780921', 'neutron:revision_number': '2', 'neutron:security_group_ids': '22c4ada3-2dd0-468f-9196-c07d7ccdefd9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=43b4da70-6867-4e05-b172-1e52c878ce1d, chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], logical_port=3cc3f8fa-0d50-4ddf-b2f1-086d9c2e1f4c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:29:21 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:29:21.611 155625 INFO neutron.agent.ovn.metadata.agent [-] Port 3cc3f8fa-0d50-4ddf-b2f1-086d9c2e1f4c in datapath 3c08c304-2b32-4b44-ac2b-279bb8b2403b bound to our chassis#033[00m
Jan 29 12:29:21 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:29:21.615 155625 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3c08c304-2b32-4b44-ac2b-279bb8b2403b#033[00m
Jan 29 12:29:21 np0005601226 nova_compute[239456]: 2026-01-29 17:29:21.623 239460 INFO nova.virt.libvirt.driver [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 29 12:29:21 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:29:21.628 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[5e715a22-2115-4dd0-ad71-f649211dbc3d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:29:21 np0005601226 systemd-machined[207561]: New machine qemu-18-instance-00000012.
Jan 29 12:29:21 np0005601226 nova_compute[239456]: 2026-01-29 17:29:21.643 239460 DEBUG nova.compute.manager [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 29 12:29:21 np0005601226 systemd[1]: Started Virtual Machine qemu-18-instance-00000012.
Jan 29 12:29:21 np0005601226 systemd-udevd[265610]: Network interface NamePolicy= disabled on kernel command line.
Jan 29 12:29:21 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:29:21.657 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[fa91ea31-a534-462e-b634-5fbc93681620]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:29:21 np0005601226 NetworkManager[49020]: <info>  [1769707761.6628] device (tap3cc3f8fa-0d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 29 12:29:21 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:29:21.661 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[8475ea76-a795-4f9c-9b4a-6282a8dcf9c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:29:21 np0005601226 NetworkManager[49020]: <info>  [1769707761.6639] device (tap3cc3f8fa-0d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 29 12:29:21 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:29:21.689 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[b936867f-ef5d-4f40-898d-305b1fe0590a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:29:21 np0005601226 nova_compute[239456]: 2026-01-29 17:29:21.692 239460 INFO nova.virt.block_device [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] Booting with volume 980dbcd8-86dd-412e-92c6-97f0c6da44c6 at /dev/vda#033[00m
Jan 29 12:29:21 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:29:21.705 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[9dd397c9-c648-49ed-bfb8-c52050007c09]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3c08c304-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:94:51:ac'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 57], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 507455, 'reachable_time': 27606, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265620, 'error': None, 'target': 'ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:29:21 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:29:21.717 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[4406be58-4145-4f95-acb1-a4be21ed4a2f]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3c08c304-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 507464, 'tstamp': 507464}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 265621, 'error': None, 'target': 'ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3c08c304-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 507467, 'tstamp': 507467}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 265621, 'error': None, 'target': 'ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:29:21 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:29:21.719 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c08c304-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:29:21 np0005601226 nova_compute[239456]: 2026-01-29 17:29:21.721 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:29:21 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:29:21.723 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3c08c304-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:29:21 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:29:21.723 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 29 12:29:21 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:29:21.723 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3c08c304-20, col_values=(('external_ids', {'iface-id': '4f9b16f1-6965-486d-bc02-ab1e4969963e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:29:21 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:29:21.724 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 29 12:29:21 np0005601226 nova_compute[239456]: 2026-01-29 17:29:21.794 239460 DEBUG nova.policy [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4f278bc1afe946ca991a0203a74c5a7f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c74297072cc041019fc7ff4bff1a0f08', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 29 12:29:21 np0005601226 nova_compute[239456]: 2026-01-29 17:29:21.858 239460 DEBUG nova.compute.manager [req-e2f10ba5-c726-4354-9bd4-be3e8d47e6a9 req-93a1db04-257b-4406-86d9-11023d33804a 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ff3dd15f-f585-4406-8c70-96be2a8945a4] Received event network-vif-plugged-3cc3f8fa-0d50-4ddf-b2f1-086d9c2e1f4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:29:21 np0005601226 nova_compute[239456]: 2026-01-29 17:29:21.858 239460 DEBUG oslo_concurrency.lockutils [req-e2f10ba5-c726-4354-9bd4-be3e8d47e6a9 req-93a1db04-257b-4406-86d9-11023d33804a 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "ff3dd15f-f585-4406-8c70-96be2a8945a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:29:21 np0005601226 nova_compute[239456]: 2026-01-29 17:29:21.859 239460 DEBUG oslo_concurrency.lockutils [req-e2f10ba5-c726-4354-9bd4-be3e8d47e6a9 req-93a1db04-257b-4406-86d9-11023d33804a 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "ff3dd15f-f585-4406-8c70-96be2a8945a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:29:21 np0005601226 nova_compute[239456]: 2026-01-29 17:29:21.859 239460 DEBUG oslo_concurrency.lockutils [req-e2f10ba5-c726-4354-9bd4-be3e8d47e6a9 req-93a1db04-257b-4406-86d9-11023d33804a 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "ff3dd15f-f585-4406-8c70-96be2a8945a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:29:21 np0005601226 nova_compute[239456]: 2026-01-29 17:29:21.859 239460 DEBUG nova.compute.manager [req-e2f10ba5-c726-4354-9bd4-be3e8d47e6a9 req-93a1db04-257b-4406-86d9-11023d33804a 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ff3dd15f-f585-4406-8c70-96be2a8945a4] Processing event network-vif-plugged-3cc3f8fa-0d50-4ddf-b2f1-086d9c2e1f4c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 29 12:29:21 np0005601226 nova_compute[239456]: 2026-01-29 17:29:21.860 239460 DEBUG os_brick.utils [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 29 12:29:21 np0005601226 nova_compute[239456]: 2026-01-29 17:29:21.861 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:29:21 np0005601226 nova_compute[239456]: 2026-01-29 17:29:21.867 249612 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:29:21 np0005601226 nova_compute[239456]: 2026-01-29 17:29:21.868 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[b5a90518-f637-45d7-b0be-fa77b492cd44]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:29:21 np0005601226 nova_compute[239456]: 2026-01-29 17:29:21.869 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:29:21 np0005601226 nova_compute[239456]: 2026-01-29 17:29:21.873 249612 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.004s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:29:21 np0005601226 nova_compute[239456]: 2026-01-29 17:29:21.873 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[d30daa22-0f63-4039-a790-2962545b6c7e]: (4, ('InitiatorName=iqn.1994-05.com.redhat:29fa340538f', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:29:21 np0005601226 nova_compute[239456]: 2026-01-29 17:29:21.874 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:29:21 np0005601226 nova_compute[239456]: 2026-01-29 17:29:21.879 249612 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.004s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:29:21 np0005601226 nova_compute[239456]: 2026-01-29 17:29:21.879 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[4d58c5a9-0ac8-40db-8573-fcbefc837b2c]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:29:21 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1597: 305 pgs: 305 active+clean; 281 MiB data, 526 MiB used, 59 GiB / 60 GiB avail; 57 KiB/s rd, 11 MiB/s wr, 84 op/s
Jan 29 12:29:21 np0005601226 nova_compute[239456]: 2026-01-29 17:29:21.880 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[c4f0d711-3d0b-4e63-b087-34b62b2ae4ec]: (4, '3d58286e-1b14-486e-8cad-0bdb2d2969c4') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:29:21 np0005601226 nova_compute[239456]: 2026-01-29 17:29:21.881 239460 DEBUG oslo_concurrency.processutils [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:29:21 np0005601226 nova_compute[239456]: 2026-01-29 17:29:21.897 239460 DEBUG oslo_concurrency.processutils [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] CMD "nvme version" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:29:21 np0005601226 nova_compute[239456]: 2026-01-29 17:29:21.899 239460 DEBUG os_brick.initiator.connectors.lightos [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 29 12:29:21 np0005601226 nova_compute[239456]: 2026-01-29 17:29:21.900 239460 DEBUG os_brick.initiator.connectors.lightos [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 29 12:29:21 np0005601226 nova_compute[239456]: 2026-01-29 17:29:21.900 239460 DEBUG os_brick.initiator.connectors.lightos [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 29 12:29:21 np0005601226 nova_compute[239456]: 2026-01-29 17:29:21.900 239460 DEBUG os_brick.utils [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] <== get_connector_properties: return (39ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:29fa340538f', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '3d58286e-1b14-486e-8cad-0bdb2d2969c4', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 29 12:29:21 np0005601226 nova_compute[239456]: 2026-01-29 17:29:21.900 239460 DEBUG nova.virt.block_device [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] Updating existing volume attachment record: cc63efb9-13f0-45b7-907e-b4d256e6191d _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 29 12:29:22 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:29:22 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:29:22 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2132581645' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:29:22 np0005601226 nova_compute[239456]: 2026-01-29 17:29:22.671 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769707762.671275, ff3dd15f-f585-4406-8c70-96be2a8945a4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:29:22 np0005601226 nova_compute[239456]: 2026-01-29 17:29:22.672 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: ff3dd15f-f585-4406-8c70-96be2a8945a4] VM Started (Lifecycle Event)#033[00m
Jan 29 12:29:22 np0005601226 nova_compute[239456]: 2026-01-29 17:29:22.678 239460 DEBUG nova.compute.manager [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: ff3dd15f-f585-4406-8c70-96be2a8945a4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 29 12:29:22 np0005601226 nova_compute[239456]: 2026-01-29 17:29:22.681 239460 DEBUG nova.virt.libvirt.driver [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: ff3dd15f-f585-4406-8c70-96be2a8945a4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 29 12:29:22 np0005601226 nova_compute[239456]: 2026-01-29 17:29:22.684 239460 INFO nova.virt.libvirt.driver [-] [instance: ff3dd15f-f585-4406-8c70-96be2a8945a4] Instance spawned successfully.#033[00m
Jan 29 12:29:22 np0005601226 nova_compute[239456]: 2026-01-29 17:29:22.684 239460 INFO nova.compute.manager [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: ff3dd15f-f585-4406-8c70-96be2a8945a4] Took 2.67 seconds to spawn the instance on the hypervisor.#033[00m
Jan 29 12:29:22 np0005601226 nova_compute[239456]: 2026-01-29 17:29:22.684 239460 DEBUG nova.compute.manager [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: ff3dd15f-f585-4406-8c70-96be2a8945a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:29:22 np0005601226 nova_compute[239456]: 2026-01-29 17:29:22.694 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: ff3dd15f-f585-4406-8c70-96be2a8945a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:29:22 np0005601226 nova_compute[239456]: 2026-01-29 17:29:22.700 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: ff3dd15f-f585-4406-8c70-96be2a8945a4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 29 12:29:22 np0005601226 nova_compute[239456]: 2026-01-29 17:29:22.718 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: ff3dd15f-f585-4406-8c70-96be2a8945a4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 29 12:29:22 np0005601226 nova_compute[239456]: 2026-01-29 17:29:22.718 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769707762.6745522, ff3dd15f-f585-4406-8c70-96be2a8945a4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:29:22 np0005601226 nova_compute[239456]: 2026-01-29 17:29:22.718 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: ff3dd15f-f585-4406-8c70-96be2a8945a4] VM Paused (Lifecycle Event)#033[00m
Jan 29 12:29:22 np0005601226 nova_compute[239456]: 2026-01-29 17:29:22.726 239460 DEBUG nova.network.neutron [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] Successfully created port: e0301c38-d2c5-4766-8243-5fb16ad5b084 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 29 12:29:22 np0005601226 nova_compute[239456]: 2026-01-29 17:29:22.745 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: ff3dd15f-f585-4406-8c70-96be2a8945a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:29:22 np0005601226 nova_compute[239456]: 2026-01-29 17:29:22.748 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769707762.6805584, ff3dd15f-f585-4406-8c70-96be2a8945a4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:29:22 np0005601226 nova_compute[239456]: 2026-01-29 17:29:22.748 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: ff3dd15f-f585-4406-8c70-96be2a8945a4] VM Resumed (Lifecycle Event)#033[00m
Jan 29 12:29:22 np0005601226 nova_compute[239456]: 2026-01-29 17:29:22.758 239460 INFO nova.compute.manager [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: ff3dd15f-f585-4406-8c70-96be2a8945a4] Took 9.71 seconds to build instance.#033[00m
Jan 29 12:29:22 np0005601226 nova_compute[239456]: 2026-01-29 17:29:22.770 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: ff3dd15f-f585-4406-8c70-96be2a8945a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:29:22 np0005601226 nova_compute[239456]: 2026-01-29 17:29:22.774 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: ff3dd15f-f585-4406-8c70-96be2a8945a4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 29 12:29:22 np0005601226 nova_compute[239456]: 2026-01-29 17:29:22.777 239460 DEBUG oslo_concurrency.lockutils [None req-b9e4d0d2-f06c-455e-8ed7-e5268d411873 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "ff3dd15f-f585-4406-8c70-96be2a8945a4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.803s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:29:22 np0005601226 nova_compute[239456]: 2026-01-29 17:29:22.948 239460 DEBUG nova.compute.manager [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 29 12:29:22 np0005601226 nova_compute[239456]: 2026-01-29 17:29:22.950 239460 DEBUG nova.virt.libvirt.driver [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 29 12:29:22 np0005601226 nova_compute[239456]: 2026-01-29 17:29:22.951 239460 INFO nova.virt.libvirt.driver [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] Creating image(s)#033[00m
Jan 29 12:29:22 np0005601226 nova_compute[239456]: 2026-01-29 17:29:22.951 239460 DEBUG nova.virt.libvirt.driver [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Jan 29 12:29:22 np0005601226 nova_compute[239456]: 2026-01-29 17:29:22.951 239460 DEBUG nova.virt.libvirt.driver [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] Ensure instance console log exists: /var/lib/nova/instances/d8a8daad-7d66-42a9-b701-b191ca68564e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 29 12:29:22 np0005601226 nova_compute[239456]: 2026-01-29 17:29:22.952 239460 DEBUG oslo_concurrency.lockutils [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:29:22 np0005601226 nova_compute[239456]: 2026-01-29 17:29:22.952 239460 DEBUG oslo_concurrency.lockutils [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:29:22 np0005601226 nova_compute[239456]: 2026-01-29 17:29:22.953 239460 DEBUG oslo_concurrency.lockutils [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:29:23 np0005601226 nova_compute[239456]: 2026-01-29 17:29:23.461 239460 DEBUG nova.network.neutron [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] Successfully updated port: e0301c38-d2c5-4766-8243-5fb16ad5b084 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 29 12:29:23 np0005601226 nova_compute[239456]: 2026-01-29 17:29:23.485 239460 DEBUG oslo_concurrency.lockutils [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Acquiring lock "refresh_cache-d8a8daad-7d66-42a9-b701-b191ca68564e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:29:23 np0005601226 nova_compute[239456]: 2026-01-29 17:29:23.485 239460 DEBUG oslo_concurrency.lockutils [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Acquired lock "refresh_cache-d8a8daad-7d66-42a9-b701-b191ca68564e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:29:23 np0005601226 nova_compute[239456]: 2026-01-29 17:29:23.485 239460 DEBUG nova.network.neutron [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 29 12:29:23 np0005601226 nova_compute[239456]: 2026-01-29 17:29:23.608 239460 DEBUG nova.compute.manager [req-ac7ae0b5-2963-4c8e-baa8-2c93441770d5 req-2449add1-251a-4dbb-9752-c271283486d1 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] Received event network-changed-e0301c38-d2c5-4766-8243-5fb16ad5b084 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:29:23 np0005601226 nova_compute[239456]: 2026-01-29 17:29:23.608 239460 DEBUG nova.compute.manager [req-ac7ae0b5-2963-4c8e-baa8-2c93441770d5 req-2449add1-251a-4dbb-9752-c271283486d1 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] Refreshing instance network info cache due to event network-changed-e0301c38-d2c5-4766-8243-5fb16ad5b084. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 29 12:29:23 np0005601226 nova_compute[239456]: 2026-01-29 17:29:23.608 239460 DEBUG oslo_concurrency.lockutils [req-ac7ae0b5-2963-4c8e-baa8-2c93441770d5 req-2449add1-251a-4dbb-9752-c271283486d1 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "refresh_cache-d8a8daad-7d66-42a9-b701-b191ca68564e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:29:23 np0005601226 nova_compute[239456]: 2026-01-29 17:29:23.855 239460 DEBUG nova.network.neutron [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 29 12:29:23 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1598: 305 pgs: 305 active+clean; 281 MiB data, 526 MiB used, 59 GiB / 60 GiB avail; 145 KiB/s rd, 11 MiB/s wr, 87 op/s
Jan 29 12:29:24 np0005601226 nova_compute[239456]: 2026-01-29 17:29:24.001 239460 DEBUG nova.compute.manager [req-22e05c0b-5eb0-428a-8a0a-5f1a42f1a2ec req-54c92d6a-2807-4bb2-95dc-b667a2ce0beb 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ff3dd15f-f585-4406-8c70-96be2a8945a4] Received event network-vif-plugged-3cc3f8fa-0d50-4ddf-b2f1-086d9c2e1f4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:29:24 np0005601226 nova_compute[239456]: 2026-01-29 17:29:24.001 239460 DEBUG oslo_concurrency.lockutils [req-22e05c0b-5eb0-428a-8a0a-5f1a42f1a2ec req-54c92d6a-2807-4bb2-95dc-b667a2ce0beb 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "ff3dd15f-f585-4406-8c70-96be2a8945a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:29:24 np0005601226 nova_compute[239456]: 2026-01-29 17:29:24.001 239460 DEBUG oslo_concurrency.lockutils [req-22e05c0b-5eb0-428a-8a0a-5f1a42f1a2ec req-54c92d6a-2807-4bb2-95dc-b667a2ce0beb 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "ff3dd15f-f585-4406-8c70-96be2a8945a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:29:24 np0005601226 nova_compute[239456]: 2026-01-29 17:29:24.001 239460 DEBUG oslo_concurrency.lockutils [req-22e05c0b-5eb0-428a-8a0a-5f1a42f1a2ec req-54c92d6a-2807-4bb2-95dc-b667a2ce0beb 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "ff3dd15f-f585-4406-8c70-96be2a8945a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:29:24 np0005601226 nova_compute[239456]: 2026-01-29 17:29:24.002 239460 DEBUG nova.compute.manager [req-22e05c0b-5eb0-428a-8a0a-5f1a42f1a2ec req-54c92d6a-2807-4bb2-95dc-b667a2ce0beb 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ff3dd15f-f585-4406-8c70-96be2a8945a4] No waiting events found dispatching network-vif-plugged-3cc3f8fa-0d50-4ddf-b2f1-086d9c2e1f4c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 29 12:29:24 np0005601226 nova_compute[239456]: 2026-01-29 17:29:24.002 239460 WARNING nova.compute.manager [req-22e05c0b-5eb0-428a-8a0a-5f1a42f1a2ec req-54c92d6a-2807-4bb2-95dc-b667a2ce0beb 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ff3dd15f-f585-4406-8c70-96be2a8945a4] Received unexpected event network-vif-plugged-3cc3f8fa-0d50-4ddf-b2f1-086d9c2e1f4c for instance with vm_state active and task_state None.#033[00m
Jan 29 12:29:24 np0005601226 nova_compute[239456]: 2026-01-29 17:29:24.836 239460 DEBUG nova.network.neutron [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] Updating instance_info_cache with network_info: [{"id": "e0301c38-d2c5-4766-8243-5fb16ad5b084", "address": "fa:16:3e:61:34:a1", "network": {"id": "25cf1715-f178-4f65-be7c-cf203c28f072", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-516658087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c74297072cc041019fc7ff4bff1a0f08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0301c38-d2", "ovs_interfaceid": "e0301c38-d2c5-4766-8243-5fb16ad5b084", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:29:24 np0005601226 nova_compute[239456]: 2026-01-29 17:29:24.853 239460 DEBUG oslo_concurrency.lockutils [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Releasing lock "refresh_cache-d8a8daad-7d66-42a9-b701-b191ca68564e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:29:24 np0005601226 nova_compute[239456]: 2026-01-29 17:29:24.854 239460 DEBUG nova.compute.manager [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] Instance network_info: |[{"id": "e0301c38-d2c5-4766-8243-5fb16ad5b084", "address": "fa:16:3e:61:34:a1", "network": {"id": "25cf1715-f178-4f65-be7c-cf203c28f072", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-516658087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c74297072cc041019fc7ff4bff1a0f08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0301c38-d2", "ovs_interfaceid": "e0301c38-d2c5-4766-8243-5fb16ad5b084", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 29 12:29:24 np0005601226 nova_compute[239456]: 2026-01-29 17:29:24.854 239460 DEBUG oslo_concurrency.lockutils [req-ac7ae0b5-2963-4c8e-baa8-2c93441770d5 req-2449add1-251a-4dbb-9752-c271283486d1 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquired lock "refresh_cache-d8a8daad-7d66-42a9-b701-b191ca68564e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:29:24 np0005601226 nova_compute[239456]: 2026-01-29 17:29:24.854 239460 DEBUG nova.network.neutron [req-ac7ae0b5-2963-4c8e-baa8-2c93441770d5 req-2449add1-251a-4dbb-9752-c271283486d1 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] Refreshing network info cache for port e0301c38-d2c5-4766-8243-5fb16ad5b084 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 29 12:29:24 np0005601226 nova_compute[239456]: 2026-01-29 17:29:24.857 239460 DEBUG nova.virt.libvirt.driver [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] Start _get_guest_xml network_info=[{"id": "e0301c38-d2c5-4766-8243-5fb16ad5b084", "address": "fa:16:3e:61:34:a1", "network": {"id": "25cf1715-f178-4f65-be7c-cf203c28f072", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-516658087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c74297072cc041019fc7ff4bff1a0f08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0301c38-d2", "ovs_interfaceid": "e0301c38-d2c5-4766-8243-5fb16ad5b084", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'guest_format': None, 'mount_device': '/dev/vda', 'device_type': 'disk', 'disk_bus': 'virtio', 'attachment_id': 'cc63efb9-13f0-45b7-907e-b4d256e6191d', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-980dbcd8-86dd-412e-92c6-97f0c6da44c6', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '980dbcd8-86dd-412e-92c6-97f0c6da44c6', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'd8a8daad-7d66-42a9-b701-b191ca68564e', 'attached_at': '', 'detached_at': '', 'volume_id': '980dbcd8-86dd-412e-92c6-97f0c6da44c6', 'serial': '980dbcd8-86dd-412e-92c6-97f0c6da44c6'}, 'delete_on_termination': False, 'boot_index': 0, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 29 12:29:24 np0005601226 nova_compute[239456]: 2026-01-29 17:29:24.865 239460 WARNING nova.virt.libvirt.driver [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 29 12:29:24 np0005601226 nova_compute[239456]: 2026-01-29 17:29:24.872 239460 DEBUG nova.virt.libvirt.host [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 29 12:29:24 np0005601226 nova_compute[239456]: 2026-01-29 17:29:24.873 239460 DEBUG nova.virt.libvirt.host [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 29 12:29:24 np0005601226 nova_compute[239456]: 2026-01-29 17:29:24.879 239460 DEBUG nova.virt.libvirt.host [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 29 12:29:24 np0005601226 nova_compute[239456]: 2026-01-29 17:29:24.880 239460 DEBUG nova.virt.libvirt.host [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 29 12:29:24 np0005601226 nova_compute[239456]: 2026-01-29 17:29:24.880 239460 DEBUG nova.virt.libvirt.driver [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 29 12:29:24 np0005601226 nova_compute[239456]: 2026-01-29 17:29:24.880 239460 DEBUG nova.virt.hardware [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-29T17:13:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='e367d44e-e23e-4b8e-90d1-56d09c8403b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 29 12:29:24 np0005601226 nova_compute[239456]: 2026-01-29 17:29:24.881 239460 DEBUG nova.virt.hardware [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 29 12:29:24 np0005601226 nova_compute[239456]: 2026-01-29 17:29:24.881 239460 DEBUG nova.virt.hardware [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 29 12:29:24 np0005601226 nova_compute[239456]: 2026-01-29 17:29:24.881 239460 DEBUG nova.virt.hardware [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 29 12:29:24 np0005601226 nova_compute[239456]: 2026-01-29 17:29:24.881 239460 DEBUG nova.virt.hardware [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 29 12:29:24 np0005601226 nova_compute[239456]: 2026-01-29 17:29:24.881 239460 DEBUG nova.virt.hardware [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 29 12:29:24 np0005601226 nova_compute[239456]: 2026-01-29 17:29:24.881 239460 DEBUG nova.virt.hardware [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 29 12:29:24 np0005601226 nova_compute[239456]: 2026-01-29 17:29:24.881 239460 DEBUG nova.virt.hardware [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 29 12:29:24 np0005601226 nova_compute[239456]: 2026-01-29 17:29:24.883 239460 DEBUG nova.virt.hardware [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 29 12:29:24 np0005601226 nova_compute[239456]: 2026-01-29 17:29:24.884 239460 DEBUG nova.virt.hardware [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 29 12:29:24 np0005601226 nova_compute[239456]: 2026-01-29 17:29:24.884 239460 DEBUG nova.virt.hardware [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 29 12:29:24 np0005601226 podman[265672]: 2026-01-29 17:29:24.896462367 +0000 UTC m=+0.062601976 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 29 12:29:24 np0005601226 nova_compute[239456]: 2026-01-29 17:29:24.912 239460 DEBUG nova.storage.rbd_utils [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] rbd image d8a8daad-7d66-42a9-b701-b191ca68564e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:29:24 np0005601226 nova_compute[239456]: 2026-01-29 17:29:24.920 239460 DEBUG oslo_concurrency.processutils [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:29:24 np0005601226 podman[265673]: 2026-01-29 17:29:24.947869322 +0000 UTC m=+0.113933449 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, 
org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 29 12:29:25 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:29:25 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3982685962' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:29:25 np0005601226 nova_compute[239456]: 2026-01-29 17:29:25.514 239460 DEBUG oslo_concurrency.processutils [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:29:25 np0005601226 nova_compute[239456]: 2026-01-29 17:29:25.699 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:29:25 np0005601226 nova_compute[239456]: 2026-01-29 17:29:25.804 239460 DEBUG os_brick.encryptors [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Using volume encryption metadata '{'encryption_key_id': 'f4c81305-2489-445f-884c-2b44511ff287', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-980dbcd8-86dd-412e-92c6-97f0c6da44c6', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '980dbcd8-86dd-412e-92c6-97f0c6da44c6', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'd8a8daad-7d66-42a9-b701-b191ca68564e', 'attached_at': '', 'detached_at': '', 'volume_id': '980dbcd8-86dd-412e-92c6-97f0c6da44c6', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Jan 29 12:29:25 np0005601226 nova_compute[239456]: 2026-01-29 17:29:25.807 239460 DEBUG barbicanclient.client [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163#033[00m
Jan 29 12:29:25 np0005601226 nova_compute[239456]: 2026-01-29 17:29:25.825 239460 DEBUG barbicanclient.v1.secrets [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/f4c81305-2489-445f-884c-2b44511ff287 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514#033[00m
Jan 29 12:29:25 np0005601226 nova_compute[239456]: 2026-01-29 17:29:25.826 239460 INFO barbicanclient.base [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Calculated Secrets uuid ref: secrets/f4c81305-2489-445f-884c-2b44511ff287#033[00m
Jan 29 12:29:25 np0005601226 nova_compute[239456]: 2026-01-29 17:29:25.846 239460 DEBUG barbicanclient.client [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:29:25 np0005601226 nova_compute[239456]: 2026-01-29 17:29:25.847 239460 INFO barbicanclient.base [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Calculated Secrets uuid ref: secrets/f4c81305-2489-445f-884c-2b44511ff287#033[00m
Jan 29 12:29:25 np0005601226 nova_compute[239456]: 2026-01-29 17:29:25.873 239460 DEBUG barbicanclient.client [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:29:25 np0005601226 nova_compute[239456]: 2026-01-29 17:29:25.874 239460 INFO barbicanclient.base [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Calculated Secrets uuid ref: secrets/f4c81305-2489-445f-884c-2b44511ff287#033[00m
Jan 29 12:29:25 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1599: 305 pgs: 305 active+clean; 281 MiB data, 527 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 7.4 MiB/s wr, 127 op/s
Jan 29 12:29:25 np0005601226 nova_compute[239456]: 2026-01-29 17:29:25.906 239460 DEBUG barbicanclient.client [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:29:25 np0005601226 nova_compute[239456]: 2026-01-29 17:29:25.907 239460 INFO barbicanclient.base [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Calculated Secrets uuid ref: secrets/f4c81305-2489-445f-884c-2b44511ff287#033[00m
Jan 29 12:29:25 np0005601226 nova_compute[239456]: 2026-01-29 17:29:25.937 239460 DEBUG barbicanclient.client [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:29:25 np0005601226 nova_compute[239456]: 2026-01-29 17:29:25.938 239460 INFO barbicanclient.base [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Calculated Secrets uuid ref: secrets/f4c81305-2489-445f-884c-2b44511ff287#033[00m
Jan 29 12:29:25 np0005601226 nova_compute[239456]: 2026-01-29 17:29:25.984 239460 DEBUG barbicanclient.client [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:29:25 np0005601226 nova_compute[239456]: 2026-01-29 17:29:25.984 239460 INFO barbicanclient.base [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Calculated Secrets uuid ref: secrets/f4c81305-2489-445f-884c-2b44511ff287#033[00m
Jan 29 12:29:26 np0005601226 nova_compute[239456]: 2026-01-29 17:29:26.019 239460 DEBUG barbicanclient.client [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:29:26 np0005601226 nova_compute[239456]: 2026-01-29 17:29:26.020 239460 INFO barbicanclient.base [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Calculated Secrets uuid ref: secrets/f4c81305-2489-445f-884c-2b44511ff287#033[00m
Jan 29 12:29:26 np0005601226 nova_compute[239456]: 2026-01-29 17:29:26.047 239460 DEBUG barbicanclient.client [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:29:26 np0005601226 nova_compute[239456]: 2026-01-29 17:29:26.048 239460 INFO barbicanclient.base [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Calculated Secrets uuid ref: secrets/f4c81305-2489-445f-884c-2b44511ff287#033[00m
Jan 29 12:29:26 np0005601226 nova_compute[239456]: 2026-01-29 17:29:26.069 239460 DEBUG barbicanclient.client [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:29:26 np0005601226 nova_compute[239456]: 2026-01-29 17:29:26.070 239460 INFO barbicanclient.base [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Calculated Secrets uuid ref: secrets/f4c81305-2489-445f-884c-2b44511ff287#033[00m
Jan 29 12:29:26 np0005601226 nova_compute[239456]: 2026-01-29 17:29:26.100 239460 DEBUG barbicanclient.client [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:29:26 np0005601226 nova_compute[239456]: 2026-01-29 17:29:26.101 239460 INFO barbicanclient.base [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Calculated Secrets uuid ref: secrets/f4c81305-2489-445f-884c-2b44511ff287#033[00m
Jan 29 12:29:26 np0005601226 nova_compute[239456]: 2026-01-29 17:29:26.120 239460 DEBUG nova.compute.manager [req-13e538b1-07cf-43d6-b52b-172b0f4eacf5 req-642d4b52-c221-4ed6-a6de-83a83625b911 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ff3dd15f-f585-4406-8c70-96be2a8945a4] Received event network-changed-3cc3f8fa-0d50-4ddf-b2f1-086d9c2e1f4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:29:26 np0005601226 nova_compute[239456]: 2026-01-29 17:29:26.120 239460 DEBUG nova.compute.manager [req-13e538b1-07cf-43d6-b52b-172b0f4eacf5 req-642d4b52-c221-4ed6-a6de-83a83625b911 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ff3dd15f-f585-4406-8c70-96be2a8945a4] Refreshing instance network info cache due to event network-changed-3cc3f8fa-0d50-4ddf-b2f1-086d9c2e1f4c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 29 12:29:26 np0005601226 nova_compute[239456]: 2026-01-29 17:29:26.121 239460 DEBUG oslo_concurrency.lockutils [req-13e538b1-07cf-43d6-b52b-172b0f4eacf5 req-642d4b52-c221-4ed6-a6de-83a83625b911 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "refresh_cache-ff3dd15f-f585-4406-8c70-96be2a8945a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:29:26 np0005601226 nova_compute[239456]: 2026-01-29 17:29:26.121 239460 DEBUG oslo_concurrency.lockutils [req-13e538b1-07cf-43d6-b52b-172b0f4eacf5 req-642d4b52-c221-4ed6-a6de-83a83625b911 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquired lock "refresh_cache-ff3dd15f-f585-4406-8c70-96be2a8945a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:29:26 np0005601226 nova_compute[239456]: 2026-01-29 17:29:26.121 239460 DEBUG nova.network.neutron [req-13e538b1-07cf-43d6-b52b-172b0f4eacf5 req-642d4b52-c221-4ed6-a6de-83a83625b911 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ff3dd15f-f585-4406-8c70-96be2a8945a4] Refreshing network info cache for port 3cc3f8fa-0d50-4ddf-b2f1-086d9c2e1f4c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 29 12:29:26 np0005601226 nova_compute[239456]: 2026-01-29 17:29:26.123 239460 DEBUG barbicanclient.client [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:29:26 np0005601226 nova_compute[239456]: 2026-01-29 17:29:26.123 239460 INFO barbicanclient.base [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Calculated Secrets uuid ref: secrets/f4c81305-2489-445f-884c-2b44511ff287#033[00m
Jan 29 12:29:26 np0005601226 nova_compute[239456]: 2026-01-29 17:29:26.147 239460 DEBUG barbicanclient.client [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:29:26 np0005601226 nova_compute[239456]: 2026-01-29 17:29:26.147 239460 INFO barbicanclient.base [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Calculated Secrets uuid ref: secrets/f4c81305-2489-445f-884c-2b44511ff287#033[00m
Jan 29 12:29:26 np0005601226 nova_compute[239456]: 2026-01-29 17:29:26.172 239460 DEBUG barbicanclient.client [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:29:26 np0005601226 nova_compute[239456]: 2026-01-29 17:29:26.173 239460 INFO barbicanclient.base [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Calculated Secrets uuid ref: secrets/f4c81305-2489-445f-884c-2b44511ff287#033[00m
Jan 29 12:29:26 np0005601226 nova_compute[239456]: 2026-01-29 17:29:26.183 239460 DEBUG nova.network.neutron [req-ac7ae0b5-2963-4c8e-baa8-2c93441770d5 req-2449add1-251a-4dbb-9752-c271283486d1 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] Updated VIF entry in instance network info cache for port e0301c38-d2c5-4766-8243-5fb16ad5b084. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 29 12:29:26 np0005601226 nova_compute[239456]: 2026-01-29 17:29:26.184 239460 DEBUG nova.network.neutron [req-ac7ae0b5-2963-4c8e-baa8-2c93441770d5 req-2449add1-251a-4dbb-9752-c271283486d1 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] Updating instance_info_cache with network_info: [{"id": "e0301c38-d2c5-4766-8243-5fb16ad5b084", "address": "fa:16:3e:61:34:a1", "network": {"id": "25cf1715-f178-4f65-be7c-cf203c28f072", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-516658087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c74297072cc041019fc7ff4bff1a0f08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0301c38-d2", "ovs_interfaceid": "e0301c38-d2c5-4766-8243-5fb16ad5b084", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:29:26 np0005601226 nova_compute[239456]: 2026-01-29 17:29:26.206 239460 DEBUG barbicanclient.client [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:29:26 np0005601226 nova_compute[239456]: 2026-01-29 17:29:26.206 239460 INFO barbicanclient.base [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Calculated Secrets uuid ref: secrets/f4c81305-2489-445f-884c-2b44511ff287#033[00m
Jan 29 12:29:26 np0005601226 nova_compute[239456]: 2026-01-29 17:29:26.222 239460 DEBUG oslo_concurrency.lockutils [req-ac7ae0b5-2963-4c8e-baa8-2c93441770d5 req-2449add1-251a-4dbb-9752-c271283486d1 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Releasing lock "refresh_cache-d8a8daad-7d66-42a9-b701-b191ca68564e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:29:26 np0005601226 nova_compute[239456]: 2026-01-29 17:29:26.228 239460 DEBUG barbicanclient.client [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:29:26 np0005601226 nova_compute[239456]: 2026-01-29 17:29:26.228 239460 INFO barbicanclient.base [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Calculated Secrets uuid ref: secrets/f4c81305-2489-445f-884c-2b44511ff287#033[00m
Jan 29 12:29:26 np0005601226 nova_compute[239456]: 2026-01-29 17:29:26.248 239460 DEBUG barbicanclient.client [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:29:26 np0005601226 nova_compute[239456]: 2026-01-29 17:29:26.249 239460 DEBUG nova.virt.libvirt.host [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Secret XML: <secret ephemeral="no" private="no">
Jan 29 12:29:26 np0005601226 nova_compute[239456]:  <usage type="volume">
Jan 29 12:29:26 np0005601226 nova_compute[239456]:    <volume>980dbcd8-86dd-412e-92c6-97f0c6da44c6</volume>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:  </usage>
Jan 29 12:29:26 np0005601226 nova_compute[239456]: </secret>
Jan 29 12:29:26 np0005601226 nova_compute[239456]: create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131#033[00m
Jan 29 12:29:26 np0005601226 nova_compute[239456]: 2026-01-29 17:29:26.275 239460 DEBUG nova.virt.libvirt.vif [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-29T17:29:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-394952439',display_name='tempest-TransferEncryptedVolumeTest-server-394952439',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-394952439',id=19,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDgD45Dm8MtAL32WS9smIaOmM8jSZyGBgHt0KuuP4tAN+PaFbPD2gY+bvOWoixBRmKRVNeRJWxYw4x1d/JqSF+Q3lf37438lc/Bafac9K9BPV+ZkjGBum9rZonwt+cLWAQ==',key_name='tempest-TransferEncryptedVolumeTest-1822701009',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c74297072cc041019fc7ff4bff1a0f08',ramdisk_id='',reservation_id='r-q835wci7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1262552887',owner_user_name='tempest-TransferEncryptedVolumeTest-1262552887-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-29T17:29:21Z,user_data=None,user_id='4f278bc1afe946ca991a0203a74c5a7f',uuid=d8a8daad-7d66-42a9-b701-b191ca68564e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e0301c38-d2c5-4766-8243-5fb16ad5b084", "address": "fa:16:3e:61:34:a1", "network": {"id": "25cf1715-f178-4f65-be7c-cf203c28f072", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-516658087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "c74297072cc041019fc7ff4bff1a0f08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0301c38-d2", "ovs_interfaceid": "e0301c38-d2c5-4766-8243-5fb16ad5b084", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 29 12:29:26 np0005601226 nova_compute[239456]: 2026-01-29 17:29:26.276 239460 DEBUG nova.network.os_vif_util [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Converting VIF {"id": "e0301c38-d2c5-4766-8243-5fb16ad5b084", "address": "fa:16:3e:61:34:a1", "network": {"id": "25cf1715-f178-4f65-be7c-cf203c28f072", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-516658087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c74297072cc041019fc7ff4bff1a0f08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0301c38-d2", "ovs_interfaceid": "e0301c38-d2c5-4766-8243-5fb16ad5b084", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 29 12:29:26 np0005601226 nova_compute[239456]: 2026-01-29 17:29:26.277 239460 DEBUG nova.network.os_vif_util [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:34:a1,bridge_name='br-int',has_traffic_filtering=True,id=e0301c38-d2c5-4766-8243-5fb16ad5b084,network=Network(25cf1715-f178-4f65-be7c-cf203c28f072),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0301c38-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 29 12:29:26 np0005601226 nova_compute[239456]: 2026-01-29 17:29:26.279 239460 DEBUG nova.objects.instance [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Lazy-loading 'pci_devices' on Instance uuid d8a8daad-7d66-42a9-b701-b191ca68564e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:29:26 np0005601226 nova_compute[239456]: 2026-01-29 17:29:26.292 239460 DEBUG nova.virt.libvirt.driver [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] End _get_guest_xml xml=<domain type="kvm">
Jan 29 12:29:26 np0005601226 nova_compute[239456]:  <uuid>d8a8daad-7d66-42a9-b701-b191ca68564e</uuid>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:  <name>instance-00000013</name>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:  <memory>131072</memory>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:  <vcpu>1</vcpu>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:  <metadata>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 29 12:29:26 np0005601226 nova_compute[239456]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:      <nova:name>tempest-TransferEncryptedVolumeTest-server-394952439</nova:name>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:      <nova:creationTime>2026-01-29 17:29:24</nova:creationTime>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:      <nova:flavor name="m1.nano">
Jan 29 12:29:26 np0005601226 nova_compute[239456]:        <nova:memory>128</nova:memory>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:        <nova:disk>1</nova:disk>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:        <nova:swap>0</nova:swap>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:        <nova:ephemeral>0</nova:ephemeral>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:        <nova:vcpus>1</nova:vcpus>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:      </nova:flavor>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:      <nova:owner>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:        <nova:user uuid="4f278bc1afe946ca991a0203a74c5a7f">tempest-TransferEncryptedVolumeTest-1262552887-project-member</nova:user>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:        <nova:project uuid="c74297072cc041019fc7ff4bff1a0f08">tempest-TransferEncryptedVolumeTest-1262552887</nova:project>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:      </nova:owner>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:      <nova:ports>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:        <nova:port uuid="e0301c38-d2c5-4766-8243-5fb16ad5b084">
Jan 29 12:29:26 np0005601226 nova_compute[239456]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:        </nova:port>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:      </nova:ports>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:    </nova:instance>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:  </metadata>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:  <sysinfo type="smbios">
Jan 29 12:29:26 np0005601226 nova_compute[239456]:    <system>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:      <entry name="manufacturer">RDO</entry>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:      <entry name="product">OpenStack Compute</entry>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:      <entry name="serial">d8a8daad-7d66-42a9-b701-b191ca68564e</entry>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:      <entry name="uuid">d8a8daad-7d66-42a9-b701-b191ca68564e</entry>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:      <entry name="family">Virtual Machine</entry>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:    </system>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:  </sysinfo>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:  <os>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:    <boot dev="hd"/>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:    <smbios mode="sysinfo"/>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:  </os>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:  <features>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:    <acpi/>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:    <apic/>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:    <vmcoreinfo/>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:  </features>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:  <clock offset="utc">
Jan 29 12:29:26 np0005601226 nova_compute[239456]:    <timer name="pit" tickpolicy="delay"/>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:    <timer name="hpet" present="no"/>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:  </clock>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:  <cpu mode="host-model" match="exact">
Jan 29 12:29:26 np0005601226 nova_compute[239456]:    <topology sockets="1" cores="1" threads="1"/>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:  </cpu>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:  <devices>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:    <disk type="network" device="cdrom">
Jan 29 12:29:26 np0005601226 nova_compute[239456]:      <driver type="raw" cache="none"/>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:      <source protocol="rbd" name="vms/d8a8daad-7d66-42a9-b701-b191ca68564e_disk.config">
Jan 29 12:29:26 np0005601226 nova_compute[239456]:        <host name="192.168.122.100" port="6789"/>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:      </source>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:      <auth username="openstack">
Jan 29 12:29:26 np0005601226 nova_compute[239456]:        <secret type="ceph" uuid="cc5c72e3-31e0-58b9-8731-456117d38f4a"/>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:      </auth>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:      <target dev="sda" bus="sata"/>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:    </disk>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:    <disk type="network" device="disk">
Jan 29 12:29:26 np0005601226 nova_compute[239456]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:      <source protocol="rbd" name="volumes/volume-980dbcd8-86dd-412e-92c6-97f0c6da44c6">
Jan 29 12:29:26 np0005601226 nova_compute[239456]:        <host name="192.168.122.100" port="6789"/>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:      </source>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:      <auth username="openstack">
Jan 29 12:29:26 np0005601226 nova_compute[239456]:        <secret type="ceph" uuid="cc5c72e3-31e0-58b9-8731-456117d38f4a"/>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:      </auth>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:      <target dev="vda" bus="virtio"/>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:      <serial>980dbcd8-86dd-412e-92c6-97f0c6da44c6</serial>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:      <encryption format="luks">
Jan 29 12:29:26 np0005601226 nova_compute[239456]:        <secret type="passphrase" uuid="a4202d2b-8a54-4c4b-8469-6b079f6eb6b8"/>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:      </encryption>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:    </disk>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:    <interface type="ethernet">
Jan 29 12:29:26 np0005601226 nova_compute[239456]:      <mac address="fa:16:3e:61:34:a1"/>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:      <model type="virtio"/>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:      <driver name="vhost" rx_queue_size="512"/>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:      <mtu size="1442"/>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:      <target dev="tape0301c38-d2"/>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:    </interface>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:    <serial type="pty">
Jan 29 12:29:26 np0005601226 nova_compute[239456]:      <log file="/var/lib/nova/instances/d8a8daad-7d66-42a9-b701-b191ca68564e/console.log" append="off"/>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:    </serial>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:    <video>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:      <model type="virtio"/>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:    </video>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:    <input type="tablet" bus="usb"/>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:    <rng model="virtio">
Jan 29 12:29:26 np0005601226 nova_compute[239456]:      <backend model="random">/dev/urandom</backend>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:    </rng>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root"/>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:    <controller type="usb" index="0"/>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:    <memballoon model="virtio">
Jan 29 12:29:26 np0005601226 nova_compute[239456]:      <stats period="10"/>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:    </memballoon>
Jan 29 12:29:26 np0005601226 nova_compute[239456]:  </devices>
Jan 29 12:29:26 np0005601226 nova_compute[239456]: </domain>
Jan 29 12:29:26 np0005601226 nova_compute[239456]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 29 12:29:26 np0005601226 nova_compute[239456]: 2026-01-29 17:29:26.298 239460 DEBUG nova.compute.manager [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] Preparing to wait for external event network-vif-plugged-e0301c38-d2c5-4766-8243-5fb16ad5b084 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 29 12:29:26 np0005601226 nova_compute[239456]: 2026-01-29 17:29:26.299 239460 DEBUG oslo_concurrency.lockutils [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Acquiring lock "d8a8daad-7d66-42a9-b701-b191ca68564e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:29:26 np0005601226 nova_compute[239456]: 2026-01-29 17:29:26.299 239460 DEBUG oslo_concurrency.lockutils [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Lock "d8a8daad-7d66-42a9-b701-b191ca68564e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:29:26 np0005601226 nova_compute[239456]: 2026-01-29 17:29:26.299 239460 DEBUG oslo_concurrency.lockutils [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Lock "d8a8daad-7d66-42a9-b701-b191ca68564e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:29:26 np0005601226 nova_compute[239456]: 2026-01-29 17:29:26.300 239460 DEBUG nova.virt.libvirt.vif [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-29T17:29:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-394952439',display_name='tempest-TransferEncryptedVolumeTest-server-394952439',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-394952439',id=19,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDgD45Dm8MtAL32WS9smIaOmM8jSZyGBgHt0KuuP4tAN+PaFbPD2gY+bvOWoixBRmKRVNeRJWxYw4x1d/JqSF+Q3lf37438lc/Bafac9K9BPV+ZkjGBum9rZonwt+cLWAQ==',key_name='tempest-TransferEncryptedVolumeTest-1822701009',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c74297072cc041019fc7ff4bff1a0f08',ramdisk_id='',reservation_id='r-q835wci7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1262552887',owner_user_name='tempest-TransferEncryptedVolumeTest-1262552887-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-29T17:29:21Z,user_data=None,user_id='4f278bc1afe946ca991a0203a74c5a7f',uuid=d8a8daad-7d66-42a9-b701-b191ca68564e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e0301c38-d2c5-4766-8243-5fb16ad5b084", "address": "fa:16:3e:61:34:a1", "network": {"id": "25cf1715-f178-4f65-be7c-cf203c28f072", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-516658087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "c74297072cc041019fc7ff4bff1a0f08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0301c38-d2", "ovs_interfaceid": "e0301c38-d2c5-4766-8243-5fb16ad5b084", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 29 12:29:26 np0005601226 nova_compute[239456]: 2026-01-29 17:29:26.301 239460 DEBUG nova.network.os_vif_util [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Converting VIF {"id": "e0301c38-d2c5-4766-8243-5fb16ad5b084", "address": "fa:16:3e:61:34:a1", "network": {"id": "25cf1715-f178-4f65-be7c-cf203c28f072", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-516658087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c74297072cc041019fc7ff4bff1a0f08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0301c38-d2", "ovs_interfaceid": "e0301c38-d2c5-4766-8243-5fb16ad5b084", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 29 12:29:26 np0005601226 nova_compute[239456]: 2026-01-29 17:29:26.301 239460 DEBUG nova.network.os_vif_util [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:34:a1,bridge_name='br-int',has_traffic_filtering=True,id=e0301c38-d2c5-4766-8243-5fb16ad5b084,network=Network(25cf1715-f178-4f65-be7c-cf203c28f072),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0301c38-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 29 12:29:26 np0005601226 nova_compute[239456]: 2026-01-29 17:29:26.302 239460 DEBUG os_vif [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:34:a1,bridge_name='br-int',has_traffic_filtering=True,id=e0301c38-d2c5-4766-8243-5fb16ad5b084,network=Network(25cf1715-f178-4f65-be7c-cf203c28f072),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0301c38-d2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 29 12:29:26 np0005601226 nova_compute[239456]: 2026-01-29 17:29:26.306 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:29:26 np0005601226 nova_compute[239456]: 2026-01-29 17:29:26.307 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:29:26 np0005601226 nova_compute[239456]: 2026-01-29 17:29:26.307 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 29 12:29:26 np0005601226 nova_compute[239456]: 2026-01-29 17:29:26.310 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:29:26 np0005601226 nova_compute[239456]: 2026-01-29 17:29:26.311 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape0301c38-d2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:29:26 np0005601226 nova_compute[239456]: 2026-01-29 17:29:26.312 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape0301c38-d2, col_values=(('external_ids', {'iface-id': 'e0301c38-d2c5-4766-8243-5fb16ad5b084', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:61:34:a1', 'vm-uuid': 'd8a8daad-7d66-42a9-b701-b191ca68564e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:29:26 np0005601226 nova_compute[239456]: 2026-01-29 17:29:26.313 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:29:26 np0005601226 NetworkManager[49020]: <info>  [1769707766.3154] manager: (tape0301c38-d2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/98)
Jan 29 12:29:26 np0005601226 nova_compute[239456]: 2026-01-29 17:29:26.316 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 29 12:29:26 np0005601226 nova_compute[239456]: 2026-01-29 17:29:26.320 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:29:26 np0005601226 nova_compute[239456]: 2026-01-29 17:29:26.322 239460 INFO os_vif [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:34:a1,bridge_name='br-int',has_traffic_filtering=True,id=e0301c38-d2c5-4766-8243-5fb16ad5b084,network=Network(25cf1715-f178-4f65-be7c-cf203c28f072),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0301c38-d2')#033[00m
Jan 29 12:29:26 np0005601226 nova_compute[239456]: 2026-01-29 17:29:26.380 239460 DEBUG nova.virt.libvirt.driver [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:29:26 np0005601226 nova_compute[239456]: 2026-01-29 17:29:26.381 239460 DEBUG nova.virt.libvirt.driver [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:29:26 np0005601226 nova_compute[239456]: 2026-01-29 17:29:26.382 239460 DEBUG nova.virt.libvirt.driver [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] No VIF found with MAC fa:16:3e:61:34:a1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 29 12:29:26 np0005601226 nova_compute[239456]: 2026-01-29 17:29:26.382 239460 INFO nova.virt.libvirt.driver [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] Using config drive#033[00m
Jan 29 12:29:26 np0005601226 nova_compute[239456]: 2026-01-29 17:29:26.401 239460 DEBUG nova.storage.rbd_utils [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] rbd image d8a8daad-7d66-42a9-b701-b191ca68564e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:29:26 np0005601226 nova_compute[239456]: 2026-01-29 17:29:26.429 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:29:26 np0005601226 nova_compute[239456]: 2026-01-29 17:29:26.684 239460 INFO nova.virt.libvirt.driver [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] Creating config drive at /var/lib/nova/instances/d8a8daad-7d66-42a9-b701-b191ca68564e/disk.config#033[00m
Jan 29 12:29:26 np0005601226 nova_compute[239456]: 2026-01-29 17:29:26.688 239460 DEBUG oslo_concurrency.processutils [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d8a8daad-7d66-42a9-b701-b191ca68564e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprelb14ba execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:29:26 np0005601226 nova_compute[239456]: 2026-01-29 17:29:26.808 239460 DEBUG oslo_concurrency.processutils [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d8a8daad-7d66-42a9-b701-b191ca68564e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprelb14ba" returned: 0 in 0.120s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:29:26 np0005601226 nova_compute[239456]: 2026-01-29 17:29:26.826 239460 DEBUG nova.storage.rbd_utils [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] rbd image d8a8daad-7d66-42a9-b701-b191ca68564e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:29:26 np0005601226 nova_compute[239456]: 2026-01-29 17:29:26.829 239460 DEBUG oslo_concurrency.processutils [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d8a8daad-7d66-42a9-b701-b191ca68564e/disk.config d8a8daad-7d66-42a9-b701-b191ca68564e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:29:26 np0005601226 nova_compute[239456]: 2026-01-29 17:29:26.943 239460 DEBUG oslo_concurrency.processutils [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d8a8daad-7d66-42a9-b701-b191ca68564e/disk.config d8a8daad-7d66-42a9-b701-b191ca68564e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.114s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:29:26 np0005601226 nova_compute[239456]: 2026-01-29 17:29:26.945 239460 INFO nova.virt.libvirt.driver [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] Deleting local config drive /var/lib/nova/instances/d8a8daad-7d66-42a9-b701-b191ca68564e/disk.config because it was imported into RBD.#033[00m
Jan 29 12:29:27 np0005601226 NetworkManager[49020]: <info>  [1769707767.0088] manager: (tape0301c38-d2): new Tun device (/org/freedesktop/NetworkManager/Devices/99)
Jan 29 12:29:27 np0005601226 kernel: tape0301c38-d2: entered promiscuous mode
Jan 29 12:29:27 np0005601226 nova_compute[239456]: 2026-01-29 17:29:27.014 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:29:27 np0005601226 ovn_controller[145556]: 2026-01-29T17:29:27Z|00173|binding|INFO|Claiming lport e0301c38-d2c5-4766-8243-5fb16ad5b084 for this chassis.
Jan 29 12:29:27 np0005601226 ovn_controller[145556]: 2026-01-29T17:29:27Z|00174|binding|INFO|e0301c38-d2c5-4766-8243-5fb16ad5b084: Claiming fa:16:3e:61:34:a1 10.100.0.8
Jan 29 12:29:27 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:29:27.023 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:61:34:a1 10.100.0.8'], port_security=['fa:16:3e:61:34:a1 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'd8a8daad-7d66-42a9-b701-b191ca68564e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-25cf1715-f178-4f65-be7c-cf203c28f072', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c74297072cc041019fc7ff4bff1a0f08', 'neutron:revision_number': '2', 'neutron:security_group_ids': '45c1bbdb-777c-4906-ac59-7f4e97f55f2c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3018d9f1-e4b1-490d-94c7-3ffd5dd36627, chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], logical_port=e0301c38-d2c5-4766-8243-5fb16ad5b084) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:29:27 np0005601226 nova_compute[239456]: 2026-01-29 17:29:27.021 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:29:27 np0005601226 ovn_controller[145556]: 2026-01-29T17:29:27Z|00175|binding|INFO|Setting lport e0301c38-d2c5-4766-8243-5fb16ad5b084 up in Southbound
Jan 29 12:29:27 np0005601226 ovn_controller[145556]: 2026-01-29T17:29:27Z|00176|binding|INFO|Setting lport e0301c38-d2c5-4766-8243-5fb16ad5b084 ovn-installed in OVS
Jan 29 12:29:27 np0005601226 nova_compute[239456]: 2026-01-29 17:29:27.024 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:29:27 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:29:27.024 155625 INFO neutron.agent.ovn.metadata.agent [-] Port e0301c38-d2c5-4766-8243-5fb16ad5b084 in datapath 25cf1715-f178-4f65-be7c-cf203c28f072 bound to our chassis#033[00m
Jan 29 12:29:27 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:29:27.025 155625 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 25cf1715-f178-4f65-be7c-cf203c28f072#033[00m
Jan 29 12:29:27 np0005601226 systemd-machined[207561]: New machine qemu-19-instance-00000013.
Jan 29 12:29:27 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:29:27.033 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[610c925b-3fa0-4901-9131-cd355e73314b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:29:27 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:29:27.033 155625 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap25cf1715-f1 in ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 29 12:29:27 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:29:27.035 246354 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap25cf1715-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 29 12:29:27 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:29:27.035 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[233367ff-064a-4348-a17e-ab7627f8e9c1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:29:27 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:29:27.036 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[762e0805-75bc-4164-9157-479b00b5a7da]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:29:27 np0005601226 systemd[1]: Started Virtual Machine qemu-19-instance-00000013.
Jan 29 12:29:27 np0005601226 systemd-udevd[265831]: Network interface NamePolicy= disabled on kernel command line.
Jan 29 12:29:27 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:29:27.047 156164 DEBUG oslo.privsep.daemon [-] privsep: reply[8a40a972-90c7-4f2e-b4c4-6d7a0f6ffb34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:29:27 np0005601226 NetworkManager[49020]: <info>  [1769707767.0576] device (tape0301c38-d2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 29 12:29:27 np0005601226 NetworkManager[49020]: <info>  [1769707767.0581] device (tape0301c38-d2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 29 12:29:27 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:29:27.058 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[d51c11d4-2b29-41f7-ad45-86405aff3f62]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:29:27 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:29:27.082 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[7cf6eee0-b8c3-48e8-b4e0-a6b38d07ec32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:29:27 np0005601226 NetworkManager[49020]: <info>  [1769707767.0872] manager: (tap25cf1715-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/100)
Jan 29 12:29:27 np0005601226 systemd-udevd[265836]: Network interface NamePolicy= disabled on kernel command line.
Jan 29 12:29:27 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:29:27.088 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[81c2071a-c184-4622-81e9-a34693450106]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:29:27 np0005601226 nova_compute[239456]: 2026-01-29 17:29:27.110 239460 DEBUG nova.network.neutron [req-13e538b1-07cf-43d6-b52b-172b0f4eacf5 req-642d4b52-c221-4ed6-a6de-83a83625b911 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ff3dd15f-f585-4406-8c70-96be2a8945a4] Updated VIF entry in instance network info cache for port 3cc3f8fa-0d50-4ddf-b2f1-086d9c2e1f4c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 29 12:29:27 np0005601226 nova_compute[239456]: 2026-01-29 17:29:27.111 239460 DEBUG nova.network.neutron [req-13e538b1-07cf-43d6-b52b-172b0f4eacf5 req-642d4b52-c221-4ed6-a6de-83a83625b911 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ff3dd15f-f585-4406-8c70-96be2a8945a4] Updating instance_info_cache with network_info: [{"id": "3cc3f8fa-0d50-4ddf-b2f1-086d9c2e1f4c", "address": "fa:16:3e:5c:f0:e1", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cc3f8fa-0d", "ovs_interfaceid": "3cc3f8fa-0d50-4ddf-b2f1-086d9c2e1f4c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:29:27 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:29:27.116 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[669ec5c8-1daf-44a8-939b-6460a2a3982b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:29:27 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:29:27.120 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[5183adda-f51d-4cb2-bf38-8b8c4b4d0fe2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:29:27 np0005601226 nova_compute[239456]: 2026-01-29 17:29:27.134 239460 DEBUG oslo_concurrency.lockutils [req-13e538b1-07cf-43d6-b52b-172b0f4eacf5 req-642d4b52-c221-4ed6-a6de-83a83625b911 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Releasing lock "refresh_cache-ff3dd15f-f585-4406-8c70-96be2a8945a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:29:27 np0005601226 NetworkManager[49020]: <info>  [1769707767.1430] device (tap25cf1715-f0): carrier: link connected
Jan 29 12:29:27 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:29:27 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:29:27.146 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[8ad7d287-1ca2-49f8-b041-6e7cf688f1d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:29:27 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:29:27.162 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[061ceba1-6da0-4d28-b353-9ad0905e4894]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap25cf1715-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:73:50:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 60], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 512064, 'reachable_time': 34948, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265862, 'error': None, 'target': 'ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:29:27 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:29:27.175 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[90486f87-c4e6-4d9d-9f25-ab91ec3d21d6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe73:50ea'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 512064, 'tstamp': 512064}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 265863, 'error': None, 'target': 'ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:29:27 np0005601226 nova_compute[239456]: 2026-01-29 17:29:27.184 239460 DEBUG nova.compute.manager [req-35777ac9-5cf5-4f47-9f10-aba0b671e011 req-dcd2d71d-0593-433f-9025-1db4ffa53bbf 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] Received event network-vif-plugged-e0301c38-d2c5-4766-8243-5fb16ad5b084 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:29:27 np0005601226 nova_compute[239456]: 2026-01-29 17:29:27.185 239460 DEBUG oslo_concurrency.lockutils [req-35777ac9-5cf5-4f47-9f10-aba0b671e011 req-dcd2d71d-0593-433f-9025-1db4ffa53bbf 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "d8a8daad-7d66-42a9-b701-b191ca68564e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:29:27 np0005601226 nova_compute[239456]: 2026-01-29 17:29:27.185 239460 DEBUG oslo_concurrency.lockutils [req-35777ac9-5cf5-4f47-9f10-aba0b671e011 req-dcd2d71d-0593-433f-9025-1db4ffa53bbf 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "d8a8daad-7d66-42a9-b701-b191ca68564e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:29:27 np0005601226 nova_compute[239456]: 2026-01-29 17:29:27.185 239460 DEBUG oslo_concurrency.lockutils [req-35777ac9-5cf5-4f47-9f10-aba0b671e011 req-dcd2d71d-0593-433f-9025-1db4ffa53bbf 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "d8a8daad-7d66-42a9-b701-b191ca68564e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:29:27 np0005601226 nova_compute[239456]: 2026-01-29 17:29:27.186 239460 DEBUG nova.compute.manager [req-35777ac9-5cf5-4f47-9f10-aba0b671e011 req-dcd2d71d-0593-433f-9025-1db4ffa53bbf 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] Processing event network-vif-plugged-e0301c38-d2c5-4766-8243-5fb16ad5b084 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 29 12:29:27 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:29:27.193 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[45a332ca-4773-4924-905b-6388f3b235cf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap25cf1715-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:73:50:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 60], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 512064, 'reachable_time': 34948, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 265864, 'error': None, 'target': 'ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:29:27 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:29:27.215 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[8ab8a7cb-ee1d-44e7-8448-dd806de6adee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:29:27 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:29:27.261 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[bb7c9ba0-3ee5-481a-ab95-b55973a43331]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:29:27 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:29:27.263 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap25cf1715-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:29:27 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:29:27.263 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 29 12:29:27 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:29:27.264 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap25cf1715-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:29:27 np0005601226 nova_compute[239456]: 2026-01-29 17:29:27.265 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:29:27 np0005601226 NetworkManager[49020]: <info>  [1769707767.2664] manager: (tap25cf1715-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/101)
Jan 29 12:29:27 np0005601226 kernel: tap25cf1715-f0: entered promiscuous mode
Jan 29 12:29:27 np0005601226 nova_compute[239456]: 2026-01-29 17:29:27.268 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:29:27 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:29:27.271 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap25cf1715-f0, col_values=(('external_ids', {'iface-id': '82a91bf5-9093-4cbd-bfe4-f5d4b5400077'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:29:27 np0005601226 nova_compute[239456]: 2026-01-29 17:29:27.272 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:29:27 np0005601226 ovn_controller[145556]: 2026-01-29T17:29:27Z|00177|binding|INFO|Releasing lport 82a91bf5-9093-4cbd-bfe4-f5d4b5400077 from this chassis (sb_readonly=0)
Jan 29 12:29:27 np0005601226 nova_compute[239456]: 2026-01-29 17:29:27.273 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:29:27 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:29:27.276 155625 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/25cf1715-f178-4f65-be7c-cf203c28f072.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/25cf1715-f178-4f65-be7c-cf203c28f072.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 29 12:29:27 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:29:27.277 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[c6ae16d3-8db7-4b53-9471-880b0ee865f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:29:27 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:29:27.278 155625 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 29 12:29:27 np0005601226 ovn_metadata_agent[155620]: global
Jan 29 12:29:27 np0005601226 ovn_metadata_agent[155620]:    log         /dev/log local0 debug
Jan 29 12:29:27 np0005601226 ovn_metadata_agent[155620]:    log-tag     haproxy-metadata-proxy-25cf1715-f178-4f65-be7c-cf203c28f072
Jan 29 12:29:27 np0005601226 ovn_metadata_agent[155620]:    user        root
Jan 29 12:29:27 np0005601226 ovn_metadata_agent[155620]:    group       root
Jan 29 12:29:27 np0005601226 ovn_metadata_agent[155620]:    maxconn     1024
Jan 29 12:29:27 np0005601226 ovn_metadata_agent[155620]:    pidfile     /var/lib/neutron/external/pids/25cf1715-f178-4f65-be7c-cf203c28f072.pid.haproxy
Jan 29 12:29:27 np0005601226 ovn_metadata_agent[155620]:    daemon
Jan 29 12:29:27 np0005601226 ovn_metadata_agent[155620]: 
Jan 29 12:29:27 np0005601226 ovn_metadata_agent[155620]: defaults
Jan 29 12:29:27 np0005601226 ovn_metadata_agent[155620]:    log global
Jan 29 12:29:27 np0005601226 ovn_metadata_agent[155620]:    mode http
Jan 29 12:29:27 np0005601226 ovn_metadata_agent[155620]:    option httplog
Jan 29 12:29:27 np0005601226 ovn_metadata_agent[155620]:    option dontlognull
Jan 29 12:29:27 np0005601226 ovn_metadata_agent[155620]:    option http-server-close
Jan 29 12:29:27 np0005601226 ovn_metadata_agent[155620]:    option forwardfor
Jan 29 12:29:27 np0005601226 ovn_metadata_agent[155620]:    retries                 3
Jan 29 12:29:27 np0005601226 ovn_metadata_agent[155620]:    timeout http-request    30s
Jan 29 12:29:27 np0005601226 ovn_metadata_agent[155620]:    timeout connect         30s
Jan 29 12:29:27 np0005601226 ovn_metadata_agent[155620]:    timeout client          32s
Jan 29 12:29:27 np0005601226 ovn_metadata_agent[155620]:    timeout server          32s
Jan 29 12:29:27 np0005601226 ovn_metadata_agent[155620]:    timeout http-keep-alive 30s
Jan 29 12:29:27 np0005601226 ovn_metadata_agent[155620]: 
Jan 29 12:29:27 np0005601226 ovn_metadata_agent[155620]: 
Jan 29 12:29:27 np0005601226 ovn_metadata_agent[155620]: listen listener
Jan 29 12:29:27 np0005601226 ovn_metadata_agent[155620]:    bind 169.254.169.254:80
Jan 29 12:29:27 np0005601226 ovn_metadata_agent[155620]:    server metadata /var/lib/neutron/metadata_proxy
Jan 29 12:29:27 np0005601226 ovn_metadata_agent[155620]:    http-request add-header X-OVN-Network-ID 25cf1715-f178-4f65-be7c-cf203c28f072
Jan 29 12:29:27 np0005601226 ovn_metadata_agent[155620]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 29 12:29:27 np0005601226 nova_compute[239456]: 2026-01-29 17:29:27.280 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:29:27 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:29:27.281 155625 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072', 'env', 'PROCESS_TAG=haproxy-25cf1715-f178-4f65-be7c-cf203c28f072', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/25cf1715-f178-4f65-be7c-cf203c28f072.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 29 12:29:27 np0005601226 podman[265896]: 2026-01-29 17:29:27.598327663 +0000 UTC m=+0.053840211 container create a6ad19b8c7db7e6f873445747c962c55f1da44c1093afac2b9e29e003a8381d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 29 12:29:27 np0005601226 nova_compute[239456]: 2026-01-29 17:29:27.599 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:29:27 np0005601226 systemd[1]: Started libpod-conmon-a6ad19b8c7db7e6f873445747c962c55f1da44c1093afac2b9e29e003a8381d9.scope.
Jan 29 12:29:27 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:29:27 np0005601226 podman[265896]: 2026-01-29 17:29:27.57483383 +0000 UTC m=+0.030346388 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 29 12:29:27 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/908cae5bac2f4cbdf6989165e0512201b7f3c10ebdfe4e04c269f0af615cfca7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 29 12:29:27 np0005601226 podman[265896]: 2026-01-29 17:29:27.673446694 +0000 UTC m=+0.128959242 container init a6ad19b8c7db7e6f873445747c962c55f1da44c1093afac2b9e29e003a8381d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 29 12:29:27 np0005601226 podman[265896]: 2026-01-29 17:29:27.680408742 +0000 UTC m=+0.135921290 container start a6ad19b8c7db7e6f873445747c962c55f1da44c1093afac2b9e29e003a8381d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 29 12:29:27 np0005601226 neutron-haproxy-ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072[265912]: [NOTICE]   (265916) : New worker (265918) forked
Jan 29 12:29:27 np0005601226 neutron-haproxy-ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072[265912]: [NOTICE]   (265916) : Loading success.
Jan 29 12:29:27 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1600: 305 pgs: 305 active+clean; 281 MiB data, 527 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 6.9 MiB/s wr, 118 op/s
Jan 29 12:29:29 np0005601226 nova_compute[239456]: 2026-01-29 17:29:29.275 239460 DEBUG nova.compute.manager [req-a8a88db9-814a-4d03-9c53-c13ef93f7c2e req-b8356881-a4ef-4f6e-8519-74ac3ac7bca4 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] Received event network-vif-plugged-e0301c38-d2c5-4766-8243-5fb16ad5b084 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:29:29 np0005601226 nova_compute[239456]: 2026-01-29 17:29:29.275 239460 DEBUG oslo_concurrency.lockutils [req-a8a88db9-814a-4d03-9c53-c13ef93f7c2e req-b8356881-a4ef-4f6e-8519-74ac3ac7bca4 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "d8a8daad-7d66-42a9-b701-b191ca68564e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:29:29 np0005601226 nova_compute[239456]: 2026-01-29 17:29:29.276 239460 DEBUG oslo_concurrency.lockutils [req-a8a88db9-814a-4d03-9c53-c13ef93f7c2e req-b8356881-a4ef-4f6e-8519-74ac3ac7bca4 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "d8a8daad-7d66-42a9-b701-b191ca68564e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:29:29 np0005601226 nova_compute[239456]: 2026-01-29 17:29:29.276 239460 DEBUG oslo_concurrency.lockutils [req-a8a88db9-814a-4d03-9c53-c13ef93f7c2e req-b8356881-a4ef-4f6e-8519-74ac3ac7bca4 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "d8a8daad-7d66-42a9-b701-b191ca68564e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:29:29 np0005601226 nova_compute[239456]: 2026-01-29 17:29:29.276 239460 DEBUG nova.compute.manager [req-a8a88db9-814a-4d03-9c53-c13ef93f7c2e req-b8356881-a4ef-4f6e-8519-74ac3ac7bca4 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] No waiting events found dispatching network-vif-plugged-e0301c38-d2c5-4766-8243-5fb16ad5b084 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 29 12:29:29 np0005601226 nova_compute[239456]: 2026-01-29 17:29:29.276 239460 WARNING nova.compute.manager [req-a8a88db9-814a-4d03-9c53-c13ef93f7c2e req-b8356881-a4ef-4f6e-8519-74ac3ac7bca4 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] Received unexpected event network-vif-plugged-e0301c38-d2c5-4766-8243-5fb16ad5b084 for instance with vm_state building and task_state spawning.#033[00m
Jan 29 12:29:29 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1601: 305 pgs: 305 active+clean; 281 MiB data, 527 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 6.2 MiB/s wr, 127 op/s
Jan 29 12:29:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:29:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/98784086' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:29:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:29:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/98784086' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:29:30 np0005601226 nova_compute[239456]: 2026-01-29 17:29:30.603 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:29:30 np0005601226 nova_compute[239456]: 2026-01-29 17:29:30.604 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 29 12:29:30 np0005601226 nova_compute[239456]: 2026-01-29 17:29:30.874 239460 DEBUG nova.compute.manager [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 29 12:29:30 np0005601226 nova_compute[239456]: 2026-01-29 17:29:30.875 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769707770.8738353, d8a8daad-7d66-42a9-b701-b191ca68564e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:29:30 np0005601226 nova_compute[239456]: 2026-01-29 17:29:30.875 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] VM Started (Lifecycle Event)#033[00m
Jan 29 12:29:30 np0005601226 nova_compute[239456]: 2026-01-29 17:29:30.879 239460 DEBUG nova.virt.libvirt.driver [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 29 12:29:30 np0005601226 nova_compute[239456]: 2026-01-29 17:29:30.881 239460 INFO nova.virt.libvirt.driver [-] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] Instance spawned successfully.#033[00m
Jan 29 12:29:30 np0005601226 nova_compute[239456]: 2026-01-29 17:29:30.882 239460 DEBUG nova.virt.libvirt.driver [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 29 12:29:30 np0005601226 nova_compute[239456]: 2026-01-29 17:29:30.892 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:29:30 np0005601226 nova_compute[239456]: 2026-01-29 17:29:30.895 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 29 12:29:30 np0005601226 nova_compute[239456]: 2026-01-29 17:29:30.904 239460 DEBUG nova.virt.libvirt.driver [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:29:30 np0005601226 nova_compute[239456]: 2026-01-29 17:29:30.904 239460 DEBUG nova.virt.libvirt.driver [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:29:30 np0005601226 nova_compute[239456]: 2026-01-29 17:29:30.905 239460 DEBUG nova.virt.libvirt.driver [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:29:30 np0005601226 nova_compute[239456]: 2026-01-29 17:29:30.905 239460 DEBUG nova.virt.libvirt.driver [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:29:30 np0005601226 nova_compute[239456]: 2026-01-29 17:29:30.905 239460 DEBUG nova.virt.libvirt.driver [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:29:30 np0005601226 nova_compute[239456]: 2026-01-29 17:29:30.905 239460 DEBUG nova.virt.libvirt.driver [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:29:30 np0005601226 nova_compute[239456]: 2026-01-29 17:29:30.911 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 29 12:29:30 np0005601226 nova_compute[239456]: 2026-01-29 17:29:30.912 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769707770.8740897, d8a8daad-7d66-42a9-b701-b191ca68564e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:29:30 np0005601226 nova_compute[239456]: 2026-01-29 17:29:30.912 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] VM Paused (Lifecycle Event)#033[00m
Jan 29 12:29:30 np0005601226 nova_compute[239456]: 2026-01-29 17:29:30.940 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:29:30 np0005601226 nova_compute[239456]: 2026-01-29 17:29:30.942 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769707770.8781745, d8a8daad-7d66-42a9-b701-b191ca68564e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:29:30 np0005601226 nova_compute[239456]: 2026-01-29 17:29:30.942 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] VM Resumed (Lifecycle Event)#033[00m
Jan 29 12:29:30 np0005601226 nova_compute[239456]: 2026-01-29 17:29:30.966 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:29:30 np0005601226 nova_compute[239456]: 2026-01-29 17:29:30.968 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 29 12:29:30 np0005601226 nova_compute[239456]: 2026-01-29 17:29:30.978 239460 INFO nova.compute.manager [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] Took 8.03 seconds to spawn the instance on the hypervisor.#033[00m
Jan 29 12:29:30 np0005601226 nova_compute[239456]: 2026-01-29 17:29:30.978 239460 DEBUG nova.compute.manager [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:29:30 np0005601226 nova_compute[239456]: 2026-01-29 17:29:30.991 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 29 12:29:31 np0005601226 nova_compute[239456]: 2026-01-29 17:29:31.033 239460 INFO nova.compute.manager [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] Took 10.25 seconds to build instance.#033[00m
Jan 29 12:29:31 np0005601226 nova_compute[239456]: 2026-01-29 17:29:31.058 239460 DEBUG oslo_concurrency.lockutils [None req-583ee76d-67dc-4ff1-8769-7603bbb2dfda 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Lock "d8a8daad-7d66-42a9-b701-b191ca68564e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.350s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:29:31 np0005601226 nova_compute[239456]: 2026-01-29 17:29:31.314 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:29:31 np0005601226 nova_compute[239456]: 2026-01-29 17:29:31.431 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:29:31 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1602: 305 pgs: 305 active+clean; 281 MiB data, 527 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 371 KiB/s wr, 89 op/s
Jan 29 12:29:32 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:29:32 np0005601226 nova_compute[239456]: 2026-01-29 17:29:32.603 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:29:32 np0005601226 nova_compute[239456]: 2026-01-29 17:29:32.604 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:29:32 np0005601226 nova_compute[239456]: 2026-01-29 17:29:32.626 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:29:32 np0005601226 nova_compute[239456]: 2026-01-29 17:29:32.627 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:29:32 np0005601226 nova_compute[239456]: 2026-01-29 17:29:32.627 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:29:32 np0005601226 nova_compute[239456]: 2026-01-29 17:29:32.627 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 29 12:29:32 np0005601226 nova_compute[239456]: 2026-01-29 17:29:32.628 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:29:33 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:29:33 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1060874257' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:29:33 np0005601226 nova_compute[239456]: 2026-01-29 17:29:33.193 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:29:33 np0005601226 nova_compute[239456]: 2026-01-29 17:29:33.294 239460 DEBUG nova.virt.libvirt.driver [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 29 12:29:33 np0005601226 nova_compute[239456]: 2026-01-29 17:29:33.296 239460 DEBUG nova.virt.libvirt.driver [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 29 12:29:33 np0005601226 nova_compute[239456]: 2026-01-29 17:29:33.301 239460 DEBUG nova.virt.libvirt.driver [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 29 12:29:33 np0005601226 nova_compute[239456]: 2026-01-29 17:29:33.301 239460 DEBUG nova.virt.libvirt.driver [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 29 12:29:33 np0005601226 nova_compute[239456]: 2026-01-29 17:29:33.306 239460 DEBUG nova.virt.libvirt.driver [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 29 12:29:33 np0005601226 nova_compute[239456]: 2026-01-29 17:29:33.306 239460 DEBUG nova.virt.libvirt.driver [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 29 12:29:33 np0005601226 nova_compute[239456]: 2026-01-29 17:29:33.565 239460 WARNING nova.virt.libvirt.driver [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 29 12:29:33 np0005601226 nova_compute[239456]: 2026-01-29 17:29:33.567 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4007MB free_disk=59.987746112048626GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 29 12:29:33 np0005601226 nova_compute[239456]: 2026-01-29 17:29:33.567 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:29:33 np0005601226 nova_compute[239456]: 2026-01-29 17:29:33.568 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:29:33 np0005601226 nova_compute[239456]: 2026-01-29 17:29:33.666 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Instance 60a233ad-302a-45ea-a78c-31ff4f06919e actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 29 12:29:33 np0005601226 nova_compute[239456]: 2026-01-29 17:29:33.666 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Instance ff3dd15f-f585-4406-8c70-96be2a8945a4 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 29 12:29:33 np0005601226 nova_compute[239456]: 2026-01-29 17:29:33.667 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Instance d8a8daad-7d66-42a9-b701-b191ca68564e actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 29 12:29:33 np0005601226 nova_compute[239456]: 2026-01-29 17:29:33.667 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 29 12:29:33 np0005601226 nova_compute[239456]: 2026-01-29 17:29:33.668 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 29 12:29:33 np0005601226 nova_compute[239456]: 2026-01-29 17:29:33.755 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:29:33 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1603: 305 pgs: 305 active+clean; 281 MiB data, 527 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 30 KiB/s wr, 109 op/s
Jan 29 12:29:34 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:29:34 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3386635198' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:29:34 np0005601226 nova_compute[239456]: 2026-01-29 17:29:34.343 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:29:34 np0005601226 nova_compute[239456]: 2026-01-29 17:29:34.351 239460 DEBUG nova.compute.provider_tree [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:29:34 np0005601226 nova_compute[239456]: 2026-01-29 17:29:34.372 239460 DEBUG nova.scheduler.client.report [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:29:34 np0005601226 nova_compute[239456]: 2026-01-29 17:29:34.406 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 29 12:29:34 np0005601226 nova_compute[239456]: 2026-01-29 17:29:34.407 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.839s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:29:34 np0005601226 nova_compute[239456]: 2026-01-29 17:29:34.841 239460 DEBUG nova.compute.manager [req-0aa64fed-9c6b-4b52-a2ad-d6cd86b1b0e1 req-51d11774-f667-467b-85da-7cd5b3b19acf 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] Received event network-changed-e0301c38-d2c5-4766-8243-5fb16ad5b084 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:29:34 np0005601226 nova_compute[239456]: 2026-01-29 17:29:34.842 239460 DEBUG nova.compute.manager [req-0aa64fed-9c6b-4b52-a2ad-d6cd86b1b0e1 req-51d11774-f667-467b-85da-7cd5b3b19acf 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] Refreshing instance network info cache due to event network-changed-e0301c38-d2c5-4766-8243-5fb16ad5b084. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 29 12:29:34 np0005601226 nova_compute[239456]: 2026-01-29 17:29:34.842 239460 DEBUG oslo_concurrency.lockutils [req-0aa64fed-9c6b-4b52-a2ad-d6cd86b1b0e1 req-51d11774-f667-467b-85da-7cd5b3b19acf 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "refresh_cache-d8a8daad-7d66-42a9-b701-b191ca68564e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:29:34 np0005601226 nova_compute[239456]: 2026-01-29 17:29:34.843 239460 DEBUG oslo_concurrency.lockutils [req-0aa64fed-9c6b-4b52-a2ad-d6cd86b1b0e1 req-51d11774-f667-467b-85da-7cd5b3b19acf 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquired lock "refresh_cache-d8a8daad-7d66-42a9-b701-b191ca68564e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:29:34 np0005601226 nova_compute[239456]: 2026-01-29 17:29:34.843 239460 DEBUG nova.network.neutron [req-0aa64fed-9c6b-4b52-a2ad-d6cd86b1b0e1 req-51d11774-f667-467b-85da-7cd5b3b19acf 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] Refreshing network info cache for port e0301c38-d2c5-4766-8243-5fb16ad5b084 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 29 12:29:35 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1604: 305 pgs: 305 active+clean; 281 MiB data, 527 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 30 KiB/s wr, 147 op/s
Jan 29 12:29:36 np0005601226 nova_compute[239456]: 2026-01-29 17:29:36.168 239460 DEBUG nova.network.neutron [req-0aa64fed-9c6b-4b52-a2ad-d6cd86b1b0e1 req-51d11774-f667-467b-85da-7cd5b3b19acf 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] Updated VIF entry in instance network info cache for port e0301c38-d2c5-4766-8243-5fb16ad5b084. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 29 12:29:36 np0005601226 nova_compute[239456]: 2026-01-29 17:29:36.169 239460 DEBUG nova.network.neutron [req-0aa64fed-9c6b-4b52-a2ad-d6cd86b1b0e1 req-51d11774-f667-467b-85da-7cd5b3b19acf 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] Updating instance_info_cache with network_info: [{"id": "e0301c38-d2c5-4766-8243-5fb16ad5b084", "address": "fa:16:3e:61:34:a1", "network": {"id": "25cf1715-f178-4f65-be7c-cf203c28f072", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-516658087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c74297072cc041019fc7ff4bff1a0f08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0301c38-d2", "ovs_interfaceid": "e0301c38-d2c5-4766-8243-5fb16ad5b084", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:29:36 np0005601226 nova_compute[239456]: 2026-01-29 17:29:36.192 239460 DEBUG oslo_concurrency.lockutils [req-0aa64fed-9c6b-4b52-a2ad-d6cd86b1b0e1 req-51d11774-f667-467b-85da-7cd5b3b19acf 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Releasing lock "refresh_cache-d8a8daad-7d66-42a9-b701-b191ca68564e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:29:36 np0005601226 nova_compute[239456]: 2026-01-29 17:29:36.317 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:29:36 np0005601226 nova_compute[239456]: 2026-01-29 17:29:36.407 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:29:36 np0005601226 nova_compute[239456]: 2026-01-29 17:29:36.407 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 29 12:29:36 np0005601226 nova_compute[239456]: 2026-01-29 17:29:36.408 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 29 12:29:36 np0005601226 nova_compute[239456]: 2026-01-29 17:29:36.432 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:29:36 np0005601226 nova_compute[239456]: 2026-01-29 17:29:36.584 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquiring lock "refresh_cache-60a233ad-302a-45ea-a78c-31ff4f06919e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:29:36 np0005601226 nova_compute[239456]: 2026-01-29 17:29:36.585 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquired lock "refresh_cache-60a233ad-302a-45ea-a78c-31ff4f06919e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:29:36 np0005601226 nova_compute[239456]: 2026-01-29 17:29:36.586 239460 DEBUG nova.network.neutron [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 29 12:29:36 np0005601226 nova_compute[239456]: 2026-01-29 17:29:36.586 239460 DEBUG nova.objects.instance [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 60a233ad-302a-45ea-a78c-31ff4f06919e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:29:37 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:29:37 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1605: 305 pgs: 305 active+clean; 281 MiB data, 527 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 15 KiB/s wr, 92 op/s
Jan 29 12:29:37 np0005601226 ovn_controller[145556]: 2026-01-29T17:29:37Z|00030|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.5 does not match offer 10.100.0.13
Jan 29 12:29:37 np0005601226 ovn_controller[145556]: 2026-01-29T17:29:37Z|00031|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:5c:f0:e1 10.100.0.13
Jan 29 12:29:38 np0005601226 nova_compute[239456]: 2026-01-29 17:29:38.270 239460 DEBUG nova.network.neutron [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] Updating instance_info_cache with network_info: [{"id": "e169cad0-27e6-4099-aed1-80994ec6b573", "address": "fa:16:3e:eb:63:33", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape169cad0-27", "ovs_interfaceid": "e169cad0-27e6-4099-aed1-80994ec6b573", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:29:38 np0005601226 nova_compute[239456]: 2026-01-29 17:29:38.287 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Releasing lock "refresh_cache-60a233ad-302a-45ea-a78c-31ff4f06919e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:29:38 np0005601226 nova_compute[239456]: 2026-01-29 17:29:38.287 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 29 12:29:38 np0005601226 nova_compute[239456]: 2026-01-29 17:29:38.288 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:29:38 np0005601226 nova_compute[239456]: 2026-01-29 17:29:38.288 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:29:39 np0005601226 nova_compute[239456]: 2026-01-29 17:29:39.479 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:29:39 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1606: 305 pgs: 305 active+clean; 295 MiB data, 533 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 507 KiB/s wr, 137 op/s
Jan 29 12:29:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:29:40.291 155625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:29:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:29:40.291 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:29:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:29:40.292 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:29:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:29:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:29:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:29:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:29:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:29:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:29:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2026-01-29_17:29:40
Jan 29 12:29:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 12:29:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] do_upmap
Jan 29 12:29:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] pools ['vms', '.mgr', 'images', 'cephfs.cephfs.meta', 'backups', 'default.rgw.log', 'default.rgw.meta', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.data', 'volumes']
Jan 29 12:29:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 12:29:40 np0005601226 nova_compute[239456]: 2026-01-29 17:29:40.603 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:29:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 12:29:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:29:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 12:29:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:29:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:29:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:29:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:29:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:29:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:29:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:29:41 np0005601226 nova_compute[239456]: 2026-01-29 17:29:41.319 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:29:41 np0005601226 nova_compute[239456]: 2026-01-29 17:29:41.436 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:29:41 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:29:41 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/16320944' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:29:41 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:29:41 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/16320944' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:29:41 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1607: 305 pgs: 305 active+clean; 296 MiB data, 533 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 494 KiB/s wr, 125 op/s
Jan 29 12:29:41 np0005601226 ovn_controller[145556]: 2026-01-29T17:29:41Z|00032|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.5 does not match offer 10.100.0.13
Jan 29 12:29:41 np0005601226 ovn_controller[145556]: 2026-01-29T17:29:41Z|00033|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:5c:f0:e1 10.100.0.13
Jan 29 12:29:42 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:29:42 np0005601226 ovn_controller[145556]: 2026-01-29T17:29:42Z|00034|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5c:f0:e1 10.100.0.13
Jan 29 12:29:42 np0005601226 ovn_controller[145556]: 2026-01-29T17:29:42Z|00035|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5c:f0:e1 10.100.0.13
Jan 29 12:29:43 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1608: 305 pgs: 305 active+clean; 299 MiB data, 533 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 540 KiB/s wr, 124 op/s
Jan 29 12:29:44 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:29:44 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1382470557' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:29:44 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:29:44 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1382470557' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:29:45 np0005601226 ovn_controller[145556]: 2026-01-29T17:29:45Z|00036|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:61:34:a1 10.100.0.8
Jan 29 12:29:45 np0005601226 ovn_controller[145556]: 2026-01-29T17:29:45Z|00037|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:61:34:a1 10.100.0.8
Jan 29 12:29:45 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1609: 305 pgs: 305 active+clean; 324 MiB data, 568 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 3.3 MiB/s wr, 160 op/s
Jan 29 12:29:46 np0005601226 nova_compute[239456]: 2026-01-29 17:29:46.321 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:29:46 np0005601226 nova_compute[239456]: 2026-01-29 17:29:46.436 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:29:46 np0005601226 nova_compute[239456]: 2026-01-29 17:29:46.604 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:29:47 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:29:47 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1610: 305 pgs: 305 active+clean; 324 MiB data, 568 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.3 MiB/s wr, 116 op/s
Jan 29 12:29:47 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:29:47 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3249972610' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:29:47 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:29:47 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3249972610' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:29:49 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1611: 305 pgs: 305 active+clean; 368 MiB data, 597 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 6.3 MiB/s wr, 171 op/s
Jan 29 12:29:51 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:29:51 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/664398754' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:29:51 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:29:51 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/664398754' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:29:51 np0005601226 nova_compute[239456]: 2026-01-29 17:29:51.323 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:29:51 np0005601226 nova_compute[239456]: 2026-01-29 17:29:51.440 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:29:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 12:29:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:29:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 12:29:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:29:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 8.919248824986798e-06 of space, bias 1.0, pg target 0.0026757746474960395 quantized to 32 (current 32)
Jan 29 12:29:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:29:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.003762171604891513 of space, bias 1.0, pg target 1.128651481467454 quantized to 32 (current 32)
Jan 29 12:29:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:29:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 3.3309824991297613e-06 of space, bias 1.0, pg target 0.0009959637672397987 quantized to 32 (current 32)
Jan 29 12:29:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:29:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006671650626894288 of space, bias 1.0, pg target 0.1994823537441392 quantized to 32 (current 32)
Jan 29 12:29:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:29:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4805643168978062e-06 of space, bias 4.0, pg target 0.0017707549230097763 quantized to 16 (current 16)
Jan 29 12:29:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:29:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:29:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:29:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011408172983004493 quantized to 32 (current 32)
Jan 29 12:29:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:29:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012548990281304943 quantized to 32 (current 32)
Jan 29 12:29:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:29:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:29:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:29:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Jan 29 12:29:51 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1612: 305 pgs: 305 active+clean; 368 MiB data, 595 MiB used, 59 GiB / 60 GiB avail; 799 KiB/s rd, 5.8 MiB/s wr, 129 op/s
Jan 29 12:29:52 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:29:53 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1613: 305 pgs: 305 active+clean; 368 MiB data, 595 MiB used, 59 GiB / 60 GiB avail; 567 KiB/s rd, 5.8 MiB/s wr, 122 op/s
Jan 29 12:29:55 np0005601226 podman[266017]: 2026-01-29 17:29:55.884989007 +0000 UTC m=+0.048719972 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 29 12:29:55 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1614: 305 pgs: 305 active+clean; 368 MiB data, 595 MiB used, 59 GiB / 60 GiB avail; 527 KiB/s rd, 5.8 MiB/s wr, 131 op/s
Jan 29 12:29:55 np0005601226 podman[266018]: 2026-01-29 17:29:55.907114743 +0000 UTC m=+0.071323681 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 29 12:29:56 np0005601226 nova_compute[239456]: 2026-01-29 17:29:56.326 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:29:56 np0005601226 nova_compute[239456]: 2026-01-29 17:29:56.442 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:29:57 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:29:57 np0005601226 nova_compute[239456]: 2026-01-29 17:29:57.874 239460 DEBUG oslo_concurrency.lockutils [None req-994f0a27-2c84-4b79-9323-99f5ca50a74f 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Acquiring lock "ff3dd15f-f585-4406-8c70-96be2a8945a4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:29:57 np0005601226 nova_compute[239456]: 2026-01-29 17:29:57.874 239460 DEBUG oslo_concurrency.lockutils [None req-994f0a27-2c84-4b79-9323-99f5ca50a74f 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "ff3dd15f-f585-4406-8c70-96be2a8945a4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:29:57 np0005601226 nova_compute[239456]: 2026-01-29 17:29:57.874 239460 DEBUG oslo_concurrency.lockutils [None req-994f0a27-2c84-4b79-9323-99f5ca50a74f 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Acquiring lock "ff3dd15f-f585-4406-8c70-96be2a8945a4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:29:57 np0005601226 nova_compute[239456]: 2026-01-29 17:29:57.875 239460 DEBUG oslo_concurrency.lockutils [None req-994f0a27-2c84-4b79-9323-99f5ca50a74f 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "ff3dd15f-f585-4406-8c70-96be2a8945a4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:29:57 np0005601226 nova_compute[239456]: 2026-01-29 17:29:57.875 239460 DEBUG oslo_concurrency.lockutils [None req-994f0a27-2c84-4b79-9323-99f5ca50a74f 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "ff3dd15f-f585-4406-8c70-96be2a8945a4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:29:57 np0005601226 nova_compute[239456]: 2026-01-29 17:29:57.876 239460 INFO nova.compute.manager [None req-994f0a27-2c84-4b79-9323-99f5ca50a74f 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: ff3dd15f-f585-4406-8c70-96be2a8945a4] Terminating instance#033[00m
Jan 29 12:29:57 np0005601226 nova_compute[239456]: 2026-01-29 17:29:57.877 239460 DEBUG nova.compute.manager [None req-994f0a27-2c84-4b79-9323-99f5ca50a74f 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: ff3dd15f-f585-4406-8c70-96be2a8945a4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 29 12:29:57 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1615: 305 pgs: 305 active+clean; 368 MiB data, 595 MiB used, 59 GiB / 60 GiB avail; 171 KiB/s rd, 3.1 MiB/s wr, 72 op/s
Jan 29 12:29:57 np0005601226 kernel: tap3cc3f8fa-0d (unregistering): left promiscuous mode
Jan 29 12:29:57 np0005601226 NetworkManager[49020]: <info>  [1769707797.9203] device (tap3cc3f8fa-0d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 29 12:29:57 np0005601226 ovn_controller[145556]: 2026-01-29T17:29:57Z|00178|binding|INFO|Releasing lport 3cc3f8fa-0d50-4ddf-b2f1-086d9c2e1f4c from this chassis (sb_readonly=0)
Jan 29 12:29:57 np0005601226 ovn_controller[145556]: 2026-01-29T17:29:57Z|00179|binding|INFO|Setting lport 3cc3f8fa-0d50-4ddf-b2f1-086d9c2e1f4c down in Southbound
Jan 29 12:29:57 np0005601226 nova_compute[239456]: 2026-01-29 17:29:57.924 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:29:57 np0005601226 ovn_controller[145556]: 2026-01-29T17:29:57Z|00180|binding|INFO|Removing iface tap3cc3f8fa-0d ovn-installed in OVS
Jan 29 12:29:57 np0005601226 nova_compute[239456]: 2026-01-29 17:29:57.926 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:29:57 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:29:57.931 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5c:f0:e1 10.100.0.13'], port_security=['fa:16:3e:5c:f0:e1 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'ff3dd15f-f585-4406-8c70-96be2a8945a4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3c08c304-2b32-4b44-ac2b-279bb8b2403b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '420f46ae230d4c529afe366a1b780921', 'neutron:revision_number': '4', 'neutron:security_group_ids': '22c4ada3-2dd0-468f-9196-c07d7ccdefd9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.172'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=43b4da70-6867-4e05-b172-1e52c878ce1d, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], logical_port=3cc3f8fa-0d50-4ddf-b2f1-086d9c2e1f4c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:29:57 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:29:57.933 155625 INFO neutron.agent.ovn.metadata.agent [-] Port 3cc3f8fa-0d50-4ddf-b2f1-086d9c2e1f4c in datapath 3c08c304-2b32-4b44-ac2b-279bb8b2403b unbound from our chassis#033[00m
Jan 29 12:29:57 np0005601226 nova_compute[239456]: 2026-01-29 17:29:57.934 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:29:57 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:29:57.935 155625 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3c08c304-2b32-4b44-ac2b-279bb8b2403b#033[00m
Jan 29 12:29:57 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:29:57.950 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[362cca6a-3344-4fa5-b882-e35409c1b4f3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:29:57 np0005601226 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000012.scope: Deactivated successfully.
Jan 29 12:29:57 np0005601226 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000012.scope: Consumed 16.249s CPU time.
Jan 29 12:29:57 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:29:57.974 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[68e359b6-fb8f-4830-bfb4-8aa511b2cd43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:29:57 np0005601226 systemd-machined[207561]: Machine qemu-18-instance-00000012 terminated.
Jan 29 12:29:57 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:29:57.976 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[bb0b55b6-cd26-4a83-b33d-0396ed0ced53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:29:57 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:29:57.995 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[0f16e922-84dd-4b55-a8e9-11e4425be634]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:29:58 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:29:58.010 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[bec4873c-62c9-44a0-bc5e-848288ed2fdb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3c08c304-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:94:51:ac'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 57], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 507455, 'reachable_time': 19153, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266073, 'error': None, 'target': 'ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:29:58 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:29:58.022 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[d3615635-10a9-4941-86df-9f67aac67e83]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3c08c304-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 507464, 'tstamp': 507464}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 266074, 'error': None, 'target': 'ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3c08c304-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 507467, 'tstamp': 507467}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 266074, 'error': None, 'target': 'ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:29:58 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:29:58.023 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c08c304-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:29:58 np0005601226 nova_compute[239456]: 2026-01-29 17:29:58.024 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:29:58 np0005601226 nova_compute[239456]: 2026-01-29 17:29:58.028 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:29:58 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:29:58.028 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3c08c304-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:29:58 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:29:58.028 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 29 12:29:58 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:29:58.029 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3c08c304-20, col_values=(('external_ids', {'iface-id': '4f9b16f1-6965-486d-bc02-ab1e4969963e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:29:58 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:29:58.029 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 29 12:29:58 np0005601226 nova_compute[239456]: 2026-01-29 17:29:58.105 239460 INFO nova.virt.libvirt.driver [-] [instance: ff3dd15f-f585-4406-8c70-96be2a8945a4] Instance destroyed successfully.#033[00m
Jan 29 12:29:58 np0005601226 nova_compute[239456]: 2026-01-29 17:29:58.106 239460 DEBUG nova.objects.instance [None req-994f0a27-2c84-4b79-9323-99f5ca50a74f 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lazy-loading 'resources' on Instance uuid ff3dd15f-f585-4406-8c70-96be2a8945a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:29:58 np0005601226 nova_compute[239456]: 2026-01-29 17:29:58.121 239460 DEBUG nova.virt.libvirt.vif [None req-994f0a27-2c84-4b79-9323-99f5ca50a74f 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-29T17:29:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-image-snapshot-server-466521701',display_name='tempest-TestVolumeBootPattern-image-snapshot-server-466521701',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-image-snapshot-server-466521701',id=18,image_ref='5ed2cc36-4069-42f1-8890-957be31da276',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEEPdOikZIRZZlSCB3pnSN883u5KEGoU6HmBl+bK9lybUNCBqnUpu265pHjvtrct4Ekt10vMEtBjAsdbZhHoGNnbDJYET7KS1yYvUhbnG7IzKHQwBptejozI0K/USR0uWw==',key_name='tempest-keypair-1228012133',keypairs=<?>,launch_index=0,launched_at=2026-01-29T17:29:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='420f46ae230d4c529afe366a1b780921',ramdisk_id='',reservation_id='r-3vw2cmnp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_bdm_v2='True',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_project_name='tempest-TestVolumeBootPattern-1871389491',image_owner_user_name='tempest-TestVolumeBootPattern-1871389491-project-member',image_root_device_name='/dev/vda',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-1871389491',owner_user_name='tempest-TestVolumeBootPattern-1871389491-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-29T17:29:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3901089a059c4bdb8d0497398873d2f1',uuid=ff3dd15f-f585-4406-8c70-96be2a8945a4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": 
"3cc3f8fa-0d50-4ddf-b2f1-086d9c2e1f4c", "address": "fa:16:3e:5c:f0:e1", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cc3f8fa-0d", "ovs_interfaceid": "3cc3f8fa-0d50-4ddf-b2f1-086d9c2e1f4c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 29 12:29:58 np0005601226 nova_compute[239456]: 2026-01-29 17:29:58.122 239460 DEBUG nova.network.os_vif_util [None req-994f0a27-2c84-4b79-9323-99f5ca50a74f 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Converting VIF {"id": "3cc3f8fa-0d50-4ddf-b2f1-086d9c2e1f4c", "address": "fa:16:3e:5c:f0:e1", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cc3f8fa-0d", "ovs_interfaceid": "3cc3f8fa-0d50-4ddf-b2f1-086d9c2e1f4c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 29 12:29:58 np0005601226 nova_compute[239456]: 2026-01-29 17:29:58.123 239460 DEBUG nova.network.os_vif_util [None req-994f0a27-2c84-4b79-9323-99f5ca50a74f 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5c:f0:e1,bridge_name='br-int',has_traffic_filtering=True,id=3cc3f8fa-0d50-4ddf-b2f1-086d9c2e1f4c,network=Network(3c08c304-2b32-4b44-ac2b-279bb8b2403b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3cc3f8fa-0d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 29 12:29:58 np0005601226 nova_compute[239456]: 2026-01-29 17:29:58.123 239460 DEBUG os_vif [None req-994f0a27-2c84-4b79-9323-99f5ca50a74f 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5c:f0:e1,bridge_name='br-int',has_traffic_filtering=True,id=3cc3f8fa-0d50-4ddf-b2f1-086d9c2e1f4c,network=Network(3c08c304-2b32-4b44-ac2b-279bb8b2403b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3cc3f8fa-0d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 29 12:29:58 np0005601226 nova_compute[239456]: 2026-01-29 17:29:58.125 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:29:58 np0005601226 nova_compute[239456]: 2026-01-29 17:29:58.125 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3cc3f8fa-0d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:29:58 np0005601226 nova_compute[239456]: 2026-01-29 17:29:58.171 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:29:58 np0005601226 nova_compute[239456]: 2026-01-29 17:29:58.172 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:29:58 np0005601226 nova_compute[239456]: 2026-01-29 17:29:58.175 239460 INFO os_vif [None req-994f0a27-2c84-4b79-9323-99f5ca50a74f 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5c:f0:e1,bridge_name='br-int',has_traffic_filtering=True,id=3cc3f8fa-0d50-4ddf-b2f1-086d9c2e1f4c,network=Network(3c08c304-2b32-4b44-ac2b-279bb8b2403b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3cc3f8fa-0d')#033[00m
Jan 29 12:29:58 np0005601226 nova_compute[239456]: 2026-01-29 17:29:58.306 239460 INFO nova.virt.libvirt.driver [None req-994f0a27-2c84-4b79-9323-99f5ca50a74f 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: ff3dd15f-f585-4406-8c70-96be2a8945a4] Deleting instance files /var/lib/nova/instances/ff3dd15f-f585-4406-8c70-96be2a8945a4_del#033[00m
Jan 29 12:29:58 np0005601226 nova_compute[239456]: 2026-01-29 17:29:58.306 239460 INFO nova.virt.libvirt.driver [None req-994f0a27-2c84-4b79-9323-99f5ca50a74f 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: ff3dd15f-f585-4406-8c70-96be2a8945a4] Deletion of /var/lib/nova/instances/ff3dd15f-f585-4406-8c70-96be2a8945a4_del complete#033[00m
Jan 29 12:29:58 np0005601226 nova_compute[239456]: 2026-01-29 17:29:58.371 239460 DEBUG nova.compute.manager [req-3f7eeccb-23fc-477e-bf07-99bb896968b5 req-032a27eb-9b21-4cb7-867e-ff29a3e3b6a9 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ff3dd15f-f585-4406-8c70-96be2a8945a4] Received event network-vif-unplugged-3cc3f8fa-0d50-4ddf-b2f1-086d9c2e1f4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:29:58 np0005601226 nova_compute[239456]: 2026-01-29 17:29:58.371 239460 DEBUG oslo_concurrency.lockutils [req-3f7eeccb-23fc-477e-bf07-99bb896968b5 req-032a27eb-9b21-4cb7-867e-ff29a3e3b6a9 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "ff3dd15f-f585-4406-8c70-96be2a8945a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:29:58 np0005601226 nova_compute[239456]: 2026-01-29 17:29:58.371 239460 DEBUG oslo_concurrency.lockutils [req-3f7eeccb-23fc-477e-bf07-99bb896968b5 req-032a27eb-9b21-4cb7-867e-ff29a3e3b6a9 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "ff3dd15f-f585-4406-8c70-96be2a8945a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:29:58 np0005601226 nova_compute[239456]: 2026-01-29 17:29:58.372 239460 DEBUG oslo_concurrency.lockutils [req-3f7eeccb-23fc-477e-bf07-99bb896968b5 req-032a27eb-9b21-4cb7-867e-ff29a3e3b6a9 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "ff3dd15f-f585-4406-8c70-96be2a8945a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:29:58 np0005601226 nova_compute[239456]: 2026-01-29 17:29:58.372 239460 DEBUG nova.compute.manager [req-3f7eeccb-23fc-477e-bf07-99bb896968b5 req-032a27eb-9b21-4cb7-867e-ff29a3e3b6a9 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ff3dd15f-f585-4406-8c70-96be2a8945a4] No waiting events found dispatching network-vif-unplugged-3cc3f8fa-0d50-4ddf-b2f1-086d9c2e1f4c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 29 12:29:58 np0005601226 nova_compute[239456]: 2026-01-29 17:29:58.372 239460 DEBUG nova.compute.manager [req-3f7eeccb-23fc-477e-bf07-99bb896968b5 req-032a27eb-9b21-4cb7-867e-ff29a3e3b6a9 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ff3dd15f-f585-4406-8c70-96be2a8945a4] Received event network-vif-unplugged-3cc3f8fa-0d50-4ddf-b2f1-086d9c2e1f4c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 29 12:29:58 np0005601226 nova_compute[239456]: 2026-01-29 17:29:58.375 239460 INFO nova.compute.manager [None req-994f0a27-2c84-4b79-9323-99f5ca50a74f 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: ff3dd15f-f585-4406-8c70-96be2a8945a4] Took 0.50 seconds to destroy the instance on the hypervisor.#033[00m
Jan 29 12:29:58 np0005601226 nova_compute[239456]: 2026-01-29 17:29:58.375 239460 DEBUG oslo.service.loopingcall [None req-994f0a27-2c84-4b79-9323-99f5ca50a74f 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 29 12:29:58 np0005601226 nova_compute[239456]: 2026-01-29 17:29:58.376 239460 DEBUG nova.compute.manager [-] [instance: ff3dd15f-f585-4406-8c70-96be2a8945a4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 29 12:29:58 np0005601226 nova_compute[239456]: 2026-01-29 17:29:58.376 239460 DEBUG nova.network.neutron [-] [instance: ff3dd15f-f585-4406-8c70-96be2a8945a4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 29 12:29:59 np0005601226 nova_compute[239456]: 2026-01-29 17:29:59.368 239460 DEBUG nova.network.neutron [-] [instance: ff3dd15f-f585-4406-8c70-96be2a8945a4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:29:59 np0005601226 nova_compute[239456]: 2026-01-29 17:29:59.386 239460 INFO nova.compute.manager [-] [instance: ff3dd15f-f585-4406-8c70-96be2a8945a4] Took 1.01 seconds to deallocate network for instance.#033[00m
Jan 29 12:29:59 np0005601226 nova_compute[239456]: 2026-01-29 17:29:59.447 239460 DEBUG nova.compute.manager [req-74b0abb9-e4a1-47af-8773-eaf760c79c49 req-c4166682-90d4-4a46-93d2-03727fbc657f 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ff3dd15f-f585-4406-8c70-96be2a8945a4] Received event network-vif-deleted-3cc3f8fa-0d50-4ddf-b2f1-086d9c2e1f4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:29:59 np0005601226 nova_compute[239456]: 2026-01-29 17:29:59.479 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:29:59 np0005601226 nova_compute[239456]: 2026-01-29 17:29:59.552 239460 INFO nova.compute.manager [None req-994f0a27-2c84-4b79-9323-99f5ca50a74f 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: ff3dd15f-f585-4406-8c70-96be2a8945a4] Took 0.17 seconds to detach 1 volumes for instance.#033[00m
Jan 29 12:29:59 np0005601226 nova_compute[239456]: 2026-01-29 17:29:59.553 239460 DEBUG nova.compute.manager [None req-994f0a27-2c84-4b79-9323-99f5ca50a74f 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: ff3dd15f-f585-4406-8c70-96be2a8945a4] Deleting volume: 911bb8b4-c9d5-413d-b3b5-c545494020cb _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217#033[00m
Jan 29 12:29:59 np0005601226 nova_compute[239456]: 2026-01-29 17:29:59.744 239460 DEBUG oslo_concurrency.lockutils [None req-994f0a27-2c84-4b79-9323-99f5ca50a74f 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:29:59 np0005601226 nova_compute[239456]: 2026-01-29 17:29:59.746 239460 DEBUG oslo_concurrency.lockutils [None req-994f0a27-2c84-4b79-9323-99f5ca50a74f 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:29:59 np0005601226 nova_compute[239456]: 2026-01-29 17:29:59.824 239460 DEBUG oslo_concurrency.processutils [None req-994f0a27-2c84-4b79-9323-99f5ca50a74f 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:29:59 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 12:29:59 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 12:29:59 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 12:29:59 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 12:29:59 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 12:29:59 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:29:59 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 12:29:59 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 12:29:59 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 12:29:59 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 12:29:59 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 12:29:59 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 12:29:59 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1616: 305 pgs: 305 active+clean; 368 MiB data, 594 MiB used, 59 GiB / 60 GiB avail; 179 KiB/s rd, 3.1 MiB/s wr, 83 op/s
Jan 29 12:30:00 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:30:00 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1853566638' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:30:00 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:30:00 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1853566638' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:30:00 np0005601226 podman[266270]: 2026-01-29 17:30:00.184810521 +0000 UTC m=+0.033320577 container create 0fd4c8f3d92ecb1ff7ad1fb256e8566d86695662c6617a9efd3fdbb21b7d0aa0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_wilbur, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 29 12:30:00 np0005601226 systemd[1]: Started libpod-conmon-0fd4c8f3d92ecb1ff7ad1fb256e8566d86695662c6617a9efd3fdbb21b7d0aa0.scope.
Jan 29 12:30:00 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:30:00 np0005601226 podman[266270]: 2026-01-29 17:30:00.169749077 +0000 UTC m=+0.018259163 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:30:00 np0005601226 podman[266270]: 2026-01-29 17:30:00.267815607 +0000 UTC m=+0.116325673 container init 0fd4c8f3d92ecb1ff7ad1fb256e8566d86695662c6617a9efd3fdbb21b7d0aa0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True)
Jan 29 12:30:00 np0005601226 podman[266270]: 2026-01-29 17:30:00.272636576 +0000 UTC m=+0.121146642 container start 0fd4c8f3d92ecb1ff7ad1fb256e8566d86695662c6617a9efd3fdbb21b7d0aa0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_wilbur, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:30:00 np0005601226 podman[266270]: 2026-01-29 17:30:00.275529104 +0000 UTC m=+0.124039170 container attach 0fd4c8f3d92ecb1ff7ad1fb256e8566d86695662c6617a9efd3fdbb21b7d0aa0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_wilbur, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:30:00 np0005601226 eloquent_wilbur[266287]: 167 167
Jan 29 12:30:00 np0005601226 systemd[1]: libpod-0fd4c8f3d92ecb1ff7ad1fb256e8566d86695662c6617a9efd3fdbb21b7d0aa0.scope: Deactivated successfully.
Jan 29 12:30:00 np0005601226 podman[266270]: 2026-01-29 17:30:00.277279712 +0000 UTC m=+0.125789778 container died 0fd4c8f3d92ecb1ff7ad1fb256e8566d86695662c6617a9efd3fdbb21b7d0aa0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:30:00 np0005601226 systemd[1]: var-lib-containers-storage-overlay-3a560a80ed46f5caefd438cfd45428a07ef66fd2d62bc43ba2f2f8d0cb82ecd0-merged.mount: Deactivated successfully.
Jan 29 12:30:00 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:30:00 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/622570041' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:30:00 np0005601226 nova_compute[239456]: 2026-01-29 17:30:00.343 239460 DEBUG oslo_concurrency.processutils [None req-994f0a27-2c84-4b79-9323-99f5ca50a74f 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:30:00 np0005601226 podman[266270]: 2026-01-29 17:30:00.348262542 +0000 UTC m=+0.196772608 container remove 0fd4c8f3d92ecb1ff7ad1fb256e8566d86695662c6617a9efd3fdbb21b7d0aa0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Jan 29 12:30:00 np0005601226 nova_compute[239456]: 2026-01-29 17:30:00.348 239460 DEBUG nova.compute.provider_tree [None req-994f0a27-2c84-4b79-9323-99f5ca50a74f 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:30:00 np0005601226 nova_compute[239456]: 2026-01-29 17:30:00.363 239460 DEBUG nova.scheduler.client.report [None req-994f0a27-2c84-4b79-9323-99f5ca50a74f 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:30:00 np0005601226 systemd[1]: libpod-conmon-0fd4c8f3d92ecb1ff7ad1fb256e8566d86695662c6617a9efd3fdbb21b7d0aa0.scope: Deactivated successfully.
Jan 29 12:30:00 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 12:30:00 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:30:00 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 12:30:00 np0005601226 nova_compute[239456]: 2026-01-29 17:30:00.383 239460 DEBUG oslo_concurrency.lockutils [None req-994f0a27-2c84-4b79-9323-99f5ca50a74f 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.637s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:30:00 np0005601226 nova_compute[239456]: 2026-01-29 17:30:00.405 239460 INFO nova.scheduler.client.report [None req-994f0a27-2c84-4b79-9323-99f5ca50a74f 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Deleted allocations for instance ff3dd15f-f585-4406-8c70-96be2a8945a4#033[00m
Jan 29 12:30:00 np0005601226 podman[266313]: 2026-01-29 17:30:00.468133799 +0000 UTC m=+0.035785365 container create 92b05a1a5070afb94dc398e7b60db4ba3c65e276583fbcf1ec91328ef388f92c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_aryabhata, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 29 12:30:00 np0005601226 nova_compute[239456]: 2026-01-29 17:30:00.473 239460 DEBUG oslo_concurrency.lockutils [None req-994f0a27-2c84-4b79-9323-99f5ca50a74f 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "ff3dd15f-f585-4406-8c70-96be2a8945a4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.599s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:30:00 np0005601226 nova_compute[239456]: 2026-01-29 17:30:00.494 239460 DEBUG nova.compute.manager [req-a3a701c5-d00a-49cc-88f4-759da1c632fb req-e17b65ff-c8dd-47ba-b007-9769c12a91fb 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ff3dd15f-f585-4406-8c70-96be2a8945a4] Received event network-vif-plugged-3cc3f8fa-0d50-4ddf-b2f1-086d9c2e1f4c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:30:00 np0005601226 nova_compute[239456]: 2026-01-29 17:30:00.494 239460 DEBUG oslo_concurrency.lockutils [req-a3a701c5-d00a-49cc-88f4-759da1c632fb req-e17b65ff-c8dd-47ba-b007-9769c12a91fb 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "ff3dd15f-f585-4406-8c70-96be2a8945a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:30:00 np0005601226 nova_compute[239456]: 2026-01-29 17:30:00.494 239460 DEBUG oslo_concurrency.lockutils [req-a3a701c5-d00a-49cc-88f4-759da1c632fb req-e17b65ff-c8dd-47ba-b007-9769c12a91fb 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "ff3dd15f-f585-4406-8c70-96be2a8945a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:30:00 np0005601226 nova_compute[239456]: 2026-01-29 17:30:00.494 239460 DEBUG oslo_concurrency.lockutils [req-a3a701c5-d00a-49cc-88f4-759da1c632fb req-e17b65ff-c8dd-47ba-b007-9769c12a91fb 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "ff3dd15f-f585-4406-8c70-96be2a8945a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:30:00 np0005601226 nova_compute[239456]: 2026-01-29 17:30:00.495 239460 DEBUG nova.compute.manager [req-a3a701c5-d00a-49cc-88f4-759da1c632fb req-e17b65ff-c8dd-47ba-b007-9769c12a91fb 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ff3dd15f-f585-4406-8c70-96be2a8945a4] No waiting events found dispatching network-vif-plugged-3cc3f8fa-0d50-4ddf-b2f1-086d9c2e1f4c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 29 12:30:00 np0005601226 nova_compute[239456]: 2026-01-29 17:30:00.495 239460 WARNING nova.compute.manager [req-a3a701c5-d00a-49cc-88f4-759da1c632fb req-e17b65ff-c8dd-47ba-b007-9769c12a91fb 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ff3dd15f-f585-4406-8c70-96be2a8945a4] Received unexpected event network-vif-plugged-3cc3f8fa-0d50-4ddf-b2f1-086d9c2e1f4c for instance with vm_state deleted and task_state None.#033[00m
Jan 29 12:30:00 np0005601226 systemd[1]: Started libpod-conmon-92b05a1a5070afb94dc398e7b60db4ba3c65e276583fbcf1ec91328ef388f92c.scope.
Jan 29 12:30:00 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:30:00 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b89b84a899db8338daa55442f7c62816037840bb8fecb207d3e311133c87f651/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:30:00 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b89b84a899db8338daa55442f7c62816037840bb8fecb207d3e311133c87f651/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:30:00 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b89b84a899db8338daa55442f7c62816037840bb8fecb207d3e311133c87f651/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:30:00 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b89b84a899db8338daa55442f7c62816037840bb8fecb207d3e311133c87f651/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:30:00 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b89b84a899db8338daa55442f7c62816037840bb8fecb207d3e311133c87f651/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 12:30:00 np0005601226 podman[266313]: 2026-01-29 17:30:00.450482674 +0000 UTC m=+0.018134260 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:30:00 np0005601226 podman[266313]: 2026-01-29 17:30:00.566300112 +0000 UTC m=+0.133951718 container init 92b05a1a5070afb94dc398e7b60db4ba3c65e276583fbcf1ec91328ef388f92c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_aryabhata, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 29 12:30:00 np0005601226 podman[266313]: 2026-01-29 17:30:00.572156159 +0000 UTC m=+0.139807725 container start 92b05a1a5070afb94dc398e7b60db4ba3c65e276583fbcf1ec91328ef388f92c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_aryabhata, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:30:00 np0005601226 podman[266313]: 2026-01-29 17:30:00.575378676 +0000 UTC m=+0.143030272 container attach 92b05a1a5070afb94dc398e7b60db4ba3c65e276583fbcf1ec91328ef388f92c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_aryabhata, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 29 12:30:00 np0005601226 priceless_aryabhata[266330]: --> passed data devices: 0 physical, 3 LVM
Jan 29 12:30:00 np0005601226 priceless_aryabhata[266330]: --> All data devices are unavailable
Jan 29 12:30:00 np0005601226 systemd[1]: libpod-92b05a1a5070afb94dc398e7b60db4ba3c65e276583fbcf1ec91328ef388f92c.scope: Deactivated successfully.
Jan 29 12:30:00 np0005601226 conmon[266330]: conmon 92b05a1a5070afb94dc3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-92b05a1a5070afb94dc398e7b60db4ba3c65e276583fbcf1ec91328ef388f92c.scope/container/memory.events
Jan 29 12:30:00 np0005601226 podman[266313]: 2026-01-29 17:30:00.961969234 +0000 UTC m=+0.529620800 container died 92b05a1a5070afb94dc398e7b60db4ba3c65e276583fbcf1ec91328ef388f92c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_aryabhata, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 29 12:30:00 np0005601226 systemd[1]: var-lib-containers-storage-overlay-b89b84a899db8338daa55442f7c62816037840bb8fecb207d3e311133c87f651-merged.mount: Deactivated successfully.
Jan 29 12:30:00 np0005601226 podman[266313]: 2026-01-29 17:30:00.996631586 +0000 UTC m=+0.564283152 container remove 92b05a1a5070afb94dc398e7b60db4ba3c65e276583fbcf1ec91328ef388f92c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_aryabhata, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 29 12:30:01 np0005601226 systemd[1]: libpod-conmon-92b05a1a5070afb94dc398e7b60db4ba3c65e276583fbcf1ec91328ef388f92c.scope: Deactivated successfully.
Jan 29 12:30:01 np0005601226 podman[266424]: 2026-01-29 17:30:01.366053502 +0000 UTC m=+0.047009496 container create 279927ef118a2314d837753da9d995fb14aaacdf572418b8b6b0e86e3ec6af6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_cori, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 29 12:30:01 np0005601226 systemd[1]: Started libpod-conmon-279927ef118a2314d837753da9d995fb14aaacdf572418b8b6b0e86e3ec6af6c.scope.
Jan 29 12:30:01 np0005601226 podman[266424]: 2026-01-29 17:30:01.336763873 +0000 UTC m=+0.017719887 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:30:01 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:30:01 np0005601226 nova_compute[239456]: 2026-01-29 17:30:01.444 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:30:01 np0005601226 podman[266424]: 2026-01-29 17:30:01.463536956 +0000 UTC m=+0.144492980 container init 279927ef118a2314d837753da9d995fb14aaacdf572418b8b6b0e86e3ec6af6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 29 12:30:01 np0005601226 podman[266424]: 2026-01-29 17:30:01.470764861 +0000 UTC m=+0.151720855 container start 279927ef118a2314d837753da9d995fb14aaacdf572418b8b6b0e86e3ec6af6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 29 12:30:01 np0005601226 boring_cori[266441]: 167 167
Jan 29 12:30:01 np0005601226 systemd[1]: libpod-279927ef118a2314d837753da9d995fb14aaacdf572418b8b6b0e86e3ec6af6c.scope: Deactivated successfully.
Jan 29 12:30:01 np0005601226 podman[266424]: 2026-01-29 17:30:01.477971474 +0000 UTC m=+0.158927488 container attach 279927ef118a2314d837753da9d995fb14aaacdf572418b8b6b0e86e3ec6af6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_cori, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 29 12:30:01 np0005601226 podman[266424]: 2026-01-29 17:30:01.479382102 +0000 UTC m=+0.160338136 container died 279927ef118a2314d837753da9d995fb14aaacdf572418b8b6b0e86e3ec6af6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_cori, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Jan 29 12:30:01 np0005601226 systemd[1]: var-lib-containers-storage-overlay-8f2975b58d64e3bfa468bc0354679b8eb98640617b4e710269c3c1031be12f97-merged.mount: Deactivated successfully.
Jan 29 12:30:01 np0005601226 podman[266424]: 2026-01-29 17:30:01.532953534 +0000 UTC m=+0.213909528 container remove 279927ef118a2314d837753da9d995fb14aaacdf572418b8b6b0e86e3ec6af6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_cori, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 29 12:30:01 np0005601226 systemd[1]: libpod-conmon-279927ef118a2314d837753da9d995fb14aaacdf572418b8b6b0e86e3ec6af6c.scope: Deactivated successfully.
Jan 29 12:30:01 np0005601226 podman[266464]: 2026-01-29 17:30:01.694318579 +0000 UTC m=+0.042803414 container create 6f810118d7584d8869889632ebe75836e56117d763139f6acff97c49e98646bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_buck, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 29 12:30:01 np0005601226 systemd[1]: Started libpod-conmon-6f810118d7584d8869889632ebe75836e56117d763139f6acff97c49e98646bd.scope.
Jan 29 12:30:01 np0005601226 podman[266464]: 2026-01-29 17:30:01.672318907 +0000 UTC m=+0.020803762 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:30:01 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:30:01 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bc883c209f7aad799a8ea5f4b46849a7061602bafaaddb5100f2b7bbaae9adf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:30:01 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bc883c209f7aad799a8ea5f4b46849a7061602bafaaddb5100f2b7bbaae9adf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:30:01 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bc883c209f7aad799a8ea5f4b46849a7061602bafaaddb5100f2b7bbaae9adf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:30:01 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bc883c209f7aad799a8ea5f4b46849a7061602bafaaddb5100f2b7bbaae9adf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:30:01 np0005601226 podman[266464]: 2026-01-29 17:30:01.805883732 +0000 UTC m=+0.154368617 container init 6f810118d7584d8869889632ebe75836e56117d763139f6acff97c49e98646bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_buck, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 29 12:30:01 np0005601226 podman[266464]: 2026-01-29 17:30:01.811933445 +0000 UTC m=+0.160418280 container start 6f810118d7584d8869889632ebe75836e56117d763139f6acff97c49e98646bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_buck, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:30:01 np0005601226 podman[266464]: 2026-01-29 17:30:01.818143302 +0000 UTC m=+0.166628227 container attach 6f810118d7584d8869889632ebe75836e56117d763139f6acff97c49e98646bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 29 12:30:01 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e415 do_prune osdmap full prune enabled
Jan 29 12:30:01 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e416 e416: 3 total, 3 up, 3 in
Jan 29 12:30:01 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e416: 3 total, 3 up, 3 in
Jan 29 12:30:01 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1618: 305 pgs: 305 active+clean; 367 MiB data, 594 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 23 KiB/s wr, 42 op/s
Jan 29 12:30:02 np0005601226 adoring_buck[266480]: {
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:    "0": [
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:        {
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:            "devices": [
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:                "/dev/loop3"
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:            ],
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:            "lv_name": "ceph_lv0",
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:            "lv_size": "21470642176",
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:            "lv_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:            "name": "ceph_lv0",
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:            "tags": {
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:                "ceph.block_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:                "ceph.cluster_name": "ceph",
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:                "ceph.crush_device_class": "",
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:                "ceph.encrypted": "0",
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:                "ceph.objectstore": "bluestore",
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:                "ceph.osd_fsid": "0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1",
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:                "ceph.osd_id": "0",
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:                "ceph.type": "block",
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:                "ceph.vdo": "0",
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:                "ceph.with_tpm": "0"
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:            },
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:            "type": "block",
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:            "vg_name": "ceph_vg0"
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:        }
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:    ],
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:    "1": [
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:        {
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:            "devices": [
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:                "/dev/loop4"
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:            ],
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:            "lv_name": "ceph_lv1",
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:            "lv_size": "21470642176",
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b59b9ee3-7bef-4274-a8bf-0f9cce011ae7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:            "lv_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:            "name": "ceph_lv1",
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:            "tags": {
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:                "ceph.block_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:                "ceph.cluster_name": "ceph",
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:                "ceph.crush_device_class": "",
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:                "ceph.encrypted": "0",
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:                "ceph.objectstore": "bluestore",
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:                "ceph.osd_fsid": "b59b9ee3-7bef-4274-a8bf-0f9cce011ae7",
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:                "ceph.osd_id": "1",
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:                "ceph.type": "block",
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:                "ceph.vdo": "0",
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:                "ceph.with_tpm": "0"
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:            },
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:            "type": "block",
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:            "vg_name": "ceph_vg1"
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:        }
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:    ],
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:    "2": [
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:        {
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:            "devices": [
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:                "/dev/loop5"
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:            ],
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:            "lv_name": "ceph_lv2",
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:            "lv_size": "21470642176",
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=791d3808-828d-4f85-a3de-28df49f6a6ef,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:            "lv_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:            "name": "ceph_lv2",
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:            "tags": {
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:                "ceph.block_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:                "ceph.cluster_name": "ceph",
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:                "ceph.crush_device_class": "",
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:                "ceph.encrypted": "0",
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:                "ceph.objectstore": "bluestore",
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:                "ceph.osd_fsid": "791d3808-828d-4f85-a3de-28df49f6a6ef",
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:                "ceph.osd_id": "2",
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:                "ceph.type": "block",
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:                "ceph.vdo": "0",
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:                "ceph.with_tpm": "0"
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:            },
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:            "type": "block",
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:            "vg_name": "ceph_vg2"
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:        }
Jan 29 12:30:02 np0005601226 adoring_buck[266480]:    ]
Jan 29 12:30:02 np0005601226 adoring_buck[266480]: }
Jan 29 12:30:02 np0005601226 systemd[1]: libpod-6f810118d7584d8869889632ebe75836e56117d763139f6acff97c49e98646bd.scope: Deactivated successfully.
Jan 29 12:30:02 np0005601226 podman[266464]: 2026-01-29 17:30:02.081866671 +0000 UTC m=+0.430351516 container died 6f810118d7584d8869889632ebe75836e56117d763139f6acff97c49e98646bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_buck, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:30:02 np0005601226 systemd[1]: var-lib-containers-storage-overlay-2bc883c209f7aad799a8ea5f4b46849a7061602bafaaddb5100f2b7bbaae9adf-merged.mount: Deactivated successfully.
Jan 29 12:30:02 np0005601226 podman[266464]: 2026-01-29 17:30:02.128757384 +0000 UTC m=+0.477242259 container remove 6f810118d7584d8869889632ebe75836e56117d763139f6acff97c49e98646bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_buck, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 12:30:02 np0005601226 systemd[1]: libpod-conmon-6f810118d7584d8869889632ebe75836e56117d763139f6acff97c49e98646bd.scope: Deactivated successfully.
Jan 29 12:30:02 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:30:02 np0005601226 nova_compute[239456]: 2026-01-29 17:30:02.166 239460 DEBUG oslo_concurrency.lockutils [None req-ee85464e-efc6-41d8-a59c-64d29c8e66a9 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Acquiring lock "60a233ad-302a-45ea-a78c-31ff4f06919e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:30:02 np0005601226 nova_compute[239456]: 2026-01-29 17:30:02.167 239460 DEBUG oslo_concurrency.lockutils [None req-ee85464e-efc6-41d8-a59c-64d29c8e66a9 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "60a233ad-302a-45ea-a78c-31ff4f06919e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:30:02 np0005601226 nova_compute[239456]: 2026-01-29 17:30:02.168 239460 DEBUG oslo_concurrency.lockutils [None req-ee85464e-efc6-41d8-a59c-64d29c8e66a9 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Acquiring lock "60a233ad-302a-45ea-a78c-31ff4f06919e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:30:02 np0005601226 nova_compute[239456]: 2026-01-29 17:30:02.168 239460 DEBUG oslo_concurrency.lockutils [None req-ee85464e-efc6-41d8-a59c-64d29c8e66a9 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "60a233ad-302a-45ea-a78c-31ff4f06919e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:30:02 np0005601226 nova_compute[239456]: 2026-01-29 17:30:02.168 239460 DEBUG oslo_concurrency.lockutils [None req-ee85464e-efc6-41d8-a59c-64d29c8e66a9 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "60a233ad-302a-45ea-a78c-31ff4f06919e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:30:02 np0005601226 nova_compute[239456]: 2026-01-29 17:30:02.169 239460 INFO nova.compute.manager [None req-ee85464e-efc6-41d8-a59c-64d29c8e66a9 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] Terminating instance#033[00m
Jan 29 12:30:02 np0005601226 nova_compute[239456]: 2026-01-29 17:30:02.171 239460 DEBUG nova.compute.manager [None req-ee85464e-efc6-41d8-a59c-64d29c8e66a9 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 29 12:30:02 np0005601226 kernel: tape169cad0-27 (unregistering): left promiscuous mode
Jan 29 12:30:02 np0005601226 NetworkManager[49020]: <info>  [1769707802.2273] device (tape169cad0-27): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 29 12:30:02 np0005601226 ovn_controller[145556]: 2026-01-29T17:30:02Z|00181|binding|INFO|Releasing lport e169cad0-27e6-4099-aed1-80994ec6b573 from this chassis (sb_readonly=0)
Jan 29 12:30:02 np0005601226 ovn_controller[145556]: 2026-01-29T17:30:02Z|00182|binding|INFO|Setting lport e169cad0-27e6-4099-aed1-80994ec6b573 down in Southbound
Jan 29 12:30:02 np0005601226 nova_compute[239456]: 2026-01-29 17:30:02.230 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:30:02 np0005601226 ovn_controller[145556]: 2026-01-29T17:30:02Z|00183|binding|INFO|Removing iface tape169cad0-27 ovn-installed in OVS
Jan 29 12:30:02 np0005601226 nova_compute[239456]: 2026-01-29 17:30:02.232 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:30:02 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:02.243 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:eb:63:33 10.100.0.5'], port_security=['fa:16:3e:eb:63:33 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '60a233ad-302a-45ea-a78c-31ff4f06919e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3c08c304-2b32-4b44-ac2b-279bb8b2403b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '420f46ae230d4c529afe366a1b780921', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bfa2c706-6c22-44dc-83b9-263dd9f118c4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.240'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=43b4da70-6867-4e05-b172-1e52c878ce1d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], logical_port=e169cad0-27e6-4099-aed1-80994ec6b573) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:30:02 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:02.245 155625 INFO neutron.agent.ovn.metadata.agent [-] Port e169cad0-27e6-4099-aed1-80994ec6b573 in datapath 3c08c304-2b32-4b44-ac2b-279bb8b2403b unbound from our chassis#033[00m
Jan 29 12:30:02 np0005601226 nova_compute[239456]: 2026-01-29 17:30:02.247 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:30:02 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:02.248 155625 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3c08c304-2b32-4b44-ac2b-279bb8b2403b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 29 12:30:02 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:02.249 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[c68fbc31-9dac-49e6-8c9b-eaf09c6897f1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:30:02 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:02.250 155625 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b namespace which is not needed anymore#033[00m
Jan 29 12:30:02 np0005601226 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000011.scope: Deactivated successfully.
Jan 29 12:30:02 np0005601226 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000011.scope: Consumed 14.126s CPU time.
Jan 29 12:30:02 np0005601226 systemd-machined[207561]: Machine qemu-17-instance-00000011 terminated.
Jan 29 12:30:02 np0005601226 neutron-haproxy-ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b[264769]: [NOTICE]   (264773) : haproxy version is 2.8.14-c23fe91
Jan 29 12:30:02 np0005601226 neutron-haproxy-ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b[264769]: [NOTICE]   (264773) : path to executable is /usr/sbin/haproxy
Jan 29 12:30:02 np0005601226 neutron-haproxy-ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b[264769]: [WARNING]  (264773) : Exiting Master process...
Jan 29 12:30:02 np0005601226 neutron-haproxy-ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b[264769]: [ALERT]    (264773) : Current worker (264775) exited with code 143 (Terminated)
Jan 29 12:30:02 np0005601226 neutron-haproxy-ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b[264769]: [WARNING]  (264773) : All workers exited. Exiting... (0)
Jan 29 12:30:02 np0005601226 systemd[1]: libpod-e10f16e657962d9bdea608b4b27be59e326409afcb19bb286be4901127da9e52.scope: Deactivated successfully.
Jan 29 12:30:02 np0005601226 podman[266576]: 2026-01-29 17:30:02.384787257 +0000 UTC m=+0.042927237 container died e10f16e657962d9bdea608b4b27be59e326409afcb19bb286be4901127da9e52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:30:02 np0005601226 nova_compute[239456]: 2026-01-29 17:30:02.399 239460 INFO nova.virt.libvirt.driver [-] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] Instance destroyed successfully.#033[00m
Jan 29 12:30:02 np0005601226 nova_compute[239456]: 2026-01-29 17:30:02.400 239460 DEBUG nova.objects.instance [None req-ee85464e-efc6-41d8-a59c-64d29c8e66a9 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lazy-loading 'resources' on Instance uuid 60a233ad-302a-45ea-a78c-31ff4f06919e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:30:02 np0005601226 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e10f16e657962d9bdea608b4b27be59e326409afcb19bb286be4901127da9e52-userdata-shm.mount: Deactivated successfully.
Jan 29 12:30:02 np0005601226 systemd[1]: var-lib-containers-storage-overlay-19575e101d8a86947f5b761f91dc5304ec1b058be6521e87bb89a5d24fa6d254-merged.mount: Deactivated successfully.
Jan 29 12:30:02 np0005601226 podman[266576]: 2026-01-29 17:30:02.425799801 +0000 UTC m=+0.083939781 container cleanup e10f16e657962d9bdea608b4b27be59e326409afcb19bb286be4901127da9e52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 29 12:30:02 np0005601226 systemd[1]: libpod-conmon-e10f16e657962d9bdea608b4b27be59e326409afcb19bb286be4901127da9e52.scope: Deactivated successfully.
Jan 29 12:30:02 np0005601226 podman[266615]: 2026-01-29 17:30:02.48039332 +0000 UTC m=+0.037584423 container remove e10f16e657962d9bdea608b4b27be59e326409afcb19bb286be4901127da9e52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 29 12:30:02 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:02.484 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[6f30a1ef-b54b-49b3-a5ff-fe05b43268c9]: (4, ('Thu Jan 29 05:30:02 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b (e10f16e657962d9bdea608b4b27be59e326409afcb19bb286be4901127da9e52)\ne10f16e657962d9bdea608b4b27be59e326409afcb19bb286be4901127da9e52\nThu Jan 29 05:30:02 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b (e10f16e657962d9bdea608b4b27be59e326409afcb19bb286be4901127da9e52)\ne10f16e657962d9bdea608b4b27be59e326409afcb19bb286be4901127da9e52\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:30:02 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:02.486 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[141ed836-6543-41fe-b54d-fd45f46b8375]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:30:02 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:02.487 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c08c304-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:30:02 np0005601226 nova_compute[239456]: 2026-01-29 17:30:02.488 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:30:02 np0005601226 kernel: tap3c08c304-20: left promiscuous mode
Jan 29 12:30:02 np0005601226 nova_compute[239456]: 2026-01-29 17:30:02.500 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:30:02 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:02.504 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[b8d99317-c610-4835-b70d-cd4c128e589a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:30:02 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:02.529 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[8e966028-ba38-423c-84dc-83830b16e835]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:30:02 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:02.531 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[0b4540d2-a53a-48ca-87af-7c7bec0d2bcc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:30:02 np0005601226 nova_compute[239456]: 2026-01-29 17:30:02.531 239460 DEBUG nova.virt.libvirt.vif [None req-ee85464e-efc6-41d8-a59c-64d29c8e66a9 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-29T17:28:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-volume-backed-server-1518611244',display_name='tempest-TestVolumeBootPattern-volume-backed-server-1518611244',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-volume-backed-server-1518611244',id=17,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKySA50EaxzQB5p6K+5RoO+1u58vRcRzzkaFVlh7AgCu5iz7hwJw5cRUXS90xOqapy/lUThdOxCeLtsZuFMFUACxxtFu0BK2G+J6wGByeMurwKrEgC8uCS+2N5LgLkKS8Q==',key_name='tempest-keypair-2116352229',keypairs=<?>,launch_index=0,launched_at=2026-01-29T17:28:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='420f46ae230d4c529afe366a1b780921',ramdisk_id='',reservation_id='r-mga8rw6u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-1871389491',owner_user_name='tempest-TestVolumeBootPattern-1871389491-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-29T17:28:44Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3901089a059c4bdb8d0497398873d2f1',uuid=60a233ad-302a-45ea-a78c-31ff4f06919e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e169cad0-27e6-4099-aed1-80994ec6b573", "address": "fa:16:3e:eb:63:33", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape169cad0-27", "ovs_interfaceid": "e169cad0-27e6-4099-aed1-80994ec6b573", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 29 12:30:02 np0005601226 nova_compute[239456]: 2026-01-29 17:30:02.532 239460 DEBUG nova.network.os_vif_util [None req-ee85464e-efc6-41d8-a59c-64d29c8e66a9 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Converting VIF {"id": "e169cad0-27e6-4099-aed1-80994ec6b573", "address": "fa:16:3e:eb:63:33", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape169cad0-27", "ovs_interfaceid": "e169cad0-27e6-4099-aed1-80994ec6b573", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 29 12:30:02 np0005601226 nova_compute[239456]: 2026-01-29 17:30:02.532 239460 DEBUG nova.network.os_vif_util [None req-ee85464e-efc6-41d8-a59c-64d29c8e66a9 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:eb:63:33,bridge_name='br-int',has_traffic_filtering=True,id=e169cad0-27e6-4099-aed1-80994ec6b573,network=Network(3c08c304-2b32-4b44-ac2b-279bb8b2403b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape169cad0-27') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 29 12:30:02 np0005601226 nova_compute[239456]: 2026-01-29 17:30:02.533 239460 DEBUG os_vif [None req-ee85464e-efc6-41d8-a59c-64d29c8e66a9 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:eb:63:33,bridge_name='br-int',has_traffic_filtering=True,id=e169cad0-27e6-4099-aed1-80994ec6b573,network=Network(3c08c304-2b32-4b44-ac2b-279bb8b2403b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape169cad0-27') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 29 12:30:02 np0005601226 nova_compute[239456]: 2026-01-29 17:30:02.534 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:30:02 np0005601226 nova_compute[239456]: 2026-01-29 17:30:02.534 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape169cad0-27, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:30:02 np0005601226 nova_compute[239456]: 2026-01-29 17:30:02.535 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:30:02 np0005601226 nova_compute[239456]: 2026-01-29 17:30:02.537 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:30:02 np0005601226 nova_compute[239456]: 2026-01-29 17:30:02.540 239460 INFO os_vif [None req-ee85464e-efc6-41d8-a59c-64d29c8e66a9 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:eb:63:33,bridge_name='br-int',has_traffic_filtering=True,id=e169cad0-27e6-4099-aed1-80994ec6b573,network=Network(3c08c304-2b32-4b44-ac2b-279bb8b2403b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape169cad0-27')#033[00m
Jan 29 12:30:02 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:02.548 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[ffabed0f-a1bb-4c63-b94a-329a955765f4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 507449, 'reachable_time': 35503, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266651, 'error': None, 'target': 'ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:30:02 np0005601226 systemd[1]: run-netns-ovnmeta\x2d3c08c304\x2d2b32\x2d4b44\x2dac2b\x2d279bb8b2403b.mount: Deactivated successfully.
Jan 29 12:30:02 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:02.550 156164 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 29 12:30:02 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:02.551 156164 DEBUG oslo.privsep.daemon [-] privsep: reply[d58ec3a6-3764-4098-bb82-72e084203b7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:30:02 np0005601226 podman[266644]: 2026-01-29 17:30:02.569421688 +0000 UTC m=+0.039693961 container create e13d3c5f597f305e82aae535f9ca3216c87a4858082c33138a96ab50e64e1774 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:30:02 np0005601226 systemd[1]: Started libpod-conmon-e13d3c5f597f305e82aae535f9ca3216c87a4858082c33138a96ab50e64e1774.scope.
Jan 29 12:30:02 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:30:02 np0005601226 podman[266644]: 2026-01-29 17:30:02.631968681 +0000 UTC m=+0.102240964 container init e13d3c5f597f305e82aae535f9ca3216c87a4858082c33138a96ab50e64e1774 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_pascal, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:30:02 np0005601226 podman[266644]: 2026-01-29 17:30:02.638418935 +0000 UTC m=+0.108691198 container start e13d3c5f597f305e82aae535f9ca3216c87a4858082c33138a96ab50e64e1774 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 29 12:30:02 np0005601226 stoic_pascal[266679]: 167 167
Jan 29 12:30:02 np0005601226 systemd[1]: libpod-e13d3c5f597f305e82aae535f9ca3216c87a4858082c33138a96ab50e64e1774.scope: Deactivated successfully.
Jan 29 12:30:02 np0005601226 conmon[266679]: conmon e13d3c5f597f305e82aa <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e13d3c5f597f305e82aae535f9ca3216c87a4858082c33138a96ab50e64e1774.scope/container/memory.events
Jan 29 12:30:02 np0005601226 podman[266644]: 2026-01-29 17:30:02.644761415 +0000 UTC m=+0.115033688 container attach e13d3c5f597f305e82aae535f9ca3216c87a4858082c33138a96ab50e64e1774 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_pascal, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:30:02 np0005601226 podman[266644]: 2026-01-29 17:30:02.645042333 +0000 UTC m=+0.115314606 container died e13d3c5f597f305e82aae535f9ca3216c87a4858082c33138a96ab50e64e1774 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_pascal, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 29 12:30:02 np0005601226 podman[266644]: 2026-01-29 17:30:02.555572265 +0000 UTC m=+0.025844568 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:30:02 np0005601226 systemd[1]: var-lib-containers-storage-overlay-1c979c851fb4160dde28ae52360301ab4da4cc9615984149e422271692cb5095-merged.mount: Deactivated successfully.
Jan 29 12:30:02 np0005601226 nova_compute[239456]: 2026-01-29 17:30:02.701 239460 INFO nova.virt.libvirt.driver [None req-ee85464e-efc6-41d8-a59c-64d29c8e66a9 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] Deleting instance files /var/lib/nova/instances/60a233ad-302a-45ea-a78c-31ff4f06919e_del#033[00m
Jan 29 12:30:02 np0005601226 nova_compute[239456]: 2026-01-29 17:30:02.702 239460 INFO nova.virt.libvirt.driver [None req-ee85464e-efc6-41d8-a59c-64d29c8e66a9 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] Deletion of /var/lib/nova/instances/60a233ad-302a-45ea-a78c-31ff4f06919e_del complete#033[00m
Jan 29 12:30:02 np0005601226 podman[266644]: 2026-01-29 17:30:02.705046488 +0000 UTC m=+0.175318751 container remove e13d3c5f597f305e82aae535f9ca3216c87a4858082c33138a96ab50e64e1774 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_pascal, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 29 12:30:02 np0005601226 systemd[1]: libpod-conmon-e13d3c5f597f305e82aae535f9ca3216c87a4858082c33138a96ab50e64e1774.scope: Deactivated successfully.
Jan 29 12:30:02 np0005601226 nova_compute[239456]: 2026-01-29 17:30:02.764 239460 INFO nova.compute.manager [None req-ee85464e-efc6-41d8-a59c-64d29c8e66a9 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] Took 0.59 seconds to destroy the instance on the hypervisor.#033[00m
Jan 29 12:30:02 np0005601226 nova_compute[239456]: 2026-01-29 17:30:02.765 239460 DEBUG oslo.service.loopingcall [None req-ee85464e-efc6-41d8-a59c-64d29c8e66a9 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 29 12:30:02 np0005601226 nova_compute[239456]: 2026-01-29 17:30:02.765 239460 DEBUG nova.compute.manager [-] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 29 12:30:02 np0005601226 nova_compute[239456]: 2026-01-29 17:30:02.765 239460 DEBUG nova.network.neutron [-] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 29 12:30:02 np0005601226 podman[266707]: 2026-01-29 17:30:02.827055303 +0000 UTC m=+0.038042176 container create 311cc1481650d8a238ae42aeab5d5af412946e5606556b80373771f59819f8a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_wilbur, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 29 12:30:02 np0005601226 systemd[1]: Started libpod-conmon-311cc1481650d8a238ae42aeab5d5af412946e5606556b80373771f59819f8a1.scope.
Jan 29 12:30:02 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:30:02 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d995419f0a2673e79017f65173f5a2e5cad2ef65ba18ac72705afb37990d1b1d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:30:02 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d995419f0a2673e79017f65173f5a2e5cad2ef65ba18ac72705afb37990d1b1d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:30:02 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d995419f0a2673e79017f65173f5a2e5cad2ef65ba18ac72705afb37990d1b1d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:30:02 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d995419f0a2673e79017f65173f5a2e5cad2ef65ba18ac72705afb37990d1b1d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:30:02 np0005601226 podman[266707]: 2026-01-29 17:30:02.892113744 +0000 UTC m=+0.103100637 container init 311cc1481650d8a238ae42aeab5d5af412946e5606556b80373771f59819f8a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:30:02 np0005601226 podman[266707]: 2026-01-29 17:30:02.90050332 +0000 UTC m=+0.111490193 container start 311cc1481650d8a238ae42aeab5d5af412946e5606556b80373771f59819f8a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_wilbur, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:30:02 np0005601226 podman[266707]: 2026-01-29 17:30:02.90310721 +0000 UTC m=+0.114094103 container attach 311cc1481650d8a238ae42aeab5d5af412946e5606556b80373771f59819f8a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:30:02 np0005601226 podman[266707]: 2026-01-29 17:30:02.809597173 +0000 UTC m=+0.020584056 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:30:03 np0005601226 nova_compute[239456]: 2026-01-29 17:30:03.361 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:30:03 np0005601226 lvm[266799]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 12:30:03 np0005601226 lvm[266799]: VG ceph_vg0 finished
Jan 29 12:30:03 np0005601226 lvm[266801]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 12:30:03 np0005601226 lvm[266801]: VG ceph_vg1 finished
Jan 29 12:30:03 np0005601226 lvm[266802]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 12:30:03 np0005601226 lvm[266802]: VG ceph_vg2 finished
Jan 29 12:30:03 np0005601226 sad_wilbur[266724]: {}
Jan 29 12:30:03 np0005601226 podman[266707]: 2026-01-29 17:30:03.635579769 +0000 UTC m=+0.846566642 container died 311cc1481650d8a238ae42aeab5d5af412946e5606556b80373771f59819f8a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_wilbur, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 29 12:30:03 np0005601226 systemd[1]: libpod-311cc1481650d8a238ae42aeab5d5af412946e5606556b80373771f59819f8a1.scope: Deactivated successfully.
Jan 29 12:30:03 np0005601226 systemd[1]: libpod-311cc1481650d8a238ae42aeab5d5af412946e5606556b80373771f59819f8a1.scope: Consumed 1.017s CPU time.
Jan 29 12:30:03 np0005601226 systemd[1]: var-lib-containers-storage-overlay-d995419f0a2673e79017f65173f5a2e5cad2ef65ba18ac72705afb37990d1b1d-merged.mount: Deactivated successfully.
Jan 29 12:30:03 np0005601226 podman[266707]: 2026-01-29 17:30:03.670703114 +0000 UTC m=+0.881689997 container remove 311cc1481650d8a238ae42aeab5d5af412946e5606556b80373771f59819f8a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:30:03 np0005601226 systemd[1]: libpod-conmon-311cc1481650d8a238ae42aeab5d5af412946e5606556b80373771f59819f8a1.scope: Deactivated successfully.
Jan 29 12:30:03 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 12:30:03 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:30:03 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 12:30:03 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:30:03 np0005601226 nova_compute[239456]: 2026-01-29 17:30:03.792 239460 DEBUG nova.network.neutron [-] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:30:03 np0005601226 nova_compute[239456]: 2026-01-29 17:30:03.807 239460 INFO nova.compute.manager [-] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] Took 1.04 seconds to deallocate network for instance.#033[00m
Jan 29 12:30:03 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1619: 305 pgs: 305 active+clean; 362 MiB data, 594 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 19 KiB/s wr, 52 op/s
Jan 29 12:30:03 np0005601226 nova_compute[239456]: 2026-01-29 17:30:03.988 239460 DEBUG nova.compute.manager [req-d81b1f95-9bcf-461c-a94e-67564ae72da7 req-38d89205-5235-4379-9e9d-ca27b8d73fce 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] Received event network-vif-deleted-e169cad0-27e6-4099-aed1-80994ec6b573 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:30:04 np0005601226 nova_compute[239456]: 2026-01-29 17:30:04.148 239460 INFO nova.compute.manager [None req-ee85464e-efc6-41d8-a59c-64d29c8e66a9 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] Took 0.34 seconds to detach 1 volumes for instance.#033[00m
Jan 29 12:30:04 np0005601226 nova_compute[239456]: 2026-01-29 17:30:04.150 239460 DEBUG nova.compute.manager [None req-ee85464e-efc6-41d8-a59c-64d29c8e66a9 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] Deleting volume: c7d61ea6-ae5a-4894-8166-55238a1d384e _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217#033[00m
Jan 29 12:30:04 np0005601226 nova_compute[239456]: 2026-01-29 17:30:04.404 239460 DEBUG oslo_concurrency.lockutils [None req-ee85464e-efc6-41d8-a59c-64d29c8e66a9 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:30:04 np0005601226 nova_compute[239456]: 2026-01-29 17:30:04.405 239460 DEBUG oslo_concurrency.lockutils [None req-ee85464e-efc6-41d8-a59c-64d29c8e66a9 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:30:04 np0005601226 nova_compute[239456]: 2026-01-29 17:30:04.496 239460 DEBUG oslo_concurrency.processutils [None req-ee85464e-efc6-41d8-a59c-64d29c8e66a9 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:30:04 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:30:04 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:30:04 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:30:04 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1143045556' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:30:04 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:30:04 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1143045556' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:30:05 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:30:05 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/822458281' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:30:05 np0005601226 nova_compute[239456]: 2026-01-29 17:30:05.053 239460 DEBUG oslo_concurrency.processutils [None req-ee85464e-efc6-41d8-a59c-64d29c8e66a9 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:30:05 np0005601226 nova_compute[239456]: 2026-01-29 17:30:05.059 239460 DEBUG nova.compute.provider_tree [None req-ee85464e-efc6-41d8-a59c-64d29c8e66a9 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:30:05 np0005601226 nova_compute[239456]: 2026-01-29 17:30:05.077 239460 DEBUG nova.scheduler.client.report [None req-ee85464e-efc6-41d8-a59c-64d29c8e66a9 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:30:05 np0005601226 nova_compute[239456]: 2026-01-29 17:30:05.098 239460 DEBUG oslo_concurrency.lockutils [None req-ee85464e-efc6-41d8-a59c-64d29c8e66a9 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.693s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:30:05 np0005601226 nova_compute[239456]: 2026-01-29 17:30:05.123 239460 INFO nova.scheduler.client.report [None req-ee85464e-efc6-41d8-a59c-64d29c8e66a9 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Deleted allocations for instance 60a233ad-302a-45ea-a78c-31ff4f06919e#033[00m
Jan 29 12:30:05 np0005601226 nova_compute[239456]: 2026-01-29 17:30:05.201 239460 DEBUG oslo_concurrency.lockutils [None req-ee85464e-efc6-41d8-a59c-64d29c8e66a9 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "60a233ad-302a-45ea-a78c-31ff4f06919e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.034s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:30:05 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1620: 305 pgs: 305 active+clean; 305 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 65 KiB/s rd, 17 KiB/s wr, 89 op/s
Jan 29 12:30:06 np0005601226 nova_compute[239456]: 2026-01-29 17:30:06.481 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:30:07 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:30:07 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e416 do_prune osdmap full prune enabled
Jan 29 12:30:07 np0005601226 nova_compute[239456]: 2026-01-29 17:30:07.247 239460 DEBUG oslo_concurrency.lockutils [None req-8d46a486-9eea-4653-9308-b470494bd80c 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Acquiring lock "d8a8daad-7d66-42a9-b701-b191ca68564e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:30:07 np0005601226 nova_compute[239456]: 2026-01-29 17:30:07.248 239460 DEBUG oslo_concurrency.lockutils [None req-8d46a486-9eea-4653-9308-b470494bd80c 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Lock "d8a8daad-7d66-42a9-b701-b191ca68564e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:30:07 np0005601226 nova_compute[239456]: 2026-01-29 17:30:07.248 239460 DEBUG oslo_concurrency.lockutils [None req-8d46a486-9eea-4653-9308-b470494bd80c 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Acquiring lock "d8a8daad-7d66-42a9-b701-b191ca68564e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:30:07 np0005601226 nova_compute[239456]: 2026-01-29 17:30:07.248 239460 DEBUG oslo_concurrency.lockutils [None req-8d46a486-9eea-4653-9308-b470494bd80c 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Lock "d8a8daad-7d66-42a9-b701-b191ca68564e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:30:07 np0005601226 nova_compute[239456]: 2026-01-29 17:30:07.249 239460 DEBUG oslo_concurrency.lockutils [None req-8d46a486-9eea-4653-9308-b470494bd80c 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Lock "d8a8daad-7d66-42a9-b701-b191ca68564e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:30:07 np0005601226 nova_compute[239456]: 2026-01-29 17:30:07.250 239460 INFO nova.compute.manager [None req-8d46a486-9eea-4653-9308-b470494bd80c 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] Terminating instance#033[00m
Jan 29 12:30:07 np0005601226 nova_compute[239456]: 2026-01-29 17:30:07.251 239460 DEBUG nova.compute.manager [None req-8d46a486-9eea-4653-9308-b470494bd80c 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 29 12:30:07 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e417 e417: 3 total, 3 up, 3 in
Jan 29 12:30:07 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e417: 3 total, 3 up, 3 in
Jan 29 12:30:07 np0005601226 nova_compute[239456]: 2026-01-29 17:30:07.440 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:30:07 np0005601226 nova_compute[239456]: 2026-01-29 17:30:07.570 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:30:07 np0005601226 kernel: tape0301c38-d2 (unregistering): left promiscuous mode
Jan 29 12:30:07 np0005601226 NetworkManager[49020]: <info>  [1769707807.8109] device (tape0301c38-d2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 29 12:30:07 np0005601226 ovn_controller[145556]: 2026-01-29T17:30:07Z|00184|binding|INFO|Releasing lport e0301c38-d2c5-4766-8243-5fb16ad5b084 from this chassis (sb_readonly=0)
Jan 29 12:30:07 np0005601226 ovn_controller[145556]: 2026-01-29T17:30:07Z|00185|binding|INFO|Setting lport e0301c38-d2c5-4766-8243-5fb16ad5b084 down in Southbound
Jan 29 12:30:07 np0005601226 nova_compute[239456]: 2026-01-29 17:30:07.815 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:30:07 np0005601226 ovn_controller[145556]: 2026-01-29T17:30:07Z|00186|binding|INFO|Removing iface tape0301c38-d2 ovn-installed in OVS
Jan 29 12:30:07 np0005601226 nova_compute[239456]: 2026-01-29 17:30:07.819 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:30:07 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:07.823 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:61:34:a1 10.100.0.8'], port_security=['fa:16:3e:61:34:a1 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'd8a8daad-7d66-42a9-b701-b191ca68564e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-25cf1715-f178-4f65-be7c-cf203c28f072', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c74297072cc041019fc7ff4bff1a0f08', 'neutron:revision_number': '4', 'neutron:security_group_ids': '45c1bbdb-777c-4906-ac59-7f4e97f55f2c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.224'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3018d9f1-e4b1-490d-94c7-3ffd5dd36627, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], logical_port=e0301c38-d2c5-4766-8243-5fb16ad5b084) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:30:07 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:07.824 155625 INFO neutron.agent.ovn.metadata.agent [-] Port e0301c38-d2c5-4766-8243-5fb16ad5b084 in datapath 25cf1715-f178-4f65-be7c-cf203c28f072 unbound from our chassis#033[00m
Jan 29 12:30:07 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:07.826 155625 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 25cf1715-f178-4f65-be7c-cf203c28f072, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 29 12:30:07 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:07.827 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[3274ccda-ef45-4abc-b4da-efb8abb45015]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:30:07 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:07.827 155625 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072 namespace which is not needed anymore#033[00m
Jan 29 12:30:07 np0005601226 nova_compute[239456]: 2026-01-29 17:30:07.832 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:30:07 np0005601226 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000013.scope: Deactivated successfully.
Jan 29 12:30:07 np0005601226 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000013.scope: Consumed 17.999s CPU time.
Jan 29 12:30:07 np0005601226 systemd-machined[207561]: Machine qemu-19-instance-00000013 terminated.
Jan 29 12:30:07 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1622: 305 pgs: 305 active+clean; 305 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 69 KiB/s rd, 6.1 KiB/s wr, 95 op/s
Jan 29 12:30:08 np0005601226 nova_compute[239456]: 2026-01-29 17:30:08.065 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:30:08 np0005601226 nova_compute[239456]: 2026-01-29 17:30:08.068 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:30:08 np0005601226 nova_compute[239456]: 2026-01-29 17:30:08.080 239460 INFO nova.virt.libvirt.driver [-] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] Instance destroyed successfully.#033[00m
Jan 29 12:30:08 np0005601226 nova_compute[239456]: 2026-01-29 17:30:08.080 239460 DEBUG nova.objects.instance [None req-8d46a486-9eea-4653-9308-b470494bd80c 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Lazy-loading 'resources' on Instance uuid d8a8daad-7d66-42a9-b701-b191ca68564e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:30:08 np0005601226 nova_compute[239456]: 2026-01-29 17:30:08.096 239460 DEBUG nova.virt.libvirt.vif [None req-8d46a486-9eea-4653-9308-b470494bd80c 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-29T17:29:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-394952439',display_name='tempest-TransferEncryptedVolumeTest-server-394952439',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-394952439',id=19,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDgD45Dm8MtAL32WS9smIaOmM8jSZyGBgHt0KuuP4tAN+PaFbPD2gY+bvOWoixBRmKRVNeRJWxYw4x1d/JqSF+Q3lf37438lc/Bafac9K9BPV+ZkjGBum9rZonwt+cLWAQ==',key_name='tempest-TransferEncryptedVolumeTest-1822701009',keypairs=<?>,launch_index=0,launched_at=2026-01-29T17:29:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c74297072cc041019fc7ff4bff1a0f08',ramdisk_id='',reservation_id='r-q835wci7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TransferEncryptedVolumeTest-1262552887',owner_user_name='tempest-TransferEncryptedVolumeTest-1262552887-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-29T17:29:31Z,user_data=None,user_id='4f278bc1afe946ca991a0203a74c5a7f',uuid=d8a8daad-7d66-42a9-b701-b191ca68564e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e0301c38-d2c5-4766-8243-5fb16ad5b084", "address": "fa:16:3e:61:34:a1", "network": {"id": "25cf1715-f178-4f65-be7c-cf203c28f072", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-516658087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c74297072cc041019fc7ff4bff1a0f08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0301c38-d2", "ovs_interfaceid": "e0301c38-d2c5-4766-8243-5fb16ad5b084", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 29 12:30:08 np0005601226 nova_compute[239456]: 2026-01-29 17:30:08.096 239460 DEBUG nova.network.os_vif_util [None req-8d46a486-9eea-4653-9308-b470494bd80c 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Converting VIF {"id": "e0301c38-d2c5-4766-8243-5fb16ad5b084", "address": "fa:16:3e:61:34:a1", "network": {"id": "25cf1715-f178-4f65-be7c-cf203c28f072", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-516658087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c74297072cc041019fc7ff4bff1a0f08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0301c38-d2", "ovs_interfaceid": "e0301c38-d2c5-4766-8243-5fb16ad5b084", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 29 12:30:08 np0005601226 nova_compute[239456]: 2026-01-29 17:30:08.097 239460 DEBUG nova.network.os_vif_util [None req-8d46a486-9eea-4653-9308-b470494bd80c 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:61:34:a1,bridge_name='br-int',has_traffic_filtering=True,id=e0301c38-d2c5-4766-8243-5fb16ad5b084,network=Network(25cf1715-f178-4f65-be7c-cf203c28f072),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0301c38-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 29 12:30:08 np0005601226 nova_compute[239456]: 2026-01-29 17:30:08.097 239460 DEBUG os_vif [None req-8d46a486-9eea-4653-9308-b470494bd80c 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:61:34:a1,bridge_name='br-int',has_traffic_filtering=True,id=e0301c38-d2c5-4766-8243-5fb16ad5b084,network=Network(25cf1715-f178-4f65-be7c-cf203c28f072),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0301c38-d2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 29 12:30:08 np0005601226 nova_compute[239456]: 2026-01-29 17:30:08.099 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:30:08 np0005601226 nova_compute[239456]: 2026-01-29 17:30:08.099 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape0301c38-d2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:30:08 np0005601226 nova_compute[239456]: 2026-01-29 17:30:08.100 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:30:08 np0005601226 nova_compute[239456]: 2026-01-29 17:30:08.101 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:30:08 np0005601226 nova_compute[239456]: 2026-01-29 17:30:08.103 239460 INFO os_vif [None req-8d46a486-9eea-4653-9308-b470494bd80c 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:61:34:a1,bridge_name='br-int',has_traffic_filtering=True,id=e0301c38-d2c5-4766-8243-5fb16ad5b084,network=Network(25cf1715-f178-4f65-be7c-cf203c28f072),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0301c38-d2')#033[00m
Jan 29 12:30:08 np0005601226 neutron-haproxy-ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072[265912]: [NOTICE]   (265916) : haproxy version is 2.8.14-c23fe91
Jan 29 12:30:08 np0005601226 neutron-haproxy-ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072[265912]: [NOTICE]   (265916) : path to executable is /usr/sbin/haproxy
Jan 29 12:30:08 np0005601226 neutron-haproxy-ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072[265912]: [WARNING]  (265916) : Exiting Master process...
Jan 29 12:30:08 np0005601226 neutron-haproxy-ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072[265912]: [ALERT]    (265916) : Current worker (265918) exited with code 143 (Terminated)
Jan 29 12:30:08 np0005601226 neutron-haproxy-ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072[265912]: [WARNING]  (265916) : All workers exited. Exiting... (0)
Jan 29 12:30:08 np0005601226 systemd[1]: libpod-a6ad19b8c7db7e6f873445747c962c55f1da44c1093afac2b9e29e003a8381d9.scope: Deactivated successfully.
Jan 29 12:30:08 np0005601226 podman[266886]: 2026-01-29 17:30:08.115440079 +0000 UTC m=+0.203678315 container died a6ad19b8c7db7e6f873445747c962c55f1da44c1093afac2b9e29e003a8381d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 29 12:30:08 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e417 do_prune osdmap full prune enabled
Jan 29 12:30:08 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e418 e418: 3 total, 3 up, 3 in
Jan 29 12:30:08 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e418: 3 total, 3 up, 3 in
Jan 29 12:30:08 np0005601226 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a6ad19b8c7db7e6f873445747c962c55f1da44c1093afac2b9e29e003a8381d9-userdata-shm.mount: Deactivated successfully.
Jan 29 12:30:08 np0005601226 systemd[1]: var-lib-containers-storage-overlay-908cae5bac2f4cbdf6989165e0512201b7f3c10ebdfe4e04c269f0af615cfca7-merged.mount: Deactivated successfully.
Jan 29 12:30:08 np0005601226 podman[266886]: 2026-01-29 17:30:08.712583453 +0000 UTC m=+0.800821689 container cleanup a6ad19b8c7db7e6f873445747c962c55f1da44c1093afac2b9e29e003a8381d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:30:08 np0005601226 systemd[1]: libpod-conmon-a6ad19b8c7db7e6f873445747c962c55f1da44c1093afac2b9e29e003a8381d9.scope: Deactivated successfully.
Jan 29 12:30:08 np0005601226 podman[266945]: 2026-01-29 17:30:08.908093097 +0000 UTC m=+0.180197522 container remove a6ad19b8c7db7e6f873445747c962c55f1da44c1093afac2b9e29e003a8381d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 29 12:30:08 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:08.912 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[ba04b624-3bad-4958-adcf-2be952fdd6da]: (4, ('Thu Jan 29 05:30:07 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072 (a6ad19b8c7db7e6f873445747c962c55f1da44c1093afac2b9e29e003a8381d9)\na6ad19b8c7db7e6f873445747c962c55f1da44c1093afac2b9e29e003a8381d9\nThu Jan 29 05:30:08 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072 (a6ad19b8c7db7e6f873445747c962c55f1da44c1093afac2b9e29e003a8381d9)\na6ad19b8c7db7e6f873445747c962c55f1da44c1093afac2b9e29e003a8381d9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:30:08 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:08.914 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[0dd2ad33-aa1b-4cbd-8cb8-1b216de5c9c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:30:08 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:08.915 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap25cf1715-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:30:08 np0005601226 nova_compute[239456]: 2026-01-29 17:30:08.916 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:30:08 np0005601226 kernel: tap25cf1715-f0: left promiscuous mode
Jan 29 12:30:08 np0005601226 nova_compute[239456]: 2026-01-29 17:30:08.928 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:30:08 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:08.931 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[0032ddc2-a848-46f9-afff-bd785d0671be]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:30:08 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:08.944 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[ce201076-8367-456f-9c1c-489b35dcaffa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:30:08 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:08.945 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[24c1fa97-657c-47b8-b393-329e53adb222]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:30:08 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:08.960 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[5be678c8-2472-4ff8-9a83-1658e3253836]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 512058, 'reachable_time': 15658, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266960, 'error': None, 'target': 'ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:30:08 np0005601226 systemd[1]: run-netns-ovnmeta\x2d25cf1715\x2df178\x2d4f65\x2dbe7c\x2dcf203c28f072.mount: Deactivated successfully.
Jan 29 12:30:08 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:08.963 156164 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 29 12:30:08 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:08.963 156164 DEBUG oslo.privsep.daemon [-] privsep: reply[3341ff4a-df70-47b2-93c0-a615fb7f1f3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:30:09 np0005601226 nova_compute[239456]: 2026-01-29 17:30:09.106 239460 INFO nova.virt.libvirt.driver [None req-8d46a486-9eea-4653-9308-b470494bd80c 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] Deleting instance files /var/lib/nova/instances/d8a8daad-7d66-42a9-b701-b191ca68564e_del#033[00m
Jan 29 12:30:09 np0005601226 nova_compute[239456]: 2026-01-29 17:30:09.107 239460 INFO nova.virt.libvirt.driver [None req-8d46a486-9eea-4653-9308-b470494bd80c 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] Deletion of /var/lib/nova/instances/d8a8daad-7d66-42a9-b701-b191ca68564e_del complete#033[00m
Jan 29 12:30:09 np0005601226 nova_compute[239456]: 2026-01-29 17:30:09.175 239460 INFO nova.compute.manager [None req-8d46a486-9eea-4653-9308-b470494bd80c 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] Took 1.92 seconds to destroy the instance on the hypervisor.#033[00m
Jan 29 12:30:09 np0005601226 nova_compute[239456]: 2026-01-29 17:30:09.175 239460 DEBUG oslo.service.loopingcall [None req-8d46a486-9eea-4653-9308-b470494bd80c 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 29 12:30:09 np0005601226 nova_compute[239456]: 2026-01-29 17:30:09.176 239460 DEBUG nova.compute.manager [-] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 29 12:30:09 np0005601226 nova_compute[239456]: 2026-01-29 17:30:09.176 239460 DEBUG nova.network.neutron [-] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 29 12:30:09 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1624: 305 pgs: 305 active+clean; 270 MiB data, 544 MiB used, 59 GiB / 60 GiB avail; 76 KiB/s rd, 16 KiB/s wr, 103 op/s
Jan 29 12:30:10 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:10.152 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '7a:bb:ca', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ee:2a:91:86:08:da'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:30:10 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:10.152 155625 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 29 12:30:10 np0005601226 nova_compute[239456]: 2026-01-29 17:30:10.154 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:30:10 np0005601226 nova_compute[239456]: 2026-01-29 17:30:10.285 239460 DEBUG nova.network.neutron [-] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:30:10 np0005601226 nova_compute[239456]: 2026-01-29 17:30:10.306 239460 INFO nova.compute.manager [-] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] Took 1.13 seconds to deallocate network for instance.#033[00m
Jan 29 12:30:10 np0005601226 nova_compute[239456]: 2026-01-29 17:30:10.382 239460 DEBUG nova.compute.manager [req-b4289857-8064-4d15-8e01-ca35d89a82dd req-5d698e2a-0d7d-4f77-b172-16bf13d134d8 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] Received event network-vif-deleted-e0301c38-d2c5-4766-8243-5fb16ad5b084 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:30:10 np0005601226 nova_compute[239456]: 2026-01-29 17:30:10.515 239460 INFO nova.compute.manager [None req-8d46a486-9eea-4653-9308-b470494bd80c 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] Took 0.21 seconds to detach 1 volumes for instance.#033[00m
Jan 29 12:30:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:30:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:30:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:30:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:30:10 np0005601226 nova_compute[239456]: 2026-01-29 17:30:10.585 239460 DEBUG oslo_concurrency.lockutils [None req-8d46a486-9eea-4653-9308-b470494bd80c 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:30:10 np0005601226 nova_compute[239456]: 2026-01-29 17:30:10.586 239460 DEBUG oslo_concurrency.lockutils [None req-8d46a486-9eea-4653-9308-b470494bd80c 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:30:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:30:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:30:10 np0005601226 nova_compute[239456]: 2026-01-29 17:30:10.676 239460 DEBUG oslo_concurrency.processutils [None req-8d46a486-9eea-4653-9308-b470494bd80c 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:30:11 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:30:11 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3280633932' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:30:11 np0005601226 nova_compute[239456]: 2026-01-29 17:30:11.194 239460 DEBUG oslo_concurrency.processutils [None req-8d46a486-9eea-4653-9308-b470494bd80c 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:30:11 np0005601226 nova_compute[239456]: 2026-01-29 17:30:11.200 239460 DEBUG nova.compute.provider_tree [None req-8d46a486-9eea-4653-9308-b470494bd80c 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:30:11 np0005601226 nova_compute[239456]: 2026-01-29 17:30:11.216 239460 DEBUG nova.scheduler.client.report [None req-8d46a486-9eea-4653-9308-b470494bd80c 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:30:11 np0005601226 nova_compute[239456]: 2026-01-29 17:30:11.236 239460 DEBUG oslo_concurrency.lockutils [None req-8d46a486-9eea-4653-9308-b470494bd80c 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.650s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:30:11 np0005601226 nova_compute[239456]: 2026-01-29 17:30:11.271 239460 INFO nova.scheduler.client.report [None req-8d46a486-9eea-4653-9308-b470494bd80c 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Deleted allocations for instance d8a8daad-7d66-42a9-b701-b191ca68564e#033[00m
Jan 29 12:30:11 np0005601226 nova_compute[239456]: 2026-01-29 17:30:11.341 239460 DEBUG oslo_concurrency.lockutils [None req-8d46a486-9eea-4653-9308-b470494bd80c 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Lock "d8a8daad-7d66-42a9-b701-b191ca68564e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.093s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:30:11 np0005601226 nova_compute[239456]: 2026-01-29 17:30:11.483 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:30:11 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1625: 305 pgs: 305 active+clean; 270 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 64 KiB/s rd, 23 KiB/s wr, 91 op/s
Jan 29 12:30:12 np0005601226 nova_compute[239456]: 2026-01-29 17:30:12.188 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:30:12 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:30:13 np0005601226 nova_compute[239456]: 2026-01-29 17:30:13.102 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:30:13 np0005601226 nova_compute[239456]: 2026-01-29 17:30:13.103 239460 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769707798.1023898, ff3dd15f-f585-4406-8c70-96be2a8945a4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:30:13 np0005601226 nova_compute[239456]: 2026-01-29 17:30:13.104 239460 INFO nova.compute.manager [-] [instance: ff3dd15f-f585-4406-8c70-96be2a8945a4] VM Stopped (Lifecycle Event)#033[00m
Jan 29 12:30:13 np0005601226 nova_compute[239456]: 2026-01-29 17:30:13.137 239460 DEBUG nova.compute.manager [None req-daca1748-3fb8-40ee-a279-e8eff7d87d97 - - - - - -] [instance: ff3dd15f-f585-4406-8c70-96be2a8945a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:30:13 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1626: 305 pgs: 305 active+clean; 270 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 20 KiB/s wr, 54 op/s
Jan 29 12:30:14 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:14.155 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ea6bcc65-2563-4fe6-9039-bca7261f4cf7, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:30:14 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e418 do_prune osdmap full prune enabled
Jan 29 12:30:14 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e419 e419: 3 total, 3 up, 3 in
Jan 29 12:30:14 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e419: 3 total, 3 up, 3 in
Jan 29 12:30:15 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1628: 305 pgs: 305 active+clean; 270 MiB data, 541 MiB used, 59 GiB / 60 GiB avail; 40 KiB/s rd, 21 KiB/s wr, 58 op/s
Jan 29 12:30:16 np0005601226 nova_compute[239456]: 2026-01-29 17:30:16.485 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:30:17 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:30:17 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e419 do_prune osdmap full prune enabled
Jan 29 12:30:17 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e420 e420: 3 total, 3 up, 3 in
Jan 29 12:30:17 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e420: 3 total, 3 up, 3 in
Jan 29 12:30:17 np0005601226 nova_compute[239456]: 2026-01-29 17:30:17.398 239460 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769707802.3969264, 60a233ad-302a-45ea-a78c-31ff4f06919e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:30:17 np0005601226 nova_compute[239456]: 2026-01-29 17:30:17.398 239460 INFO nova.compute.manager [-] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] VM Stopped (Lifecycle Event)#033[00m
Jan 29 12:30:17 np0005601226 nova_compute[239456]: 2026-01-29 17:30:17.423 239460 DEBUG nova.compute.manager [None req-0b620c17-7267-4b89-8601-ad1455e8be83 - - - - - -] [instance: 60a233ad-302a-45ea-a78c-31ff4f06919e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:30:17 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1630: 305 pgs: 305 active+clean; 270 MiB data, 541 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 9.6 KiB/s wr, 36 op/s
Jan 29 12:30:18 np0005601226 nova_compute[239456]: 2026-01-29 17:30:18.104 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:30:19 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1631: 305 pgs: 305 active+clean; 270 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.0 KiB/s wr, 78 op/s
Jan 29 12:30:21 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e420 do_prune osdmap full prune enabled
Jan 29 12:30:21 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e421 e421: 3 total, 3 up, 3 in
Jan 29 12:30:21 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e421: 3 total, 3 up, 3 in
Jan 29 12:30:21 np0005601226 nova_compute[239456]: 2026-01-29 17:30:21.488 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:30:21 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:30:21 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/43763497' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:30:21 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:30:21 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/43763497' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:30:21 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1633: 305 pgs: 305 active+clean; 283 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 657 KiB/s wr, 91 op/s
Jan 29 12:30:22 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:30:22 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:30:22 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2348335856' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:30:22 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:30:22 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2348335856' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:30:22 np0005601226 nova_compute[239456]: 2026-01-29 17:30:22.740 239460 DEBUG oslo_concurrency.lockutils [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Acquiring lock "4c4d76ac-3711-4858-90a1-7e43dc5ff7e4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:30:22 np0005601226 nova_compute[239456]: 2026-01-29 17:30:22.741 239460 DEBUG oslo_concurrency.lockutils [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Lock "4c4d76ac-3711-4858-90a1-7e43dc5ff7e4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:30:22 np0005601226 nova_compute[239456]: 2026-01-29 17:30:22.760 239460 DEBUG nova.compute.manager [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 29 12:30:23 np0005601226 nova_compute[239456]: 2026-01-29 17:30:23.003 239460 DEBUG oslo_concurrency.lockutils [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:30:23 np0005601226 nova_compute[239456]: 2026-01-29 17:30:23.003 239460 DEBUG oslo_concurrency.lockutils [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:30:23 np0005601226 nova_compute[239456]: 2026-01-29 17:30:23.009 239460 DEBUG nova.virt.hardware [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 29 12:30:23 np0005601226 nova_compute[239456]: 2026-01-29 17:30:23.010 239460 INFO nova.compute.claims [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 29 12:30:23 np0005601226 nova_compute[239456]: 2026-01-29 17:30:23.078 239460 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769707808.0780263, d8a8daad-7d66-42a9-b701-b191ca68564e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:30:23 np0005601226 nova_compute[239456]: 2026-01-29 17:30:23.078 239460 INFO nova.compute.manager [-] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] VM Stopped (Lifecycle Event)#033[00m
Jan 29 12:30:23 np0005601226 nova_compute[239456]: 2026-01-29 17:30:23.094 239460 DEBUG nova.compute.manager [None req-22aff188-dc83-4e4d-b291-af1c0e4b4b91 - - - - - -] [instance: d8a8daad-7d66-42a9-b701-b191ca68564e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:30:23 np0005601226 nova_compute[239456]: 2026-01-29 17:30:23.095 239460 DEBUG oslo_concurrency.processutils [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:30:23 np0005601226 nova_compute[239456]: 2026-01-29 17:30:23.111 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:30:23 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:30:23 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4232073106' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:30:23 np0005601226 nova_compute[239456]: 2026-01-29 17:30:23.600 239460 DEBUG oslo_concurrency.processutils [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:30:23 np0005601226 nova_compute[239456]: 2026-01-29 17:30:23.607 239460 DEBUG nova.compute.provider_tree [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:30:23 np0005601226 nova_compute[239456]: 2026-01-29 17:30:23.626 239460 DEBUG nova.scheduler.client.report [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:30:23 np0005601226 nova_compute[239456]: 2026-01-29 17:30:23.655 239460 DEBUG oslo_concurrency.lockutils [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:30:23 np0005601226 nova_compute[239456]: 2026-01-29 17:30:23.656 239460 DEBUG nova.compute.manager [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 29 12:30:23 np0005601226 nova_compute[239456]: 2026-01-29 17:30:23.702 239460 DEBUG nova.compute.manager [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 29 12:30:23 np0005601226 nova_compute[239456]: 2026-01-29 17:30:23.703 239460 DEBUG nova.network.neutron [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 29 12:30:23 np0005601226 nova_compute[239456]: 2026-01-29 17:30:23.726 239460 INFO nova.virt.libvirt.driver [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 29 12:30:23 np0005601226 nova_compute[239456]: 2026-01-29 17:30:23.748 239460 DEBUG nova.compute.manager [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 29 12:30:23 np0005601226 nova_compute[239456]: 2026-01-29 17:30:23.795 239460 INFO nova.virt.block_device [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] Booting with volume 980dbcd8-86dd-412e-92c6-97f0c6da44c6 at /dev/vda#033[00m
Jan 29 12:30:23 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1634: 305 pgs: 305 active+clean; 298 MiB data, 547 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.3 MiB/s wr, 113 op/s
Jan 29 12:30:23 np0005601226 nova_compute[239456]: 2026-01-29 17:30:23.913 239460 DEBUG nova.policy [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4f278bc1afe946ca991a0203a74c5a7f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c74297072cc041019fc7ff4bff1a0f08', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 29 12:30:23 np0005601226 nova_compute[239456]: 2026-01-29 17:30:23.966 239460 DEBUG os_brick.utils [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 29 12:30:23 np0005601226 nova_compute[239456]: 2026-01-29 17:30:23.967 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:30:23 np0005601226 nova_compute[239456]: 2026-01-29 17:30:23.980 249612 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:30:23 np0005601226 nova_compute[239456]: 2026-01-29 17:30:23.980 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[4a45c23b-7091-4a31-8f66-1c769d6514d4]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:30:23 np0005601226 nova_compute[239456]: 2026-01-29 17:30:23.981 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:30:23 np0005601226 nova_compute[239456]: 2026-01-29 17:30:23.986 249612 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:30:23 np0005601226 nova_compute[239456]: 2026-01-29 17:30:23.986 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[0870aef4-2f4b-4bb5-b9d1-7aefe56ebf96]: (4, ('InitiatorName=iqn.1994-05.com.redhat:29fa340538f', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:30:23 np0005601226 nova_compute[239456]: 2026-01-29 17:30:23.988 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:30:23 np0005601226 nova_compute[239456]: 2026-01-29 17:30:23.992 249612 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:30:23 np0005601226 nova_compute[239456]: 2026-01-29 17:30:23.993 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[347cbf2e-ca55-4374-966d-37f574fbb581]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:30:23 np0005601226 nova_compute[239456]: 2026-01-29 17:30:23.994 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[3e77e5ad-f5e0-405a-8dcc-744ab8e7fe66]: (4, '3d58286e-1b14-486e-8cad-0bdb2d2969c4') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:30:23 np0005601226 nova_compute[239456]: 2026-01-29 17:30:23.994 239460 DEBUG oslo_concurrency.processutils [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:30:24 np0005601226 nova_compute[239456]: 2026-01-29 17:30:24.009 239460 DEBUG oslo_concurrency.processutils [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] CMD "nvme version" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:30:24 np0005601226 nova_compute[239456]: 2026-01-29 17:30:24.011 239460 DEBUG os_brick.initiator.connectors.lightos [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 29 12:30:24 np0005601226 nova_compute[239456]: 2026-01-29 17:30:24.011 239460 DEBUG os_brick.initiator.connectors.lightos [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 29 12:30:24 np0005601226 nova_compute[239456]: 2026-01-29 17:30:24.012 239460 DEBUG os_brick.initiator.connectors.lightos [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 29 12:30:24 np0005601226 nova_compute[239456]: 2026-01-29 17:30:24.012 239460 DEBUG os_brick.utils [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] <== get_connector_properties: return (45ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:29fa340538f', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '3d58286e-1b14-486e-8cad-0bdb2d2969c4', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 29 12:30:24 np0005601226 nova_compute[239456]: 2026-01-29 17:30:24.012 239460 DEBUG nova.virt.block_device [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] Updating existing volume attachment record: 0391e978-6bc8-4985-8b52-5deca4d564c8 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 29 12:30:24 np0005601226 nova_compute[239456]: 2026-01-29 17:30:24.330 239460 DEBUG oslo_concurrency.lockutils [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Acquiring lock "a25c53a6-69cb-4591-96a0-ba339283350e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:30:24 np0005601226 nova_compute[239456]: 2026-01-29 17:30:24.331 239460 DEBUG oslo_concurrency.lockutils [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "a25c53a6-69cb-4591-96a0-ba339283350e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:30:24 np0005601226 nova_compute[239456]: 2026-01-29 17:30:24.354 239460 DEBUG nova.compute.manager [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 29 12:30:24 np0005601226 nova_compute[239456]: 2026-01-29 17:30:24.428 239460 DEBUG oslo_concurrency.lockutils [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:30:24 np0005601226 nova_compute[239456]: 2026-01-29 17:30:24.429 239460 DEBUG oslo_concurrency.lockutils [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:30:24 np0005601226 nova_compute[239456]: 2026-01-29 17:30:24.436 239460 DEBUG nova.virt.hardware [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 29 12:30:24 np0005601226 nova_compute[239456]: 2026-01-29 17:30:24.437 239460 INFO nova.compute.claims [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 29 12:30:24 np0005601226 nova_compute[239456]: 2026-01-29 17:30:24.604 239460 DEBUG oslo_concurrency.processutils [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:30:24 np0005601226 nova_compute[239456]: 2026-01-29 17:30:24.631 239460 DEBUG nova.network.neutron [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] Successfully created port: c8b9d9fc-1915-4db3-8869-f69770c88894 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 29 12:30:24 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:30:24 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3102388016' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:30:25 np0005601226 nova_compute[239456]: 2026-01-29 17:30:25.091 239460 DEBUG nova.compute.manager [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 29 12:30:25 np0005601226 nova_compute[239456]: 2026-01-29 17:30:25.093 239460 DEBUG nova.virt.libvirt.driver [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 29 12:30:25 np0005601226 nova_compute[239456]: 2026-01-29 17:30:25.093 239460 INFO nova.virt.libvirt.driver [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] Creating image(s)#033[00m
Jan 29 12:30:25 np0005601226 nova_compute[239456]: 2026-01-29 17:30:25.094 239460 DEBUG nova.virt.libvirt.driver [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Jan 29 12:30:25 np0005601226 nova_compute[239456]: 2026-01-29 17:30:25.094 239460 DEBUG nova.virt.libvirt.driver [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] Ensure instance console log exists: /var/lib/nova/instances/4c4d76ac-3711-4858-90a1-7e43dc5ff7e4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 29 12:30:25 np0005601226 nova_compute[239456]: 2026-01-29 17:30:25.094 239460 DEBUG oslo_concurrency.lockutils [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:30:25 np0005601226 nova_compute[239456]: 2026-01-29 17:30:25.095 239460 DEBUG oslo_concurrency.lockutils [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:30:25 np0005601226 nova_compute[239456]: 2026-01-29 17:30:25.095 239460 DEBUG oslo_concurrency.lockutils [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:30:25 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:30:25 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1631019844' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:30:25 np0005601226 nova_compute[239456]: 2026-01-29 17:30:25.133 239460 DEBUG oslo_concurrency.processutils [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:30:25 np0005601226 nova_compute[239456]: 2026-01-29 17:30:25.138 239460 DEBUG nova.compute.provider_tree [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:30:25 np0005601226 nova_compute[239456]: 2026-01-29 17:30:25.158 239460 DEBUG nova.scheduler.client.report [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:30:25 np0005601226 nova_compute[239456]: 2026-01-29 17:30:25.177 239460 DEBUG oslo_concurrency.lockutils [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.749s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:30:25 np0005601226 nova_compute[239456]: 2026-01-29 17:30:25.178 239460 DEBUG nova.compute.manager [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 29 12:30:25 np0005601226 nova_compute[239456]: 2026-01-29 17:30:25.219 239460 DEBUG nova.compute.manager [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 29 12:30:25 np0005601226 nova_compute[239456]: 2026-01-29 17:30:25.219 239460 DEBUG nova.network.neutron [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 29 12:30:25 np0005601226 nova_compute[239456]: 2026-01-29 17:30:25.255 239460 INFO nova.virt.libvirt.driver [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 29 12:30:25 np0005601226 nova_compute[239456]: 2026-01-29 17:30:25.272 239460 DEBUG nova.compute.manager [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 29 12:30:25 np0005601226 nova_compute[239456]: 2026-01-29 17:30:25.310 239460 INFO nova.virt.block_device [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] Booting with volume 374ea712-ba05-4bee-9c63-7609fdf31eb9 at /dev/vda#033[00m
Jan 29 12:30:25 np0005601226 nova_compute[239456]: 2026-01-29 17:30:25.394 239460 DEBUG nova.network.neutron [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] Successfully updated port: c8b9d9fc-1915-4db3-8869-f69770c88894 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 29 12:30:25 np0005601226 nova_compute[239456]: 2026-01-29 17:30:25.410 239460 DEBUG oslo_concurrency.lockutils [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Acquiring lock "refresh_cache-4c4d76ac-3711-4858-90a1-7e43dc5ff7e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:30:25 np0005601226 nova_compute[239456]: 2026-01-29 17:30:25.410 239460 DEBUG oslo_concurrency.lockutils [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Acquired lock "refresh_cache-4c4d76ac-3711-4858-90a1-7e43dc5ff7e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:30:25 np0005601226 nova_compute[239456]: 2026-01-29 17:30:25.410 239460 DEBUG nova.network.neutron [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 29 12:30:25 np0005601226 nova_compute[239456]: 2026-01-29 17:30:25.433 239460 DEBUG nova.policy [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3901089a059c4bdb8d0497398873d2f1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '420f46ae230d4c529afe366a1b780921', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 29 12:30:25 np0005601226 nova_compute[239456]: 2026-01-29 17:30:25.448 239460 DEBUG os_brick.utils [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 29 12:30:25 np0005601226 nova_compute[239456]: 2026-01-29 17:30:25.449 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:30:25 np0005601226 nova_compute[239456]: 2026-01-29 17:30:25.457 249612 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:30:25 np0005601226 nova_compute[239456]: 2026-01-29 17:30:25.457 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[694f2cad-5b5b-4a3c-9e17-938af5824334]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:30:25 np0005601226 nova_compute[239456]: 2026-01-29 17:30:25.458 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:30:25 np0005601226 nova_compute[239456]: 2026-01-29 17:30:25.462 249612 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.004s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:30:25 np0005601226 nova_compute[239456]: 2026-01-29 17:30:25.462 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[c6ae095c-e7ea-483b-930e-44bd9bcd43b5]: (4, ('InitiatorName=iqn.1994-05.com.redhat:29fa340538f', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:30:25 np0005601226 nova_compute[239456]: 2026-01-29 17:30:25.464 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:30:25 np0005601226 nova_compute[239456]: 2026-01-29 17:30:25.470 249612 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:30:25 np0005601226 nova_compute[239456]: 2026-01-29 17:30:25.470 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[ecfee68e-f13b-4876-9326-c3ff78e703d7]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:30:25 np0005601226 nova_compute[239456]: 2026-01-29 17:30:25.471 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[d6f10b6a-31c1-4571-b838-1d41f1127a79]: (4, '3d58286e-1b14-486e-8cad-0bdb2d2969c4') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:30:25 np0005601226 nova_compute[239456]: 2026-01-29 17:30:25.471 239460 DEBUG oslo_concurrency.processutils [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:30:25 np0005601226 nova_compute[239456]: 2026-01-29 17:30:25.490 239460 DEBUG oslo_concurrency.processutils [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] CMD "nvme version" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:30:25 np0005601226 nova_compute[239456]: 2026-01-29 17:30:25.491 239460 DEBUG os_brick.initiator.connectors.lightos [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 29 12:30:25 np0005601226 nova_compute[239456]: 2026-01-29 17:30:25.492 239460 DEBUG os_brick.initiator.connectors.lightos [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 29 12:30:25 np0005601226 nova_compute[239456]: 2026-01-29 17:30:25.492 239460 DEBUG os_brick.initiator.connectors.lightos [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 29 12:30:25 np0005601226 nova_compute[239456]: 2026-01-29 17:30:25.492 239460 DEBUG os_brick.utils [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] <== get_connector_properties: return (43ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:29fa340538f', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '3d58286e-1b14-486e-8cad-0bdb2d2969c4', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 29 12:30:25 np0005601226 nova_compute[239456]: 2026-01-29 17:30:25.493 239460 DEBUG nova.virt.block_device [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] Updating existing volume attachment record: de56dd13-ef76-41ec-987f-89e4c8532fa4 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 29 12:30:25 np0005601226 nova_compute[239456]: 2026-01-29 17:30:25.506 239460 DEBUG nova.compute.manager [req-c248d204-06c8-4a73-b048-7c1830da7155 req-b1a9c6dc-c3ea-40bf-9573-4ff086b5076f 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] Received event network-changed-c8b9d9fc-1915-4db3-8869-f69770c88894 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:30:25 np0005601226 nova_compute[239456]: 2026-01-29 17:30:25.506 239460 DEBUG nova.compute.manager [req-c248d204-06c8-4a73-b048-7c1830da7155 req-b1a9c6dc-c3ea-40bf-9573-4ff086b5076f 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] Refreshing instance network info cache due to event network-changed-c8b9d9fc-1915-4db3-8869-f69770c88894. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 29 12:30:25 np0005601226 nova_compute[239456]: 2026-01-29 17:30:25.506 239460 DEBUG oslo_concurrency.lockutils [req-c248d204-06c8-4a73-b048-7c1830da7155 req-b1a9c6dc-c3ea-40bf-9573-4ff086b5076f 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "refresh_cache-4c4d76ac-3711-4858-90a1-7e43dc5ff7e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:30:25 np0005601226 nova_compute[239456]: 2026-01-29 17:30:25.588 239460 DEBUG nova.network.neutron [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 29 12:30:25 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1635: 305 pgs: 305 active+clean; 317 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.4 MiB/s wr, 156 op/s
Jan 29 12:30:26 np0005601226 nova_compute[239456]: 2026-01-29 17:30:26.193 239460 DEBUG nova.network.neutron [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] Successfully created port: 923a704a-5e13-4a55-8741-5a8ed5669f0a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 29 12:30:26 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:30:26 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2647995505' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:30:26 np0005601226 nova_compute[239456]: 2026-01-29 17:30:26.490 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:30:26 np0005601226 nova_compute[239456]: 2026-01-29 17:30:26.543 239460 DEBUG nova.network.neutron [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] Updating instance_info_cache with network_info: [{"id": "c8b9d9fc-1915-4db3-8869-f69770c88894", "address": "fa:16:3e:30:ae:b3", "network": {"id": "25cf1715-f178-4f65-be7c-cf203c28f072", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-516658087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c74297072cc041019fc7ff4bff1a0f08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8b9d9fc-19", "ovs_interfaceid": "c8b9d9fc-1915-4db3-8869-f69770c88894", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:30:26 np0005601226 nova_compute[239456]: 2026-01-29 17:30:26.549 239460 DEBUG nova.compute.manager [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 29 12:30:26 np0005601226 nova_compute[239456]: 2026-01-29 17:30:26.552 239460 DEBUG nova.virt.libvirt.driver [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 29 12:30:26 np0005601226 nova_compute[239456]: 2026-01-29 17:30:26.553 239460 INFO nova.virt.libvirt.driver [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] Creating image(s)#033[00m
Jan 29 12:30:26 np0005601226 nova_compute[239456]: 2026-01-29 17:30:26.554 239460 DEBUG nova.virt.libvirt.driver [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Jan 29 12:30:26 np0005601226 nova_compute[239456]: 2026-01-29 17:30:26.554 239460 DEBUG nova.virt.libvirt.driver [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] Ensure instance console log exists: /var/lib/nova/instances/a25c53a6-69cb-4591-96a0-ba339283350e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 29 12:30:26 np0005601226 nova_compute[239456]: 2026-01-29 17:30:26.555 239460 DEBUG oslo_concurrency.lockutils [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:30:26 np0005601226 nova_compute[239456]: 2026-01-29 17:30:26.555 239460 DEBUG oslo_concurrency.lockutils [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:30:26 np0005601226 nova_compute[239456]: 2026-01-29 17:30:26.556 239460 DEBUG oslo_concurrency.lockutils [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:30:26 np0005601226 nova_compute[239456]: 2026-01-29 17:30:26.566 239460 DEBUG oslo_concurrency.lockutils [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Releasing lock "refresh_cache-4c4d76ac-3711-4858-90a1-7e43dc5ff7e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:30:26 np0005601226 nova_compute[239456]: 2026-01-29 17:30:26.567 239460 DEBUG nova.compute.manager [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] Instance network_info: |[{"id": "c8b9d9fc-1915-4db3-8869-f69770c88894", "address": "fa:16:3e:30:ae:b3", "network": {"id": "25cf1715-f178-4f65-be7c-cf203c28f072", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-516658087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c74297072cc041019fc7ff4bff1a0f08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8b9d9fc-19", "ovs_interfaceid": "c8b9d9fc-1915-4db3-8869-f69770c88894", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 29 12:30:26 np0005601226 nova_compute[239456]: 2026-01-29 17:30:26.568 239460 DEBUG oslo_concurrency.lockutils [req-c248d204-06c8-4a73-b048-7c1830da7155 req-b1a9c6dc-c3ea-40bf-9573-4ff086b5076f 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquired lock "refresh_cache-4c4d76ac-3711-4858-90a1-7e43dc5ff7e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:30:26 np0005601226 nova_compute[239456]: 2026-01-29 17:30:26.568 239460 DEBUG nova.network.neutron [req-c248d204-06c8-4a73-b048-7c1830da7155 req-b1a9c6dc-c3ea-40bf-9573-4ff086b5076f 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] Refreshing network info cache for port c8b9d9fc-1915-4db3-8869-f69770c88894 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 29 12:30:26 np0005601226 nova_compute[239456]: 2026-01-29 17:30:26.573 239460 DEBUG nova.virt.libvirt.driver [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] Start _get_guest_xml network_info=[{"id": "c8b9d9fc-1915-4db3-8869-f69770c88894", "address": "fa:16:3e:30:ae:b3", "network": {"id": "25cf1715-f178-4f65-be7c-cf203c28f072", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-516658087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c74297072cc041019fc7ff4bff1a0f08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8b9d9fc-19", "ovs_interfaceid": "c8b9d9fc-1915-4db3-8869-f69770c88894", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'guest_format': None, 'mount_device': '/dev/vda', 'device_type': 'disk', 'disk_bus': 'virtio', 'attachment_id': '0391e978-6bc8-4985-8b52-5deca4d564c8', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-980dbcd8-86dd-412e-92c6-97f0c6da44c6', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '980dbcd8-86dd-412e-92c6-97f0c6da44c6', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '4c4d76ac-3711-4858-90a1-7e43dc5ff7e4', 'attached_at': '', 'detached_at': '', 'volume_id': '980dbcd8-86dd-412e-92c6-97f0c6da44c6', 'serial': '980dbcd8-86dd-412e-92c6-97f0c6da44c6'}, 'delete_on_termination': False, 'boot_index': 0, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 29 12:30:26 np0005601226 nova_compute[239456]: 2026-01-29 17:30:26.579 239460 WARNING nova.virt.libvirt.driver [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 29 12:30:26 np0005601226 nova_compute[239456]: 2026-01-29 17:30:26.584 239460 DEBUG nova.virt.libvirt.host [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 29 12:30:26 np0005601226 nova_compute[239456]: 2026-01-29 17:30:26.585 239460 DEBUG nova.virt.libvirt.host [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 29 12:30:26 np0005601226 nova_compute[239456]: 2026-01-29 17:30:26.588 239460 DEBUG nova.virt.libvirt.host [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 29 12:30:26 np0005601226 nova_compute[239456]: 2026-01-29 17:30:26.588 239460 DEBUG nova.virt.libvirt.host [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 29 12:30:26 np0005601226 nova_compute[239456]: 2026-01-29 17:30:26.589 239460 DEBUG nova.virt.libvirt.driver [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 29 12:30:26 np0005601226 nova_compute[239456]: 2026-01-29 17:30:26.589 239460 DEBUG nova.virt.hardware [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-29T17:13:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='e367d44e-e23e-4b8e-90d1-56d09c8403b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 29 12:30:26 np0005601226 nova_compute[239456]: 2026-01-29 17:30:26.590 239460 DEBUG nova.virt.hardware [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 29 12:30:26 np0005601226 nova_compute[239456]: 2026-01-29 17:30:26.590 239460 DEBUG nova.virt.hardware [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 29 12:30:26 np0005601226 nova_compute[239456]: 2026-01-29 17:30:26.591 239460 DEBUG nova.virt.hardware [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 29 12:30:26 np0005601226 nova_compute[239456]: 2026-01-29 17:30:26.591 239460 DEBUG nova.virt.hardware [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 29 12:30:26 np0005601226 nova_compute[239456]: 2026-01-29 17:30:26.591 239460 DEBUG nova.virt.hardware [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 29 12:30:26 np0005601226 nova_compute[239456]: 2026-01-29 17:30:26.592 239460 DEBUG nova.virt.hardware [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 29 12:30:26 np0005601226 nova_compute[239456]: 2026-01-29 17:30:26.592 239460 DEBUG nova.virt.hardware [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 29 12:30:26 np0005601226 nova_compute[239456]: 2026-01-29 17:30:26.592 239460 DEBUG nova.virt.hardware [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 29 12:30:26 np0005601226 nova_compute[239456]: 2026-01-29 17:30:26.592 239460 DEBUG nova.virt.hardware [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 29 12:30:26 np0005601226 nova_compute[239456]: 2026-01-29 17:30:26.593 239460 DEBUG nova.virt.hardware [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 29 12:30:26 np0005601226 nova_compute[239456]: 2026-01-29 17:30:26.621 239460 DEBUG nova.storage.rbd_utils [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] rbd image 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:30:26 np0005601226 nova_compute[239456]: 2026-01-29 17:30:26.627 239460 DEBUG oslo_concurrency.processutils [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:30:26 np0005601226 nova_compute[239456]: 2026-01-29 17:30:26.819 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:30:26 np0005601226 podman[267081]: 2026-01-29 17:30:26.910446369 +0000 UTC m=+0.064765015 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 29 12:30:26 np0005601226 podman[267082]: 2026-01-29 17:30:26.962677675 +0000 UTC m=+0.113150197 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Jan 29 12:30:26 np0005601226 nova_compute[239456]: 2026-01-29 17:30:26.972 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:30:27 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:30:27 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1050435855' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:30:27 np0005601226 nova_compute[239456]: 2026-01-29 17:30:27.168 239460 DEBUG oslo_concurrency.processutils [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:30:27 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:30:27 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e421 do_prune osdmap full prune enabled
Jan 29 12:30:27 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e422 e422: 3 total, 3 up, 3 in
Jan 29 12:30:27 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e422: 3 total, 3 up, 3 in
Jan 29 12:30:27 np0005601226 nova_compute[239456]: 2026-01-29 17:30:27.281 239460 DEBUG os_brick.encryptors [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Using volume encryption metadata '{'encryption_key_id': '5211cffa-ab9e-4d07-8f51-26accb154594', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-980dbcd8-86dd-412e-92c6-97f0c6da44c6', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '980dbcd8-86dd-412e-92c6-97f0c6da44c6', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '4c4d76ac-3711-4858-90a1-7e43dc5ff7e4', 'attached_at': '', 'detached_at': '', 'volume_id': '980dbcd8-86dd-412e-92c6-97f0c6da44c6', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Jan 29 12:30:27 np0005601226 nova_compute[239456]: 2026-01-29 17:30:27.283 239460 DEBUG barbicanclient.client [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163#033[00m
Jan 29 12:30:27 np0005601226 nova_compute[239456]: 2026-01-29 17:30:27.295 239460 DEBUG barbicanclient.v1.secrets [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/5211cffa-ab9e-4d07-8f51-26accb154594 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514#033[00m
Jan 29 12:30:27 np0005601226 nova_compute[239456]: 2026-01-29 17:30:27.296 239460 INFO barbicanclient.base [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Calculated Secrets uuid ref: secrets/5211cffa-ab9e-4d07-8f51-26accb154594#033[00m
Jan 29 12:30:27 np0005601226 ceph-osd[85858]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 29 12:30:27 np0005601226 ceph-osd[85858]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 24K writes, 98K keys, 24K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.03 MB/s#012Cumulative WAL: 24K writes, 8593 syncs, 2.90 writes per sync, written: 0.07 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 14K writes, 53K keys, 14K commit groups, 1.0 writes per commit group, ingest: 36.61 MB, 0.06 MB/s#012Interval WAL: 14K writes, 5810 syncs, 2.47 writes per sync, written: 0.04 GB, 0.06 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 29 12:30:27 np0005601226 nova_compute[239456]: 2026-01-29 17:30:27.316 239460 DEBUG barbicanclient.client [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:30:27 np0005601226 nova_compute[239456]: 2026-01-29 17:30:27.317 239460 INFO barbicanclient.base [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Calculated Secrets uuid ref: secrets/5211cffa-ab9e-4d07-8f51-26accb154594#033[00m
Jan 29 12:30:27 np0005601226 nova_compute[239456]: 2026-01-29 17:30:27.319 239460 DEBUG nova.network.neutron [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] Successfully updated port: 923a704a-5e13-4a55-8741-5a8ed5669f0a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 29 12:30:27 np0005601226 nova_compute[239456]: 2026-01-29 17:30:27.339 239460 DEBUG oslo_concurrency.lockutils [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Acquiring lock "refresh_cache-a25c53a6-69cb-4591-96a0-ba339283350e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:30:27 np0005601226 nova_compute[239456]: 2026-01-29 17:30:27.339 239460 DEBUG oslo_concurrency.lockutils [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Acquired lock "refresh_cache-a25c53a6-69cb-4591-96a0-ba339283350e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:30:27 np0005601226 nova_compute[239456]: 2026-01-29 17:30:27.339 239460 DEBUG nova.network.neutron [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 29 12:30:27 np0005601226 nova_compute[239456]: 2026-01-29 17:30:27.346 239460 DEBUG barbicanclient.client [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:30:27 np0005601226 nova_compute[239456]: 2026-01-29 17:30:27.346 239460 INFO barbicanclient.base [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Calculated Secrets uuid ref: secrets/5211cffa-ab9e-4d07-8f51-26accb154594#033[00m
Jan 29 12:30:27 np0005601226 nova_compute[239456]: 2026-01-29 17:30:27.372 239460 DEBUG barbicanclient.client [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:30:27 np0005601226 nova_compute[239456]: 2026-01-29 17:30:27.373 239460 INFO barbicanclient.base [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Calculated Secrets uuid ref: secrets/5211cffa-ab9e-4d07-8f51-26accb154594#033[00m
Jan 29 12:30:27 np0005601226 nova_compute[239456]: 2026-01-29 17:30:27.396 239460 DEBUG barbicanclient.client [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:30:27 np0005601226 nova_compute[239456]: 2026-01-29 17:30:27.396 239460 INFO barbicanclient.base [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Calculated Secrets uuid ref: secrets/5211cffa-ab9e-4d07-8f51-26accb154594#033[00m
Jan 29 12:30:27 np0005601226 nova_compute[239456]: 2026-01-29 17:30:27.415 239460 DEBUG nova.compute.manager [req-92ba6aba-0250-4ec2-ae76-5390b9d8469c req-5dfb946a-f621-4964-ae9a-a4f505e6e72d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] Received event network-changed-923a704a-5e13-4a55-8741-5a8ed5669f0a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:30:27 np0005601226 nova_compute[239456]: 2026-01-29 17:30:27.416 239460 DEBUG nova.compute.manager [req-92ba6aba-0250-4ec2-ae76-5390b9d8469c req-5dfb946a-f621-4964-ae9a-a4f505e6e72d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] Refreshing instance network info cache due to event network-changed-923a704a-5e13-4a55-8741-5a8ed5669f0a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 29 12:30:27 np0005601226 nova_compute[239456]: 2026-01-29 17:30:27.416 239460 DEBUG oslo_concurrency.lockutils [req-92ba6aba-0250-4ec2-ae76-5390b9d8469c req-5dfb946a-f621-4964-ae9a-a4f505e6e72d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "refresh_cache-a25c53a6-69cb-4591-96a0-ba339283350e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 29 12:30:27 np0005601226 nova_compute[239456]: 2026-01-29 17:30:27.419 239460 DEBUG barbicanclient.client [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Jan 29 12:30:27 np0005601226 nova_compute[239456]: 2026-01-29 17:30:27.419 239460 INFO barbicanclient.base [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Calculated Secrets uuid ref: secrets/5211cffa-ab9e-4d07-8f51-26accb154594
Jan 29 12:30:27 np0005601226 nova_compute[239456]: 2026-01-29 17:30:27.458 239460 DEBUG barbicanclient.client [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Jan 29 12:30:27 np0005601226 nova_compute[239456]: 2026-01-29 17:30:27.458 239460 INFO barbicanclient.base [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Calculated Secrets uuid ref: secrets/5211cffa-ab9e-4d07-8f51-26accb154594
Jan 29 12:30:27 np0005601226 nova_compute[239456]: 2026-01-29 17:30:27.488 239460 DEBUG barbicanclient.client [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Jan 29 12:30:27 np0005601226 nova_compute[239456]: 2026-01-29 17:30:27.489 239460 INFO barbicanclient.base [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Calculated Secrets uuid ref: secrets/5211cffa-ab9e-4d07-8f51-26accb154594
Jan 29 12:30:27 np0005601226 nova_compute[239456]: 2026-01-29 17:30:27.512 239460 DEBUG barbicanclient.client [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Jan 29 12:30:27 np0005601226 nova_compute[239456]: 2026-01-29 17:30:27.513 239460 INFO barbicanclient.base [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Calculated Secrets uuid ref: secrets/5211cffa-ab9e-4d07-8f51-26accb154594
Jan 29 12:30:27 np0005601226 nova_compute[239456]: 2026-01-29 17:30:27.531 239460 DEBUG barbicanclient.client [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Jan 29 12:30:27 np0005601226 nova_compute[239456]: 2026-01-29 17:30:27.532 239460 INFO barbicanclient.base [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Calculated Secrets uuid ref: secrets/5211cffa-ab9e-4d07-8f51-26accb154594
Jan 29 12:30:27 np0005601226 nova_compute[239456]: 2026-01-29 17:30:27.548 239460 DEBUG nova.network.neutron [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 29 12:30:27 np0005601226 nova_compute[239456]: 2026-01-29 17:30:27.552 239460 DEBUG barbicanclient.client [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Jan 29 12:30:27 np0005601226 nova_compute[239456]: 2026-01-29 17:30:27.553 239460 INFO barbicanclient.base [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Calculated Secrets uuid ref: secrets/5211cffa-ab9e-4d07-8f51-26accb154594
Jan 29 12:30:27 np0005601226 nova_compute[239456]: 2026-01-29 17:30:27.579 239460 DEBUG barbicanclient.client [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Jan 29 12:30:27 np0005601226 nova_compute[239456]: 2026-01-29 17:30:27.580 239460 INFO barbicanclient.base [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Calculated Secrets uuid ref: secrets/5211cffa-ab9e-4d07-8f51-26accb154594
Jan 29 12:30:27 np0005601226 nova_compute[239456]: 2026-01-29 17:30:27.603 239460 DEBUG barbicanclient.client [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Jan 29 12:30:27 np0005601226 nova_compute[239456]: 2026-01-29 17:30:27.603 239460 INFO barbicanclient.base [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Calculated Secrets uuid ref: secrets/5211cffa-ab9e-4d07-8f51-26accb154594
Jan 29 12:30:27 np0005601226 nova_compute[239456]: 2026-01-29 17:30:27.632 239460 DEBUG barbicanclient.client [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Jan 29 12:30:27 np0005601226 nova_compute[239456]: 2026-01-29 17:30:27.633 239460 INFO barbicanclient.base [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Calculated Secrets uuid ref: secrets/5211cffa-ab9e-4d07-8f51-26accb154594
Jan 29 12:30:27 np0005601226 nova_compute[239456]: 2026-01-29 17:30:27.657 239460 DEBUG barbicanclient.client [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Jan 29 12:30:27 np0005601226 nova_compute[239456]: 2026-01-29 17:30:27.657 239460 INFO barbicanclient.base [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Calculated Secrets uuid ref: secrets/5211cffa-ab9e-4d07-8f51-26accb154594
Jan 29 12:30:27 np0005601226 nova_compute[239456]: 2026-01-29 17:30:27.687 239460 DEBUG barbicanclient.client [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Jan 29 12:30:27 np0005601226 nova_compute[239456]: 2026-01-29 17:30:27.688 239460 DEBUG nova.virt.libvirt.host [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Secret XML: <secret ephemeral="no" private="no">
Jan 29 12:30:27 np0005601226 nova_compute[239456]:  <usage type="volume">
Jan 29 12:30:27 np0005601226 nova_compute[239456]:    <volume>980dbcd8-86dd-412e-92c6-97f0c6da44c6</volume>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:  </usage>
Jan 29 12:30:27 np0005601226 nova_compute[239456]: </secret>
Jan 29 12:30:27 np0005601226 nova_compute[239456]: create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131
Jan 29 12:30:27 np0005601226 nova_compute[239456]: 2026-01-29 17:30:27.714 239460 DEBUG nova.virt.libvirt.vif [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-29T17:30:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1788232795',display_name='tempest-TransferEncryptedVolumeTest-server-1788232795',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1788232795',id=20,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDgD45Dm8MtAL32WS9smIaOmM8jSZyGBgHt0KuuP4tAN+PaFbPD2gY+bvOWoixBRmKRVNeRJWxYw4x1d/JqSF+Q3lf37438lc/Bafac9K9BPV+ZkjGBum9rZonwt+cLWAQ==',key_name='tempest-TransferEncryptedVolumeTest-1822701009',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c74297072cc041019fc7ff4bff1a0f08',ramdisk_id='',reservation_id='r-udgyo0iy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1262552887',owner_user_name='tempest-TransferEncryptedVolumeTest-1262552887-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-29T17:30:23Z,user_data=None,user_id='4f278bc1afe946ca991a0203a74c5a7f',uuid=4c4d76ac-3711-4858-90a1-7e43dc5ff7e4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c8b9d9fc-1915-4db3-8869-f69770c88894", "address": "fa:16:3e:30:ae:b3", "network": {"id": "25cf1715-f178-4f65-be7c-cf203c28f072", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-516658087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "c74297072cc041019fc7ff4bff1a0f08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8b9d9fc-19", "ovs_interfaceid": "c8b9d9fc-1915-4db3-8869-f69770c88894", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 29 12:30:27 np0005601226 nova_compute[239456]: 2026-01-29 17:30:27.714 239460 DEBUG nova.network.os_vif_util [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Converting VIF {"id": "c8b9d9fc-1915-4db3-8869-f69770c88894", "address": "fa:16:3e:30:ae:b3", "network": {"id": "25cf1715-f178-4f65-be7c-cf203c28f072", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-516658087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c74297072cc041019fc7ff4bff1a0f08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8b9d9fc-19", "ovs_interfaceid": "c8b9d9fc-1915-4db3-8869-f69770c88894", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 29 12:30:27 np0005601226 nova_compute[239456]: 2026-01-29 17:30:27.715 239460 DEBUG nova.network.os_vif_util [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:30:ae:b3,bridge_name='br-int',has_traffic_filtering=True,id=c8b9d9fc-1915-4db3-8869-f69770c88894,network=Network(25cf1715-f178-4f65-be7c-cf203c28f072),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8b9d9fc-19') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 29 12:30:27 np0005601226 nova_compute[239456]: 2026-01-29 17:30:27.716 239460 DEBUG nova.objects.instance [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 29 12:30:27 np0005601226 nova_compute[239456]: 2026-01-29 17:30:27.729 239460 DEBUG nova.virt.libvirt.driver [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] End _get_guest_xml xml=<domain type="kvm">
Jan 29 12:30:27 np0005601226 nova_compute[239456]:  <uuid>4c4d76ac-3711-4858-90a1-7e43dc5ff7e4</uuid>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:  <name>instance-00000014</name>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:  <memory>131072</memory>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:  <vcpu>1</vcpu>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:  <metadata>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 29 12:30:27 np0005601226 nova_compute[239456]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:      <nova:name>tempest-TransferEncryptedVolumeTest-server-1788232795</nova:name>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:      <nova:creationTime>2026-01-29 17:30:26</nova:creationTime>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:      <nova:flavor name="m1.nano">
Jan 29 12:30:27 np0005601226 nova_compute[239456]:        <nova:memory>128</nova:memory>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:        <nova:disk>1</nova:disk>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:        <nova:swap>0</nova:swap>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:        <nova:ephemeral>0</nova:ephemeral>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:        <nova:vcpus>1</nova:vcpus>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:      </nova:flavor>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:      <nova:owner>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:        <nova:user uuid="4f278bc1afe946ca991a0203a74c5a7f">tempest-TransferEncryptedVolumeTest-1262552887-project-member</nova:user>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:        <nova:project uuid="c74297072cc041019fc7ff4bff1a0f08">tempest-TransferEncryptedVolumeTest-1262552887</nova:project>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:      </nova:owner>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:      <nova:ports>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:        <nova:port uuid="c8b9d9fc-1915-4db3-8869-f69770c88894">
Jan 29 12:30:27 np0005601226 nova_compute[239456]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:        </nova:port>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:      </nova:ports>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:    </nova:instance>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:  </metadata>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:  <sysinfo type="smbios">
Jan 29 12:30:27 np0005601226 nova_compute[239456]:    <system>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:      <entry name="manufacturer">RDO</entry>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:      <entry name="product">OpenStack Compute</entry>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:      <entry name="serial">4c4d76ac-3711-4858-90a1-7e43dc5ff7e4</entry>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:      <entry name="uuid">4c4d76ac-3711-4858-90a1-7e43dc5ff7e4</entry>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:      <entry name="family">Virtual Machine</entry>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:    </system>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:  </sysinfo>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:  <os>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:    <boot dev="hd"/>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:    <smbios mode="sysinfo"/>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:  </os>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:  <features>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:    <acpi/>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:    <apic/>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:    <vmcoreinfo/>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:  </features>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:  <clock offset="utc">
Jan 29 12:30:27 np0005601226 nova_compute[239456]:    <timer name="pit" tickpolicy="delay"/>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:    <timer name="hpet" present="no"/>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:  </clock>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:  <cpu mode="host-model" match="exact">
Jan 29 12:30:27 np0005601226 nova_compute[239456]:    <topology sockets="1" cores="1" threads="1"/>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:  </cpu>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:  <devices>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:    <disk type="network" device="cdrom">
Jan 29 12:30:27 np0005601226 nova_compute[239456]:      <driver type="raw" cache="none"/>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:      <source protocol="rbd" name="vms/4c4d76ac-3711-4858-90a1-7e43dc5ff7e4_disk.config">
Jan 29 12:30:27 np0005601226 nova_compute[239456]:        <host name="192.168.122.100" port="6789"/>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:      </source>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:      <auth username="openstack">
Jan 29 12:30:27 np0005601226 nova_compute[239456]:        <secret type="ceph" uuid="cc5c72e3-31e0-58b9-8731-456117d38f4a"/>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:      </auth>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:      <target dev="sda" bus="sata"/>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:    </disk>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:    <disk type="network" device="disk">
Jan 29 12:30:27 np0005601226 nova_compute[239456]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:      <source protocol="rbd" name="volumes/volume-980dbcd8-86dd-412e-92c6-97f0c6da44c6">
Jan 29 12:30:27 np0005601226 nova_compute[239456]:        <host name="192.168.122.100" port="6789"/>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:      </source>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:      <auth username="openstack">
Jan 29 12:30:27 np0005601226 nova_compute[239456]:        <secret type="ceph" uuid="cc5c72e3-31e0-58b9-8731-456117d38f4a"/>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:      </auth>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:      <target dev="vda" bus="virtio"/>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:      <serial>980dbcd8-86dd-412e-92c6-97f0c6da44c6</serial>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:      <encryption format="luks">
Jan 29 12:30:27 np0005601226 nova_compute[239456]:        <secret type="passphrase" uuid="54b45289-4b0a-4813-9909-c2d0c739fd0d"/>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:      </encryption>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:    </disk>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:    <interface type="ethernet">
Jan 29 12:30:27 np0005601226 nova_compute[239456]:      <mac address="fa:16:3e:30:ae:b3"/>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:      <model type="virtio"/>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:      <driver name="vhost" rx_queue_size="512"/>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:      <mtu size="1442"/>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:      <target dev="tapc8b9d9fc-19"/>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:    </interface>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:    <serial type="pty">
Jan 29 12:30:27 np0005601226 nova_compute[239456]:      <log file="/var/lib/nova/instances/4c4d76ac-3711-4858-90a1-7e43dc5ff7e4/console.log" append="off"/>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:    </serial>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:    <video>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:      <model type="virtio"/>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:    </video>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:    <input type="tablet" bus="usb"/>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:    <rng model="virtio">
Jan 29 12:30:27 np0005601226 nova_compute[239456]:      <backend model="random">/dev/urandom</backend>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:    </rng>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root"/>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:    <controller type="usb" index="0"/>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:    <memballoon model="virtio">
Jan 29 12:30:27 np0005601226 nova_compute[239456]:      <stats period="10"/>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:    </memballoon>
Jan 29 12:30:27 np0005601226 nova_compute[239456]:  </devices>
Jan 29 12:30:27 np0005601226 nova_compute[239456]: </domain>
Jan 29 12:30:27 np0005601226 nova_compute[239456]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 29 12:30:27 np0005601226 nova_compute[239456]: 2026-01-29 17:30:27.730 239460 DEBUG nova.compute.manager [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] Preparing to wait for external event network-vif-plugged-c8b9d9fc-1915-4db3-8869-f69770c88894 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 29 12:30:27 np0005601226 nova_compute[239456]: 2026-01-29 17:30:27.731 239460 DEBUG oslo_concurrency.lockutils [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Acquiring lock "4c4d76ac-3711-4858-90a1-7e43dc5ff7e4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 12:30:27 np0005601226 nova_compute[239456]: 2026-01-29 17:30:27.731 239460 DEBUG oslo_concurrency.lockutils [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Lock "4c4d76ac-3711-4858-90a1-7e43dc5ff7e4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 12:30:27 np0005601226 nova_compute[239456]: 2026-01-29 17:30:27.731 239460 DEBUG oslo_concurrency.lockutils [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Lock "4c4d76ac-3711-4858-90a1-7e43dc5ff7e4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 12:30:27 np0005601226 nova_compute[239456]: 2026-01-29 17:30:27.732 239460 DEBUG nova.virt.libvirt.vif [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-29T17:30:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1788232795',display_name='tempest-TransferEncryptedVolumeTest-server-1788232795',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1788232795',id=20,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDgD45Dm8MtAL32WS9smIaOmM8jSZyGBgHt0KuuP4tAN+PaFbPD2gY+bvOWoixBRmKRVNeRJWxYw4x1d/JqSF+Q3lf37438lc/Bafac9K9BPV+ZkjGBum9rZonwt+cLWAQ==',key_name='tempest-TransferEncryptedVolumeTest-1822701009',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c74297072cc041019fc7ff4bff1a0f08',ramdisk_id='',reservation_id='r-udgyo0iy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1262552887',owner_user_name='tempest-TransferEncryptedVolumeTest-1262552887-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-29T17:30:23Z,user_data=None,user_id='4f278bc1afe946ca991a0203a74c5a7f',uuid=4c4d76ac-3711-4858-90a1-7e43dc5ff7e4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c8b9d9fc-1915-4db3-8869-f69770c88894", "address": "fa:16:3e:30:ae:b3", "network": {"id": "25cf1715-f178-4f65-be7c-cf203c28f072", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-516658087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "c74297072cc041019fc7ff4bff1a0f08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8b9d9fc-19", "ovs_interfaceid": "c8b9d9fc-1915-4db3-8869-f69770c88894", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 29 12:30:27 np0005601226 nova_compute[239456]: 2026-01-29 17:30:27.732 239460 DEBUG nova.network.os_vif_util [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Converting VIF {"id": "c8b9d9fc-1915-4db3-8869-f69770c88894", "address": "fa:16:3e:30:ae:b3", "network": {"id": "25cf1715-f178-4f65-be7c-cf203c28f072", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-516658087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c74297072cc041019fc7ff4bff1a0f08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8b9d9fc-19", "ovs_interfaceid": "c8b9d9fc-1915-4db3-8869-f69770c88894", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 29 12:30:27 np0005601226 nova_compute[239456]: 2026-01-29 17:30:27.733 239460 DEBUG nova.network.os_vif_util [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:30:ae:b3,bridge_name='br-int',has_traffic_filtering=True,id=c8b9d9fc-1915-4db3-8869-f69770c88894,network=Network(25cf1715-f178-4f65-be7c-cf203c28f072),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8b9d9fc-19') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 29 12:30:27 np0005601226 nova_compute[239456]: 2026-01-29 17:30:27.733 239460 DEBUG os_vif [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:ae:b3,bridge_name='br-int',has_traffic_filtering=True,id=c8b9d9fc-1915-4db3-8869-f69770c88894,network=Network(25cf1715-f178-4f65-be7c-cf203c28f072),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8b9d9fc-19') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 29 12:30:27 np0005601226 nova_compute[239456]: 2026-01-29 17:30:27.734 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:30:27 np0005601226 nova_compute[239456]: 2026-01-29 17:30:27.735 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:30:27 np0005601226 nova_compute[239456]: 2026-01-29 17:30:27.735 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 29 12:30:27 np0005601226 nova_compute[239456]: 2026-01-29 17:30:27.740 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:30:27 np0005601226 nova_compute[239456]: 2026-01-29 17:30:27.740 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc8b9d9fc-19, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:30:27 np0005601226 nova_compute[239456]: 2026-01-29 17:30:27.740 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc8b9d9fc-19, col_values=(('external_ids', {'iface-id': 'c8b9d9fc-1915-4db3-8869-f69770c88894', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:30:ae:b3', 'vm-uuid': '4c4d76ac-3711-4858-90a1-7e43dc5ff7e4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:30:27 np0005601226 nova_compute[239456]: 2026-01-29 17:30:27.741 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:30:27 np0005601226 NetworkManager[49020]: <info>  [1769707827.7429] manager: (tapc8b9d9fc-19): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/102)
Jan 29 12:30:27 np0005601226 nova_compute[239456]: 2026-01-29 17:30:27.745 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 29 12:30:27 np0005601226 nova_compute[239456]: 2026-01-29 17:30:27.746 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:30:27 np0005601226 nova_compute[239456]: 2026-01-29 17:30:27.747 239460 INFO os_vif [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:ae:b3,bridge_name='br-int',has_traffic_filtering=True,id=c8b9d9fc-1915-4db3-8869-f69770c88894,network=Network(25cf1715-f178-4f65-be7c-cf203c28f072),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8b9d9fc-19')#033[00m
Jan 29 12:30:27 np0005601226 nova_compute[239456]: 2026-01-29 17:30:27.799 239460 DEBUG nova.virt.libvirt.driver [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:30:27 np0005601226 nova_compute[239456]: 2026-01-29 17:30:27.800 239460 DEBUG nova.virt.libvirt.driver [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:30:27 np0005601226 nova_compute[239456]: 2026-01-29 17:30:27.800 239460 DEBUG nova.virt.libvirt.driver [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] No VIF found with MAC fa:16:3e:30:ae:b3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 29 12:30:27 np0005601226 nova_compute[239456]: 2026-01-29 17:30:27.801 239460 INFO nova.virt.libvirt.driver [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] Using config drive#033[00m
Jan 29 12:30:27 np0005601226 nova_compute[239456]: 2026-01-29 17:30:27.818 239460 DEBUG nova.storage.rbd_utils [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] rbd image 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:30:27 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1637: 305 pgs: 305 active+clean; 317 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 88 KiB/s rd, 2.7 MiB/s wr, 124 op/s
Jan 29 12:30:28 np0005601226 nova_compute[239456]: 2026-01-29 17:30:28.232 239460 INFO nova.virt.libvirt.driver [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] Creating config drive at /var/lib/nova/instances/4c4d76ac-3711-4858-90a1-7e43dc5ff7e4/disk.config#033[00m
Jan 29 12:30:28 np0005601226 nova_compute[239456]: 2026-01-29 17:30:28.237 239460 DEBUG oslo_concurrency.processutils [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4c4d76ac-3711-4858-90a1-7e43dc5ff7e4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp20ywmt8q execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:30:28 np0005601226 nova_compute[239456]: 2026-01-29 17:30:28.362 239460 DEBUG oslo_concurrency.processutils [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4c4d76ac-3711-4858-90a1-7e43dc5ff7e4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp20ywmt8q" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:30:28 np0005601226 nova_compute[239456]: 2026-01-29 17:30:28.382 239460 DEBUG nova.storage.rbd_utils [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] rbd image 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:30:28 np0005601226 nova_compute[239456]: 2026-01-29 17:30:28.384 239460 DEBUG oslo_concurrency.processutils [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4c4d76ac-3711-4858-90a1-7e43dc5ff7e4/disk.config 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:30:28 np0005601226 nova_compute[239456]: 2026-01-29 17:30:28.404 239460 DEBUG nova.network.neutron [req-c248d204-06c8-4a73-b048-7c1830da7155 req-b1a9c6dc-c3ea-40bf-9573-4ff086b5076f 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] Updated VIF entry in instance network info cache for port c8b9d9fc-1915-4db3-8869-f69770c88894. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 29 12:30:28 np0005601226 nova_compute[239456]: 2026-01-29 17:30:28.405 239460 DEBUG nova.network.neutron [req-c248d204-06c8-4a73-b048-7c1830da7155 req-b1a9c6dc-c3ea-40bf-9573-4ff086b5076f 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] Updating instance_info_cache with network_info: [{"id": "c8b9d9fc-1915-4db3-8869-f69770c88894", "address": "fa:16:3e:30:ae:b3", "network": {"id": "25cf1715-f178-4f65-be7c-cf203c28f072", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-516658087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c74297072cc041019fc7ff4bff1a0f08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8b9d9fc-19", "ovs_interfaceid": "c8b9d9fc-1915-4db3-8869-f69770c88894", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:30:28 np0005601226 nova_compute[239456]: 2026-01-29 17:30:28.419 239460 DEBUG oslo_concurrency.lockutils [req-c248d204-06c8-4a73-b048-7c1830da7155 req-b1a9c6dc-c3ea-40bf-9573-4ff086b5076f 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Releasing lock "refresh_cache-4c4d76ac-3711-4858-90a1-7e43dc5ff7e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:30:28 np0005601226 nova_compute[239456]: 2026-01-29 17:30:28.500 239460 DEBUG oslo_concurrency.processutils [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4c4d76ac-3711-4858-90a1-7e43dc5ff7e4/disk.config 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.115s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:30:28 np0005601226 nova_compute[239456]: 2026-01-29 17:30:28.500 239460 INFO nova.virt.libvirt.driver [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] Deleting local config drive /var/lib/nova/instances/4c4d76ac-3711-4858-90a1-7e43dc5ff7e4/disk.config because it was imported into RBD.#033[00m
Jan 29 12:30:28 np0005601226 kernel: tapc8b9d9fc-19: entered promiscuous mode
Jan 29 12:30:28 np0005601226 NetworkManager[49020]: <info>  [1769707828.5438] manager: (tapc8b9d9fc-19): new Tun device (/org/freedesktop/NetworkManager/Devices/103)
Jan 29 12:30:28 np0005601226 nova_compute[239456]: 2026-01-29 17:30:28.551 239460 DEBUG nova.network.neutron [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] Updating instance_info_cache with network_info: [{"id": "923a704a-5e13-4a55-8741-5a8ed5669f0a", "address": "fa:16:3e:0f:13:7d", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap923a704a-5e", "ovs_interfaceid": "923a704a-5e13-4a55-8741-5a8ed5669f0a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:30:28 np0005601226 nova_compute[239456]: 2026-01-29 17:30:28.575 239460 DEBUG oslo_concurrency.lockutils [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Releasing lock "refresh_cache-a25c53a6-69cb-4591-96a0-ba339283350e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:30:28 np0005601226 nova_compute[239456]: 2026-01-29 17:30:28.575 239460 DEBUG nova.compute.manager [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] Instance network_info: |[{"id": "923a704a-5e13-4a55-8741-5a8ed5669f0a", "address": "fa:16:3e:0f:13:7d", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap923a704a-5e", "ovs_interfaceid": "923a704a-5e13-4a55-8741-5a8ed5669f0a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 29 12:30:28 np0005601226 nova_compute[239456]: 2026-01-29 17:30:28.576 239460 DEBUG oslo_concurrency.lockutils [req-92ba6aba-0250-4ec2-ae76-5390b9d8469c req-5dfb946a-f621-4964-ae9a-a4f505e6e72d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquired lock "refresh_cache-a25c53a6-69cb-4591-96a0-ba339283350e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:30:28 np0005601226 nova_compute[239456]: 2026-01-29 17:30:28.576 239460 DEBUG nova.network.neutron [req-92ba6aba-0250-4ec2-ae76-5390b9d8469c req-5dfb946a-f621-4964-ae9a-a4f505e6e72d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] Refreshing network info cache for port 923a704a-5e13-4a55-8741-5a8ed5669f0a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 29 12:30:28 np0005601226 ovn_controller[145556]: 2026-01-29T17:30:28Z|00187|binding|INFO|Claiming lport c8b9d9fc-1915-4db3-8869-f69770c88894 for this chassis.
Jan 29 12:30:28 np0005601226 ovn_controller[145556]: 2026-01-29T17:30:28Z|00188|binding|INFO|c8b9d9fc-1915-4db3-8869-f69770c88894: Claiming fa:16:3e:30:ae:b3 10.100.0.7
Jan 29 12:30:28 np0005601226 nova_compute[239456]: 2026-01-29 17:30:28.579 239460 DEBUG nova.virt.libvirt.driver [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] Start _get_guest_xml network_info=[{"id": "923a704a-5e13-4a55-8741-5a8ed5669f0a", "address": "fa:16:3e:0f:13:7d", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap923a704a-5e", "ovs_interfaceid": "923a704a-5e13-4a55-8741-5a8ed5669f0a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'guest_format': None, 'mount_device': '/dev/vda', 'device_type': 'disk', 'disk_bus': 'virtio', 'attachment_id': 'de56dd13-ef76-41ec-987f-89e4c8532fa4', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-374ea712-ba05-4bee-9c63-7609fdf31eb9', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '374ea712-ba05-4bee-9c63-7609fdf31eb9', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'a25c53a6-69cb-4591-96a0-ba339283350e', 'attached_at': '', 'detached_at': '', 'volume_id': '374ea712-ba05-4bee-9c63-7609fdf31eb9', 'serial': '374ea712-ba05-4bee-9c63-7609fdf31eb9'}, 'delete_on_termination': False, 'boot_index': 0, 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 29 12:30:28 np0005601226 nova_compute[239456]: 2026-01-29 17:30:28.580 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:30:28 np0005601226 nova_compute[239456]: 2026-01-29 17:30:28.587 239460 WARNING nova.virt.libvirt.driver [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 29 12:30:28 np0005601226 nova_compute[239456]: 2026-01-29 17:30:28.593 239460 DEBUG nova.virt.libvirt.host [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 29 12:30:28 np0005601226 nova_compute[239456]: 2026-01-29 17:30:28.594 239460 DEBUG nova.virt.libvirt.host [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 29 12:30:28 np0005601226 nova_compute[239456]: 2026-01-29 17:30:28.599 239460 DEBUG nova.virt.libvirt.host [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 29 12:30:28 np0005601226 nova_compute[239456]: 2026-01-29 17:30:28.599 239460 DEBUG nova.virt.libvirt.host [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 29 12:30:28 np0005601226 nova_compute[239456]: 2026-01-29 17:30:28.600 239460 DEBUG nova.virt.libvirt.driver [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 29 12:30:28 np0005601226 systemd-machined[207561]: New machine qemu-20-instance-00000014.
Jan 29 12:30:28 np0005601226 nova_compute[239456]: 2026-01-29 17:30:28.600 239460 DEBUG nova.virt.hardware [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-29T17:13:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='e367d44e-e23e-4b8e-90d1-56d09c8403b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 29 12:30:28 np0005601226 nova_compute[239456]: 2026-01-29 17:30:28.601 239460 DEBUG nova.virt.hardware [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 29 12:30:28 np0005601226 nova_compute[239456]: 2026-01-29 17:30:28.601 239460 DEBUG nova.virt.hardware [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 29 12:30:28 np0005601226 nova_compute[239456]: 2026-01-29 17:30:28.601 239460 DEBUG nova.virt.hardware [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 29 12:30:28 np0005601226 nova_compute[239456]: 2026-01-29 17:30:28.601 239460 DEBUG nova.virt.hardware [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 29 12:30:28 np0005601226 nova_compute[239456]: 2026-01-29 17:30:28.602 239460 DEBUG nova.virt.hardware [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 29 12:30:28 np0005601226 nova_compute[239456]: 2026-01-29 17:30:28.602 239460 DEBUG nova.virt.hardware [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 29 12:30:28 np0005601226 nova_compute[239456]: 2026-01-29 17:30:28.602 239460 DEBUG nova.virt.hardware [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 29 12:30:28 np0005601226 nova_compute[239456]: 2026-01-29 17:30:28.603 239460 DEBUG nova.virt.hardware [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 29 12:30:28 np0005601226 nova_compute[239456]: 2026-01-29 17:30:28.603 239460 DEBUG nova.virt.hardware [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 29 12:30:28 np0005601226 nova_compute[239456]: 2026-01-29 17:30:28.603 239460 DEBUG nova.virt.hardware [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 29 12:30:28 np0005601226 ovn_controller[145556]: 2026-01-29T17:30:28Z|00189|binding|INFO|Setting lport c8b9d9fc-1915-4db3-8869-f69770c88894 ovn-installed in OVS
Jan 29 12:30:28 np0005601226 systemd[1]: Started Virtual Machine qemu-20-instance-00000014.
Jan 29 12:30:28 np0005601226 systemd-udevd[267209]: Network interface NamePolicy= disabled on kernel command line.
Jan 29 12:30:28 np0005601226 nova_compute[239456]: 2026-01-29 17:30:28.627 239460 DEBUG nova.storage.rbd_utils [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] rbd image a25c53a6-69cb-4591-96a0-ba339283350e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:30:28 np0005601226 NetworkManager[49020]: <info>  [1769707828.6311] device (tapc8b9d9fc-19): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 29 12:30:28 np0005601226 nova_compute[239456]: 2026-01-29 17:30:28.631 239460 DEBUG oslo_concurrency.processutils [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:30:28 np0005601226 NetworkManager[49020]: <info>  [1769707828.6319] device (tapc8b9d9fc-19): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 29 12:30:28 np0005601226 ovn_controller[145556]: 2026-01-29T17:30:28Z|00190|binding|INFO|Setting lport c8b9d9fc-1915-4db3-8869-f69770c88894 up in Southbound
Jan 29 12:30:28 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:28.646 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:ae:b3 10.100.0.7'], port_security=['fa:16:3e:30:ae:b3 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '4c4d76ac-3711-4858-90a1-7e43dc5ff7e4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-25cf1715-f178-4f65-be7c-cf203c28f072', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c74297072cc041019fc7ff4bff1a0f08', 'neutron:revision_number': '2', 'neutron:security_group_ids': '45c1bbdb-777c-4906-ac59-7f4e97f55f2c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3018d9f1-e4b1-490d-94c7-3ffd5dd36627, chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], logical_port=c8b9d9fc-1915-4db3-8869-f69770c88894) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:30:28 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:28.647 155625 INFO neutron.agent.ovn.metadata.agent [-] Port c8b9d9fc-1915-4db3-8869-f69770c88894 in datapath 25cf1715-f178-4f65-be7c-cf203c28f072 bound to our chassis#033[00m
Jan 29 12:30:28 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:28.649 155625 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 25cf1715-f178-4f65-be7c-cf203c28f072#033[00m
Jan 29 12:30:28 np0005601226 nova_compute[239456]: 2026-01-29 17:30:28.650 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:30:28 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:28.658 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[35b4e095-4d7d-4ec7-867e-932ea5d7ac3b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:30:28 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:28.659 155625 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap25cf1715-f1 in ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 29 12:30:28 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:28.661 246354 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap25cf1715-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 29 12:30:28 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:28.661 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[589f144a-a893-4d61-9dc8-a4186b6d138f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:30:28 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:28.662 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[7bcb1918-5768-470f-a5ed-63298f6bd9cc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:30:28 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:28.672 156164 DEBUG oslo.privsep.daemon [-] privsep: reply[8a55a0c2-5b4f-4b8a-a463-3265c13ba107]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:30:28 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:28.684 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[8dbab549-899e-4bf6-8309-9acecaa53914]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:30:28 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:28.705 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[827ea11b-1836-4476-80dc-190bc5f32e56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:30:28 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:28.708 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[cc34c3c2-3970-4972-8840-8007b973a2fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:30:28 np0005601226 systemd-udevd[267219]: Network interface NamePolicy= disabled on kernel command line.
Jan 29 12:30:28 np0005601226 NetworkManager[49020]: <info>  [1769707828.7099] manager: (tap25cf1715-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/104)
Jan 29 12:30:28 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:28.732 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[e8b50af7-2ed7-4991-af4c-578ec0693581]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:30:28 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:28.735 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[241823cf-08ec-4884-adc4-ce38152c7918]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:30:28 np0005601226 NetworkManager[49020]: <info>  [1769707828.7510] device (tap25cf1715-f0): carrier: link connected
Jan 29 12:30:28 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:28.753 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[fcb1716f-4e0e-4821-b1ee-5315efa7e44c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:30:28 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:28.764 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[0612e957-8a0f-4015-b67b-aa45fd37d6e8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap25cf1715-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:73:50:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 65], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 518225, 'reachable_time': 26559, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267270, 'error': None, 'target': 'ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:30:28 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:28.774 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[d1710c8f-6ae8-4568-b4dc-a8cf1070a228]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe73:50ea'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 518225, 'tstamp': 518225}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267271, 'error': None, 'target': 'ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:30:28 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:28.785 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[21df6e11-66d2-4ba4-8503-e0351d46db28]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap25cf1715-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:73:50:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 65], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 518225, 'reachable_time': 26559, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 267272, 'error': None, 'target': 'ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:30:28 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:28.802 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[87de70a3-0daf-4e19-ba5f-67142a93e19a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:30:28 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:28.843 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[1171acd7-1d67-41e3-827e-a2570156423d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:30:28 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:28.844 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap25cf1715-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:30:28 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:28.845 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 29 12:30:28 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:28.845 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap25cf1715-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:30:28 np0005601226 NetworkManager[49020]: <info>  [1769707828.8474] manager: (tap25cf1715-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/105)
Jan 29 12:30:28 np0005601226 kernel: tap25cf1715-f0: entered promiscuous mode
Jan 29 12:30:28 np0005601226 nova_compute[239456]: 2026-01-29 17:30:28.848 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:30:28 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:28.850 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap25cf1715-f0, col_values=(('external_ids', {'iface-id': '82a91bf5-9093-4cbd-bfe4-f5d4b5400077'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:30:28 np0005601226 ovn_controller[145556]: 2026-01-29T17:30:28Z|00191|binding|INFO|Releasing lport 82a91bf5-9093-4cbd-bfe4-f5d4b5400077 from this chassis (sb_readonly=0)
Jan 29 12:30:28 np0005601226 nova_compute[239456]: 2026-01-29 17:30:28.851 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:30:28 np0005601226 nova_compute[239456]: 2026-01-29 17:30:28.851 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:30:28 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:28.853 155625 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/25cf1715-f178-4f65-be7c-cf203c28f072.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/25cf1715-f178-4f65-be7c-cf203c28f072.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 29 12:30:28 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:28.854 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[4c59ac5e-d942-4d86-8550-af554f8072ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:30:28 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:28.855 155625 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 29 12:30:28 np0005601226 ovn_metadata_agent[155620]: global
Jan 29 12:30:28 np0005601226 ovn_metadata_agent[155620]:    log         /dev/log local0 debug
Jan 29 12:30:28 np0005601226 ovn_metadata_agent[155620]:    log-tag     haproxy-metadata-proxy-25cf1715-f178-4f65-be7c-cf203c28f072
Jan 29 12:30:28 np0005601226 ovn_metadata_agent[155620]:    user        root
Jan 29 12:30:28 np0005601226 ovn_metadata_agent[155620]:    group       root
Jan 29 12:30:28 np0005601226 ovn_metadata_agent[155620]:    maxconn     1024
Jan 29 12:30:28 np0005601226 ovn_metadata_agent[155620]:    pidfile     /var/lib/neutron/external/pids/25cf1715-f178-4f65-be7c-cf203c28f072.pid.haproxy
Jan 29 12:30:28 np0005601226 ovn_metadata_agent[155620]:    daemon
Jan 29 12:30:28 np0005601226 ovn_metadata_agent[155620]: 
Jan 29 12:30:28 np0005601226 ovn_metadata_agent[155620]: defaults
Jan 29 12:30:28 np0005601226 ovn_metadata_agent[155620]:    log global
Jan 29 12:30:28 np0005601226 ovn_metadata_agent[155620]:    mode http
Jan 29 12:30:28 np0005601226 ovn_metadata_agent[155620]:    option httplog
Jan 29 12:30:28 np0005601226 ovn_metadata_agent[155620]:    option dontlognull
Jan 29 12:30:28 np0005601226 ovn_metadata_agent[155620]:    option http-server-close
Jan 29 12:30:28 np0005601226 ovn_metadata_agent[155620]:    option forwardfor
Jan 29 12:30:28 np0005601226 ovn_metadata_agent[155620]:    retries                 3
Jan 29 12:30:28 np0005601226 ovn_metadata_agent[155620]:    timeout http-request    30s
Jan 29 12:30:28 np0005601226 ovn_metadata_agent[155620]:    timeout connect         30s
Jan 29 12:30:28 np0005601226 ovn_metadata_agent[155620]:    timeout client          32s
Jan 29 12:30:28 np0005601226 ovn_metadata_agent[155620]:    timeout server          32s
Jan 29 12:30:28 np0005601226 ovn_metadata_agent[155620]:    timeout http-keep-alive 30s
Jan 29 12:30:28 np0005601226 ovn_metadata_agent[155620]: 
Jan 29 12:30:28 np0005601226 ovn_metadata_agent[155620]: 
Jan 29 12:30:28 np0005601226 ovn_metadata_agent[155620]: listen listener
Jan 29 12:30:28 np0005601226 ovn_metadata_agent[155620]:    bind 169.254.169.254:80
Jan 29 12:30:28 np0005601226 ovn_metadata_agent[155620]:    server metadata /var/lib/neutron/metadata_proxy
Jan 29 12:30:28 np0005601226 ovn_metadata_agent[155620]:    http-request add-header X-OVN-Network-ID 25cf1715-f178-4f65-be7c-cf203c28f072
Jan 29 12:30:28 np0005601226 ovn_metadata_agent[155620]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 29 12:30:28 np0005601226 nova_compute[239456]: 2026-01-29 17:30:28.856 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:30:28 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:28.857 155625 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072', 'env', 'PROCESS_TAG=haproxy-25cf1715-f178-4f65-be7c-cf203c28f072', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/25cf1715-f178-4f65-be7c-cf203c28f072.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 29 12:30:29 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:30:29 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/830213028' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:30:29 np0005601226 nova_compute[239456]: 2026-01-29 17:30:29.167 239460 DEBUG oslo_concurrency.processutils [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:30:29 np0005601226 podman[267340]: 2026-01-29 17:30:29.185581677 +0000 UTC m=+0.048865867 container create 6a0dcc9e27ab69260c27fe18683d952a5fc5470ed856c430ff355de18fc0f6e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202)
Jan 29 12:30:29 np0005601226 systemd[1]: Started libpod-conmon-6a0dcc9e27ab69260c27fe18683d952a5fc5470ed856c430ff355de18fc0f6e4.scope.
Jan 29 12:30:29 np0005601226 podman[267340]: 2026-01-29 17:30:29.158455197 +0000 UTC m=+0.021739477 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 29 12:30:29 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:30:29 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/254376857c4901c5c93c381496cb6f6d4c387bc22840b54a8d323f02fae9b517/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 29 12:30:29 np0005601226 podman[267340]: 2026-01-29 17:30:29.281124939 +0000 UTC m=+0.144409149 container init 6a0dcc9e27ab69260c27fe18683d952a5fc5470ed856c430ff355de18fc0f6e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 29 12:30:29 np0005601226 podman[267340]: 2026-01-29 17:30:29.286674568 +0000 UTC m=+0.149958758 container start 6a0dcc9e27ab69260c27fe18683d952a5fc5470ed856c430ff355de18fc0f6e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 29 12:30:29 np0005601226 neutron-haproxy-ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072[267357]: [NOTICE]   (267361) : New worker (267363) forked
Jan 29 12:30:29 np0005601226 neutron-haproxy-ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072[267357]: [NOTICE]   (267361) : Loading success.
Jan 29 12:30:29 np0005601226 nova_compute[239456]: 2026-01-29 17:30:29.360 239460 DEBUG nova.virt.libvirt.vif [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-29T17:30:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-693300434',display_name='tempest-TestVolumeBootPattern-server-693300434',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-693300434',id=21,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEFfKI2YuGMM8acVCrFpefV0OhBXhmnc4Btak8xqfpJ+fgijtCxB67Cy872WiGoem7r2HhA8kwucFDoWkV8oHKl/YL0dPzqXRyF6oegJsH40c+wdyEi0ybB2qGD+38ijJg==',key_name='tempest-TestVolumeBootPattern-101947667',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='420f46ae230d4c529afe366a1b780921',ramdisk_id='',reservation_id='r-jfp80n0q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1871389491',owner_user_name='tempest-TestVolumeBootPattern-1871389491-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-29T17:30:25Z,user_data=None,user_id='3901089a059c4bdb8d0497398873d2f1',uuid=a25c53a6-69cb-4591-96a0-ba339283350e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "923a704a-5e13-4a55-8741-5a8ed5669f0a", "address": "fa:16:3e:0f:13:7d", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap923a704a-5e", "ovs_interfaceid": "923a704a-5e13-4a55-8741-5a8ed5669f0a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 29 12:30:29 np0005601226 nova_compute[239456]: 2026-01-29 17:30:29.360 239460 DEBUG nova.network.os_vif_util [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Converting VIF {"id": "923a704a-5e13-4a55-8741-5a8ed5669f0a", "address": "fa:16:3e:0f:13:7d", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap923a704a-5e", "ovs_interfaceid": "923a704a-5e13-4a55-8741-5a8ed5669f0a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 29 12:30:29 np0005601226 nova_compute[239456]: 2026-01-29 17:30:29.361 239460 DEBUG nova.network.os_vif_util [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0f:13:7d,bridge_name='br-int',has_traffic_filtering=True,id=923a704a-5e13-4a55-8741-5a8ed5669f0a,network=Network(3c08c304-2b32-4b44-ac2b-279bb8b2403b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap923a704a-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 29 12:30:29 np0005601226 nova_compute[239456]: 2026-01-29 17:30:29.362 239460 DEBUG nova.objects.instance [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lazy-loading 'pci_devices' on Instance uuid a25c53a6-69cb-4591-96a0-ba339283350e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:30:29 np0005601226 nova_compute[239456]: 2026-01-29 17:30:29.511 239460 DEBUG nova.virt.libvirt.driver [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] End _get_guest_xml xml=<domain type="kvm">
Jan 29 12:30:29 np0005601226 nova_compute[239456]:  <uuid>a25c53a6-69cb-4591-96a0-ba339283350e</uuid>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:  <name>instance-00000015</name>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:  <memory>131072</memory>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:  <vcpu>1</vcpu>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:  <metadata>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 29 12:30:29 np0005601226 nova_compute[239456]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:      <nova:name>tempest-TestVolumeBootPattern-server-693300434</nova:name>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:      <nova:creationTime>2026-01-29 17:30:28</nova:creationTime>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:      <nova:flavor name="m1.nano">
Jan 29 12:30:29 np0005601226 nova_compute[239456]:        <nova:memory>128</nova:memory>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:        <nova:disk>1</nova:disk>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:        <nova:swap>0</nova:swap>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:        <nova:ephemeral>0</nova:ephemeral>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:        <nova:vcpus>1</nova:vcpus>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:      </nova:flavor>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:      <nova:owner>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:        <nova:user uuid="3901089a059c4bdb8d0497398873d2f1">tempest-TestVolumeBootPattern-1871389491-project-member</nova:user>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:        <nova:project uuid="420f46ae230d4c529afe366a1b780921">tempest-TestVolumeBootPattern-1871389491</nova:project>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:      </nova:owner>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:      <nova:ports>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:        <nova:port uuid="923a704a-5e13-4a55-8741-5a8ed5669f0a">
Jan 29 12:30:29 np0005601226 nova_compute[239456]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:        </nova:port>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:      </nova:ports>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:    </nova:instance>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:  </metadata>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:  <sysinfo type="smbios">
Jan 29 12:30:29 np0005601226 nova_compute[239456]:    <system>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:      <entry name="manufacturer">RDO</entry>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:      <entry name="product">OpenStack Compute</entry>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:      <entry name="serial">a25c53a6-69cb-4591-96a0-ba339283350e</entry>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:      <entry name="uuid">a25c53a6-69cb-4591-96a0-ba339283350e</entry>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:      <entry name="family">Virtual Machine</entry>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:    </system>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:  </sysinfo>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:  <os>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:    <boot dev="hd"/>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:    <smbios mode="sysinfo"/>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:  </os>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:  <features>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:    <acpi/>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:    <apic/>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:    <vmcoreinfo/>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:  </features>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:  <clock offset="utc">
Jan 29 12:30:29 np0005601226 nova_compute[239456]:    <timer name="pit" tickpolicy="delay"/>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:    <timer name="hpet" present="no"/>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:  </clock>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:  <cpu mode="host-model" match="exact">
Jan 29 12:30:29 np0005601226 nova_compute[239456]:    <topology sockets="1" cores="1" threads="1"/>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:  </cpu>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:  <devices>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:    <disk type="network" device="cdrom">
Jan 29 12:30:29 np0005601226 nova_compute[239456]:      <driver type="raw" cache="none"/>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:      <source protocol="rbd" name="vms/a25c53a6-69cb-4591-96a0-ba339283350e_disk.config">
Jan 29 12:30:29 np0005601226 nova_compute[239456]:        <host name="192.168.122.100" port="6789"/>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:      </source>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:      <auth username="openstack">
Jan 29 12:30:29 np0005601226 nova_compute[239456]:        <secret type="ceph" uuid="cc5c72e3-31e0-58b9-8731-456117d38f4a"/>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:      </auth>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:      <target dev="sda" bus="sata"/>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:    </disk>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:    <disk type="network" device="disk">
Jan 29 12:30:29 np0005601226 nova_compute[239456]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:      <source protocol="rbd" name="volumes/volume-374ea712-ba05-4bee-9c63-7609fdf31eb9">
Jan 29 12:30:29 np0005601226 nova_compute[239456]:        <host name="192.168.122.100" port="6789"/>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:      </source>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:      <auth username="openstack">
Jan 29 12:30:29 np0005601226 nova_compute[239456]:        <secret type="ceph" uuid="cc5c72e3-31e0-58b9-8731-456117d38f4a"/>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:      </auth>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:      <target dev="vda" bus="virtio"/>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:      <serial>374ea712-ba05-4bee-9c63-7609fdf31eb9</serial>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:    </disk>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:    <interface type="ethernet">
Jan 29 12:30:29 np0005601226 nova_compute[239456]:      <mac address="fa:16:3e:0f:13:7d"/>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:      <model type="virtio"/>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:      <driver name="vhost" rx_queue_size="512"/>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:      <mtu size="1442"/>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:      <target dev="tap923a704a-5e"/>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:    </interface>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:    <serial type="pty">
Jan 29 12:30:29 np0005601226 nova_compute[239456]:      <log file="/var/lib/nova/instances/a25c53a6-69cb-4591-96a0-ba339283350e/console.log" append="off"/>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:    </serial>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:    <video>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:      <model type="virtio"/>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:    </video>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:    <input type="tablet" bus="usb"/>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:    <rng model="virtio">
Jan 29 12:30:29 np0005601226 nova_compute[239456]:      <backend model="random">/dev/urandom</backend>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:    </rng>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root"/>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:    <controller type="usb" index="0"/>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:    <memballoon model="virtio">
Jan 29 12:30:29 np0005601226 nova_compute[239456]:      <stats period="10"/>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:    </memballoon>
Jan 29 12:30:29 np0005601226 nova_compute[239456]:  </devices>
Jan 29 12:30:29 np0005601226 nova_compute[239456]: </domain>
Jan 29 12:30:29 np0005601226 nova_compute[239456]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
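The domain XML that `_get_guest_xml` dumped above is plain libvirt XML, so it can be inspected with nothing but the standard library. A minimal sketch, using a trimmed stand-in for the full dump (only the `<devices>` elements shown in the log are reproduced):

```python
# Sketch: inspecting a libvirt domain XML like the one Nova logged above.
# The XML string is a trimmed, illustrative stand-in for the full dump.
import xml.etree.ElementTree as ET

domain_xml = """
<domain type="kvm">
  <name>instance-00000015</name>
  <devices>
    <disk type="network" device="cdrom">
      <target dev="sda" bus="sata"/>
    </disk>
    <disk type="network" device="disk">
      <target dev="vda" bus="virtio"/>
    </disk>
    <interface type="ethernet">
      <mac address="fa:16:3e:0f:13:7d"/>
      <target dev="tap923a704a-5e"/>
    </interface>
  </devices>
</domain>
"""

root = ET.fromstring(domain_xml)
# Map each disk's target device name to its role (cdrom vs disk).
disks = {d.find("target").get("dev"): d.get("device")
         for d in root.findall("./devices/disk")}
# Collect the MAC address of every interface.
macs = [i.find("mac").get("address")
        for i in root.findall("./devices/interface")]

print(disks)  # {'sda': 'cdrom', 'vda': 'disk'}
print(macs)   # ['fa:16:3e:0f:13:7d']
```

This matches what the log shows a few lines later: the config-drive cdrom lands on `sda` (SATA) and the boot volume on `vda` (virtio), with one tap interface carrying the Neutron port's MAC.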
Jan 29 12:30:29 np0005601226 nova_compute[239456]: 2026-01-29 17:30:29.511 239460 DEBUG nova.compute.manager [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] Preparing to wait for external event network-vif-plugged-923a704a-5e13-4a55-8741-5a8ed5669f0a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 29 12:30:29 np0005601226 nova_compute[239456]: 2026-01-29 17:30:29.512 239460 DEBUG oslo_concurrency.lockutils [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Acquiring lock "a25c53a6-69cb-4591-96a0-ba339283350e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:30:29 np0005601226 nova_compute[239456]: 2026-01-29 17:30:29.524 239460 DEBUG oslo_concurrency.lockutils [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "a25c53a6-69cb-4591-96a0-ba339283350e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.012s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:30:29 np0005601226 nova_compute[239456]: 2026-01-29 17:30:29.524 239460 DEBUG oslo_concurrency.lockutils [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "a25c53a6-69cb-4591-96a0-ba339283350e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:30:29 np0005601226 nova_compute[239456]: 2026-01-29 17:30:29.525 239460 DEBUG nova.virt.libvirt.vif [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-29T17:30:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-693300434',display_name='tempest-TestVolumeBootPattern-server-693300434',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-693300434',id=21,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEFfKI2YuGMM8acVCrFpefV0OhBXhmnc4Btak8xqfpJ+fgijtCxB67Cy872WiGoem7r2HhA8kwucFDoWkV8oHKl/YL0dPzqXRyF6oegJsH40c+wdyEi0ybB2qGD+38ijJg==',key_name='tempest-TestVolumeBootPattern-101947667',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='420f46ae230d4c529afe366a1b780921',ramdisk_id='',reservation_id='r-jfp80n0q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1871389491',owner_user_name='tempest-TestVolumeBootPattern-1871389491-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-29T17:30:25Z,user_data=None,user_id='3901089a059c4bdb8d0497398873d2f1',uuid=a25c53a6-69cb-4591-96a0-ba339283350e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "923a704a-5e13-4a55-8741-5a8ed5669f0a", "address": "fa:16:3e:0f:13:7d", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap923a704a-5e", "ovs_interfaceid": "923a704a-5e13-4a55-8741-5a8ed5669f0a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 29 12:30:29 np0005601226 nova_compute[239456]: 2026-01-29 17:30:29.525 239460 DEBUG nova.network.os_vif_util [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Converting VIF {"id": "923a704a-5e13-4a55-8741-5a8ed5669f0a", "address": "fa:16:3e:0f:13:7d", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap923a704a-5e", "ovs_interfaceid": "923a704a-5e13-4a55-8741-5a8ed5669f0a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 29 12:30:29 np0005601226 nova_compute[239456]: 2026-01-29 17:30:29.526 239460 DEBUG nova.network.os_vif_util [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0f:13:7d,bridge_name='br-int',has_traffic_filtering=True,id=923a704a-5e13-4a55-8741-5a8ed5669f0a,network=Network(3c08c304-2b32-4b44-ac2b-279bb8b2403b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap923a704a-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 29 12:30:29 np0005601226 nova_compute[239456]: 2026-01-29 17:30:29.526 239460 DEBUG os_vif [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0f:13:7d,bridge_name='br-int',has_traffic_filtering=True,id=923a704a-5e13-4a55-8741-5a8ed5669f0a,network=Network(3c08c304-2b32-4b44-ac2b-279bb8b2403b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap923a704a-5e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 29 12:30:29 np0005601226 nova_compute[239456]: 2026-01-29 17:30:29.526 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:30:29 np0005601226 nova_compute[239456]: 2026-01-29 17:30:29.527 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:30:29 np0005601226 nova_compute[239456]: 2026-01-29 17:30:29.527 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 29 12:30:29 np0005601226 nova_compute[239456]: 2026-01-29 17:30:29.529 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:30:29 np0005601226 nova_compute[239456]: 2026-01-29 17:30:29.529 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap923a704a-5e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:30:29 np0005601226 nova_compute[239456]: 2026-01-29 17:30:29.530 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap923a704a-5e, col_values=(('external_ids', {'iface-id': '923a704a-5e13-4a55-8741-5a8ed5669f0a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0f:13:7d', 'vm-uuid': 'a25c53a6-69cb-4591-96a0-ba339283350e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:30:29 np0005601226 nova_compute[239456]: 2026-01-29 17:30:29.531 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:30:29 np0005601226 NetworkManager[49020]: <info>  [1769707829.5325] manager: (tap923a704a-5e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/106)
Jan 29 12:30:29 np0005601226 nova_compute[239456]: 2026-01-29 17:30:29.533 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 29 12:30:29 np0005601226 nova_compute[239456]: 2026-01-29 17:30:29.537 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:30:29 np0005601226 nova_compute[239456]: 2026-01-29 17:30:29.537 239460 INFO os_vif [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0f:13:7d,bridge_name='br-int',has_traffic_filtering=True,id=923a704a-5e13-4a55-8741-5a8ed5669f0a,network=Network(3c08c304-2b32-4b44-ac2b-279bb8b2403b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap923a704a-5e')#033[00m
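The plug sequence above ran two ovsdbapp transactions: `AddBridgeCommand` (a no-op here, "Transaction caused no change", since `br-int` already existed), then `AddPortCommand` plus a `DbSetCommand` writing the port's `external_ids`. Those `external_ids` are what lets ovn-controller match the OVS interface to the Neutron port and claim it a moment later. A sketch expressing the same operations as their approximate `ovs-vsctl` equivalents — commands are only constructed here, never executed, and the flag mapping is an illustrative approximation:

```python
# Sketch of the OVS operations os-vif ran above via ovsdbapp, expressed as
# approximate ovs-vsctl equivalents. Nothing is executed; the commands are
# built so the external_ids payload from the log is easy to see.
def plug_ovs_vif(bridge, dev, iface_id, mac, vm_uuid):
    external_ids = {
        "iface-id": iface_id,        # Neutron port UUID; OVN matches on this
        "iface-status": "active",
        "attached-mac": mac,
        "vm-uuid": vm_uuid,
    }
    return [
        # AddBridgeCommand(may_exist=True) ~ --may-exist add-br
        ["ovs-vsctl", "--may-exist", "add-br", bridge],
        # AddPortCommand(may_exist=True) ~ --may-exist add-port
        ["ovs-vsctl", "--may-exist", "add-port", bridge, dev],
        # DbSetCommand on the Interface record ~ set Interface
        ["ovs-vsctl", "set", "Interface", dev] +
        [f'external_ids:{k}="{v}"' for k, v in external_ids.items()],
    ]

cmds = plug_ovs_vif("br-int", "tap923a704a-5e",
                    "923a704a-5e13-4a55-8741-5a8ed5669f0a",
                    "fa:16:3e:0f:13:7d",
                    "a25c53a6-69cb-4591-96a0-ba339283350e")
```

Once the `iface-id` is set, the ovn_controller lines that follow ("Claiming lport …", "Setting lport … ovn-installed in OVS") are OVN binding the logical port to this chassis.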
Jan 29 12:30:29 np0005601226 nova_compute[239456]: 2026-01-29 17:30:29.640 239460 DEBUG nova.virt.libvirt.driver [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:30:29 np0005601226 nova_compute[239456]: 2026-01-29 17:30:29.641 239460 DEBUG nova.virt.libvirt.driver [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:30:29 np0005601226 nova_compute[239456]: 2026-01-29 17:30:29.641 239460 DEBUG nova.virt.libvirt.driver [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] No VIF found with MAC fa:16:3e:0f:13:7d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 29 12:30:29 np0005601226 nova_compute[239456]: 2026-01-29 17:30:29.642 239460 INFO nova.virt.libvirt.driver [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] Using config drive#033[00m
Jan 29 12:30:29 np0005601226 nova_compute[239456]: 2026-01-29 17:30:29.662 239460 DEBUG nova.storage.rbd_utils [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] rbd image a25c53a6-69cb-4591-96a0-ba339283350e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:30:29 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1638: 305 pgs: 305 active+clean; 317 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 66 KiB/s rd, 2.0 MiB/s wr, 89 op/s
Jan 29 12:30:30 np0005601226 nova_compute[239456]: 2026-01-29 17:30:30.250 239460 INFO nova.virt.libvirt.driver [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] Creating config drive at /var/lib/nova/instances/a25c53a6-69cb-4591-96a0-ba339283350e/disk.config#033[00m
Jan 29 12:30:30 np0005601226 nova_compute[239456]: 2026-01-29 17:30:30.256 239460 DEBUG oslo_concurrency.processutils [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a25c53a6-69cb-4591-96a0-ba339283350e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv3t60ypw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:30:30 np0005601226 nova_compute[239456]: 2026-01-29 17:30:30.379 239460 DEBUG oslo_concurrency.processutils [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a25c53a6-69cb-4591-96a0-ba339283350e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv3t60ypw" returned: 0 in 0.123s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:30:30 np0005601226 nova_compute[239456]: 2026-01-29 17:30:30.403 239460 DEBUG nova.storage.rbd_utils [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] rbd image a25c53a6-69cb-4591-96a0-ba339283350e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:30:30 np0005601226 nova_compute[239456]: 2026-01-29 17:30:30.407 239460 DEBUG oslo_concurrency.processutils [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a25c53a6-69cb-4591-96a0-ba339283350e/disk.config a25c53a6-69cb-4591-96a0-ba339283350e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:30:30 np0005601226 nova_compute[239456]: 2026-01-29 17:30:30.485 239460 DEBUG nova.compute.manager [req-0d1db294-d7c2-48cc-9c8a-80dc8b52115f req-11ea0c16-b232-4446-91ff-33d2c9cd207d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] Received event network-vif-plugged-c8b9d9fc-1915-4db3-8869-f69770c88894 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:30:30 np0005601226 nova_compute[239456]: 2026-01-29 17:30:30.486 239460 DEBUG oslo_concurrency.lockutils [req-0d1db294-d7c2-48cc-9c8a-80dc8b52115f req-11ea0c16-b232-4446-91ff-33d2c9cd207d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "4c4d76ac-3711-4858-90a1-7e43dc5ff7e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:30:30 np0005601226 nova_compute[239456]: 2026-01-29 17:30:30.487 239460 DEBUG oslo_concurrency.lockutils [req-0d1db294-d7c2-48cc-9c8a-80dc8b52115f req-11ea0c16-b232-4446-91ff-33d2c9cd207d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "4c4d76ac-3711-4858-90a1-7e43dc5ff7e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:30:30 np0005601226 nova_compute[239456]: 2026-01-29 17:30:30.487 239460 DEBUG oslo_concurrency.lockutils [req-0d1db294-d7c2-48cc-9c8a-80dc8b52115f req-11ea0c16-b232-4446-91ff-33d2c9cd207d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "4c4d76ac-3711-4858-90a1-7e43dc5ff7e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:30:30 np0005601226 nova_compute[239456]: 2026-01-29 17:30:30.487 239460 DEBUG nova.compute.manager [req-0d1db294-d7c2-48cc-9c8a-80dc8b52115f req-11ea0c16-b232-4446-91ff-33d2c9cd207d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] Processing event network-vif-plugged-c8b9d9fc-1915-4db3-8869-f69770c88894 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 29 12:30:30 np0005601226 nova_compute[239456]: 2026-01-29 17:30:30.522 239460 DEBUG oslo_concurrency.processutils [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a25c53a6-69cb-4591-96a0-ba339283350e/disk.config a25c53a6-69cb-4591-96a0-ba339283350e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.116s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:30:30 np0005601226 nova_compute[239456]: 2026-01-29 17:30:30.524 239460 INFO nova.virt.libvirt.driver [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] Deleting local config drive /var/lib/nova/instances/a25c53a6-69cb-4591-96a0-ba339283350e/disk.config because it was imported into RBD.#033[00m
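The config-drive flow in the lines above is three steps: build an ISO9660 image with `mkisofs`, import it into the Ceph `vms` pool with `rbd import` (because this deployment stores ephemeral disks in RBD), then delete the local file. A sketch that reconstructs the two command lines as argument lists, mirroring the log; commands are built but not executed, and the helper name is hypothetical:

```python
# Sketch of the config-drive build-and-import flow logged above. The
# function name is hypothetical; the flags and paths mirror the log lines.
def config_drive_cmds(instance_uuid, tmp_dir, publisher):
    iso = f"/var/lib/nova/instances/{instance_uuid}/disk.config"
    mkisofs = [
        "/usr/bin/mkisofs", "-o", iso,
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher", publisher,
        "-quiet", "-J", "-r",
        "-V", "config-2",        # volume label cloud-init looks for
        tmp_dir,                 # staged metadata/network_data files
    ]
    rbd_import = [
        "rbd", "import", "--pool", "vms", iso,
        f"{instance_uuid}_disk.config",
        "--image-format=2", "--id", "openstack",
        "--conf", "/etc/ceph/ceph.conf",
    ]
    return mkisofs, rbd_import, iso  # caller unlinks `iso` after the import

mkisofs, rbd_import, iso = config_drive_cmds(
    "a25c53a6-69cb-4591-96a0-ba339283350e",
    "/tmp/tmpv3t60ypw",
    "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9")
```

The `config-2` volume label is the contract with the guest: cloud-init finds the config drive by scanning block devices for that label, which is why the cdrom `<disk>` in the domain XML points at `vms/…_disk.config` over RBD.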
Jan 29 12:30:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:30:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3415206622' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:30:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:30:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3415206622' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:30:30 np0005601226 NetworkManager[49020]: <info>  [1769707830.5492] manager: (tap923a704a-5e): new Tun device (/org/freedesktop/NetworkManager/Devices/107)
Jan 29 12:30:30 np0005601226 kernel: tap923a704a-5e: entered promiscuous mode
Jan 29 12:30:30 np0005601226 systemd-udevd[267243]: Network interface NamePolicy= disabled on kernel command line.
Jan 29 12:30:30 np0005601226 ovn_controller[145556]: 2026-01-29T17:30:30Z|00192|binding|INFO|Claiming lport 923a704a-5e13-4a55-8741-5a8ed5669f0a for this chassis.
Jan 29 12:30:30 np0005601226 ovn_controller[145556]: 2026-01-29T17:30:30Z|00193|binding|INFO|923a704a-5e13-4a55-8741-5a8ed5669f0a: Claiming fa:16:3e:0f:13:7d 10.100.0.13
Jan 29 12:30:30 np0005601226 nova_compute[239456]: 2026-01-29 17:30:30.551 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:30:30 np0005601226 nova_compute[239456]: 2026-01-29 17:30:30.554 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:30:30 np0005601226 NetworkManager[49020]: <info>  [1769707830.5603] device (tap923a704a-5e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 29 12:30:30 np0005601226 NetworkManager[49020]: <info>  [1769707830.5610] device (tap923a704a-5e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 29 12:30:30 np0005601226 systemd-machined[207561]: New machine qemu-21-instance-00000015.
Jan 29 12:30:30 np0005601226 nova_compute[239456]: 2026-01-29 17:30:30.576 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:30:30 np0005601226 ovn_controller[145556]: 2026-01-29T17:30:30Z|00194|binding|INFO|Setting lport 923a704a-5e13-4a55-8741-5a8ed5669f0a ovn-installed in OVS
Jan 29 12:30:30 np0005601226 nova_compute[239456]: 2026-01-29 17:30:30.579 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:30:30 np0005601226 systemd[1]: Started Virtual Machine qemu-21-instance-00000015.
Jan 29 12:30:30 np0005601226 nova_compute[239456]: 2026-01-29 17:30:30.603 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:30:30 np0005601226 nova_compute[239456]: 2026-01-29 17:30:30.604 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 29 12:30:30 np0005601226 ovn_controller[145556]: 2026-01-29T17:30:30Z|00195|binding|INFO|Setting lport 923a704a-5e13-4a55-8741-5a8ed5669f0a up in Southbound
Jan 29 12:30:30 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:30.628 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0f:13:7d 10.100.0.13'], port_security=['fa:16:3e:0f:13:7d 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'a25c53a6-69cb-4591-96a0-ba339283350e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3c08c304-2b32-4b44-ac2b-279bb8b2403b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '420f46ae230d4c529afe366a1b780921', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9cfba344-fbfc-404d-872d-d297b528124f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=43b4da70-6867-4e05-b172-1e52c878ce1d, chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], logical_port=923a704a-5e13-4a55-8741-5a8ed5669f0a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:30:30 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:30.629 155625 INFO neutron.agent.ovn.metadata.agent [-] Port 923a704a-5e13-4a55-8741-5a8ed5669f0a in datapath 3c08c304-2b32-4b44-ac2b-279bb8b2403b bound to our chassis#033[00m
Jan 29 12:30:30 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:30.631 155625 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3c08c304-2b32-4b44-ac2b-279bb8b2403b#033[00m
Jan 29 12:30:30 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:30.640 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[c3fd5d50-eb19-4be0-9397-80056e2614ec]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:30:30 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:30.640 155625 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3c08c304-21 in ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 29 12:30:30 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:30.642 246354 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3c08c304-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 29 12:30:30 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:30.642 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[4c992f89-7cf9-48ab-82fe-716e6126fed8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:30:30 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:30.643 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[208fecf0-f6e7-4222-8c68-9bd5b11532ba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:30:30 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:30.651 156164 DEBUG oslo.privsep.daemon [-] privsep: reply[b71d0cca-59de-4b7f-b778-bde31afc3816]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:30:30 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:30.661 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[54e5a17d-12ed-43fd-8b2f-5060e05290b8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:30:30 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:30.678 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[995e5349-6ca4-4708-ad9a-026f65eab2cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:30:30 np0005601226 NetworkManager[49020]: <info>  [1769707830.6842] manager: (tap3c08c304-20): new Veth device (/org/freedesktop/NetworkManager/Devices/108)
Jan 29 12:30:30 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:30.685 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[82f345be-d021-4720-993b-fae76dac77a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:30:30 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:30.707 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[15fc9ad7-f108-496a-a13e-ba607ea0a951]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:30:30 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:30.710 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[8188cd88-ab5c-4520-8194-d6de9e86f095]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:30:30 np0005601226 NetworkManager[49020]: <info>  [1769707830.7271] device (tap3c08c304-20): carrier: link connected
Jan 29 12:30:30 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:30.731 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[9088ee09-0103-4b79-8408-c2b027182efe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:30:30 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:30.744 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[12b75cb6-39b9-4074-b3c5-453fc1c6d2f9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3c08c304-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:94:51:ac'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 67], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 518423, 'reachable_time': 33589, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267461, 'error': None, 'target': 'ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:30:30 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:30.754 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[5ea123a6-dd2c-4973-9fe6-331ebea39fb4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe94:51ac'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 518423, 'tstamp': 518423}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267462, 'error': None, 'target': 'ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:30:30 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:30.763 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[4f4b3496-79a7-463a-8ab4-368fb9f6496f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3c08c304-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:94:51:ac'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 67], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 518423, 'reachable_time': 33589, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 267463, 'error': None, 'target': 'ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:30:30 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:30.786 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[e7cfc4a5-c6af-46f6-bd2f-b7fa82ed330d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:30:30 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:30.827 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[f5f58d77-8e78-4062-b8a8-7a032700d4b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:30:30 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:30.828 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c08c304-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:30:30 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:30.829 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 29 12:30:30 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:30.829 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3c08c304-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:30:30 np0005601226 nova_compute[239456]: 2026-01-29 17:30:30.831 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:30:30 np0005601226 NetworkManager[49020]: <info>  [1769707830.8317] manager: (tap3c08c304-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/109)
Jan 29 12:30:30 np0005601226 kernel: tap3c08c304-20: entered promiscuous mode
Jan 29 12:30:30 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:30.836 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3c08c304-20, col_values=(('external_ids', {'iface-id': '4f9b16f1-6965-486d-bc02-ab1e4969963e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:30:30 np0005601226 ovn_controller[145556]: 2026-01-29T17:30:30Z|00196|binding|INFO|Releasing lport 4f9b16f1-6965-486d-bc02-ab1e4969963e from this chassis (sb_readonly=0)
Jan 29 12:30:30 np0005601226 nova_compute[239456]: 2026-01-29 17:30:30.837 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:30:30 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:30.839 155625 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3c08c304-2b32-4b44-ac2b-279bb8b2403b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3c08c304-2b32-4b44-ac2b-279bb8b2403b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 29 12:30:30 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:30.839 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[9c29c756-17d8-462b-87af-d07215164987]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:30:30 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:30.840 155625 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 29 12:30:30 np0005601226 ovn_metadata_agent[155620]: global
Jan 29 12:30:30 np0005601226 ovn_metadata_agent[155620]:    log         /dev/log local0 debug
Jan 29 12:30:30 np0005601226 ovn_metadata_agent[155620]:    log-tag     haproxy-metadata-proxy-3c08c304-2b32-4b44-ac2b-279bb8b2403b
Jan 29 12:30:30 np0005601226 ovn_metadata_agent[155620]:    user        root
Jan 29 12:30:30 np0005601226 ovn_metadata_agent[155620]:    group       root
Jan 29 12:30:30 np0005601226 ovn_metadata_agent[155620]:    maxconn     1024
Jan 29 12:30:30 np0005601226 ovn_metadata_agent[155620]:    pidfile     /var/lib/neutron/external/pids/3c08c304-2b32-4b44-ac2b-279bb8b2403b.pid.haproxy
Jan 29 12:30:30 np0005601226 ovn_metadata_agent[155620]:    daemon
Jan 29 12:30:30 np0005601226 ovn_metadata_agent[155620]: 
Jan 29 12:30:30 np0005601226 ovn_metadata_agent[155620]: defaults
Jan 29 12:30:30 np0005601226 ovn_metadata_agent[155620]:    log global
Jan 29 12:30:30 np0005601226 ovn_metadata_agent[155620]:    mode http
Jan 29 12:30:30 np0005601226 ovn_metadata_agent[155620]:    option httplog
Jan 29 12:30:30 np0005601226 ovn_metadata_agent[155620]:    option dontlognull
Jan 29 12:30:30 np0005601226 ovn_metadata_agent[155620]:    option http-server-close
Jan 29 12:30:30 np0005601226 ovn_metadata_agent[155620]:    option forwardfor
Jan 29 12:30:30 np0005601226 ovn_metadata_agent[155620]:    retries                 3
Jan 29 12:30:30 np0005601226 ovn_metadata_agent[155620]:    timeout http-request    30s
Jan 29 12:30:30 np0005601226 ovn_metadata_agent[155620]:    timeout connect         30s
Jan 29 12:30:30 np0005601226 ovn_metadata_agent[155620]:    timeout client          32s
Jan 29 12:30:30 np0005601226 ovn_metadata_agent[155620]:    timeout server          32s
Jan 29 12:30:30 np0005601226 ovn_metadata_agent[155620]:    timeout http-keep-alive 30s
Jan 29 12:30:30 np0005601226 ovn_metadata_agent[155620]: 
Jan 29 12:30:30 np0005601226 ovn_metadata_agent[155620]: 
Jan 29 12:30:30 np0005601226 ovn_metadata_agent[155620]: listen listener
Jan 29 12:30:30 np0005601226 ovn_metadata_agent[155620]:    bind 169.254.169.254:80
Jan 29 12:30:30 np0005601226 ovn_metadata_agent[155620]:    server metadata /var/lib/neutron/metadata_proxy
Jan 29 12:30:30 np0005601226 ovn_metadata_agent[155620]:    http-request add-header X-OVN-Network-ID 3c08c304-2b32-4b44-ac2b-279bb8b2403b
Jan 29 12:30:30 np0005601226 ovn_metadata_agent[155620]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 29 12:30:30 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:30.841 155625 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b', 'env', 'PROCESS_TAG=haproxy-3c08c304-2b32-4b44-ac2b-279bb8b2403b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3c08c304-2b32-4b44-ac2b-279bb8b2403b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 29 12:30:30 np0005601226 nova_compute[239456]: 2026-01-29 17:30:30.842 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:30:30 np0005601226 nova_compute[239456]: 2026-01-29 17:30:30.968 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769707830.9685373, a25c53a6-69cb-4591-96a0-ba339283350e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:30:30 np0005601226 nova_compute[239456]: 2026-01-29 17:30:30.969 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] VM Started (Lifecycle Event)#033[00m
Jan 29 12:30:31 np0005601226 nova_compute[239456]: 2026-01-29 17:30:31.071 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:30:31 np0005601226 nova_compute[239456]: 2026-01-29 17:30:31.074 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769707830.96934, a25c53a6-69cb-4591-96a0-ba339283350e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:30:31 np0005601226 nova_compute[239456]: 2026-01-29 17:30:31.075 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] VM Paused (Lifecycle Event)#033[00m
Jan 29 12:30:31 np0005601226 podman[267537]: 2026-01-29 17:30:31.115780989 +0000 UTC m=+0.035361623 container create e38ef6adda0bd9f855bdd1443edf3385c713fd97663c04fb3fb155c1a895c12e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 29 12:30:31 np0005601226 systemd[1]: Started libpod-conmon-e38ef6adda0bd9f855bdd1443edf3385c713fd97663c04fb3fb155c1a895c12e.scope.
Jan 29 12:30:31 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:30:31 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29533b922dbfe27097e890fee4f1f582095164809d216727a0179e771220847a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 29 12:30:31 np0005601226 nova_compute[239456]: 2026-01-29 17:30:31.167 239460 DEBUG nova.compute.manager [req-e77fa0e8-2b84-4f2d-a212-f58491c0fa6c req-268c1551-e2bd-47ce-81b9-1d59075c713a 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] Received event network-vif-plugged-923a704a-5e13-4a55-8741-5a8ed5669f0a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:30:31 np0005601226 nova_compute[239456]: 2026-01-29 17:30:31.167 239460 DEBUG oslo_concurrency.lockutils [req-e77fa0e8-2b84-4f2d-a212-f58491c0fa6c req-268c1551-e2bd-47ce-81b9-1d59075c713a 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "a25c53a6-69cb-4591-96a0-ba339283350e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:30:31 np0005601226 nova_compute[239456]: 2026-01-29 17:30:31.168 239460 DEBUG oslo_concurrency.lockutils [req-e77fa0e8-2b84-4f2d-a212-f58491c0fa6c req-268c1551-e2bd-47ce-81b9-1d59075c713a 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "a25c53a6-69cb-4591-96a0-ba339283350e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:30:31 np0005601226 nova_compute[239456]: 2026-01-29 17:30:31.168 239460 DEBUG oslo_concurrency.lockutils [req-e77fa0e8-2b84-4f2d-a212-f58491c0fa6c req-268c1551-e2bd-47ce-81b9-1d59075c713a 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "a25c53a6-69cb-4591-96a0-ba339283350e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:30:31 np0005601226 nova_compute[239456]: 2026-01-29 17:30:31.168 239460 DEBUG nova.compute.manager [req-e77fa0e8-2b84-4f2d-a212-f58491c0fa6c req-268c1551-e2bd-47ce-81b9-1d59075c713a 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] Processing event network-vif-plugged-923a704a-5e13-4a55-8741-5a8ed5669f0a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 29 12:30:31 np0005601226 nova_compute[239456]: 2026-01-29 17:30:31.169 239460 DEBUG nova.compute.manager [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 29 12:30:31 np0005601226 podman[267537]: 2026-01-29 17:30:31.173160774 +0000 UTC m=+0.092741408 container init e38ef6adda0bd9f855bdd1443edf3385c713fd97663c04fb3fb155c1a895c12e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Jan 29 12:30:31 np0005601226 nova_compute[239456]: 2026-01-29 17:30:31.176 239460 DEBUG nova.virt.libvirt.driver [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 29 12:30:31 np0005601226 nova_compute[239456]: 2026-01-29 17:30:31.179 239460 INFO nova.virt.libvirt.driver [-] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] Instance spawned successfully.#033[00m
Jan 29 12:30:31 np0005601226 nova_compute[239456]: 2026-01-29 17:30:31.180 239460 DEBUG nova.virt.libvirt.driver [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 29 12:30:31 np0005601226 podman[267537]: 2026-01-29 17:30:31.181006615 +0000 UTC m=+0.100587249 container start e38ef6adda0bd9f855bdd1443edf3385c713fd97663c04fb3fb155c1a895c12e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 29 12:30:31 np0005601226 podman[267537]: 2026-01-29 17:30:31.09688031 +0000 UTC m=+0.016460974 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 29 12:30:31 np0005601226 neutron-haproxy-ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b[267552]: [NOTICE]   (267561) : New worker (267563) forked
Jan 29 12:30:31 np0005601226 neutron-haproxy-ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b[267552]: [NOTICE]   (267561) : Loading success.
Jan 29 12:30:31 np0005601226 nova_compute[239456]: 2026-01-29 17:30:31.235 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:30:31 np0005601226 nova_compute[239456]: 2026-01-29 17:30:31.239 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769707831.1726744, a25c53a6-69cb-4591-96a0-ba339283350e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:30:31 np0005601226 nova_compute[239456]: 2026-01-29 17:30:31.240 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] VM Resumed (Lifecycle Event)#033[00m
Jan 29 12:30:31 np0005601226 nova_compute[239456]: 2026-01-29 17:30:31.246 239460 DEBUG nova.compute.manager [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 29 12:30:31 np0005601226 nova_compute[239456]: 2026-01-29 17:30:31.259 239460 DEBUG nova.virt.libvirt.driver [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 29 12:30:31 np0005601226 nova_compute[239456]: 2026-01-29 17:30:31.265 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:30:31 np0005601226 nova_compute[239456]: 2026-01-29 17:30:31.277 239460 DEBUG nova.virt.libvirt.driver [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:30:31 np0005601226 nova_compute[239456]: 2026-01-29 17:30:31.278 239460 DEBUG nova.virt.libvirt.driver [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:30:31 np0005601226 nova_compute[239456]: 2026-01-29 17:30:31.278 239460 DEBUG nova.virt.libvirt.driver [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:30:31 np0005601226 nova_compute[239456]: 2026-01-29 17:30:31.279 239460 DEBUG nova.virt.libvirt.driver [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:30:31 np0005601226 nova_compute[239456]: 2026-01-29 17:30:31.279 239460 DEBUG nova.virt.libvirt.driver [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:30:31 np0005601226 nova_compute[239456]: 2026-01-29 17:30:31.280 239460 DEBUG nova.virt.libvirt.driver [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:30:31 np0005601226 nova_compute[239456]: 2026-01-29 17:30:31.284 239460 INFO nova.virt.libvirt.driver [-] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] Instance spawned successfully.#033[00m
Jan 29 12:30:31 np0005601226 nova_compute[239456]: 2026-01-29 17:30:31.284 239460 DEBUG nova.virt.libvirt.driver [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 29 12:30:31 np0005601226 nova_compute[239456]: 2026-01-29 17:30:31.286 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 29 12:30:31 np0005601226 nova_compute[239456]: 2026-01-29 17:30:31.323 239460 DEBUG nova.virt.libvirt.driver [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:30:31 np0005601226 nova_compute[239456]: 2026-01-29 17:30:31.328 239460 DEBUG nova.virt.libvirt.driver [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:30:31 np0005601226 nova_compute[239456]: 2026-01-29 17:30:31.329 239460 DEBUG nova.virt.libvirt.driver [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:30:31 np0005601226 nova_compute[239456]: 2026-01-29 17:30:31.330 239460 DEBUG nova.virt.libvirt.driver [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:30:31 np0005601226 nova_compute[239456]: 2026-01-29 17:30:31.330 239460 DEBUG nova.virt.libvirt.driver [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:30:31 np0005601226 nova_compute[239456]: 2026-01-29 17:30:31.331 239460 DEBUG nova.virt.libvirt.driver [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:30:31 np0005601226 nova_compute[239456]: 2026-01-29 17:30:31.334 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 29 12:30:31 np0005601226 nova_compute[239456]: 2026-01-29 17:30:31.335 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769707831.2462826, 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:30:31 np0005601226 nova_compute[239456]: 2026-01-29 17:30:31.335 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] VM Started (Lifecycle Event)#033[00m
Jan 29 12:30:31 np0005601226 nova_compute[239456]: 2026-01-29 17:30:31.368 239460 INFO nova.compute.manager [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] Took 4.82 seconds to spawn the instance on the hypervisor.#033[00m
Jan 29 12:30:31 np0005601226 nova_compute[239456]: 2026-01-29 17:30:31.369 239460 DEBUG nova.compute.manager [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:30:31 np0005601226 nova_compute[239456]: 2026-01-29 17:30:31.421 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:30:31 np0005601226 nova_compute[239456]: 2026-01-29 17:30:31.424 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 29 12:30:31 np0005601226 nova_compute[239456]: 2026-01-29 17:30:31.483 239460 DEBUG nova.network.neutron [req-92ba6aba-0250-4ec2-ae76-5390b9d8469c req-5dfb946a-f621-4964-ae9a-a4f505e6e72d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] Updated VIF entry in instance network info cache for port 923a704a-5e13-4a55-8741-5a8ed5669f0a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 29 12:30:31 np0005601226 nova_compute[239456]: 2026-01-29 17:30:31.484 239460 DEBUG nova.network.neutron [req-92ba6aba-0250-4ec2-ae76-5390b9d8469c req-5dfb946a-f621-4964-ae9a-a4f505e6e72d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] Updating instance_info_cache with network_info: [{"id": "923a704a-5e13-4a55-8741-5a8ed5669f0a", "address": "fa:16:3e:0f:13:7d", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap923a704a-5e", "ovs_interfaceid": "923a704a-5e13-4a55-8741-5a8ed5669f0a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:30:31 np0005601226 nova_compute[239456]: 2026-01-29 17:30:31.491 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:30:31 np0005601226 nova_compute[239456]: 2026-01-29 17:30:31.502 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 29 12:30:31 np0005601226 nova_compute[239456]: 2026-01-29 17:30:31.502 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769707831.247124, 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:30:31 np0005601226 nova_compute[239456]: 2026-01-29 17:30:31.503 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] VM Paused (Lifecycle Event)#033[00m
Jan 29 12:30:31 np0005601226 nova_compute[239456]: 2026-01-29 17:30:31.510 239460 INFO nova.compute.manager [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] Took 6.42 seconds to spawn the instance on the hypervisor.#033[00m
Jan 29 12:30:31 np0005601226 nova_compute[239456]: 2026-01-29 17:30:31.511 239460 DEBUG nova.compute.manager [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:30:31 np0005601226 nova_compute[239456]: 2026-01-29 17:30:31.579 239460 INFO nova.compute.manager [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] Took 7.17 seconds to build instance.#033[00m
Jan 29 12:30:31 np0005601226 nova_compute[239456]: 2026-01-29 17:30:31.583 239460 DEBUG oslo_concurrency.lockutils [req-92ba6aba-0250-4ec2-ae76-5390b9d8469c req-5dfb946a-f621-4964-ae9a-a4f505e6e72d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Releasing lock "refresh_cache-a25c53a6-69cb-4591-96a0-ba339283350e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:30:31 np0005601226 nova_compute[239456]: 2026-01-29 17:30:31.654 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:30:31 np0005601226 nova_compute[239456]: 2026-01-29 17:30:31.658 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769707831.248867, 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:30:31 np0005601226 nova_compute[239456]: 2026-01-29 17:30:31.658 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] VM Resumed (Lifecycle Event)#033[00m
Jan 29 12:30:31 np0005601226 nova_compute[239456]: 2026-01-29 17:30:31.713 239460 INFO nova.compute.manager [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] Took 8.74 seconds to build instance.#033[00m
Jan 29 12:30:31 np0005601226 nova_compute[239456]: 2026-01-29 17:30:31.756 239460 DEBUG oslo_concurrency.lockutils [None req-86eee589-3c0a-4ee6-957e-8694d5ce83d4 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "a25c53a6-69cb-4591-96a0-ba339283350e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.425s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:30:31 np0005601226 nova_compute[239456]: 2026-01-29 17:30:31.765 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:30:31 np0005601226 nova_compute[239456]: 2026-01-29 17:30:31.768 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 29 12:30:31 np0005601226 nova_compute[239456]: 2026-01-29 17:30:31.775 239460 DEBUG oslo_concurrency.lockutils [None req-24c00b55-f823-4aba-b1a9-475990633ee7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Lock "4c4d76ac-3711-4858-90a1-7e43dc5ff7e4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.035s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:30:31 np0005601226 ceph-osd[86917]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 29 12:30:31 np0005601226 ceph-osd[86917]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.2 total, 600.0 interval#012Cumulative writes: 24K writes, 101K keys, 24K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.03 MB/s#012Cumulative WAL: 24K writes, 8538 syncs, 2.89 writes per sync, written: 0.06 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 12K writes, 50K keys, 12K commit groups, 1.0 writes per commit group, ingest: 30.11 MB, 0.05 MB/s#012Interval WAL: 12K writes, 5265 syncs, 2.43 writes per sync, written: 0.03 GB, 0.05 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 29 12:30:31 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1639: 305 pgs: 305 active+clean; 317 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 65 KiB/s rd, 1.7 MiB/s wr, 87 op/s
Jan 29 12:30:32 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:30:32 np0005601226 nova_compute[239456]: 2026-01-29 17:30:32.552 239460 DEBUG nova.compute.manager [req-41b12e69-7e24-4abc-b3d7-e09eee75e712 req-9f68868c-2be1-449e-a035-abcc8f932bc6 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] Received event network-vif-plugged-c8b9d9fc-1915-4db3-8869-f69770c88894 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:30:32 np0005601226 nova_compute[239456]: 2026-01-29 17:30:32.552 239460 DEBUG oslo_concurrency.lockutils [req-41b12e69-7e24-4abc-b3d7-e09eee75e712 req-9f68868c-2be1-449e-a035-abcc8f932bc6 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "4c4d76ac-3711-4858-90a1-7e43dc5ff7e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:30:32 np0005601226 nova_compute[239456]: 2026-01-29 17:30:32.553 239460 DEBUG oslo_concurrency.lockutils [req-41b12e69-7e24-4abc-b3d7-e09eee75e712 req-9f68868c-2be1-449e-a035-abcc8f932bc6 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "4c4d76ac-3711-4858-90a1-7e43dc5ff7e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:30:32 np0005601226 nova_compute[239456]: 2026-01-29 17:30:32.553 239460 DEBUG oslo_concurrency.lockutils [req-41b12e69-7e24-4abc-b3d7-e09eee75e712 req-9f68868c-2be1-449e-a035-abcc8f932bc6 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "4c4d76ac-3711-4858-90a1-7e43dc5ff7e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:30:32 np0005601226 nova_compute[239456]: 2026-01-29 17:30:32.553 239460 DEBUG nova.compute.manager [req-41b12e69-7e24-4abc-b3d7-e09eee75e712 req-9f68868c-2be1-449e-a035-abcc8f932bc6 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] No waiting events found dispatching network-vif-plugged-c8b9d9fc-1915-4db3-8869-f69770c88894 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 29 12:30:32 np0005601226 nova_compute[239456]: 2026-01-29 17:30:32.553 239460 WARNING nova.compute.manager [req-41b12e69-7e24-4abc-b3d7-e09eee75e712 req-9f68868c-2be1-449e-a035-abcc8f932bc6 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] Received unexpected event network-vif-plugged-c8b9d9fc-1915-4db3-8869-f69770c88894 for instance with vm_state active and task_state None.#033[00m
Jan 29 12:30:33 np0005601226 nova_compute[239456]: 2026-01-29 17:30:33.603 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:30:33 np0005601226 nova_compute[239456]: 2026-01-29 17:30:33.605 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:30:33 np0005601226 nova_compute[239456]: 2026-01-29 17:30:33.627 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:30:33 np0005601226 nova_compute[239456]: 2026-01-29 17:30:33.628 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:30:33 np0005601226 nova_compute[239456]: 2026-01-29 17:30:33.629 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:30:33 np0005601226 nova_compute[239456]: 2026-01-29 17:30:33.630 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 29 12:30:33 np0005601226 nova_compute[239456]: 2026-01-29 17:30:33.631 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:30:33 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1640: 305 pgs: 305 active+clean; 317 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.2 MiB/s wr, 108 op/s
Jan 29 12:30:33 np0005601226 nova_compute[239456]: 2026-01-29 17:30:33.970 239460 DEBUG nova.compute.manager [req-3fbb6d21-8b53-46bb-b2c5-d84cfdac4e5c req-a56a8d9a-1687-4620-bd3a-406516e0ccdc 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] Received event network-vif-plugged-923a704a-5e13-4a55-8741-5a8ed5669f0a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:30:33 np0005601226 nova_compute[239456]: 2026-01-29 17:30:33.971 239460 DEBUG oslo_concurrency.lockutils [req-3fbb6d21-8b53-46bb-b2c5-d84cfdac4e5c req-a56a8d9a-1687-4620-bd3a-406516e0ccdc 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "a25c53a6-69cb-4591-96a0-ba339283350e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:30:33 np0005601226 nova_compute[239456]: 2026-01-29 17:30:33.971 239460 DEBUG oslo_concurrency.lockutils [req-3fbb6d21-8b53-46bb-b2c5-d84cfdac4e5c req-a56a8d9a-1687-4620-bd3a-406516e0ccdc 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "a25c53a6-69cb-4591-96a0-ba339283350e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:30:33 np0005601226 nova_compute[239456]: 2026-01-29 17:30:33.971 239460 DEBUG oslo_concurrency.lockutils [req-3fbb6d21-8b53-46bb-b2c5-d84cfdac4e5c req-a56a8d9a-1687-4620-bd3a-406516e0ccdc 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "a25c53a6-69cb-4591-96a0-ba339283350e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:30:33 np0005601226 nova_compute[239456]: 2026-01-29 17:30:33.972 239460 DEBUG nova.compute.manager [req-3fbb6d21-8b53-46bb-b2c5-d84cfdac4e5c req-a56a8d9a-1687-4620-bd3a-406516e0ccdc 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] No waiting events found dispatching network-vif-plugged-923a704a-5e13-4a55-8741-5a8ed5669f0a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 29 12:30:33 np0005601226 nova_compute[239456]: 2026-01-29 17:30:33.973 239460 WARNING nova.compute.manager [req-3fbb6d21-8b53-46bb-b2c5-d84cfdac4e5c req-a56a8d9a-1687-4620-bd3a-406516e0ccdc 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] Received unexpected event network-vif-plugged-923a704a-5e13-4a55-8741-5a8ed5669f0a for instance with vm_state active and task_state None.#033[00m
Jan 29 12:30:34 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:30:34 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4166916053' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:30:34 np0005601226 nova_compute[239456]: 2026-01-29 17:30:34.175 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:30:34 np0005601226 nova_compute[239456]: 2026-01-29 17:30:34.260 239460 DEBUG nova.virt.libvirt.driver [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 29 12:30:34 np0005601226 nova_compute[239456]: 2026-01-29 17:30:34.261 239460 DEBUG nova.virt.libvirt.driver [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 29 12:30:34 np0005601226 nova_compute[239456]: 2026-01-29 17:30:34.266 239460 DEBUG nova.virt.libvirt.driver [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 29 12:30:34 np0005601226 nova_compute[239456]: 2026-01-29 17:30:34.266 239460 DEBUG nova.virt.libvirt.driver [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 29 12:30:34 np0005601226 nova_compute[239456]: 2026-01-29 17:30:34.417 239460 WARNING nova.virt.libvirt.driver [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 29 12:30:34 np0005601226 nova_compute[239456]: 2026-01-29 17:30:34.418 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3995MB free_disk=59.98787659779191GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 29 12:30:34 np0005601226 nova_compute[239456]: 2026-01-29 17:30:34.418 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:30:34 np0005601226 nova_compute[239456]: 2026-01-29 17:30:34.419 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:30:34 np0005601226 nova_compute[239456]: 2026-01-29 17:30:34.503 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Instance 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 29 12:30:34 np0005601226 nova_compute[239456]: 2026-01-29 17:30:34.505 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Instance a25c53a6-69cb-4591-96a0-ba339283350e actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 29 12:30:34 np0005601226 nova_compute[239456]: 2026-01-29 17:30:34.506 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 29 12:30:34 np0005601226 nova_compute[239456]: 2026-01-29 17:30:34.506 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 29 12:30:34 np0005601226 nova_compute[239456]: 2026-01-29 17:30:34.535 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:30:34 np0005601226 nova_compute[239456]: 2026-01-29 17:30:34.564 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:30:35 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:30:35 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2525430514' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:30:35 np0005601226 nova_compute[239456]: 2026-01-29 17:30:35.091 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:30:35 np0005601226 nova_compute[239456]: 2026-01-29 17:30:35.098 239460 DEBUG nova.compute.provider_tree [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:30:35 np0005601226 nova_compute[239456]: 2026-01-29 17:30:35.181 239460 DEBUG nova.scheduler.client.report [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:30:35 np0005601226 nova_compute[239456]: 2026-01-29 17:30:35.330 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 29 12:30:35 np0005601226 nova_compute[239456]: 2026-01-29 17:30:35.332 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.913s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:30:35 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1641: 305 pgs: 305 active+clean; 317 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 30 KiB/s wr, 177 op/s
Jan 29 12:30:36 np0005601226 ceph-osd[87958]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 29 12:30:36 np0005601226 ceph-osd[87958]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.3 total, 600.0 interval#012Cumulative writes: 18K writes, 80K keys, 18K commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s#012Cumulative WAL: 18K writes, 6141 syncs, 3.06 writes per sync, written: 0.05 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 10K writes, 40K keys, 10K commit groups, 1.0 writes per commit group, ingest: 26.74 MB, 0.04 MB/s#012Interval WAL: 10K writes, 4119 syncs, 2.46 writes per sync, written: 0.03 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 29 12:30:36 np0005601226 nova_compute[239456]: 2026-01-29 17:30:36.333 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:30:36 np0005601226 nova_compute[239456]: 2026-01-29 17:30:36.334 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:30:36 np0005601226 NetworkManager[49020]: <info>  [1769707836.5405] manager: (patch-br-int-to-provnet-87abb107-3ab5-4304-ac4c-e4e3c79e221f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/110)
Jan 29 12:30:36 np0005601226 NetworkManager[49020]: <info>  [1769707836.5414] manager: (patch-provnet-87abb107-3ab5-4304-ac4c-e4e3c79e221f-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/111)
Jan 29 12:30:36 np0005601226 nova_compute[239456]: 2026-01-29 17:30:36.539 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:30:36 np0005601226 nova_compute[239456]: 2026-01-29 17:30:36.581 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:30:36 np0005601226 ovn_controller[145556]: 2026-01-29T17:30:36Z|00197|binding|INFO|Releasing lport 82a91bf5-9093-4cbd-bfe4-f5d4b5400077 from this chassis (sb_readonly=0)
Jan 29 12:30:36 np0005601226 ovn_controller[145556]: 2026-01-29T17:30:36Z|00198|binding|INFO|Releasing lport 4f9b16f1-6965-486d-bc02-ab1e4969963e from this chassis (sb_readonly=0)
Jan 29 12:30:36 np0005601226 nova_compute[239456]: 2026-01-29 17:30:36.597 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:30:37 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:30:37 np0005601226 nova_compute[239456]: 2026-01-29 17:30:37.206 239460 DEBUG nova.compute.manager [req-f1eb7656-eb75-4ea8-ab35-dcb5320ebcab req-b358e5b9-1946-4429-be2d-752c57797d70 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] Received event network-changed-c8b9d9fc-1915-4db3-8869-f69770c88894 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:30:37 np0005601226 nova_compute[239456]: 2026-01-29 17:30:37.206 239460 DEBUG nova.compute.manager [req-f1eb7656-eb75-4ea8-ab35-dcb5320ebcab req-b358e5b9-1946-4429-be2d-752c57797d70 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] Refreshing instance network info cache due to event network-changed-c8b9d9fc-1915-4db3-8869-f69770c88894. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 29 12:30:37 np0005601226 nova_compute[239456]: 2026-01-29 17:30:37.207 239460 DEBUG oslo_concurrency.lockutils [req-f1eb7656-eb75-4ea8-ab35-dcb5320ebcab req-b358e5b9-1946-4429-be2d-752c57797d70 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "refresh_cache-4c4d76ac-3711-4858-90a1-7e43dc5ff7e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:30:37 np0005601226 nova_compute[239456]: 2026-01-29 17:30:37.207 239460 DEBUG oslo_concurrency.lockutils [req-f1eb7656-eb75-4ea8-ab35-dcb5320ebcab req-b358e5b9-1946-4429-be2d-752c57797d70 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquired lock "refresh_cache-4c4d76ac-3711-4858-90a1-7e43dc5ff7e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:30:37 np0005601226 nova_compute[239456]: 2026-01-29 17:30:37.207 239460 DEBUG nova.network.neutron [req-f1eb7656-eb75-4ea8-ab35-dcb5320ebcab req-b358e5b9-1946-4429-be2d-752c57797d70 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] Refreshing network info cache for port c8b9d9fc-1915-4db3-8869-f69770c88894 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 29 12:30:37 np0005601226 nova_compute[239456]: 2026-01-29 17:30:37.605 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:30:37 np0005601226 nova_compute[239456]: 2026-01-29 17:30:37.605 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 29 12:30:37 np0005601226 nova_compute[239456]: 2026-01-29 17:30:37.798 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 29 12:30:37 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1642: 305 pgs: 305 active+clean; 317 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 28 KiB/s wr, 165 op/s
Jan 29 12:30:38 np0005601226 nova_compute[239456]: 2026-01-29 17:30:38.352 239460 DEBUG nova.network.neutron [req-f1eb7656-eb75-4ea8-ab35-dcb5320ebcab req-b358e5b9-1946-4429-be2d-752c57797d70 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] Updated VIF entry in instance network info cache for port c8b9d9fc-1915-4db3-8869-f69770c88894. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 29 12:30:38 np0005601226 nova_compute[239456]: 2026-01-29 17:30:38.353 239460 DEBUG nova.network.neutron [req-f1eb7656-eb75-4ea8-ab35-dcb5320ebcab req-b358e5b9-1946-4429-be2d-752c57797d70 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] Updating instance_info_cache with network_info: [{"id": "c8b9d9fc-1915-4db3-8869-f69770c88894", "address": "fa:16:3e:30:ae:b3", "network": {"id": "25cf1715-f178-4f65-be7c-cf203c28f072", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-516658087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c74297072cc041019fc7ff4bff1a0f08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8b9d9fc-19", "ovs_interfaceid": "c8b9d9fc-1915-4db3-8869-f69770c88894", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:30:38 np0005601226 nova_compute[239456]: 2026-01-29 17:30:38.372 239460 DEBUG oslo_concurrency.lockutils [req-f1eb7656-eb75-4ea8-ab35-dcb5320ebcab req-b358e5b9-1946-4429-be2d-752c57797d70 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Releasing lock "refresh_cache-4c4d76ac-3711-4858-90a1-7e43dc5ff7e4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:30:38 np0005601226 nova_compute[239456]: 2026-01-29 17:30:38.793 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:30:39 np0005601226 nova_compute[239456]: 2026-01-29 17:30:39.289 239460 DEBUG nova.compute.manager [req-70a60d36-bd6c-442f-829b-cef76ead5b66 req-2956c80d-c937-4872-822d-8db842f59054 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] Received event network-changed-923a704a-5e13-4a55-8741-5a8ed5669f0a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:30:39 np0005601226 nova_compute[239456]: 2026-01-29 17:30:39.290 239460 DEBUG nova.compute.manager [req-70a60d36-bd6c-442f-829b-cef76ead5b66 req-2956c80d-c937-4872-822d-8db842f59054 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] Refreshing instance network info cache due to event network-changed-923a704a-5e13-4a55-8741-5a8ed5669f0a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 29 12:30:39 np0005601226 nova_compute[239456]: 2026-01-29 17:30:39.290 239460 DEBUG oslo_concurrency.lockutils [req-70a60d36-bd6c-442f-829b-cef76ead5b66 req-2956c80d-c937-4872-822d-8db842f59054 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "refresh_cache-a25c53a6-69cb-4591-96a0-ba339283350e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:30:39 np0005601226 nova_compute[239456]: 2026-01-29 17:30:39.290 239460 DEBUG oslo_concurrency.lockutils [req-70a60d36-bd6c-442f-829b-cef76ead5b66 req-2956c80d-c937-4872-822d-8db842f59054 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquired lock "refresh_cache-a25c53a6-69cb-4591-96a0-ba339283350e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:30:39 np0005601226 nova_compute[239456]: 2026-01-29 17:30:39.290 239460 DEBUG nova.network.neutron [req-70a60d36-bd6c-442f-829b-cef76ead5b66 req-2956c80d-c937-4872-822d-8db842f59054 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] Refreshing network info cache for port 923a704a-5e13-4a55-8741-5a8ed5669f0a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 29 12:30:39 np0005601226 nova_compute[239456]: 2026-01-29 17:30:39.538 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:30:39 np0005601226 ceph-mgr[75527]: [devicehealth INFO root] Check health
Jan 29 12:30:39 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1643: 305 pgs: 305 active+clean; 317 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 25 KiB/s wr, 147 op/s
Jan 29 12:30:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:40.291 155625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:30:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:40.292 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:30:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:40.292 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:30:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:30:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:30:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:30:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:30:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:30:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:30:40 np0005601226 nova_compute[239456]: 2026-01-29 17:30:40.604 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:30:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2026-01-29_17:30:40
Jan 29 12:30:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 12:30:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] do_upmap
Jan 29 12:30:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] pools ['vms', 'default.rgw.control', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.log', 'backups', 'default.rgw.meta', 'images', '.rgw.root', 'volumes', 'cephfs.cephfs.data']
Jan 29 12:30:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 12:30:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 12:30:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:30:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 12:30:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:30:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:30:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:30:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:30:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:30:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:30:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:30:41 np0005601226 nova_compute[239456]: 2026-01-29 17:30:41.185 239460 DEBUG nova.network.neutron [req-70a60d36-bd6c-442f-829b-cef76ead5b66 req-2956c80d-c937-4872-822d-8db842f59054 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] Updated VIF entry in instance network info cache for port 923a704a-5e13-4a55-8741-5a8ed5669f0a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 29 12:30:41 np0005601226 nova_compute[239456]: 2026-01-29 17:30:41.186 239460 DEBUG nova.network.neutron [req-70a60d36-bd6c-442f-829b-cef76ead5b66 req-2956c80d-c937-4872-822d-8db842f59054 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] Updating instance_info_cache with network_info: [{"id": "923a704a-5e13-4a55-8741-5a8ed5669f0a", "address": "fa:16:3e:0f:13:7d", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap923a704a-5e", "ovs_interfaceid": "923a704a-5e13-4a55-8741-5a8ed5669f0a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:30:41 np0005601226 nova_compute[239456]: 2026-01-29 17:30:41.204 239460 DEBUG oslo_concurrency.lockutils [req-70a60d36-bd6c-442f-829b-cef76ead5b66 req-2956c80d-c937-4872-822d-8db842f59054 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Releasing lock "refresh_cache-a25c53a6-69cb-4591-96a0-ba339283350e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:30:41 np0005601226 nova_compute[239456]: 2026-01-29 17:30:41.581 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:30:41 np0005601226 ceph-osd[85858]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Jan 29 12:30:41 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1644: 305 pgs: 305 active+clean; 317 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 13 KiB/s wr, 146 op/s
Jan 29 12:30:42 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:30:43 np0005601226 ovn_controller[145556]: 2026-01-29T17:30:43Z|00038|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.8 does not match offer 10.100.0.7
Jan 29 12:30:43 np0005601226 ovn_controller[145556]: 2026-01-29T17:30:43Z|00039|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:30:ae:b3 10.100.0.7
Jan 29 12:30:43 np0005601226 ovn_controller[145556]: 2026-01-29T17:30:43Z|00040|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0f:13:7d 10.100.0.13
Jan 29 12:30:43 np0005601226 ovn_controller[145556]: 2026-01-29T17:30:43Z|00041|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0f:13:7d 10.100.0.13
Jan 29 12:30:43 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1645: 305 pgs: 305 active+clean; 321 MiB data, 570 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 599 KiB/s wr, 152 op/s
Jan 29 12:30:44 np0005601226 nova_compute[239456]: 2026-01-29 17:30:44.559 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:30:45 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1646: 305 pgs: 305 active+clean; 350 MiB data, 606 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 2.1 MiB/s wr, 204 op/s
Jan 29 12:30:46 np0005601226 nova_compute[239456]: 2026-01-29 17:30:46.584 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:30:47 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:30:47 np0005601226 ovn_controller[145556]: 2026-01-29T17:30:47Z|00042|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.8 does not match offer 10.100.0.7
Jan 29 12:30:47 np0005601226 ovn_controller[145556]: 2026-01-29T17:30:47Z|00043|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:30:ae:b3 10.100.0.7
Jan 29 12:30:47 np0005601226 nova_compute[239456]: 2026-01-29 17:30:47.604 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:30:47 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1647: 305 pgs: 305 active+clean; 350 MiB data, 606 MiB used, 59 GiB / 60 GiB avail; 921 KiB/s rd, 2.1 MiB/s wr, 108 op/s
Jan 29 12:30:48 np0005601226 ovn_controller[145556]: 2026-01-29T17:30:48Z|00044|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:30:ae:b3 10.100.0.7
Jan 29 12:30:48 np0005601226 ovn_controller[145556]: 2026-01-29T17:30:48Z|00045|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:30:ae:b3 10.100.0.7
Jan 29 12:30:49 np0005601226 nova_compute[239456]: 2026-01-29 17:30:49.561 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:30:49 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1648: 305 pgs: 305 active+clean; 350 MiB data, 606 MiB used, 59 GiB / 60 GiB avail; 928 KiB/s rd, 2.1 MiB/s wr, 109 op/s
Jan 29 12:30:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 12:30:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:30:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 12:30:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:30:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.5617506381566e-06 of space, bias 1.0, pg target 0.00196852519144698 quantized to 32 (current 32)
Jan 29 12:30:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:30:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0036586884402154424 of space, bias 1.0, pg target 1.0976065320646327 quantized to 32 (current 32)
Jan 29 12:30:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:30:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 3.351102996526103e-06 of space, bias 1.0, pg target 0.0010019797959613047 quantized to 32 (current 32)
Jan 29 12:30:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:30:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006670416693921247 of space, bias 1.0, pg target 0.19944545914824527 quantized to 32 (current 32)
Jan 29 12:30:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:30:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4532401846311687e-06 of space, bias 4.0, pg target 0.0017380752608188777 quantized to 16 (current 16)
Jan 29 12:30:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:30:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:30:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:30:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011408172983004493 quantized to 32 (current 32)
Jan 29 12:30:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:30:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012548990281304943 quantized to 32 (current 32)
Jan 29 12:30:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:30:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:30:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:30:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Jan 29 12:30:51 np0005601226 nova_compute[239456]: 2026-01-29 17:30:51.632 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:30:51 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1649: 305 pgs: 305 active+clean; 350 MiB data, 606 MiB used, 59 GiB / 60 GiB avail; 928 KiB/s rd, 2.2 MiB/s wr, 110 op/s
Jan 29 12:30:52 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:30:53 np0005601226 nova_compute[239456]: 2026-01-29 17:30:53.505 239460 DEBUG oslo_concurrency.lockutils [None req-3bce4975-db9b-4ead-9736-957e9dfbcde7 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Acquiring lock "a25c53a6-69cb-4591-96a0-ba339283350e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:30:53 np0005601226 nova_compute[239456]: 2026-01-29 17:30:53.506 239460 DEBUG oslo_concurrency.lockutils [None req-3bce4975-db9b-4ead-9736-957e9dfbcde7 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "a25c53a6-69cb-4591-96a0-ba339283350e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:30:53 np0005601226 nova_compute[239456]: 2026-01-29 17:30:53.506 239460 DEBUG oslo_concurrency.lockutils [None req-3bce4975-db9b-4ead-9736-957e9dfbcde7 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Acquiring lock "a25c53a6-69cb-4591-96a0-ba339283350e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:30:53 np0005601226 nova_compute[239456]: 2026-01-29 17:30:53.507 239460 DEBUG oslo_concurrency.lockutils [None req-3bce4975-db9b-4ead-9736-957e9dfbcde7 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "a25c53a6-69cb-4591-96a0-ba339283350e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:30:53 np0005601226 nova_compute[239456]: 2026-01-29 17:30:53.507 239460 DEBUG oslo_concurrency.lockutils [None req-3bce4975-db9b-4ead-9736-957e9dfbcde7 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "a25c53a6-69cb-4591-96a0-ba339283350e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:30:53 np0005601226 nova_compute[239456]: 2026-01-29 17:30:53.508 239460 INFO nova.compute.manager [None req-3bce4975-db9b-4ead-9736-957e9dfbcde7 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] Terminating instance#033[00m
Jan 29 12:30:53 np0005601226 nova_compute[239456]: 2026-01-29 17:30:53.509 239460 DEBUG nova.compute.manager [None req-3bce4975-db9b-4ead-9736-957e9dfbcde7 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 29 12:30:53 np0005601226 kernel: tap923a704a-5e (unregistering): left promiscuous mode
Jan 29 12:30:53 np0005601226 NetworkManager[49020]: <info>  [1769707853.6282] device (tap923a704a-5e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 29 12:30:53 np0005601226 ovn_controller[145556]: 2026-01-29T17:30:53Z|00199|binding|INFO|Releasing lport 923a704a-5e13-4a55-8741-5a8ed5669f0a from this chassis (sb_readonly=0)
Jan 29 12:30:53 np0005601226 ovn_controller[145556]: 2026-01-29T17:30:53Z|00200|binding|INFO|Setting lport 923a704a-5e13-4a55-8741-5a8ed5669f0a down in Southbound
Jan 29 12:30:53 np0005601226 nova_compute[239456]: 2026-01-29 17:30:53.634 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:30:53 np0005601226 ovn_controller[145556]: 2026-01-29T17:30:53Z|00201|binding|INFO|Removing iface tap923a704a-5e ovn-installed in OVS
Jan 29 12:30:53 np0005601226 nova_compute[239456]: 2026-01-29 17:30:53.636 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:30:53 np0005601226 nova_compute[239456]: 2026-01-29 17:30:53.643 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:30:53 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:53.644 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0f:13:7d 10.100.0.13'], port_security=['fa:16:3e:0f:13:7d 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'a25c53a6-69cb-4591-96a0-ba339283350e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3c08c304-2b32-4b44-ac2b-279bb8b2403b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '420f46ae230d4c529afe366a1b780921', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9cfba344-fbfc-404d-872d-d297b528124f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.183'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=43b4da70-6867-4e05-b172-1e52c878ce1d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], logical_port=923a704a-5e13-4a55-8741-5a8ed5669f0a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:30:53 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:53.647 155625 INFO neutron.agent.ovn.metadata.agent [-] Port 923a704a-5e13-4a55-8741-5a8ed5669f0a in datapath 3c08c304-2b32-4b44-ac2b-279bb8b2403b unbound from our chassis#033[00m
Jan 29 12:30:53 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:53.649 155625 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3c08c304-2b32-4b44-ac2b-279bb8b2403b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 29 12:30:53 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:53.650 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[49c8bad5-7205-4ce0-81ba-5b33e24e0971]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:30:53 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:53.651 155625 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b namespace which is not needed anymore#033[00m
Jan 29 12:30:53 np0005601226 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000015.scope: Deactivated successfully.
Jan 29 12:30:53 np0005601226 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000015.scope: Consumed 12.384s CPU time.
Jan 29 12:30:53 np0005601226 systemd-machined[207561]: Machine qemu-21-instance-00000015 terminated.
Jan 29 12:30:53 np0005601226 nova_compute[239456]: 2026-01-29 17:30:53.734 239460 INFO nova.virt.libvirt.driver [-] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] Instance destroyed successfully.#033[00m
Jan 29 12:30:53 np0005601226 nova_compute[239456]: 2026-01-29 17:30:53.735 239460 DEBUG nova.objects.instance [None req-3bce4975-db9b-4ead-9736-957e9dfbcde7 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lazy-loading 'resources' on Instance uuid a25c53a6-69cb-4591-96a0-ba339283350e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:30:53 np0005601226 nova_compute[239456]: 2026-01-29 17:30:53.749 239460 DEBUG nova.virt.libvirt.vif [None req-3bce4975-db9b-4ead-9736-957e9dfbcde7 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-29T17:30:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-693300434',display_name='tempest-TestVolumeBootPattern-server-693300434',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-693300434',id=21,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEFfKI2YuGMM8acVCrFpefV0OhBXhmnc4Btak8xqfpJ+fgijtCxB67Cy872WiGoem7r2HhA8kwucFDoWkV8oHKl/YL0dPzqXRyF6oegJsH40c+wdyEi0ybB2qGD+38ijJg==',key_name='tempest-TestVolumeBootPattern-101947667',keypairs=<?>,launch_index=0,launched_at=2026-01-29T17:30:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='420f46ae230d4c529afe366a1b780921',ramdisk_id='',reservation_id='r-jfp80n0q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-1871389491',owner_user_name='tempest-TestVolumeBootPattern-1871389491-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-29T17:30:31Z,user_data=None,user_id='3901089a059c4bdb8d0497398873d2f1',uuid=a25c53a6-69cb-4591-96a0-ba339283350e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "923a704a-5e13-4a55-8741-5a8ed5669f0a", "address": "fa:16:3e:0f:13:7d", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": 
[{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap923a704a-5e", "ovs_interfaceid": "923a704a-5e13-4a55-8741-5a8ed5669f0a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 29 12:30:53 np0005601226 nova_compute[239456]: 2026-01-29 17:30:53.749 239460 DEBUG nova.network.os_vif_util [None req-3bce4975-db9b-4ead-9736-957e9dfbcde7 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Converting VIF {"id": "923a704a-5e13-4a55-8741-5a8ed5669f0a", "address": "fa:16:3e:0f:13:7d", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap923a704a-5e", "ovs_interfaceid": "923a704a-5e13-4a55-8741-5a8ed5669f0a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 29 12:30:53 np0005601226 nova_compute[239456]: 2026-01-29 17:30:53.750 239460 DEBUG nova.network.os_vif_util [None req-3bce4975-db9b-4ead-9736-957e9dfbcde7 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0f:13:7d,bridge_name='br-int',has_traffic_filtering=True,id=923a704a-5e13-4a55-8741-5a8ed5669f0a,network=Network(3c08c304-2b32-4b44-ac2b-279bb8b2403b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap923a704a-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 29 12:30:53 np0005601226 nova_compute[239456]: 2026-01-29 17:30:53.751 239460 DEBUG os_vif [None req-3bce4975-db9b-4ead-9736-957e9dfbcde7 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0f:13:7d,bridge_name='br-int',has_traffic_filtering=True,id=923a704a-5e13-4a55-8741-5a8ed5669f0a,network=Network(3c08c304-2b32-4b44-ac2b-279bb8b2403b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap923a704a-5e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 29 12:30:53 np0005601226 nova_compute[239456]: 2026-01-29 17:30:53.752 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:30:53 np0005601226 nova_compute[239456]: 2026-01-29 17:30:53.752 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap923a704a-5e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:30:53 np0005601226 nova_compute[239456]: 2026-01-29 17:30:53.753 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:30:53 np0005601226 nova_compute[239456]: 2026-01-29 17:30:53.754 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:30:53 np0005601226 nova_compute[239456]: 2026-01-29 17:30:53.757 239460 INFO os_vif [None req-3bce4975-db9b-4ead-9736-957e9dfbcde7 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0f:13:7d,bridge_name='br-int',has_traffic_filtering=True,id=923a704a-5e13-4a55-8741-5a8ed5669f0a,network=Network(3c08c304-2b32-4b44-ac2b-279bb8b2403b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap923a704a-5e')#033[00m
Jan 29 12:30:53 np0005601226 neutron-haproxy-ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b[267552]: [NOTICE]   (267561) : haproxy version is 2.8.14-c23fe91
Jan 29 12:30:53 np0005601226 neutron-haproxy-ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b[267552]: [NOTICE]   (267561) : path to executable is /usr/sbin/haproxy
Jan 29 12:30:53 np0005601226 neutron-haproxy-ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b[267552]: [WARNING]  (267561) : Exiting Master process...
Jan 29 12:30:53 np0005601226 neutron-haproxy-ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b[267552]: [WARNING]  (267561) : Exiting Master process...
Jan 29 12:30:53 np0005601226 neutron-haproxy-ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b[267552]: [ALERT]    (267561) : Current worker (267563) exited with code 143 (Terminated)
Jan 29 12:30:53 np0005601226 neutron-haproxy-ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b[267552]: [WARNING]  (267561) : All workers exited. Exiting... (0)
Jan 29 12:30:53 np0005601226 systemd[1]: libpod-e38ef6adda0bd9f855bdd1443edf3385c713fd97663c04fb3fb155c1a895c12e.scope: Deactivated successfully.
Jan 29 12:30:53 np0005601226 podman[267644]: 2026-01-29 17:30:53.786277291 +0000 UTC m=+0.069525942 container died e38ef6adda0bd9f855bdd1443edf3385c713fd97663c04fb3fb155c1a895c12e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Jan 29 12:30:53 np0005601226 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e38ef6adda0bd9f855bdd1443edf3385c713fd97663c04fb3fb155c1a895c12e-userdata-shm.mount: Deactivated successfully.
Jan 29 12:30:53 np0005601226 systemd[1]: var-lib-containers-storage-overlay-29533b922dbfe27097e890fee4f1f582095164809d216727a0179e771220847a-merged.mount: Deactivated successfully.
Jan 29 12:30:53 np0005601226 nova_compute[239456]: 2026-01-29 17:30:53.917 239460 DEBUG nova.compute.manager [req-76e1e722-5dcc-4c7b-bc93-ae8c59d7ee6b req-11772621-4f4b-4883-a3f3-7abb80794f43 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] Received event network-vif-unplugged-923a704a-5e13-4a55-8741-5a8ed5669f0a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:30:53 np0005601226 nova_compute[239456]: 2026-01-29 17:30:53.918 239460 DEBUG oslo_concurrency.lockutils [req-76e1e722-5dcc-4c7b-bc93-ae8c59d7ee6b req-11772621-4f4b-4883-a3f3-7abb80794f43 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "a25c53a6-69cb-4591-96a0-ba339283350e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:30:53 np0005601226 nova_compute[239456]: 2026-01-29 17:30:53.918 239460 DEBUG oslo_concurrency.lockutils [req-76e1e722-5dcc-4c7b-bc93-ae8c59d7ee6b req-11772621-4f4b-4883-a3f3-7abb80794f43 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "a25c53a6-69cb-4591-96a0-ba339283350e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:30:53 np0005601226 nova_compute[239456]: 2026-01-29 17:30:53.918 239460 DEBUG oslo_concurrency.lockutils [req-76e1e722-5dcc-4c7b-bc93-ae8c59d7ee6b req-11772621-4f4b-4883-a3f3-7abb80794f43 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "a25c53a6-69cb-4591-96a0-ba339283350e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:30:53 np0005601226 nova_compute[239456]: 2026-01-29 17:30:53.919 239460 DEBUG nova.compute.manager [req-76e1e722-5dcc-4c7b-bc93-ae8c59d7ee6b req-11772621-4f4b-4883-a3f3-7abb80794f43 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] No waiting events found dispatching network-vif-unplugged-923a704a-5e13-4a55-8741-5a8ed5669f0a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 29 12:30:53 np0005601226 nova_compute[239456]: 2026-01-29 17:30:53.919 239460 DEBUG nova.compute.manager [req-76e1e722-5dcc-4c7b-bc93-ae8c59d7ee6b req-11772621-4f4b-4883-a3f3-7abb80794f43 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] Received event network-vif-unplugged-923a704a-5e13-4a55-8741-5a8ed5669f0a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 29 12:30:53 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1650: 305 pgs: 305 active+clean; 350 MiB data, 606 MiB used, 59 GiB / 60 GiB avail; 928 KiB/s rd, 2.2 MiB/s wr, 110 op/s
Jan 29 12:30:54 np0005601226 podman[267644]: 2026-01-29 17:30:54.002616095 +0000 UTC m=+0.285864726 container cleanup e38ef6adda0bd9f855bdd1443edf3385c713fd97663c04fb3fb155c1a895c12e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 29 12:30:54 np0005601226 systemd[1]: libpod-conmon-e38ef6adda0bd9f855bdd1443edf3385c713fd97663c04fb3fb155c1a895c12e.scope: Deactivated successfully.
Jan 29 12:30:54 np0005601226 podman[267702]: 2026-01-29 17:30:54.162349135 +0000 UTC m=+0.145520838 container remove e38ef6adda0bd9f855bdd1443edf3385c713fd97663c04fb3fb155c1a895c12e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:30:54 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:54.166 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[2eab3888-5eb9-49fe-aff8-d1f371506ceb]: (4, ('Thu Jan 29 05:30:53 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b (e38ef6adda0bd9f855bdd1443edf3385c713fd97663c04fb3fb155c1a895c12e)\ne38ef6adda0bd9f855bdd1443edf3385c713fd97663c04fb3fb155c1a895c12e\nThu Jan 29 05:30:54 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b (e38ef6adda0bd9f855bdd1443edf3385c713fd97663c04fb3fb155c1a895c12e)\ne38ef6adda0bd9f855bdd1443edf3385c713fd97663c04fb3fb155c1a895c12e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:30:54 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:54.169 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[81abd81b-f5ce-4003-b1ee-b46366248eaf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:30:54 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:54.170 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c08c304-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:30:54 np0005601226 nova_compute[239456]: 2026-01-29 17:30:54.172 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:30:54 np0005601226 kernel: tap3c08c304-20: left promiscuous mode
Jan 29 12:30:54 np0005601226 nova_compute[239456]: 2026-01-29 17:30:54.176 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:30:54 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:54.179 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[14c3fb78-eab4-4b37-ac94-8d5a19bf1ebc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:30:54 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:54.194 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[53c0ac90-bf3b-4a53-a510-946c9cf313e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:30:54 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:54.195 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[cc3fe6ce-978a-4f8b-b05f-466a39377d3b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:30:54 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:54.205 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[02e33bf2-d444-4384-b331-446aea9df37e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 518417, 'reachable_time': 29776, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267718, 'error': None, 'target': 'ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:30:54 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:54.208 156164 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 29 12:30:54 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:30:54.208 156164 DEBUG oslo.privsep.daemon [-] privsep: reply[a6b96dc8-8b7a-4bf2-8d5c-cfd608efac5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:30:54 np0005601226 systemd[1]: run-netns-ovnmeta\x2d3c08c304\x2d2b32\x2d4b44\x2dac2b\x2d279bb8b2403b.mount: Deactivated successfully.
Jan 29 12:30:54 np0005601226 nova_compute[239456]: 2026-01-29 17:30:54.453 239460 INFO nova.virt.libvirt.driver [None req-3bce4975-db9b-4ead-9736-957e9dfbcde7 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] Deleting instance files /var/lib/nova/instances/a25c53a6-69cb-4591-96a0-ba339283350e_del#033[00m
Jan 29 12:30:54 np0005601226 nova_compute[239456]: 2026-01-29 17:30:54.455 239460 INFO nova.virt.libvirt.driver [None req-3bce4975-db9b-4ead-9736-957e9dfbcde7 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] Deletion of /var/lib/nova/instances/a25c53a6-69cb-4591-96a0-ba339283350e_del complete#033[00m
Jan 29 12:30:54 np0005601226 nova_compute[239456]: 2026-01-29 17:30:54.517 239460 INFO nova.compute.manager [None req-3bce4975-db9b-4ead-9736-957e9dfbcde7 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] Took 1.01 seconds to destroy the instance on the hypervisor.#033[00m
Jan 29 12:30:54 np0005601226 nova_compute[239456]: 2026-01-29 17:30:54.517 239460 DEBUG oslo.service.loopingcall [None req-3bce4975-db9b-4ead-9736-957e9dfbcde7 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 29 12:30:54 np0005601226 nova_compute[239456]: 2026-01-29 17:30:54.520 239460 DEBUG nova.compute.manager [-] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 29 12:30:54 np0005601226 nova_compute[239456]: 2026-01-29 17:30:54.520 239460 DEBUG nova.network.neutron [-] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 29 12:30:55 np0005601226 nova_compute[239456]: 2026-01-29 17:30:55.373 239460 DEBUG nova.network.neutron [-] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:30:55 np0005601226 nova_compute[239456]: 2026-01-29 17:30:55.409 239460 INFO nova.compute.manager [-] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] Took 0.89 seconds to deallocate network for instance.#033[00m
Jan 29 12:30:55 np0005601226 nova_compute[239456]: 2026-01-29 17:30:55.510 239460 DEBUG nova.compute.manager [req-e873d8fa-4693-464e-ab40-fc31951064a4 req-b65c21b6-3519-4015-a0b1-6da0c31cc5fd 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] Received event network-vif-deleted-923a704a-5e13-4a55-8741-5a8ed5669f0a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:30:55 np0005601226 nova_compute[239456]: 2026-01-29 17:30:55.718 239460 INFO nova.compute.manager [None req-3bce4975-db9b-4ead-9736-957e9dfbcde7 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] Took 0.31 seconds to detach 1 volumes for instance.#033[00m
Jan 29 12:30:55 np0005601226 nova_compute[239456]: 2026-01-29 17:30:55.766 239460 DEBUG oslo_concurrency.lockutils [None req-3bce4975-db9b-4ead-9736-957e9dfbcde7 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:30:55 np0005601226 nova_compute[239456]: 2026-01-29 17:30:55.766 239460 DEBUG oslo_concurrency.lockutils [None req-3bce4975-db9b-4ead-9736-957e9dfbcde7 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:30:55 np0005601226 nova_compute[239456]: 2026-01-29 17:30:55.833 239460 DEBUG oslo_concurrency.processutils [None req-3bce4975-db9b-4ead-9736-957e9dfbcde7 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:30:55 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1651: 305 pgs: 305 active+clean; 350 MiB data, 606 MiB used, 59 GiB / 60 GiB avail; 768 KiB/s rd, 1.6 MiB/s wr, 100 op/s
Jan 29 12:30:56 np0005601226 nova_compute[239456]: 2026-01-29 17:30:56.104 239460 DEBUG nova.compute.manager [req-6717044c-442f-492e-92aa-330005d3e7b3 req-fa139f0d-e028-4b3f-a2a4-1182246ddfb4 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] Received event network-vif-plugged-923a704a-5e13-4a55-8741-5a8ed5669f0a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:30:56 np0005601226 nova_compute[239456]: 2026-01-29 17:30:56.104 239460 DEBUG oslo_concurrency.lockutils [req-6717044c-442f-492e-92aa-330005d3e7b3 req-fa139f0d-e028-4b3f-a2a4-1182246ddfb4 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "a25c53a6-69cb-4591-96a0-ba339283350e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:30:56 np0005601226 nova_compute[239456]: 2026-01-29 17:30:56.104 239460 DEBUG oslo_concurrency.lockutils [req-6717044c-442f-492e-92aa-330005d3e7b3 req-fa139f0d-e028-4b3f-a2a4-1182246ddfb4 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "a25c53a6-69cb-4591-96a0-ba339283350e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:30:56 np0005601226 nova_compute[239456]: 2026-01-29 17:30:56.105 239460 DEBUG oslo_concurrency.lockutils [req-6717044c-442f-492e-92aa-330005d3e7b3 req-fa139f0d-e028-4b3f-a2a4-1182246ddfb4 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "a25c53a6-69cb-4591-96a0-ba339283350e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:30:56 np0005601226 nova_compute[239456]: 2026-01-29 17:30:56.105 239460 DEBUG nova.compute.manager [req-6717044c-442f-492e-92aa-330005d3e7b3 req-fa139f0d-e028-4b3f-a2a4-1182246ddfb4 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] No waiting events found dispatching network-vif-plugged-923a704a-5e13-4a55-8741-5a8ed5669f0a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 29 12:30:56 np0005601226 nova_compute[239456]: 2026-01-29 17:30:56.105 239460 WARNING nova.compute.manager [req-6717044c-442f-492e-92aa-330005d3e7b3 req-fa139f0d-e028-4b3f-a2a4-1182246ddfb4 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] Received unexpected event network-vif-plugged-923a704a-5e13-4a55-8741-5a8ed5669f0a for instance with vm_state deleted and task_state None.#033[00m
Jan 29 12:30:56 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:30:56 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/510876364' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:30:56 np0005601226 nova_compute[239456]: 2026-01-29 17:30:56.359 239460 DEBUG oslo_concurrency.processutils [None req-3bce4975-db9b-4ead-9736-957e9dfbcde7 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:30:56 np0005601226 nova_compute[239456]: 2026-01-29 17:30:56.366 239460 DEBUG nova.compute.provider_tree [None req-3bce4975-db9b-4ead-9736-957e9dfbcde7 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:30:56 np0005601226 nova_compute[239456]: 2026-01-29 17:30:56.391 239460 DEBUG nova.scheduler.client.report [None req-3bce4975-db9b-4ead-9736-957e9dfbcde7 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:30:56 np0005601226 nova_compute[239456]: 2026-01-29 17:30:56.427 239460 DEBUG oslo_concurrency.lockutils [None req-3bce4975-db9b-4ead-9736-957e9dfbcde7 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:30:56 np0005601226 nova_compute[239456]: 2026-01-29 17:30:56.453 239460 INFO nova.scheduler.client.report [None req-3bce4975-db9b-4ead-9736-957e9dfbcde7 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Deleted allocations for instance a25c53a6-69cb-4591-96a0-ba339283350e#033[00m
Jan 29 12:30:56 np0005601226 nova_compute[239456]: 2026-01-29 17:30:56.529 239460 DEBUG oslo_concurrency.lockutils [None req-3bce4975-db9b-4ead-9736-957e9dfbcde7 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "a25c53a6-69cb-4591-96a0-ba339283350e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.023s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:30:56 np0005601226 nova_compute[239456]: 2026-01-29 17:30:56.633 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:30:57 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:30:57 np0005601226 podman[267742]: 2026-01-29 17:30:57.872355511 +0000 UTC m=+0.048170088 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 29 12:30:57 np0005601226 podman[267743]: 2026-01-29 17:30:57.893742516 +0000 UTC m=+0.066495691 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 29 12:30:57 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1652: 305 pgs: 305 active+clean; 350 MiB data, 606 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 44 KiB/s wr, 6 op/s
Jan 29 12:30:58 np0005601226 nova_compute[239456]: 2026-01-29 17:30:58.755 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:30:59 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1653: 305 pgs: 305 active+clean; 350 MiB data, 606 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 48 KiB/s wr, 17 op/s
Jan 29 12:31:00 np0005601226 nova_compute[239456]: 2026-01-29 17:31:00.214 239460 DEBUG oslo_concurrency.lockutils [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Acquiring lock "58d0f64a-66be-4f3d-ba39-68b90ddf8c4f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:31:00 np0005601226 nova_compute[239456]: 2026-01-29 17:31:00.215 239460 DEBUG oslo_concurrency.lockutils [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "58d0f64a-66be-4f3d-ba39-68b90ddf8c4f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:31:00 np0005601226 nova_compute[239456]: 2026-01-29 17:31:00.231 239460 DEBUG nova.compute.manager [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 29 12:31:00 np0005601226 nova_compute[239456]: 2026-01-29 17:31:00.302 239460 DEBUG oslo_concurrency.lockutils [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:31:00 np0005601226 nova_compute[239456]: 2026-01-29 17:31:00.302 239460 DEBUG oslo_concurrency.lockutils [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:31:00 np0005601226 nova_compute[239456]: 2026-01-29 17:31:00.310 239460 DEBUG nova.virt.hardware [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 29 12:31:00 np0005601226 nova_compute[239456]: 2026-01-29 17:31:00.310 239460 INFO nova.compute.claims [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 29 12:31:00 np0005601226 nova_compute[239456]: 2026-01-29 17:31:00.453 239460 DEBUG oslo_concurrency.processutils [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:31:00 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:31:00 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4239181500' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:31:00 np0005601226 nova_compute[239456]: 2026-01-29 17:31:00.992 239460 DEBUG oslo_concurrency.processutils [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:31:01 np0005601226 nova_compute[239456]: 2026-01-29 17:31:01.000 239460 DEBUG nova.compute.provider_tree [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:31:01 np0005601226 nova_compute[239456]: 2026-01-29 17:31:01.025 239460 DEBUG nova.scheduler.client.report [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:31:01 np0005601226 nova_compute[239456]: 2026-01-29 17:31:01.064 239460 DEBUG oslo_concurrency.lockutils [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.761s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:31:01 np0005601226 nova_compute[239456]: 2026-01-29 17:31:01.065 239460 DEBUG nova.compute.manager [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 29 12:31:01 np0005601226 nova_compute[239456]: 2026-01-29 17:31:01.125 239460 DEBUG nova.compute.manager [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 29 12:31:01 np0005601226 nova_compute[239456]: 2026-01-29 17:31:01.126 239460 DEBUG nova.network.neutron [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 29 12:31:01 np0005601226 nova_compute[239456]: 2026-01-29 17:31:01.148 239460 INFO nova.virt.libvirt.driver [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 29 12:31:01 np0005601226 nova_compute[239456]: 2026-01-29 17:31:01.170 239460 DEBUG nova.compute.manager [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 29 12:31:01 np0005601226 nova_compute[239456]: 2026-01-29 17:31:01.221 239460 INFO nova.virt.block_device [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Booting with volume 374ea712-ba05-4bee-9c63-7609fdf31eb9 at /dev/vda#033[00m
Jan 29 12:31:01 np0005601226 nova_compute[239456]: 2026-01-29 17:31:01.363 239460 DEBUG os_brick.utils [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 29 12:31:01 np0005601226 nova_compute[239456]: 2026-01-29 17:31:01.366 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:31:01 np0005601226 nova_compute[239456]: 2026-01-29 17:31:01.379 249612 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:31:01 np0005601226 nova_compute[239456]: 2026-01-29 17:31:01.380 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[3c6251ba-8b29-442c-858d-81781a236bc2]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:31:01 np0005601226 nova_compute[239456]: 2026-01-29 17:31:01.382 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:31:01 np0005601226 nova_compute[239456]: 2026-01-29 17:31:01.391 249612 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:31:01 np0005601226 nova_compute[239456]: 2026-01-29 17:31:01.392 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[b4bd0c96-e342-4749-ac5b-3435fe576c9d]: (4, ('InitiatorName=iqn.1994-05.com.redhat:29fa340538f', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:31:01 np0005601226 nova_compute[239456]: 2026-01-29 17:31:01.393 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:31:01 np0005601226 nova_compute[239456]: 2026-01-29 17:31:01.401 249612 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:31:01 np0005601226 nova_compute[239456]: 2026-01-29 17:31:01.402 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[b52a2574-ac6e-4176-b82e-a847a6ff16da]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:31:01 np0005601226 nova_compute[239456]: 2026-01-29 17:31:01.403 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[2f702f38-68fa-43bb-9afa-fde254752478]: (4, '3d58286e-1b14-486e-8cad-0bdb2d2969c4') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:31:01 np0005601226 nova_compute[239456]: 2026-01-29 17:31:01.404 239460 DEBUG oslo_concurrency.processutils [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:31:01 np0005601226 nova_compute[239456]: 2026-01-29 17:31:01.421 239460 DEBUG oslo_concurrency.processutils [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] CMD "nvme version" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:31:01 np0005601226 nova_compute[239456]: 2026-01-29 17:31:01.424 239460 DEBUG os_brick.initiator.connectors.lightos [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 29 12:31:01 np0005601226 nova_compute[239456]: 2026-01-29 17:31:01.424 239460 DEBUG os_brick.initiator.connectors.lightos [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 29 12:31:01 np0005601226 nova_compute[239456]: 2026-01-29 17:31:01.425 239460 DEBUG os_brick.initiator.connectors.lightos [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 29 12:31:01 np0005601226 nova_compute[239456]: 2026-01-29 17:31:01.425 239460 DEBUG os_brick.utils [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] <== get_connector_properties: return (60ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:29fa340538f', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '3d58286e-1b14-486e-8cad-0bdb2d2969c4', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 29 12:31:01 np0005601226 nova_compute[239456]: 2026-01-29 17:31:01.426 239460 DEBUG nova.virt.block_device [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Updating existing volume attachment record: 9ec2a172-1224-49ef-9cd2-3f6bb72465e7 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 29 12:31:01 np0005601226 nova_compute[239456]: 2026-01-29 17:31:01.468 239460 DEBUG nova.policy [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3901089a059c4bdb8d0497398873d2f1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '420f46ae230d4c529afe366a1b780921', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 29 12:31:01 np0005601226 nova_compute[239456]: 2026-01-29 17:31:01.635 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:31:01 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1654: 305 pgs: 305 active+clean; 350 MiB data, 606 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 38 KiB/s wr, 16 op/s
Jan 29 12:31:02 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:31:02 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3057269110' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:31:02 np0005601226 nova_compute[239456]: 2026-01-29 17:31:02.198 239460 DEBUG nova.network.neutron [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Successfully created port: 7c983110-cfa8-4df3-ac67-f5a430abcfc0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 29 12:31:02 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:31:02 np0005601226 nova_compute[239456]: 2026-01-29 17:31:02.477 239460 DEBUG nova.compute.manager [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 29 12:31:02 np0005601226 nova_compute[239456]: 2026-01-29 17:31:02.478 239460 DEBUG nova.virt.libvirt.driver [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 29 12:31:02 np0005601226 nova_compute[239456]: 2026-01-29 17:31:02.479 239460 INFO nova.virt.libvirt.driver [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Creating image(s)#033[00m
Jan 29 12:31:02 np0005601226 nova_compute[239456]: 2026-01-29 17:31:02.480 239460 DEBUG nova.virt.libvirt.driver [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Jan 29 12:31:02 np0005601226 nova_compute[239456]: 2026-01-29 17:31:02.480 239460 DEBUG nova.virt.libvirt.driver [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Ensure instance console log exists: /var/lib/nova/instances/58d0f64a-66be-4f3d-ba39-68b90ddf8c4f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 29 12:31:02 np0005601226 nova_compute[239456]: 2026-01-29 17:31:02.481 239460 DEBUG oslo_concurrency.lockutils [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:31:02 np0005601226 nova_compute[239456]: 2026-01-29 17:31:02.481 239460 DEBUG oslo_concurrency.lockutils [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:31:02 np0005601226 nova_compute[239456]: 2026-01-29 17:31:02.481 239460 DEBUG oslo_concurrency.lockutils [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:31:02 np0005601226 nova_compute[239456]: 2026-01-29 17:31:02.880 239460 DEBUG nova.network.neutron [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Successfully updated port: 7c983110-cfa8-4df3-ac67-f5a430abcfc0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 29 12:31:02 np0005601226 nova_compute[239456]: 2026-01-29 17:31:02.898 239460 DEBUG oslo_concurrency.lockutils [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Acquiring lock "refresh_cache-58d0f64a-66be-4f3d-ba39-68b90ddf8c4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:31:02 np0005601226 nova_compute[239456]: 2026-01-29 17:31:02.899 239460 DEBUG oslo_concurrency.lockutils [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Acquired lock "refresh_cache-58d0f64a-66be-4f3d-ba39-68b90ddf8c4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:31:02 np0005601226 nova_compute[239456]: 2026-01-29 17:31:02.899 239460 DEBUG nova.network.neutron [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 29 12:31:02 np0005601226 nova_compute[239456]: 2026-01-29 17:31:02.955 239460 DEBUG nova.compute.manager [req-908d2ea8-ceb9-4f96-9afa-99f2f6929dcf req-eaf01027-d422-40bd-93fd-32cfa413afa6 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Received event network-changed-7c983110-cfa8-4df3-ac67-f5a430abcfc0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:31:02 np0005601226 nova_compute[239456]: 2026-01-29 17:31:02.956 239460 DEBUG nova.compute.manager [req-908d2ea8-ceb9-4f96-9afa-99f2f6929dcf req-eaf01027-d422-40bd-93fd-32cfa413afa6 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Refreshing instance network info cache due to event network-changed-7c983110-cfa8-4df3-ac67-f5a430abcfc0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 29 12:31:02 np0005601226 nova_compute[239456]: 2026-01-29 17:31:02.956 239460 DEBUG oslo_concurrency.lockutils [req-908d2ea8-ceb9-4f96-9afa-99f2f6929dcf req-eaf01027-d422-40bd-93fd-32cfa413afa6 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "refresh_cache-58d0f64a-66be-4f3d-ba39-68b90ddf8c4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:31:03 np0005601226 nova_compute[239456]: 2026-01-29 17:31:03.027 239460 DEBUG nova.network.neutron [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 29 12:31:03 np0005601226 nova_compute[239456]: 2026-01-29 17:31:03.757 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:31:03 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1655: 305 pgs: 305 active+clean; 350 MiB data, 606 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 24 KiB/s wr, 16 op/s
Jan 29 12:31:04 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 12:31:04 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:31:04 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 12:31:04 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:31:04 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 12:31:04 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 12:31:04 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 12:31:04 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 12:31:04 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 12:31:04 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:31:04 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 12:31:04 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 12:31:04 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 12:31:04 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 12:31:04 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 12:31:04 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 12:31:04 np0005601226 nova_compute[239456]: 2026-01-29 17:31:04.940 239460 DEBUG nova.network.neutron [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Updating instance_info_cache with network_info: [{"id": "7c983110-cfa8-4df3-ac67-f5a430abcfc0", "address": "fa:16:3e:6d:60:e3", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c983110-cf", "ovs_interfaceid": "7c983110-cfa8-4df3-ac67-f5a430abcfc0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:31:04 np0005601226 nova_compute[239456]: 2026-01-29 17:31:04.962 239460 DEBUG oslo_concurrency.lockutils [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Releasing lock "refresh_cache-58d0f64a-66be-4f3d-ba39-68b90ddf8c4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:31:04 np0005601226 nova_compute[239456]: 2026-01-29 17:31:04.962 239460 DEBUG nova.compute.manager [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Instance network_info: |[{"id": "7c983110-cfa8-4df3-ac67-f5a430abcfc0", "address": "fa:16:3e:6d:60:e3", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c983110-cf", "ovs_interfaceid": "7c983110-cfa8-4df3-ac67-f5a430abcfc0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 29 12:31:04 np0005601226 nova_compute[239456]: 2026-01-29 17:31:04.962 239460 DEBUG oslo_concurrency.lockutils [req-908d2ea8-ceb9-4f96-9afa-99f2f6929dcf req-eaf01027-d422-40bd-93fd-32cfa413afa6 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquired lock "refresh_cache-58d0f64a-66be-4f3d-ba39-68b90ddf8c4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:31:04 np0005601226 nova_compute[239456]: 2026-01-29 17:31:04.963 239460 DEBUG nova.network.neutron [req-908d2ea8-ceb9-4f96-9afa-99f2f6929dcf req-eaf01027-d422-40bd-93fd-32cfa413afa6 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Refreshing network info cache for port 7c983110-cfa8-4df3-ac67-f5a430abcfc0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 29 12:31:04 np0005601226 nova_compute[239456]: 2026-01-29 17:31:04.965 239460 DEBUG nova.virt.libvirt.driver [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Start _get_guest_xml network_info=[{"id": "7c983110-cfa8-4df3-ac67-f5a430abcfc0", "address": "fa:16:3e:6d:60:e3", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c983110-cf", "ovs_interfaceid": "7c983110-cfa8-4df3-ac67-f5a430abcfc0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'guest_format': None, 'mount_device': '/dev/vda', 'device_type': 'disk', 'disk_bus': 'virtio', 'attachment_id': '9ec2a172-1224-49ef-9cd2-3f6bb72465e7', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-374ea712-ba05-4bee-9c63-7609fdf31eb9', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '374ea712-ba05-4bee-9c63-7609fdf31eb9', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '58d0f64a-66be-4f3d-ba39-68b90ddf8c4f', 'attached_at': '', 'detached_at': '', 'volume_id': '374ea712-ba05-4bee-9c63-7609fdf31eb9', 'serial': '374ea712-ba05-4bee-9c63-7609fdf31eb9'}, 'delete_on_termination': False, 'boot_index': 0, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 29 12:31:04 np0005601226 nova_compute[239456]: 2026-01-29 17:31:04.969 239460 WARNING nova.virt.libvirt.driver [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 29 12:31:04 np0005601226 nova_compute[239456]: 2026-01-29 17:31:04.974 239460 DEBUG nova.virt.libvirt.host [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 29 12:31:04 np0005601226 nova_compute[239456]: 2026-01-29 17:31:04.975 239460 DEBUG nova.virt.libvirt.host [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 29 12:31:04 np0005601226 nova_compute[239456]: 2026-01-29 17:31:04.978 239460 DEBUG nova.virt.libvirt.host [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 29 12:31:04 np0005601226 nova_compute[239456]: 2026-01-29 17:31:04.978 239460 DEBUG nova.virt.libvirt.host [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 29 12:31:04 np0005601226 nova_compute[239456]: 2026-01-29 17:31:04.979 239460 DEBUG nova.virt.libvirt.driver [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 29 12:31:04 np0005601226 nova_compute[239456]: 2026-01-29 17:31:04.979 239460 DEBUG nova.virt.hardware [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-29T17:13:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='e367d44e-e23e-4b8e-90d1-56d09c8403b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 29 12:31:04 np0005601226 nova_compute[239456]: 2026-01-29 17:31:04.979 239460 DEBUG nova.virt.hardware [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 29 12:31:04 np0005601226 nova_compute[239456]: 2026-01-29 17:31:04.980 239460 DEBUG nova.virt.hardware [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 29 12:31:04 np0005601226 nova_compute[239456]: 2026-01-29 17:31:04.980 239460 DEBUG nova.virt.hardware [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 29 12:31:04 np0005601226 nova_compute[239456]: 2026-01-29 17:31:04.980 239460 DEBUG nova.virt.hardware [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 29 12:31:04 np0005601226 nova_compute[239456]: 2026-01-29 17:31:04.980 239460 DEBUG nova.virt.hardware [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 29 12:31:04 np0005601226 nova_compute[239456]: 2026-01-29 17:31:04.981 239460 DEBUG nova.virt.hardware [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 29 12:31:04 np0005601226 nova_compute[239456]: 2026-01-29 17:31:04.981 239460 DEBUG nova.virt.hardware [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 29 12:31:04 np0005601226 nova_compute[239456]: 2026-01-29 17:31:04.981 239460 DEBUG nova.virt.hardware [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 29 12:31:04 np0005601226 nova_compute[239456]: 2026-01-29 17:31:04.982 239460 DEBUG nova.virt.hardware [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 29 12:31:04 np0005601226 nova_compute[239456]: 2026-01-29 17:31:04.982 239460 DEBUG nova.virt.hardware [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 29 12:31:05 np0005601226 nova_compute[239456]: 2026-01-29 17:31:05.004 239460 DEBUG nova.storage.rbd_utils [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] rbd image 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:31:05 np0005601226 nova_compute[239456]: 2026-01-29 17:31:05.007 239460 DEBUG oslo_concurrency.processutils [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:31:05 np0005601226 podman[268069]: 2026-01-29 17:31:05.214469183 +0000 UTC m=+0.041347523 container create 6420dc40c52d80fe2e54eb192e23139ad262331cdd35118218dbaf701d046a5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_fermi, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 29 12:31:05 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:31:05 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:31:05 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 12:31:05 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:31:05 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 12:31:05 np0005601226 systemd[1]: Started libpod-conmon-6420dc40c52d80fe2e54eb192e23139ad262331cdd35118218dbaf701d046a5e.scope.
Jan 29 12:31:05 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:31:05 np0005601226 podman[268069]: 2026-01-29 17:31:05.283261266 +0000 UTC m=+0.110139666 container init 6420dc40c52d80fe2e54eb192e23139ad262331cdd35118218dbaf701d046a5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_fermi, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 29 12:31:05 np0005601226 podman[268069]: 2026-01-29 17:31:05.199016637 +0000 UTC m=+0.025894997 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:31:05 np0005601226 podman[268069]: 2026-01-29 17:31:05.289815002 +0000 UTC m=+0.116693342 container start 6420dc40c52d80fe2e54eb192e23139ad262331cdd35118218dbaf701d046a5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_fermi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 29 12:31:05 np0005601226 hardcore_fermi[268085]: 167 167
Jan 29 12:31:05 np0005601226 systemd[1]: libpod-6420dc40c52d80fe2e54eb192e23139ad262331cdd35118218dbaf701d046a5e.scope: Deactivated successfully.
Jan 29 12:31:05 np0005601226 conmon[268085]: conmon 6420dc40c52d80fe2e54 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6420dc40c52d80fe2e54eb192e23139ad262331cdd35118218dbaf701d046a5e.scope/container/memory.events
Jan 29 12:31:05 np0005601226 podman[268069]: 2026-01-29 17:31:05.293701717 +0000 UTC m=+0.120580057 container attach 6420dc40c52d80fe2e54eb192e23139ad262331cdd35118218dbaf701d046a5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_fermi, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 29 12:31:05 np0005601226 podman[268069]: 2026-01-29 17:31:05.294324353 +0000 UTC m=+0.121202693 container died 6420dc40c52d80fe2e54eb192e23139ad262331cdd35118218dbaf701d046a5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_fermi, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 29 12:31:05 np0005601226 systemd[1]: var-lib-containers-storage-overlay-e3688fda50beacc985b3d76d07945483e25c08ad735ff9efc33086b1959e2fe7-merged.mount: Deactivated successfully.
Jan 29 12:31:05 np0005601226 podman[268069]: 2026-01-29 17:31:05.334138605 +0000 UTC m=+0.161016945 container remove 6420dc40c52d80fe2e54eb192e23139ad262331cdd35118218dbaf701d046a5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_fermi, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:31:05 np0005601226 systemd[1]: libpod-conmon-6420dc40c52d80fe2e54eb192e23139ad262331cdd35118218dbaf701d046a5e.scope: Deactivated successfully.
Jan 29 12:31:05 np0005601226 podman[268108]: 2026-01-29 17:31:05.531866778 +0000 UTC m=+0.087504226 container create 82ce2173f819caca9d78f153947f2106839669d80e05fc29a58dafa321ecd868 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_volhard, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:31:05 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:31:05 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3160528792' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:31:05 np0005601226 nova_compute[239456]: 2026-01-29 17:31:05.554 239460 DEBUG oslo_concurrency.processutils [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:31:05 np0005601226 podman[268108]: 2026-01-29 17:31:05.469985082 +0000 UTC m=+0.025622530 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:31:05 np0005601226 systemd[1]: Started libpod-conmon-82ce2173f819caca9d78f153947f2106839669d80e05fc29a58dafa321ecd868.scope.
Jan 29 12:31:05 np0005601226 nova_compute[239456]: 2026-01-29 17:31:05.576 239460 DEBUG nova.virt.libvirt.vif [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-29T17:30:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-240445960',display_name='tempest-TestVolumeBootPattern-server-240445960',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-240445960',id=22,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEFfKI2YuGMM8acVCrFpefV0OhBXhmnc4Btak8xqfpJ+fgijtCxB67Cy872WiGoem7r2HhA8kwucFDoWkV8oHKl/YL0dPzqXRyF6oegJsH40c+wdyEi0ybB2qGD+38ijJg==',key_name='tempest-TestVolumeBootPattern-101947667',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='420f46ae230d4c529afe366a1b780921',ramdisk_id='',reservation_id='r-79907se8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1871389491',owner_user_name='tempest-TestVolumeBootPattern-1871389491-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-29T17:31:01Z,user_data=None,user_id='3901089a059c4bdb8d0497398873d2f1',uuid=58d0f64a-66be-4f3d-ba39-68b90ddf8c4f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7c983110-cfa8-4df3-ac67-f5a430abcfc0", "address": "fa:16:3e:6d:60:e3", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c983110-cf", "ovs_interfaceid": "7c983110-cfa8-4df3-ac67-f5a430abcfc0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 29 12:31:05 np0005601226 nova_compute[239456]: 2026-01-29 17:31:05.577 239460 DEBUG nova.network.os_vif_util [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Converting VIF {"id": "7c983110-cfa8-4df3-ac67-f5a430abcfc0", "address": "fa:16:3e:6d:60:e3", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c983110-cf", "ovs_interfaceid": "7c983110-cfa8-4df3-ac67-f5a430abcfc0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 29 12:31:05 np0005601226 nova_compute[239456]: 2026-01-29 17:31:05.577 239460 DEBUG nova.network.os_vif_util [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6d:60:e3,bridge_name='br-int',has_traffic_filtering=True,id=7c983110-cfa8-4df3-ac67-f5a430abcfc0,network=Network(3c08c304-2b32-4b44-ac2b-279bb8b2403b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7c983110-cf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 29 12:31:05 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:31:05 np0005601226 nova_compute[239456]: 2026-01-29 17:31:05.578 239460 DEBUG nova.objects.instance [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lazy-loading 'pci_devices' on Instance uuid 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:31:05 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f17f8e0fc43cf0a3e8d78c6846dbcb6970d17d08b93998e5b369b67cb84e9037/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:31:05 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f17f8e0fc43cf0a3e8d78c6846dbcb6970d17d08b93998e5b369b67cb84e9037/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:31:05 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f17f8e0fc43cf0a3e8d78c6846dbcb6970d17d08b93998e5b369b67cb84e9037/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:31:05 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f17f8e0fc43cf0a3e8d78c6846dbcb6970d17d08b93998e5b369b67cb84e9037/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:31:05 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f17f8e0fc43cf0a3e8d78c6846dbcb6970d17d08b93998e5b369b67cb84e9037/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 12:31:05 np0005601226 nova_compute[239456]: 2026-01-29 17:31:05.589 239460 DEBUG nova.virt.libvirt.driver [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] End _get_guest_xml xml=<domain type="kvm">
Jan 29 12:31:05 np0005601226 nova_compute[239456]:  <uuid>58d0f64a-66be-4f3d-ba39-68b90ddf8c4f</uuid>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:  <name>instance-00000016</name>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:  <memory>131072</memory>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:  <vcpu>1</vcpu>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:  <metadata>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 29 12:31:05 np0005601226 nova_compute[239456]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:      <nova:name>tempest-TestVolumeBootPattern-server-240445960</nova:name>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:      <nova:creationTime>2026-01-29 17:31:04</nova:creationTime>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:      <nova:flavor name="m1.nano">
Jan 29 12:31:05 np0005601226 nova_compute[239456]:        <nova:memory>128</nova:memory>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:        <nova:disk>1</nova:disk>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:        <nova:swap>0</nova:swap>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:        <nova:ephemeral>0</nova:ephemeral>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:        <nova:vcpus>1</nova:vcpus>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:      </nova:flavor>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:      <nova:owner>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:        <nova:user uuid="3901089a059c4bdb8d0497398873d2f1">tempest-TestVolumeBootPattern-1871389491-project-member</nova:user>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:        <nova:project uuid="420f46ae230d4c529afe366a1b780921">tempest-TestVolumeBootPattern-1871389491</nova:project>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:      </nova:owner>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:      <nova:ports>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:        <nova:port uuid="7c983110-cfa8-4df3-ac67-f5a430abcfc0">
Jan 29 12:31:05 np0005601226 nova_compute[239456]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:        </nova:port>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:      </nova:ports>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:    </nova:instance>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:  </metadata>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:  <sysinfo type="smbios">
Jan 29 12:31:05 np0005601226 nova_compute[239456]:    <system>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:      <entry name="manufacturer">RDO</entry>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:      <entry name="product">OpenStack Compute</entry>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:      <entry name="serial">58d0f64a-66be-4f3d-ba39-68b90ddf8c4f</entry>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:      <entry name="uuid">58d0f64a-66be-4f3d-ba39-68b90ddf8c4f</entry>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:      <entry name="family">Virtual Machine</entry>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:    </system>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:  </sysinfo>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:  <os>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:    <boot dev="hd"/>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:    <smbios mode="sysinfo"/>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:  </os>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:  <features>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:    <acpi/>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:    <apic/>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:    <vmcoreinfo/>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:  </features>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:  <clock offset="utc">
Jan 29 12:31:05 np0005601226 nova_compute[239456]:    <timer name="pit" tickpolicy="delay"/>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:    <timer name="hpet" present="no"/>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:  </clock>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:  <cpu mode="host-model" match="exact">
Jan 29 12:31:05 np0005601226 nova_compute[239456]:    <topology sockets="1" cores="1" threads="1"/>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:  </cpu>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:  <devices>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:    <disk type="network" device="cdrom">
Jan 29 12:31:05 np0005601226 nova_compute[239456]:      <driver type="raw" cache="none"/>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:      <source protocol="rbd" name="vms/58d0f64a-66be-4f3d-ba39-68b90ddf8c4f_disk.config">
Jan 29 12:31:05 np0005601226 nova_compute[239456]:        <host name="192.168.122.100" port="6789"/>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:      </source>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:      <auth username="openstack">
Jan 29 12:31:05 np0005601226 nova_compute[239456]:        <secret type="ceph" uuid="cc5c72e3-31e0-58b9-8731-456117d38f4a"/>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:      </auth>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:      <target dev="sda" bus="sata"/>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:    </disk>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:    <disk type="network" device="disk">
Jan 29 12:31:05 np0005601226 nova_compute[239456]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:      <source protocol="rbd" name="volumes/volume-374ea712-ba05-4bee-9c63-7609fdf31eb9">
Jan 29 12:31:05 np0005601226 nova_compute[239456]:        <host name="192.168.122.100" port="6789"/>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:      </source>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:      <auth username="openstack">
Jan 29 12:31:05 np0005601226 nova_compute[239456]:        <secret type="ceph" uuid="cc5c72e3-31e0-58b9-8731-456117d38f4a"/>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:      </auth>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:      <target dev="vda" bus="virtio"/>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:      <serial>374ea712-ba05-4bee-9c63-7609fdf31eb9</serial>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:    </disk>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:    <interface type="ethernet">
Jan 29 12:31:05 np0005601226 nova_compute[239456]:      <mac address="fa:16:3e:6d:60:e3"/>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:      <model type="virtio"/>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:      <driver name="vhost" rx_queue_size="512"/>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:      <mtu size="1442"/>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:      <target dev="tap7c983110-cf"/>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:    </interface>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:    <serial type="pty">
Jan 29 12:31:05 np0005601226 nova_compute[239456]:      <log file="/var/lib/nova/instances/58d0f64a-66be-4f3d-ba39-68b90ddf8c4f/console.log" append="off"/>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:    </serial>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:    <video>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:      <model type="virtio"/>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:    </video>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:    <input type="tablet" bus="usb"/>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:    <rng model="virtio">
Jan 29 12:31:05 np0005601226 nova_compute[239456]:      <backend model="random">/dev/urandom</backend>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:    </rng>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root"/>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:    <controller type="usb" index="0"/>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:    <memballoon model="virtio">
Jan 29 12:31:05 np0005601226 nova_compute[239456]:      <stats period="10"/>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:    </memballoon>
Jan 29 12:31:05 np0005601226 nova_compute[239456]:  </devices>
Jan 29 12:31:05 np0005601226 nova_compute[239456]: </domain>
Jan 29 12:31:05 np0005601226 nova_compute[239456]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 29 12:31:05 np0005601226 nova_compute[239456]: 2026-01-29 17:31:05.591 239460 DEBUG nova.compute.manager [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Preparing to wait for external event network-vif-plugged-7c983110-cfa8-4df3-ac67-f5a430abcfc0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 29 12:31:05 np0005601226 nova_compute[239456]: 2026-01-29 17:31:05.591 239460 DEBUG oslo_concurrency.lockutils [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Acquiring lock "58d0f64a-66be-4f3d-ba39-68b90ddf8c4f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:31:05 np0005601226 nova_compute[239456]: 2026-01-29 17:31:05.592 239460 DEBUG oslo_concurrency.lockutils [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "58d0f64a-66be-4f3d-ba39-68b90ddf8c4f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:31:05 np0005601226 nova_compute[239456]: 2026-01-29 17:31:05.592 239460 DEBUG oslo_concurrency.lockutils [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "58d0f64a-66be-4f3d-ba39-68b90ddf8c4f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:31:05 np0005601226 nova_compute[239456]: 2026-01-29 17:31:05.594 239460 DEBUG nova.virt.libvirt.vif [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-29T17:30:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-240445960',display_name='tempest-TestVolumeBootPattern-server-240445960',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-240445960',id=22,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEFfKI2YuGMM8acVCrFpefV0OhBXhmnc4Btak8xqfpJ+fgijtCxB67Cy872WiGoem7r2HhA8kwucFDoWkV8oHKl/YL0dPzqXRyF6oegJsH40c+wdyEi0ybB2qGD+38ijJg==',key_name='tempest-TestVolumeBootPattern-101947667',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='420f46ae230d4c529afe366a1b780921',ramdisk_id='',reservation_id='r-79907se8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1871389491',owner_user_name='tempest-TestVolumeBootPattern-1871389491-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-29T17:31:01Z,user_data=None,user_id='3901089a059c4bdb8d0497398873d2f1',uuid=58d0f64a-66be-4f3d-ba39-68b90ddf8c4f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7c983110-cfa8-4df3-ac67-f5a430abcfc0", "address": "fa:16:3e:6d:60:e3", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c983110-cf", "ovs_interfaceid": "7c983110-cfa8-4df3-ac67-f5a430abcfc0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 29 12:31:05 np0005601226 nova_compute[239456]: 2026-01-29 17:31:05.594 239460 DEBUG nova.network.os_vif_util [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Converting VIF {"id": "7c983110-cfa8-4df3-ac67-f5a430abcfc0", "address": "fa:16:3e:6d:60:e3", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c983110-cf", "ovs_interfaceid": "7c983110-cfa8-4df3-ac67-f5a430abcfc0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 29 12:31:05 np0005601226 nova_compute[239456]: 2026-01-29 17:31:05.595 239460 DEBUG nova.network.os_vif_util [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6d:60:e3,bridge_name='br-int',has_traffic_filtering=True,id=7c983110-cfa8-4df3-ac67-f5a430abcfc0,network=Network(3c08c304-2b32-4b44-ac2b-279bb8b2403b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7c983110-cf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 29 12:31:05 np0005601226 nova_compute[239456]: 2026-01-29 17:31:05.595 239460 DEBUG os_vif [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6d:60:e3,bridge_name='br-int',has_traffic_filtering=True,id=7c983110-cfa8-4df3-ac67-f5a430abcfc0,network=Network(3c08c304-2b32-4b44-ac2b-279bb8b2403b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7c983110-cf') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 29 12:31:05 np0005601226 nova_compute[239456]: 2026-01-29 17:31:05.597 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:31:05 np0005601226 nova_compute[239456]: 2026-01-29 17:31:05.597 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:31:05 np0005601226 nova_compute[239456]: 2026-01-29 17:31:05.598 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 29 12:31:05 np0005601226 podman[268108]: 2026-01-29 17:31:05.600708521 +0000 UTC m=+0.156346019 container init 82ce2173f819caca9d78f153947f2106839669d80e05fc29a58dafa321ecd868 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_volhard, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 29 12:31:05 np0005601226 nova_compute[239456]: 2026-01-29 17:31:05.603 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:31:05 np0005601226 nova_compute[239456]: 2026-01-29 17:31:05.604 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7c983110-cf, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:31:05 np0005601226 nova_compute[239456]: 2026-01-29 17:31:05.604 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7c983110-cf, col_values=(('external_ids', {'iface-id': '7c983110-cfa8-4df3-ac67-f5a430abcfc0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6d:60:e3', 'vm-uuid': '58d0f64a-66be-4f3d-ba39-68b90ddf8c4f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:31:05 np0005601226 podman[268108]: 2026-01-29 17:31:05.605728237 +0000 UTC m=+0.161365695 container start 82ce2173f819caca9d78f153947f2106839669d80e05fc29a58dafa321ecd868 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_volhard, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 29 12:31:05 np0005601226 NetworkManager[49020]: <info>  [1769707865.6071] manager: (tap7c983110-cf): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/112)
Jan 29 12:31:05 np0005601226 nova_compute[239456]: 2026-01-29 17:31:05.606 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:31:05 np0005601226 podman[268108]: 2026-01-29 17:31:05.609880779 +0000 UTC m=+0.165518287 container attach 82ce2173f819caca9d78f153947f2106839669d80e05fc29a58dafa321ecd868 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_volhard, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 29 12:31:05 np0005601226 nova_compute[239456]: 2026-01-29 17:31:05.610 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 29 12:31:05 np0005601226 nova_compute[239456]: 2026-01-29 17:31:05.613 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:31:05 np0005601226 nova_compute[239456]: 2026-01-29 17:31:05.614 239460 INFO os_vif [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6d:60:e3,bridge_name='br-int',has_traffic_filtering=True,id=7c983110-cfa8-4df3-ac67-f5a430abcfc0,network=Network(3c08c304-2b32-4b44-ac2b-279bb8b2403b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7c983110-cf')#033[00m
Jan 29 12:31:05 np0005601226 nova_compute[239456]: 2026-01-29 17:31:05.664 239460 DEBUG nova.virt.libvirt.driver [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:31:05 np0005601226 nova_compute[239456]: 2026-01-29 17:31:05.665 239460 DEBUG nova.virt.libvirt.driver [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:31:05 np0005601226 nova_compute[239456]: 2026-01-29 17:31:05.665 239460 DEBUG nova.virt.libvirt.driver [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] No VIF found with MAC fa:16:3e:6d:60:e3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 29 12:31:05 np0005601226 nova_compute[239456]: 2026-01-29 17:31:05.666 239460 INFO nova.virt.libvirt.driver [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Using config drive#033[00m
Jan 29 12:31:05 np0005601226 nova_compute[239456]: 2026-01-29 17:31:05.691 239460 DEBUG nova.storage.rbd_utils [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] rbd image 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:31:05 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1656: 305 pgs: 305 active+clean; 350 MiB data, 606 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 24 KiB/s wr, 15 op/s
Jan 29 12:31:06 np0005601226 nervous_volhard[268126]: --> passed data devices: 0 physical, 3 LVM
Jan 29 12:31:06 np0005601226 nervous_volhard[268126]: --> All data devices are unavailable
Jan 29 12:31:06 np0005601226 systemd[1]: libpod-82ce2173f819caca9d78f153947f2106839669d80e05fc29a58dafa321ecd868.scope: Deactivated successfully.
Jan 29 12:31:06 np0005601226 podman[268108]: 2026-01-29 17:31:06.060684075 +0000 UTC m=+0.616321523 container died 82ce2173f819caca9d78f153947f2106839669d80e05fc29a58dafa321ecd868 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_volhard, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 29 12:31:06 np0005601226 systemd[1]: var-lib-containers-storage-overlay-f17f8e0fc43cf0a3e8d78c6846dbcb6970d17d08b93998e5b369b67cb84e9037-merged.mount: Deactivated successfully.
Jan 29 12:31:06 np0005601226 podman[268108]: 2026-01-29 17:31:06.128869131 +0000 UTC m=+0.684506609 container remove 82ce2173f819caca9d78f153947f2106839669d80e05fc29a58dafa321ecd868 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_volhard, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:31:06 np0005601226 systemd[1]: libpod-conmon-82ce2173f819caca9d78f153947f2106839669d80e05fc29a58dafa321ecd868.scope: Deactivated successfully.
Jan 29 12:31:06 np0005601226 nova_compute[239456]: 2026-01-29 17:31:06.161 239460 INFO nova.virt.libvirt.driver [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Creating config drive at /var/lib/nova/instances/58d0f64a-66be-4f3d-ba39-68b90ddf8c4f/disk.config#033[00m
Jan 29 12:31:06 np0005601226 nova_compute[239456]: 2026-01-29 17:31:06.167 239460 DEBUG oslo_concurrency.processutils [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/58d0f64a-66be-4f3d-ba39-68b90ddf8c4f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9ch36s5u execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:31:06 np0005601226 nova_compute[239456]: 2026-01-29 17:31:06.298 239460 DEBUG oslo_concurrency.processutils [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/58d0f64a-66be-4f3d-ba39-68b90ddf8c4f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9ch36s5u" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:31:06 np0005601226 nova_compute[239456]: 2026-01-29 17:31:06.339 239460 DEBUG nova.storage.rbd_utils [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] rbd image 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:31:06 np0005601226 nova_compute[239456]: 2026-01-29 17:31:06.345 239460 DEBUG oslo_concurrency.processutils [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/58d0f64a-66be-4f3d-ba39-68b90ddf8c4f/disk.config 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:31:06 np0005601226 nova_compute[239456]: 2026-01-29 17:31:06.638 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:31:06 np0005601226 podman[268279]: 2026-01-29 17:31:06.66414684 +0000 UTC m=+0.069120052 container create d14b144871cccf6fda9bca15d3a3432b983fc3335617e85eee69ae98f84a26b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 29 12:31:06 np0005601226 nova_compute[239456]: 2026-01-29 17:31:06.719 239460 DEBUG oslo_concurrency.processutils [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/58d0f64a-66be-4f3d-ba39-68b90ddf8c4f/disk.config 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.374s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:31:06 np0005601226 nova_compute[239456]: 2026-01-29 17:31:06.720 239460 INFO nova.virt.libvirt.driver [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Deleting local config drive /var/lib/nova/instances/58d0f64a-66be-4f3d-ba39-68b90ddf8c4f/disk.config because it was imported into RBD.#033[00m
Jan 29 12:31:06 np0005601226 podman[268279]: 2026-01-29 17:31:06.627260637 +0000 UTC m=+0.032233859 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:31:06 np0005601226 systemd[1]: Started libpod-conmon-d14b144871cccf6fda9bca15d3a3432b983fc3335617e85eee69ae98f84a26b1.scope.
Jan 29 12:31:06 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:31:06 np0005601226 podman[268279]: 2026-01-29 17:31:06.752605181 +0000 UTC m=+0.157578383 container init d14b144871cccf6fda9bca15d3a3432b983fc3335617e85eee69ae98f84a26b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_satoshi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:31:06 np0005601226 podman[268279]: 2026-01-29 17:31:06.758094869 +0000 UTC m=+0.163068071 container start d14b144871cccf6fda9bca15d3a3432b983fc3335617e85eee69ae98f84a26b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_satoshi, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 29 12:31:06 np0005601226 kernel: tap7c983110-cf: entered promiscuous mode
Jan 29 12:31:06 np0005601226 ovn_controller[145556]: 2026-01-29T17:31:06Z|00202|binding|INFO|Claiming lport 7c983110-cfa8-4df3-ac67-f5a430abcfc0 for this chassis.
Jan 29 12:31:06 np0005601226 ovn_controller[145556]: 2026-01-29T17:31:06Z|00203|binding|INFO|7c983110-cfa8-4df3-ac67-f5a430abcfc0: Claiming fa:16:3e:6d:60:e3 10.100.0.11
Jan 29 12:31:06 np0005601226 nova_compute[239456]: 2026-01-29 17:31:06.763 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:31:06 np0005601226 conmon[268299]: conmon d14b144871cccf6fda9b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d14b144871cccf6fda9bca15d3a3432b983fc3335617e85eee69ae98f84a26b1.scope/container/memory.events
Jan 29 12:31:06 np0005601226 NetworkManager[49020]: <info>  [1769707866.7644] manager: (tap7c983110-cf): new Tun device (/org/freedesktop/NetworkManager/Devices/113)
Jan 29 12:31:06 np0005601226 systemd[1]: libpod-d14b144871cccf6fda9bca15d3a3432b983fc3335617e85eee69ae98f84a26b1.scope: Deactivated successfully.
Jan 29 12:31:06 np0005601226 reverent_satoshi[268299]: 167 167
Jan 29 12:31:06 np0005601226 ovn_controller[145556]: 2026-01-29T17:31:06Z|00204|binding|INFO|Setting lport 7c983110-cfa8-4df3-ac67-f5a430abcfc0 ovn-installed in OVS
Jan 29 12:31:06 np0005601226 podman[268279]: 2026-01-29 17:31:06.768516979 +0000 UTC m=+0.173490181 container attach d14b144871cccf6fda9bca15d3a3432b983fc3335617e85eee69ae98f84a26b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 29 12:31:06 np0005601226 podman[268279]: 2026-01-29 17:31:06.769496806 +0000 UTC m=+0.174470048 container died d14b144871cccf6fda9bca15d3a3432b983fc3335617e85eee69ae98f84a26b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 29 12:31:06 np0005601226 ovn_controller[145556]: 2026-01-29T17:31:06Z|00205|binding|INFO|Setting lport 7c983110-cfa8-4df3-ac67-f5a430abcfc0 up in Southbound
Jan 29 12:31:06 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:06.779 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6d:60:e3 10.100.0.11'], port_security=['fa:16:3e:6d:60:e3 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '58d0f64a-66be-4f3d-ba39-68b90ddf8c4f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3c08c304-2b32-4b44-ac2b-279bb8b2403b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '420f46ae230d4c529afe366a1b780921', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9cfba344-fbfc-404d-872d-d297b528124f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=43b4da70-6867-4e05-b172-1e52c878ce1d, chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], logical_port=7c983110-cfa8-4df3-ac67-f5a430abcfc0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:31:06 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:06.781 155625 INFO neutron.agent.ovn.metadata.agent [-] Port 7c983110-cfa8-4df3-ac67-f5a430abcfc0 in datapath 3c08c304-2b32-4b44-ac2b-279bb8b2403b bound to our chassis#033[00m
Jan 29 12:31:06 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:06.783 155625 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3c08c304-2b32-4b44-ac2b-279bb8b2403b#033[00m
Jan 29 12:31:06 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:06.791 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[88f6756b-ef0e-4323-b077-4422f11a8ad2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:31:06 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:06.791 155625 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3c08c304-21 in ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 29 12:31:06 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:06.793 246354 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3c08c304-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 29 12:31:06 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:06.793 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[c4828b2e-8006-47bb-b2b5-75f1f163e7a9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:31:06 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:06.794 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[081be8fc-a4ca-446d-974c-ef4202259138]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:31:06 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:06.802 156164 DEBUG oslo.privsep.daemon [-] privsep: reply[6b0cfd4a-ad90-40f9-b542-bc22f974ea43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:31:06 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:06.816 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[93acf86e-be91-4aba-9871-328240700e0e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:31:06 np0005601226 nova_compute[239456]: 2026-01-29 17:31:06.815 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:31:06 np0005601226 systemd-udevd[268329]: Network interface NamePolicy= disabled on kernel command line.
Jan 29 12:31:06 np0005601226 systemd-machined[207561]: New machine qemu-22-instance-00000016.
Jan 29 12:31:06 np0005601226 NetworkManager[49020]: <info>  [1769707866.8336] device (tap7c983110-cf): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 29 12:31:06 np0005601226 NetworkManager[49020]: <info>  [1769707866.8343] device (tap7c983110-cf): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 29 12:31:06 np0005601226 systemd[1]: Started Virtual Machine qemu-22-instance-00000016.
Jan 29 12:31:06 np0005601226 systemd[1]: var-lib-containers-storage-overlay-dbb19949fb481c1d615f5d215b6b01de42248ee6839bc2c99ee596d0bea526d9-merged.mount: Deactivated successfully.
Jan 29 12:31:06 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:06.846 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[a9f17f6b-5580-4a60-9e6f-a52cbcea83e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:31:06 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:06.851 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[d4688acc-31ad-4bb0-82aa-f504d6098334]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:31:06 np0005601226 NetworkManager[49020]: <info>  [1769707866.8527] manager: (tap3c08c304-20): new Veth device (/org/freedesktop/NetworkManager/Devices/114)
Jan 29 12:31:06 np0005601226 systemd-udevd[268335]: Network interface NamePolicy= disabled on kernel command line.
Jan 29 12:31:06 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:06.871 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[8591029f-ec0b-4a95-ad83-b04c36c5d224]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:31:06 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:06.873 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[f56b52d3-654b-46c5-a634-880a2ebcf06a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:31:06 np0005601226 podman[268279]: 2026-01-29 17:31:06.874356209 +0000 UTC m=+0.279329411 container remove d14b144871cccf6fda9bca15d3a3432b983fc3335617e85eee69ae98f84a26b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_satoshi, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:31:06 np0005601226 systemd[1]: libpod-conmon-d14b144871cccf6fda9bca15d3a3432b983fc3335617e85eee69ae98f84a26b1.scope: Deactivated successfully.
Jan 29 12:31:06 np0005601226 NetworkManager[49020]: <info>  [1769707866.8907] device (tap3c08c304-20): carrier: link connected
Jan 29 12:31:06 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:06.897 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[b74a4d3f-5086-4a34-a240-e0be851b5dad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:31:06 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:06.910 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[0afaf4a8-e677-45b0-a76a-ea5b25ff6702]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3c08c304-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:94:51:ac'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 70], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 522039, 'reachable_time': 26044, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268363, 'error': None, 'target': 'ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:31:06 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:06.921 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[743107e2-b26c-470f-98f9-c8a07a1adba9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe94:51ac'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 522039, 'tstamp': 522039}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 268364, 'error': None, 'target': 'ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:31:06 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:06.933 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[fdcfe938-9ba1-426b-889f-dee6051bf548]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3c08c304-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:94:51:ac'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 70], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 522039, 'reachable_time': 26044, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 268365, 'error': None, 'target': 'ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:31:06 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:06.953 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[4babd4aa-57c0-4f39-ad4a-7faeef86997e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:31:06 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:06.994 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[6327615e-5a1d-4fe5-bf66-744f95a31f58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:31:06 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:06.996 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c08c304-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:31:06 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:06.996 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 29 12:31:06 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:06.997 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3c08c304-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:31:06 np0005601226 nova_compute[239456]: 2026-01-29 17:31:06.998 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:31:06 np0005601226 kernel: tap3c08c304-20: entered promiscuous mode
Jan 29 12:31:07 np0005601226 NetworkManager[49020]: <info>  [1769707867.0005] manager: (tap3c08c304-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/115)
Jan 29 12:31:07 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:07.005 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3c08c304-20, col_values=(('external_ids', {'iface-id': '4f9b16f1-6965-486d-bc02-ab1e4969963e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:31:07 np0005601226 ovn_controller[145556]: 2026-01-29T17:31:07Z|00206|binding|INFO|Releasing lport 4f9b16f1-6965-486d-bc02-ab1e4969963e from this chassis (sb_readonly=0)
Jan 29 12:31:07 np0005601226 nova_compute[239456]: 2026-01-29 17:31:07.006 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:31:07 np0005601226 nova_compute[239456]: 2026-01-29 17:31:07.008 239460 DEBUG nova.network.neutron [req-908d2ea8-ceb9-4f96-9afa-99f2f6929dcf req-eaf01027-d422-40bd-93fd-32cfa413afa6 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Updated VIF entry in instance network info cache for port 7c983110-cfa8-4df3-ac67-f5a430abcfc0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 29 12:31:07 np0005601226 nova_compute[239456]: 2026-01-29 17:31:07.008 239460 DEBUG nova.network.neutron [req-908d2ea8-ceb9-4f96-9afa-99f2f6929dcf req-eaf01027-d422-40bd-93fd-32cfa413afa6 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Updating instance_info_cache with network_info: [{"id": "7c983110-cfa8-4df3-ac67-f5a430abcfc0", "address": "fa:16:3e:6d:60:e3", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c983110-cf", "ovs_interfaceid": "7c983110-cfa8-4df3-ac67-f5a430abcfc0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:31:07 np0005601226 nova_compute[239456]: 2026-01-29 17:31:07.011 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:31:07 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:07.012 155625 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3c08c304-2b32-4b44-ac2b-279bb8b2403b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3c08c304-2b32-4b44-ac2b-279bb8b2403b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 29 12:31:07 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:07.013 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[f91d4840-89d6-4f90-822b-0e948b144ab5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:31:07 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:07.014 155625 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 29 12:31:07 np0005601226 ovn_metadata_agent[155620]: global
Jan 29 12:31:07 np0005601226 ovn_metadata_agent[155620]:    log         /dev/log local0 debug
Jan 29 12:31:07 np0005601226 ovn_metadata_agent[155620]:    log-tag     haproxy-metadata-proxy-3c08c304-2b32-4b44-ac2b-279bb8b2403b
Jan 29 12:31:07 np0005601226 ovn_metadata_agent[155620]:    user        root
Jan 29 12:31:07 np0005601226 ovn_metadata_agent[155620]:    group       root
Jan 29 12:31:07 np0005601226 ovn_metadata_agent[155620]:    maxconn     1024
Jan 29 12:31:07 np0005601226 ovn_metadata_agent[155620]:    pidfile     /var/lib/neutron/external/pids/3c08c304-2b32-4b44-ac2b-279bb8b2403b.pid.haproxy
Jan 29 12:31:07 np0005601226 ovn_metadata_agent[155620]:    daemon
Jan 29 12:31:07 np0005601226 ovn_metadata_agent[155620]: 
Jan 29 12:31:07 np0005601226 ovn_metadata_agent[155620]: defaults
Jan 29 12:31:07 np0005601226 ovn_metadata_agent[155620]:    log global
Jan 29 12:31:07 np0005601226 ovn_metadata_agent[155620]:    mode http
Jan 29 12:31:07 np0005601226 ovn_metadata_agent[155620]:    option httplog
Jan 29 12:31:07 np0005601226 ovn_metadata_agent[155620]:    option dontlognull
Jan 29 12:31:07 np0005601226 ovn_metadata_agent[155620]:    option http-server-close
Jan 29 12:31:07 np0005601226 ovn_metadata_agent[155620]:    option forwardfor
Jan 29 12:31:07 np0005601226 ovn_metadata_agent[155620]:    retries                 3
Jan 29 12:31:07 np0005601226 ovn_metadata_agent[155620]:    timeout http-request    30s
Jan 29 12:31:07 np0005601226 ovn_metadata_agent[155620]:    timeout connect         30s
Jan 29 12:31:07 np0005601226 ovn_metadata_agent[155620]:    timeout client          32s
Jan 29 12:31:07 np0005601226 ovn_metadata_agent[155620]:    timeout server          32s
Jan 29 12:31:07 np0005601226 ovn_metadata_agent[155620]:    timeout http-keep-alive 30s
Jan 29 12:31:07 np0005601226 ovn_metadata_agent[155620]: 
Jan 29 12:31:07 np0005601226 ovn_metadata_agent[155620]: 
Jan 29 12:31:07 np0005601226 ovn_metadata_agent[155620]: listen listener
Jan 29 12:31:07 np0005601226 ovn_metadata_agent[155620]:    bind 169.254.169.254:80
Jan 29 12:31:07 np0005601226 ovn_metadata_agent[155620]:    server metadata /var/lib/neutron/metadata_proxy
Jan 29 12:31:07 np0005601226 ovn_metadata_agent[155620]:    http-request add-header X-OVN-Network-ID 3c08c304-2b32-4b44-ac2b-279bb8b2403b
Jan 29 12:31:07 np0005601226 ovn_metadata_agent[155620]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 29 12:31:07 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:07.014 155625 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b', 'env', 'PROCESS_TAG=haproxy-3c08c304-2b32-4b44-ac2b-279bb8b2403b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3c08c304-2b32-4b44-ac2b-279bb8b2403b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 29 12:31:07 np0005601226 nova_compute[239456]: 2026-01-29 17:31:07.022 239460 DEBUG oslo_concurrency.lockutils [req-908d2ea8-ceb9-4f96-9afa-99f2f6929dcf req-eaf01027-d422-40bd-93fd-32cfa413afa6 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Releasing lock "refresh_cache-58d0f64a-66be-4f3d-ba39-68b90ddf8c4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:31:07 np0005601226 podman[268374]: 2026-01-29 17:31:07.085285098 +0000 UTC m=+0.117377601 container create c8131b34b558e7178b8577feb0013f82db54b5d603a940a1ff629eabedb75d31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_wilbur, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 29 12:31:07 np0005601226 podman[268374]: 2026-01-29 17:31:06.991480783 +0000 UTC m=+0.023573296 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:31:07 np0005601226 nova_compute[239456]: 2026-01-29 17:31:07.150 239460 DEBUG nova.compute.manager [req-ada418c6-43fe-4604-9a9d-0e680ef9d24b req-f0bb496a-4fb1-44da-b980-20799481b750 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Received event network-vif-plugged-7c983110-cfa8-4df3-ac67-f5a430abcfc0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:31:07 np0005601226 nova_compute[239456]: 2026-01-29 17:31:07.150 239460 DEBUG oslo_concurrency.lockutils [req-ada418c6-43fe-4604-9a9d-0e680ef9d24b req-f0bb496a-4fb1-44da-b980-20799481b750 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "58d0f64a-66be-4f3d-ba39-68b90ddf8c4f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:31:07 np0005601226 nova_compute[239456]: 2026-01-29 17:31:07.150 239460 DEBUG oslo_concurrency.lockutils [req-ada418c6-43fe-4604-9a9d-0e680ef9d24b req-f0bb496a-4fb1-44da-b980-20799481b750 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "58d0f64a-66be-4f3d-ba39-68b90ddf8c4f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:31:07 np0005601226 nova_compute[239456]: 2026-01-29 17:31:07.151 239460 DEBUG oslo_concurrency.lockutils [req-ada418c6-43fe-4604-9a9d-0e680ef9d24b req-f0bb496a-4fb1-44da-b980-20799481b750 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "58d0f64a-66be-4f3d-ba39-68b90ddf8c4f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:31:07 np0005601226 nova_compute[239456]: 2026-01-29 17:31:07.151 239460 DEBUG nova.compute.manager [req-ada418c6-43fe-4604-9a9d-0e680ef9d24b req-f0bb496a-4fb1-44da-b980-20799481b750 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Processing event network-vif-plugged-7c983110-cfa8-4df3-ac67-f5a430abcfc0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 29 12:31:07 np0005601226 systemd[1]: Started libpod-conmon-c8131b34b558e7178b8577feb0013f82db54b5d603a940a1ff629eabedb75d31.scope.
Jan 29 12:31:07 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:31:07 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/149de5719e8540cc9f4c9f929aec30851d1145afa407055018839d8eb133e806/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:31:07 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/149de5719e8540cc9f4c9f929aec30851d1145afa407055018839d8eb133e806/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:31:07 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/149de5719e8540cc9f4c9f929aec30851d1145afa407055018839d8eb133e806/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:31:07 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/149de5719e8540cc9f4c9f929aec30851d1145afa407055018839d8eb133e806/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:31:07 np0005601226 podman[268374]: 2026-01-29 17:31:07.20241063 +0000 UTC m=+0.234503133 container init c8131b34b558e7178b8577feb0013f82db54b5d603a940a1ff629eabedb75d31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_wilbur, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True)
Jan 29 12:31:07 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:31:07 np0005601226 podman[268374]: 2026-01-29 17:31:07.209033849 +0000 UTC m=+0.241126332 container start c8131b34b558e7178b8577feb0013f82db54b5d603a940a1ff629eabedb75d31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_wilbur, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 29 12:31:07 np0005601226 podman[268374]: 2026-01-29 17:31:07.212287046 +0000 UTC m=+0.244379549 container attach c8131b34b558e7178b8577feb0013f82db54b5d603a940a1ff629eabedb75d31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:31:07 np0005601226 nova_compute[239456]: 2026-01-29 17:31:07.255 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769707867.2546787, 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:31:07 np0005601226 nova_compute[239456]: 2026-01-29 17:31:07.255 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] VM Started (Lifecycle Event)#033[00m
Jan 29 12:31:07 np0005601226 nova_compute[239456]: 2026-01-29 17:31:07.257 239460 DEBUG nova.compute.manager [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 29 12:31:07 np0005601226 nova_compute[239456]: 2026-01-29 17:31:07.260 239460 DEBUG nova.virt.libvirt.driver [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 29 12:31:07 np0005601226 nova_compute[239456]: 2026-01-29 17:31:07.267 239460 INFO nova.virt.libvirt.driver [-] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Instance spawned successfully.#033[00m
Jan 29 12:31:07 np0005601226 nova_compute[239456]: 2026-01-29 17:31:07.267 239460 DEBUG nova.virt.libvirt.driver [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 29 12:31:07 np0005601226 nova_compute[239456]: 2026-01-29 17:31:07.298 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:31:07 np0005601226 nova_compute[239456]: 2026-01-29 17:31:07.304 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 29 12:31:07 np0005601226 nova_compute[239456]: 2026-01-29 17:31:07.308 239460 DEBUG nova.virt.libvirt.driver [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:31:07 np0005601226 nova_compute[239456]: 2026-01-29 17:31:07.308 239460 DEBUG nova.virt.libvirt.driver [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 29 12:31:07 np0005601226 nova_compute[239456]: 2026-01-29 17:31:07.309 239460 DEBUG nova.virt.libvirt.driver [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 29 12:31:07 np0005601226 nova_compute[239456]: 2026-01-29 17:31:07.309 239460 DEBUG nova.virt.libvirt.driver [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 29 12:31:07 np0005601226 nova_compute[239456]: 2026-01-29 17:31:07.309 239460 DEBUG nova.virt.libvirt.driver [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 29 12:31:07 np0005601226 nova_compute[239456]: 2026-01-29 17:31:07.309 239460 DEBUG nova.virt.libvirt.driver [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 29 12:31:07 np0005601226 nova_compute[239456]: 2026-01-29 17:31:07.359 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 29 12:31:07 np0005601226 nova_compute[239456]: 2026-01-29 17:31:07.359 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769707867.2548594, 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 29 12:31:07 np0005601226 nova_compute[239456]: 2026-01-29 17:31:07.359 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] VM Paused (Lifecycle Event)
Jan 29 12:31:07 np0005601226 nova_compute[239456]: 2026-01-29 17:31:07.381 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 29 12:31:07 np0005601226 nova_compute[239456]: 2026-01-29 17:31:07.384 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769707867.2600002, 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 29 12:31:07 np0005601226 nova_compute[239456]: 2026-01-29 17:31:07.384 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] VM Resumed (Lifecycle Event)
Jan 29 12:31:07 np0005601226 podman[268465]: 2026-01-29 17:31:07.389524348 +0000 UTC m=+0.063458959 container create fee16ec35ca78519fff67d603ae347991f6658278a6749dc246f8adec87cca64 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:31:07 np0005601226 nova_compute[239456]: 2026-01-29 17:31:07.390 239460 INFO nova.compute.manager [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Took 4.91 seconds to spawn the instance on the hypervisor.
Jan 29 12:31:07 np0005601226 nova_compute[239456]: 2026-01-29 17:31:07.390 239460 DEBUG nova.compute.manager [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 29 12:31:07 np0005601226 nova_compute[239456]: 2026-01-29 17:31:07.401 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 29 12:31:07 np0005601226 nova_compute[239456]: 2026-01-29 17:31:07.408 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 29 12:31:07 np0005601226 systemd[1]: Started libpod-conmon-fee16ec35ca78519fff67d603ae347991f6658278a6749dc246f8adec87cca64.scope.
Jan 29 12:31:07 np0005601226 nova_compute[239456]: 2026-01-29 17:31:07.440 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 29 12:31:07 np0005601226 podman[268465]: 2026-01-29 17:31:07.348157054 +0000 UTC m=+0.022091685 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 29 12:31:07 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:31:07 np0005601226 nova_compute[239456]: 2026-01-29 17:31:07.458 239460 INFO nova.compute.manager [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Took 7.17 seconds to build instance.
Jan 29 12:31:07 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6907652b9ab969a6cb6d496f0ee63fa4d1810df89a5d718a6364c4588dd8f8a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 29 12:31:07 np0005601226 nova_compute[239456]: 2026-01-29 17:31:07.472 239460 DEBUG oslo_concurrency.lockutils [None req-b7b3dea6-9063-4ac9-b0d2-46669d2eee2e 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "58d0f64a-66be-4f3d-ba39-68b90ddf8c4f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.257s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]: {
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:    "0": [
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:        {
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:            "devices": [
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:                "/dev/loop3"
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:            ],
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:            "lv_name": "ceph_lv0",
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:            "lv_size": "21470642176",
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:            "lv_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:            "name": "ceph_lv0",
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:            "tags": {
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:                "ceph.block_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:                "ceph.cluster_name": "ceph",
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:                "ceph.crush_device_class": "",
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:                "ceph.encrypted": "0",
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:                "ceph.objectstore": "bluestore",
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:                "ceph.osd_fsid": "0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1",
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:                "ceph.osd_id": "0",
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:                "ceph.type": "block",
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:                "ceph.vdo": "0",
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:                "ceph.with_tpm": "0"
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:            },
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:            "type": "block",
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:            "vg_name": "ceph_vg0"
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:        }
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:    ],
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:    "1": [
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:        {
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:            "devices": [
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:                "/dev/loop4"
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:            ],
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:            "lv_name": "ceph_lv1",
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:            "lv_size": "21470642176",
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b59b9ee3-7bef-4274-a8bf-0f9cce011ae7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:            "lv_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:            "name": "ceph_lv1",
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:            "tags": {
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:                "ceph.block_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:                "ceph.cluster_name": "ceph",
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:                "ceph.crush_device_class": "",
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:                "ceph.encrypted": "0",
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:                "ceph.objectstore": "bluestore",
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:                "ceph.osd_fsid": "b59b9ee3-7bef-4274-a8bf-0f9cce011ae7",
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:                "ceph.osd_id": "1",
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:                "ceph.type": "block",
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:                "ceph.vdo": "0",
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:                "ceph.with_tpm": "0"
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:            },
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:            "type": "block",
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:            "vg_name": "ceph_vg1"
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:        }
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:    ],
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:    "2": [
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:        {
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:            "devices": [
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:                "/dev/loop5"
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:            ],
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:            "lv_name": "ceph_lv2",
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:            "lv_size": "21470642176",
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=791d3808-828d-4f85-a3de-28df49f6a6ef,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:            "lv_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:            "name": "ceph_lv2",
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:            "tags": {
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:                "ceph.block_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:                "ceph.cluster_name": "ceph",
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:                "ceph.crush_device_class": "",
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:                "ceph.encrypted": "0",
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:                "ceph.objectstore": "bluestore",
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:                "ceph.osd_fsid": "791d3808-828d-4f85-a3de-28df49f6a6ef",
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:                "ceph.osd_id": "2",
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:                "ceph.type": "block",
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:                "ceph.vdo": "0",
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:                "ceph.with_tpm": "0"
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:            },
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:            "type": "block",
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:            "vg_name": "ceph_vg2"
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:        }
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]:    ]
Jan 29 12:31:07 np0005601226 bold_wilbur[268428]: }
Jan 29 12:31:07 np0005601226 systemd[1]: libpod-c8131b34b558e7178b8577feb0013f82db54b5d603a940a1ff629eabedb75d31.scope: Deactivated successfully.
Jan 29 12:31:07 np0005601226 podman[268465]: 2026-01-29 17:31:07.520495763 +0000 UTC m=+0.194430384 container init fee16ec35ca78519fff67d603ae347991f6658278a6749dc246f8adec87cca64 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 29 12:31:07 np0005601226 podman[268374]: 2026-01-29 17:31:07.529505056 +0000 UTC m=+0.561597539 container died c8131b34b558e7178b8577feb0013f82db54b5d603a940a1ff629eabedb75d31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_wilbur, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:31:07 np0005601226 podman[268465]: 2026-01-29 17:31:07.534532731 +0000 UTC m=+0.208467342 container start fee16ec35ca78519fff67d603ae347991f6658278a6749dc246f8adec87cca64 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:31:07 np0005601226 neutron-haproxy-ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b[268484]: [NOTICE]   (268495) : New worker (268503) forked
Jan 29 12:31:07 np0005601226 neutron-haproxy-ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b[268484]: [NOTICE]   (268495) : Loading success.
Jan 29 12:31:07 np0005601226 podman[268374]: 2026-01-29 17:31:07.574903448 +0000 UTC m=+0.606995931 container remove c8131b34b558e7178b8577feb0013f82db54b5d603a940a1ff629eabedb75d31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:31:07 np0005601226 systemd[1]: libpod-conmon-c8131b34b558e7178b8577feb0013f82db54b5d603a940a1ff629eabedb75d31.scope: Deactivated successfully.
Jan 29 12:31:07 np0005601226 systemd[1]: var-lib-containers-storage-overlay-149de5719e8540cc9f4c9f929aec30851d1145afa407055018839d8eb133e806-merged.mount: Deactivated successfully.
Jan 29 12:31:07 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1657: 305 pgs: 305 active+clean; 350 MiB data, 606 MiB used, 59 GiB / 60 GiB avail; 8.2 KiB/s rd, 4.8 KiB/s wr, 10 op/s
Jan 29 12:31:08 np0005601226 podman[268575]: 2026-01-29 17:31:08.080386126 +0000 UTC m=+0.035654350 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:31:08 np0005601226 podman[268575]: 2026-01-29 17:31:08.202833243 +0000 UTC m=+0.158101447 container create 695f103f8f1d1a76aceb3b43e971728f6cfa784dde2cfe3e228cd833d269686d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_montalcini, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:31:08 np0005601226 systemd[1]: Started libpod-conmon-695f103f8f1d1a76aceb3b43e971728f6cfa784dde2cfe3e228cd833d269686d.scope.
Jan 29 12:31:08 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:31:08 np0005601226 podman[268575]: 2026-01-29 17:31:08.469733517 +0000 UTC m=+0.425001751 container init 695f103f8f1d1a76aceb3b43e971728f6cfa784dde2cfe3e228cd833d269686d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 29 12:31:08 np0005601226 podman[268575]: 2026-01-29 17:31:08.482508622 +0000 UTC m=+0.437776856 container start 695f103f8f1d1a76aceb3b43e971728f6cfa784dde2cfe3e228cd833d269686d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 29 12:31:08 np0005601226 competent_montalcini[268591]: 167 167
Jan 29 12:31:08 np0005601226 systemd[1]: libpod-695f103f8f1d1a76aceb3b43e971728f6cfa784dde2cfe3e228cd833d269686d.scope: Deactivated successfully.
Jan 29 12:31:08 np0005601226 podman[268575]: 2026-01-29 17:31:08.491894404 +0000 UTC m=+0.447162628 container attach 695f103f8f1d1a76aceb3b43e971728f6cfa784dde2cfe3e228cd833d269686d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_montalcini, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 29 12:31:08 np0005601226 podman[268575]: 2026-01-29 17:31:08.492233123 +0000 UTC m=+0.447501347 container died 695f103f8f1d1a76aceb3b43e971728f6cfa784dde2cfe3e228cd833d269686d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_montalcini, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:31:08 np0005601226 systemd[1]: var-lib-containers-storage-overlay-c2ad248433ae77c50f768c48854aa2f55fbc14f2daf316a1bad6b16dd66a92f3-merged.mount: Deactivated successfully.
Jan 29 12:31:08 np0005601226 podman[268575]: 2026-01-29 17:31:08.700746036 +0000 UTC m=+0.656014280 container remove 695f103f8f1d1a76aceb3b43e971728f6cfa784dde2cfe3e228cd833d269686d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 29 12:31:08 np0005601226 nova_compute[239456]: 2026-01-29 17:31:08.733 239460 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769707853.7317905, a25c53a6-69cb-4591-96a0-ba339283350e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 29 12:31:08 np0005601226 nova_compute[239456]: 2026-01-29 17:31:08.733 239460 INFO nova.compute.manager [-] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] VM Stopped (Lifecycle Event)
Jan 29 12:31:08 np0005601226 systemd[1]: libpod-conmon-695f103f8f1d1a76aceb3b43e971728f6cfa784dde2cfe3e228cd833d269686d.scope: Deactivated successfully.
Jan 29 12:31:08 np0005601226 nova_compute[239456]: 2026-01-29 17:31:08.752 239460 DEBUG nova.compute.manager [None req-8e33253a-e6a5-4275-a292-9f3eda7ed178 - - - - - -] [instance: a25c53a6-69cb-4591-96a0-ba339283350e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 29 12:31:08 np0005601226 podman[268615]: 2026-01-29 17:31:08.957119638 +0000 UTC m=+0.073619452 container create f83d0faf68456d08d8a2f1381c61012303547921f7cc0402378b151a8e25e64d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_mendel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:31:09 np0005601226 podman[268615]: 2026-01-29 17:31:08.912683312 +0000 UTC m=+0.029183126 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:31:09 np0005601226 systemd[1]: Started libpod-conmon-f83d0faf68456d08d8a2f1381c61012303547921f7cc0402378b151a8e25e64d.scope.
Jan 29 12:31:09 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:31:09 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/075acd21dbad1245889b1fea224967f369f39019bc78e4b0ed79354dce66b741/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:31:09 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/075acd21dbad1245889b1fea224967f369f39019bc78e4b0ed79354dce66b741/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:31:09 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/075acd21dbad1245889b1fea224967f369f39019bc78e4b0ed79354dce66b741/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:31:09 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/075acd21dbad1245889b1fea224967f369f39019bc78e4b0ed79354dce66b741/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:31:09 np0005601226 podman[268615]: 2026-01-29 17:31:09.132310175 +0000 UTC m=+0.248810009 container init f83d0faf68456d08d8a2f1381c61012303547921f7cc0402378b151a8e25e64d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_mendel, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 29 12:31:09 np0005601226 podman[268615]: 2026-01-29 17:31:09.140236808 +0000 UTC m=+0.256736632 container start f83d0faf68456d08d8a2f1381c61012303547921f7cc0402378b151a8e25e64d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_mendel, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Jan 29 12:31:09 np0005601226 podman[268615]: 2026-01-29 17:31:09.161774488 +0000 UTC m=+0.278274302 container attach f83d0faf68456d08d8a2f1381c61012303547921f7cc0402378b151a8e25e64d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_mendel, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 12:31:09 np0005601226 nova_compute[239456]: 2026-01-29 17:31:09.215 239460 DEBUG nova.compute.manager [req-1faaf29c-15df-4444-8ee6-919d9335e046 req-c6bb68e8-ba81-470d-ad21-3a401a802aa3 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Received event network-vif-plugged-7c983110-cfa8-4df3-ac67-f5a430abcfc0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:31:09 np0005601226 nova_compute[239456]: 2026-01-29 17:31:09.216 239460 DEBUG oslo_concurrency.lockutils [req-1faaf29c-15df-4444-8ee6-919d9335e046 req-c6bb68e8-ba81-470d-ad21-3a401a802aa3 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "58d0f64a-66be-4f3d-ba39-68b90ddf8c4f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:31:09 np0005601226 nova_compute[239456]: 2026-01-29 17:31:09.216 239460 DEBUG oslo_concurrency.lockutils [req-1faaf29c-15df-4444-8ee6-919d9335e046 req-c6bb68e8-ba81-470d-ad21-3a401a802aa3 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "58d0f64a-66be-4f3d-ba39-68b90ddf8c4f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:31:09 np0005601226 nova_compute[239456]: 2026-01-29 17:31:09.217 239460 DEBUG oslo_concurrency.lockutils [req-1faaf29c-15df-4444-8ee6-919d9335e046 req-c6bb68e8-ba81-470d-ad21-3a401a802aa3 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "58d0f64a-66be-4f3d-ba39-68b90ddf8c4f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:31:09 np0005601226 nova_compute[239456]: 2026-01-29 17:31:09.217 239460 DEBUG nova.compute.manager [req-1faaf29c-15df-4444-8ee6-919d9335e046 req-c6bb68e8-ba81-470d-ad21-3a401a802aa3 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] No waiting events found dispatching network-vif-plugged-7c983110-cfa8-4df3-ac67-f5a430abcfc0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 29 12:31:09 np0005601226 nova_compute[239456]: 2026-01-29 17:31:09.217 239460 WARNING nova.compute.manager [req-1faaf29c-15df-4444-8ee6-919d9335e046 req-c6bb68e8-ba81-470d-ad21-3a401a802aa3 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Received unexpected event network-vif-plugged-7c983110-cfa8-4df3-ac67-f5a430abcfc0 for instance with vm_state active and task_state None.#033[00m
Jan 29 12:31:09 np0005601226 nova_compute[239456]: 2026-01-29 17:31:09.260 239460 DEBUG oslo_concurrency.lockutils [None req-3f2f116b-ef23-4b42-94d1-2cdd45bee009 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Acquiring lock "4c4d76ac-3711-4858-90a1-7e43dc5ff7e4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:31:09 np0005601226 nova_compute[239456]: 2026-01-29 17:31:09.260 239460 DEBUG oslo_concurrency.lockutils [None req-3f2f116b-ef23-4b42-94d1-2cdd45bee009 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Lock "4c4d76ac-3711-4858-90a1-7e43dc5ff7e4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:31:09 np0005601226 nova_compute[239456]: 2026-01-29 17:31:09.261 239460 DEBUG oslo_concurrency.lockutils [None req-3f2f116b-ef23-4b42-94d1-2cdd45bee009 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Acquiring lock "4c4d76ac-3711-4858-90a1-7e43dc5ff7e4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:31:09 np0005601226 nova_compute[239456]: 2026-01-29 17:31:09.261 239460 DEBUG oslo_concurrency.lockutils [None req-3f2f116b-ef23-4b42-94d1-2cdd45bee009 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Lock "4c4d76ac-3711-4858-90a1-7e43dc5ff7e4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:31:09 np0005601226 nova_compute[239456]: 2026-01-29 17:31:09.261 239460 DEBUG oslo_concurrency.lockutils [None req-3f2f116b-ef23-4b42-94d1-2cdd45bee009 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Lock "4c4d76ac-3711-4858-90a1-7e43dc5ff7e4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:31:09 np0005601226 nova_compute[239456]: 2026-01-29 17:31:09.262 239460 INFO nova.compute.manager [None req-3f2f116b-ef23-4b42-94d1-2cdd45bee009 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] Terminating instance#033[00m
Jan 29 12:31:09 np0005601226 nova_compute[239456]: 2026-01-29 17:31:09.263 239460 DEBUG nova.compute.manager [None req-3f2f116b-ef23-4b42-94d1-2cdd45bee009 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 29 12:31:09 np0005601226 kernel: tapc8b9d9fc-19 (unregistering): left promiscuous mode
Jan 29 12:31:09 np0005601226 NetworkManager[49020]: <info>  [1769707869.6843] device (tapc8b9d9fc-19): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 29 12:31:09 np0005601226 nova_compute[239456]: 2026-01-29 17:31:09.689 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:31:09 np0005601226 nova_compute[239456]: 2026-01-29 17:31:09.691 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:31:09 np0005601226 ovn_controller[145556]: 2026-01-29T17:31:09Z|00207|binding|INFO|Releasing lport c8b9d9fc-1915-4db3-8869-f69770c88894 from this chassis (sb_readonly=0)
Jan 29 12:31:09 np0005601226 ovn_controller[145556]: 2026-01-29T17:31:09Z|00208|binding|INFO|Setting lport c8b9d9fc-1915-4db3-8869-f69770c88894 down in Southbound
Jan 29 12:31:09 np0005601226 ovn_controller[145556]: 2026-01-29T17:31:09Z|00209|binding|INFO|Removing iface tapc8b9d9fc-19 ovn-installed in OVS
Jan 29 12:31:09 np0005601226 nova_compute[239456]: 2026-01-29 17:31:09.697 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:31:09 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:09.698 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:ae:b3 10.100.0.7'], port_security=['fa:16:3e:30:ae:b3 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '4c4d76ac-3711-4858-90a1-7e43dc5ff7e4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-25cf1715-f178-4f65-be7c-cf203c28f072', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c74297072cc041019fc7ff4bff1a0f08', 'neutron:revision_number': '4', 'neutron:security_group_ids': '45c1bbdb-777c-4906-ac59-7f4e97f55f2c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.217'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3018d9f1-e4b1-490d-94c7-3ffd5dd36627, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], logical_port=c8b9d9fc-1915-4db3-8869-f69770c88894) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:31:09 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:09.700 155625 INFO neutron.agent.ovn.metadata.agent [-] Port c8b9d9fc-1915-4db3-8869-f69770c88894 in datapath 25cf1715-f178-4f65-be7c-cf203c28f072 unbound from our chassis#033[00m
Jan 29 12:31:09 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:09.702 155625 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 25cf1715-f178-4f65-be7c-cf203c28f072, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 29 12:31:09 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:09.704 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[062b0644-365f-4426-afda-0c735fe87735]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:31:09 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:09.705 155625 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072 namespace which is not needed anymore#033[00m
Jan 29 12:31:09 np0005601226 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000014.scope: Deactivated successfully.
Jan 29 12:31:09 np0005601226 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000014.scope: Consumed 14.809s CPU time.
Jan 29 12:31:09 np0005601226 systemd-machined[207561]: Machine qemu-20-instance-00000014 terminated.
Jan 29 12:31:09 np0005601226 lvm[268744]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 12:31:09 np0005601226 lvm[268744]: VG ceph_vg0 finished
Jan 29 12:31:09 np0005601226 neutron-haproxy-ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072[267357]: [NOTICE]   (267361) : haproxy version is 2.8.14-c23fe91
Jan 29 12:31:09 np0005601226 neutron-haproxy-ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072[267357]: [NOTICE]   (267361) : path to executable is /usr/sbin/haproxy
Jan 29 12:31:09 np0005601226 neutron-haproxy-ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072[267357]: [WARNING]  (267361) : Exiting Master process...
Jan 29 12:31:09 np0005601226 neutron-haproxy-ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072[267357]: [WARNING]  (267361) : Exiting Master process...
Jan 29 12:31:09 np0005601226 lvm[268746]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 12:31:09 np0005601226 lvm[268746]: VG ceph_vg1 finished
Jan 29 12:31:09 np0005601226 neutron-haproxy-ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072[267357]: [ALERT]    (267361) : Current worker (267363) exited with code 143 (Terminated)
Jan 29 12:31:09 np0005601226 neutron-haproxy-ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072[267357]: [WARNING]  (267361) : All workers exited. Exiting... (0)
Jan 29 12:31:09 np0005601226 systemd[1]: libpod-6a0dcc9e27ab69260c27fe18683d952a5fc5470ed856c430ff355de18fc0f6e4.scope: Deactivated successfully.
Jan 29 12:31:09 np0005601226 lvm[268747]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 12:31:09 np0005601226 lvm[268747]: VG ceph_vg2 finished
Jan 29 12:31:09 np0005601226 podman[268723]: 2026-01-29 17:31:09.897054092 +0000 UTC m=+0.104778392 container died 6a0dcc9e27ab69260c27fe18683d952a5fc5470ed856c430ff355de18fc0f6e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 29 12:31:09 np0005601226 nova_compute[239456]: 2026-01-29 17:31:09.904 239460 INFO nova.virt.libvirt.driver [-] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] Instance destroyed successfully.#033[00m
Jan 29 12:31:09 np0005601226 nova_compute[239456]: 2026-01-29 17:31:09.905 239460 DEBUG nova.objects.instance [None req-3f2f116b-ef23-4b42-94d1-2cdd45bee009 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Lazy-loading 'resources' on Instance uuid 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:31:09 np0005601226 nova_compute[239456]: 2026-01-29 17:31:09.922 239460 DEBUG nova.virt.libvirt.vif [None req-3f2f116b-ef23-4b42-94d1-2cdd45bee009 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-29T17:30:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1788232795',display_name='tempest-TransferEncryptedVolumeTest-server-1788232795',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1788232795',id=20,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDgD45Dm8MtAL32WS9smIaOmM8jSZyGBgHt0KuuP4tAN+PaFbPD2gY+bvOWoixBRmKRVNeRJWxYw4x1d/JqSF+Q3lf37438lc/Bafac9K9BPV+ZkjGBum9rZonwt+cLWAQ==',key_name='tempest-TransferEncryptedVolumeTest-1822701009',keypairs=<?>,launch_index=0,launched_at=2026-01-29T17:30:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c74297072cc041019fc7ff4bff1a0f08',ramdisk_id='',reservation_id='r-udgyo0iy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TransferEncryptedVolumeTest-1262552887',owner_user_name='tempest-TransferEncryptedVolumeTest-1262552887-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-29T17:30:31Z,user_data=None,user_id='4f278bc1afe946ca991a0203a74c5a7f',uuid=4c4d76ac-3711-4858-90a1-7e43dc5ff7e4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c8b9d9fc-1915-4db3-8869-f69770c88894", "address": "fa:16:3e:30:ae:b3", "network": {"id": "25cf1715-f178-4f65-be7c-cf203c28f072", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-516658087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c74297072cc041019fc7ff4bff1a0f08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8b9d9fc-19", "ovs_interfaceid": "c8b9d9fc-1915-4db3-8869-f69770c88894", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 29 12:31:09 np0005601226 nova_compute[239456]: 2026-01-29 17:31:09.923 239460 DEBUG nova.network.os_vif_util [None req-3f2f116b-ef23-4b42-94d1-2cdd45bee009 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Converting VIF {"id": "c8b9d9fc-1915-4db3-8869-f69770c88894", "address": "fa:16:3e:30:ae:b3", "network": {"id": "25cf1715-f178-4f65-be7c-cf203c28f072", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-516658087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c74297072cc041019fc7ff4bff1a0f08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8b9d9fc-19", "ovs_interfaceid": "c8b9d9fc-1915-4db3-8869-f69770c88894", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 29 12:31:09 np0005601226 nova_compute[239456]: 2026-01-29 17:31:09.924 239460 DEBUG nova.network.os_vif_util [None req-3f2f116b-ef23-4b42-94d1-2cdd45bee009 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:30:ae:b3,bridge_name='br-int',has_traffic_filtering=True,id=c8b9d9fc-1915-4db3-8869-f69770c88894,network=Network(25cf1715-f178-4f65-be7c-cf203c28f072),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8b9d9fc-19') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 29 12:31:09 np0005601226 nova_compute[239456]: 2026-01-29 17:31:09.924 239460 DEBUG os_vif [None req-3f2f116b-ef23-4b42-94d1-2cdd45bee009 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:30:ae:b3,bridge_name='br-int',has_traffic_filtering=True,id=c8b9d9fc-1915-4db3-8869-f69770c88894,network=Network(25cf1715-f178-4f65-be7c-cf203c28f072),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8b9d9fc-19') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 29 12:31:09 np0005601226 nova_compute[239456]: 2026-01-29 17:31:09.926 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:31:09 np0005601226 nova_compute[239456]: 2026-01-29 17:31:09.926 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc8b9d9fc-19, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:31:09 np0005601226 nova_compute[239456]: 2026-01-29 17:31:09.929 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:31:09 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1658: 305 pgs: 305 active+clean; 350 MiB data, 606 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 17 KiB/s wr, 60 op/s
Jan 29 12:31:09 np0005601226 nova_compute[239456]: 2026-01-29 17:31:09.930 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 29 12:31:09 np0005601226 nova_compute[239456]: 2026-01-29 17:31:09.935 239460 INFO os_vif [None req-3f2f116b-ef23-4b42-94d1-2cdd45bee009 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:30:ae:b3,bridge_name='br-int',has_traffic_filtering=True,id=c8b9d9fc-1915-4db3-8869-f69770c88894,network=Network(25cf1715-f178-4f65-be7c-cf203c28f072),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8b9d9fc-19')#033[00m
Jan 29 12:31:09 np0005601226 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6a0dcc9e27ab69260c27fe18683d952a5fc5470ed856c430ff355de18fc0f6e4-userdata-shm.mount: Deactivated successfully.
Jan 29 12:31:09 np0005601226 heuristic_mendel[268632]: {}
Jan 29 12:31:09 np0005601226 systemd[1]: var-lib-containers-storage-overlay-254376857c4901c5c93c381496cb6f6d4c387bc22840b54a8d323f02fae9b517-merged.mount: Deactivated successfully.
Jan 29 12:31:09 np0005601226 podman[268723]: 2026-01-29 17:31:09.992016909 +0000 UTC m=+0.199741209 container cleanup 6a0dcc9e27ab69260c27fe18683d952a5fc5470ed856c430ff355de18fc0f6e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 29 12:31:10 np0005601226 systemd[1]: libpod-f83d0faf68456d08d8a2f1381c61012303547921f7cc0402378b151a8e25e64d.scope: Deactivated successfully.
Jan 29 12:31:10 np0005601226 systemd[1]: libpod-f83d0faf68456d08d8a2f1381c61012303547921f7cc0402378b151a8e25e64d.scope: Consumed 1.275s CPU time.
Jan 29 12:31:10 np0005601226 conmon[268632]: conmon f83d0faf68456d08d8a2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f83d0faf68456d08d8a2f1381c61012303547921f7cc0402378b151a8e25e64d.scope/container/memory.events
Jan 29 12:31:10 np0005601226 systemd[1]: libpod-conmon-6a0dcc9e27ab69260c27fe18683d952a5fc5470ed856c430ff355de18fc0f6e4.scope: Deactivated successfully.
Jan 29 12:31:10 np0005601226 podman[268615]: 2026-01-29 17:31:10.002695316 +0000 UTC m=+1.119195120 container died f83d0faf68456d08d8a2f1381c61012303547921f7cc0402378b151a8e25e64d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_mendel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 29 12:31:10 np0005601226 systemd[1]: var-lib-containers-storage-overlay-075acd21dbad1245889b1fea224967f369f39019bc78e4b0ed79354dce66b741-merged.mount: Deactivated successfully.
Jan 29 12:31:10 np0005601226 podman[268615]: 2026-01-29 17:31:10.091549138 +0000 UTC m=+1.208048942 container remove f83d0faf68456d08d8a2f1381c61012303547921f7cc0402378b151a8e25e64d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_mendel, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 29 12:31:10 np0005601226 systemd[1]: libpod-conmon-f83d0faf68456d08d8a2f1381c61012303547921f7cc0402378b151a8e25e64d.scope: Deactivated successfully.
Jan 29 12:31:10 np0005601226 nova_compute[239456]: 2026-01-29 17:31:10.116 239460 DEBUG nova.compute.manager [req-76fea41b-23f3-4e0d-b37f-310ea7afbaef req-2a8749ca-6d85-4b2a-b0fe-0e97d27d79a9 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] Received event network-vif-unplugged-c8b9d9fc-1915-4db3-8869-f69770c88894 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:31:10 np0005601226 nova_compute[239456]: 2026-01-29 17:31:10.116 239460 DEBUG oslo_concurrency.lockutils [req-76fea41b-23f3-4e0d-b37f-310ea7afbaef req-2a8749ca-6d85-4b2a-b0fe-0e97d27d79a9 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "4c4d76ac-3711-4858-90a1-7e43dc5ff7e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:31:10 np0005601226 nova_compute[239456]: 2026-01-29 17:31:10.116 239460 DEBUG oslo_concurrency.lockutils [req-76fea41b-23f3-4e0d-b37f-310ea7afbaef req-2a8749ca-6d85-4b2a-b0fe-0e97d27d79a9 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "4c4d76ac-3711-4858-90a1-7e43dc5ff7e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:31:10 np0005601226 nova_compute[239456]: 2026-01-29 17:31:10.117 239460 DEBUG oslo_concurrency.lockutils [req-76fea41b-23f3-4e0d-b37f-310ea7afbaef req-2a8749ca-6d85-4b2a-b0fe-0e97d27d79a9 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "4c4d76ac-3711-4858-90a1-7e43dc5ff7e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:31:10 np0005601226 nova_compute[239456]: 2026-01-29 17:31:10.117 239460 DEBUG nova.compute.manager [req-76fea41b-23f3-4e0d-b37f-310ea7afbaef req-2a8749ca-6d85-4b2a-b0fe-0e97d27d79a9 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] No waiting events found dispatching network-vif-unplugged-c8b9d9fc-1915-4db3-8869-f69770c88894 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 29 12:31:10 np0005601226 nova_compute[239456]: 2026-01-29 17:31:10.117 239460 DEBUG nova.compute.manager [req-76fea41b-23f3-4e0d-b37f-310ea7afbaef req-2a8749ca-6d85-4b2a-b0fe-0e97d27d79a9 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] Received event network-vif-unplugged-c8b9d9fc-1915-4db3-8869-f69770c88894 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 29 12:31:10 np0005601226 podman[268798]: 2026-01-29 17:31:10.132185602 +0000 UTC m=+0.121827480 container remove 6a0dcc9e27ab69260c27fe18683d952a5fc5470ed856c430ff355de18fc0f6e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 29 12:31:10 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:10.136 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[09b3fbbe-0dd4-47b3-9410-24659c027ba4]: (4, ('Thu Jan 29 05:31:09 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072 (6a0dcc9e27ab69260c27fe18683d952a5fc5470ed856c430ff355de18fc0f6e4)\n6a0dcc9e27ab69260c27fe18683d952a5fc5470ed856c430ff355de18fc0f6e4\nThu Jan 29 05:31:09 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072 (6a0dcc9e27ab69260c27fe18683d952a5fc5470ed856c430ff355de18fc0f6e4)\n6a0dcc9e27ab69260c27fe18683d952a5fc5470ed856c430ff355de18fc0f6e4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:31:10 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:10.139 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[8cb5d247-ca73-40c9-9444-14ab01148c96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:31:10 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 12:31:10 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:10.142 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap25cf1715-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:31:10 np0005601226 kernel: tap25cf1715-f0: left promiscuous mode
Jan 29 12:31:10 np0005601226 nova_compute[239456]: 2026-01-29 17:31:10.144 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:31:10 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:31:10 np0005601226 nova_compute[239456]: 2026-01-29 17:31:10.149 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:31:10 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 12:31:10 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:10.153 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[0a545d5d-af31-4c5c-88c2-2c7918d3498c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:31:10 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:31:10 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:10.171 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[73197ec9-4cf5-461d-bdea-c646d51e5805]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:31:10 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:10.175 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[06d710b2-e894-436c-a2a1-f0836d952ff3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:31:10 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:10.193 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[d6bb75bb-3901-4c98-97b3-4623ee094524]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 518220, 'reachable_time': 29886, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268830, 'error': None, 'target': 'ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:31:10 np0005601226 systemd[1]: run-netns-ovnmeta\x2d25cf1715\x2df178\x2d4f65\x2dbe7c\x2dcf203c28f072.mount: Deactivated successfully.
Jan 29 12:31:10 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:10.199 156164 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 29 12:31:10 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:10.199 156164 DEBUG oslo.privsep.daemon [-] privsep: reply[33c43758-2115-407e-820c-2d683546a337]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:31:10 np0005601226 nova_compute[239456]: 2026-01-29 17:31:10.206 239460 INFO nova.virt.libvirt.driver [None req-3f2f116b-ef23-4b42-94d1-2cdd45bee009 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] Deleting instance files /var/lib/nova/instances/4c4d76ac-3711-4858-90a1-7e43dc5ff7e4_del#033[00m
Jan 29 12:31:10 np0005601226 nova_compute[239456]: 2026-01-29 17:31:10.209 239460 INFO nova.virt.libvirt.driver [None req-3f2f116b-ef23-4b42-94d1-2cdd45bee009 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] Deletion of /var/lib/nova/instances/4c4d76ac-3711-4858-90a1-7e43dc5ff7e4_del complete#033[00m
Jan 29 12:31:10 np0005601226 nova_compute[239456]: 2026-01-29 17:31:10.222 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:31:10 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:10.223 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '7a:bb:ca', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ee:2a:91:86:08:da'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:31:10 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:10.224 155625 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 29 12:31:10 np0005601226 nova_compute[239456]: 2026-01-29 17:31:10.260 239460 INFO nova.compute.manager [None req-3f2f116b-ef23-4b42-94d1-2cdd45bee009 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] Took 1.00 seconds to destroy the instance on the hypervisor.#033[00m
Jan 29 12:31:10 np0005601226 nova_compute[239456]: 2026-01-29 17:31:10.262 239460 DEBUG oslo.service.loopingcall [None req-3f2f116b-ef23-4b42-94d1-2cdd45bee009 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 29 12:31:10 np0005601226 nova_compute[239456]: 2026-01-29 17:31:10.264 239460 DEBUG nova.compute.manager [-] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 29 12:31:10 np0005601226 nova_compute[239456]: 2026-01-29 17:31:10.264 239460 DEBUG nova.network.neutron [-] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 29 12:31:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:31:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:31:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:31:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:31:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:31:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:31:11 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:31:11 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:31:11 np0005601226 nova_compute[239456]: 2026-01-29 17:31:11.567 239460 DEBUG nova.network.neutron [-] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:31:11 np0005601226 nova_compute[239456]: 2026-01-29 17:31:11.587 239460 INFO nova.compute.manager [-] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] Took 1.32 seconds to deallocate network for instance.#033[00m
Jan 29 12:31:11 np0005601226 nova_compute[239456]: 2026-01-29 17:31:11.639 239460 DEBUG nova.compute.manager [req-682fb446-4c73-4668-a770-2a2c270279f6 req-586755b9-ff25-44c2-b901-bfa541587e43 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] Received event network-vif-deleted-c8b9d9fc-1915-4db3-8869-f69770c88894 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:31:11 np0005601226 nova_compute[239456]: 2026-01-29 17:31:11.640 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:31:11 np0005601226 nova_compute[239456]: 2026-01-29 17:31:11.801 239460 INFO nova.compute.manager [None req-3f2f116b-ef23-4b42-94d1-2cdd45bee009 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] Took 0.21 seconds to detach 1 volumes for instance.#033[00m
Jan 29 12:31:11 np0005601226 nova_compute[239456]: 2026-01-29 17:31:11.874 239460 DEBUG oslo_concurrency.lockutils [None req-3f2f116b-ef23-4b42-94d1-2cdd45bee009 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:31:11 np0005601226 nova_compute[239456]: 2026-01-29 17:31:11.875 239460 DEBUG oslo_concurrency.lockutils [None req-3f2f116b-ef23-4b42-94d1-2cdd45bee009 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:31:11 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1659: 305 pgs: 305 active+clean; 350 MiB data, 606 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 14 KiB/s wr, 88 op/s
Jan 29 12:31:11 np0005601226 nova_compute[239456]: 2026-01-29 17:31:11.951 239460 DEBUG oslo_concurrency.processutils [None req-3f2f116b-ef23-4b42-94d1-2cdd45bee009 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:31:12 np0005601226 nova_compute[239456]: 2026-01-29 17:31:12.185 239460 DEBUG nova.compute.manager [req-c4293b36-9ae7-4277-9d80-1604a7fdc4ea req-8a213936-9fd4-4eff-8cc8-7afab14859f6 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] Received event network-vif-plugged-c8b9d9fc-1915-4db3-8869-f69770c88894 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:31:12 np0005601226 nova_compute[239456]: 2026-01-29 17:31:12.187 239460 DEBUG oslo_concurrency.lockutils [req-c4293b36-9ae7-4277-9d80-1604a7fdc4ea req-8a213936-9fd4-4eff-8cc8-7afab14859f6 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "4c4d76ac-3711-4858-90a1-7e43dc5ff7e4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:31:12 np0005601226 nova_compute[239456]: 2026-01-29 17:31:12.188 239460 DEBUG oslo_concurrency.lockutils [req-c4293b36-9ae7-4277-9d80-1604a7fdc4ea req-8a213936-9fd4-4eff-8cc8-7afab14859f6 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "4c4d76ac-3711-4858-90a1-7e43dc5ff7e4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:31:12 np0005601226 nova_compute[239456]: 2026-01-29 17:31:12.188 239460 DEBUG oslo_concurrency.lockutils [req-c4293b36-9ae7-4277-9d80-1604a7fdc4ea req-8a213936-9fd4-4eff-8cc8-7afab14859f6 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "4c4d76ac-3711-4858-90a1-7e43dc5ff7e4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:31:12 np0005601226 nova_compute[239456]: 2026-01-29 17:31:12.188 239460 DEBUG nova.compute.manager [req-c4293b36-9ae7-4277-9d80-1604a7fdc4ea req-8a213936-9fd4-4eff-8cc8-7afab14859f6 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] No waiting events found dispatching network-vif-plugged-c8b9d9fc-1915-4db3-8869-f69770c88894 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 29 12:31:12 np0005601226 nova_compute[239456]: 2026-01-29 17:31:12.188 239460 WARNING nova.compute.manager [req-c4293b36-9ae7-4277-9d80-1604a7fdc4ea req-8a213936-9fd4-4eff-8cc8-7afab14859f6 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] Received unexpected event network-vif-plugged-c8b9d9fc-1915-4db3-8869-f69770c88894 for instance with vm_state deleted and task_state None.#033[00m
Jan 29 12:31:12 np0005601226 nova_compute[239456]: 2026-01-29 17:31:12.189 239460 DEBUG nova.compute.manager [req-c4293b36-9ae7-4277-9d80-1604a7fdc4ea req-8a213936-9fd4-4eff-8cc8-7afab14859f6 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Received event network-changed-7c983110-cfa8-4df3-ac67-f5a430abcfc0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:31:12 np0005601226 nova_compute[239456]: 2026-01-29 17:31:12.189 239460 DEBUG nova.compute.manager [req-c4293b36-9ae7-4277-9d80-1604a7fdc4ea req-8a213936-9fd4-4eff-8cc8-7afab14859f6 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Refreshing instance network info cache due to event network-changed-7c983110-cfa8-4df3-ac67-f5a430abcfc0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 29 12:31:12 np0005601226 nova_compute[239456]: 2026-01-29 17:31:12.189 239460 DEBUG oslo_concurrency.lockutils [req-c4293b36-9ae7-4277-9d80-1604a7fdc4ea req-8a213936-9fd4-4eff-8cc8-7afab14859f6 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "refresh_cache-58d0f64a-66be-4f3d-ba39-68b90ddf8c4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:31:12 np0005601226 nova_compute[239456]: 2026-01-29 17:31:12.190 239460 DEBUG oslo_concurrency.lockutils [req-c4293b36-9ae7-4277-9d80-1604a7fdc4ea req-8a213936-9fd4-4eff-8cc8-7afab14859f6 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquired lock "refresh_cache-58d0f64a-66be-4f3d-ba39-68b90ddf8c4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:31:12 np0005601226 nova_compute[239456]: 2026-01-29 17:31:12.190 239460 DEBUG nova.network.neutron [req-c4293b36-9ae7-4277-9d80-1604a7fdc4ea req-8a213936-9fd4-4eff-8cc8-7afab14859f6 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Refreshing network info cache for port 7c983110-cfa8-4df3-ac67-f5a430abcfc0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 29 12:31:12 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:31:12 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:31:12 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2433157658' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:31:12 np0005601226 nova_compute[239456]: 2026-01-29 17:31:12.498 239460 DEBUG oslo_concurrency.processutils [None req-3f2f116b-ef23-4b42-94d1-2cdd45bee009 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:31:12 np0005601226 nova_compute[239456]: 2026-01-29 17:31:12.504 239460 DEBUG nova.compute.provider_tree [None req-3f2f116b-ef23-4b42-94d1-2cdd45bee009 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:31:12 np0005601226 nova_compute[239456]: 2026-01-29 17:31:12.522 239460 DEBUG nova.scheduler.client.report [None req-3f2f116b-ef23-4b42-94d1-2cdd45bee009 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:31:12 np0005601226 nova_compute[239456]: 2026-01-29 17:31:12.544 239460 DEBUG oslo_concurrency.lockutils [None req-3f2f116b-ef23-4b42-94d1-2cdd45bee009 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:31:12 np0005601226 nova_compute[239456]: 2026-01-29 17:31:12.565 239460 INFO nova.scheduler.client.report [None req-3f2f116b-ef23-4b42-94d1-2cdd45bee009 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Deleted allocations for instance 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4#033[00m
Jan 29 12:31:12 np0005601226 nova_compute[239456]: 2026-01-29 17:31:12.635 239460 DEBUG oslo_concurrency.lockutils [None req-3f2f116b-ef23-4b42-94d1-2cdd45bee009 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Lock "4c4d76ac-3711-4858-90a1-7e43dc5ff7e4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.375s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:31:13 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:13.227 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ea6bcc65-2563-4fe6-9039-bca7261f4cf7, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:31:13 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1660: 305 pgs: 305 active+clean; 350 MiB data, 606 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 14 KiB/s wr, 90 op/s
Jan 29 12:31:14 np0005601226 nova_compute[239456]: 2026-01-29 17:31:14.120 239460 DEBUG nova.network.neutron [req-c4293b36-9ae7-4277-9d80-1604a7fdc4ea req-8a213936-9fd4-4eff-8cc8-7afab14859f6 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Updated VIF entry in instance network info cache for port 7c983110-cfa8-4df3-ac67-f5a430abcfc0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 29 12:31:14 np0005601226 nova_compute[239456]: 2026-01-29 17:31:14.120 239460 DEBUG nova.network.neutron [req-c4293b36-9ae7-4277-9d80-1604a7fdc4ea req-8a213936-9fd4-4eff-8cc8-7afab14859f6 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Updating instance_info_cache with network_info: [{"id": "7c983110-cfa8-4df3-ac67-f5a430abcfc0", "address": "fa:16:3e:6d:60:e3", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c983110-cf", "ovs_interfaceid": "7c983110-cfa8-4df3-ac67-f5a430abcfc0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:31:14 np0005601226 nova_compute[239456]: 2026-01-29 17:31:14.141 239460 DEBUG oslo_concurrency.lockutils [req-c4293b36-9ae7-4277-9d80-1604a7fdc4ea req-8a213936-9fd4-4eff-8cc8-7afab14859f6 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Releasing lock "refresh_cache-58d0f64a-66be-4f3d-ba39-68b90ddf8c4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:31:14 np0005601226 nova_compute[239456]: 2026-01-29 17:31:14.930 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:31:15 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1661: 305 pgs: 305 active+clean; 350 MiB data, 606 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 13 KiB/s wr, 94 op/s
Jan 29 12:31:16 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:31:16 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3653803163' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:31:16 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:31:16 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3653803163' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:31:16 np0005601226 nova_compute[239456]: 2026-01-29 17:31:16.641 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:31:17 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:31:17 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1662: 305 pgs: 305 active+clean; 350 MiB data, 606 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 13 KiB/s wr, 94 op/s
Jan 29 12:31:18 np0005601226 ovn_controller[145556]: 2026-01-29T17:31:18Z|00046|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.13 does not match offer 10.100.0.11
Jan 29 12:31:18 np0005601226 ovn_controller[145556]: 2026-01-29T17:31:18Z|00047|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:6d:60:e3 10.100.0.11
Jan 29 12:31:19 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1663: 305 pgs: 305 active+clean; 236 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 25 KiB/s wr, 132 op/s
Jan 29 12:31:19 np0005601226 nova_compute[239456]: 2026-01-29 17:31:19.934 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:31:21 np0005601226 ovn_controller[145556]: 2026-01-29T17:31:21Z|00048|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.13 does not match offer 10.100.0.11
Jan 29 12:31:21 np0005601226 ovn_controller[145556]: 2026-01-29T17:31:21Z|00049|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:6d:60:e3 10.100.0.11
Jan 29 12:31:21 np0005601226 nova_compute[239456]: 2026-01-29 17:31:21.643 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:31:21 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1664: 305 pgs: 305 active+clean; 167 MiB data, 451 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 14 KiB/s wr, 107 op/s
Jan 29 12:31:22 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:31:23 np0005601226 ovn_controller[145556]: 2026-01-29T17:31:23Z|00050|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6d:60:e3 10.100.0.11
Jan 29 12:31:23 np0005601226 ovn_controller[145556]: 2026-01-29T17:31:23Z|00051|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6d:60:e3 10.100.0.11
Jan 29 12:31:23 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1665: 305 pgs: 305 active+clean; 167 MiB data, 451 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 13 KiB/s wr, 70 op/s
Jan 29 12:31:24 np0005601226 nova_compute[239456]: 2026-01-29 17:31:24.899 239460 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769707869.8985667, 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:31:24 np0005601226 nova_compute[239456]: 2026-01-29 17:31:24.900 239460 INFO nova.compute.manager [-] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] VM Stopped (Lifecycle Event)#033[00m
Jan 29 12:31:24 np0005601226 nova_compute[239456]: 2026-01-29 17:31:24.919 239460 DEBUG nova.compute.manager [None req-98f6f102-20cc-4133-b90f-6840e1bed858 - - - - - -] [instance: 4c4d76ac-3711-4858-90a1-7e43dc5ff7e4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:31:24 np0005601226 nova_compute[239456]: 2026-01-29 17:31:24.937 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:31:25 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1666: 305 pgs: 305 active+clean; 169 MiB data, 451 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 28 KiB/s wr, 74 op/s
Jan 29 12:31:26 np0005601226 nova_compute[239456]: 2026-01-29 17:31:26.695 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:31:27 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:31:27 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1667: 305 pgs: 305 active+clean; 169 MiB data, 451 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 28 KiB/s wr, 71 op/s
Jan 29 12:31:28 np0005601226 podman[268874]: 2026-01-29 17:31:28.911640334 +0000 UTC m=+0.066368787 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 29 12:31:28 np0005601226 podman[268875]: 2026-01-29 17:31:28.933138303 +0000 UTC m=+0.092641226 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, 
org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 29 12:31:29 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1668: 305 pgs: 305 active+clean; 169 MiB data, 451 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 29 KiB/s wr, 75 op/s
Jan 29 12:31:29 np0005601226 nova_compute[239456]: 2026-01-29 17:31:29.984 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:31:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:31:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/44122763' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:31:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:31:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/44122763' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:31:31 np0005601226 nova_compute[239456]: 2026-01-29 17:31:31.603 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:31:31 np0005601226 nova_compute[239456]: 2026-01-29 17:31:31.603 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 29 12:31:31 np0005601226 nova_compute[239456]: 2026-01-29 17:31:31.698 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:31:31 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1669: 305 pgs: 305 active+clean; 169 MiB data, 451 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 55 op/s
Jan 29 12:31:32 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:31:32 np0005601226 nova_compute[239456]: 2026-01-29 17:31:32.599 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:31:33 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1670: 305 pgs: 305 active+clean; 169 MiB data, 450 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 16 KiB/s wr, 54 op/s
Jan 29 12:31:34 np0005601226 nova_compute[239456]: 2026-01-29 17:31:34.603 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:31:34 np0005601226 nova_compute[239456]: 2026-01-29 17:31:34.603 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:31:34 np0005601226 nova_compute[239456]: 2026-01-29 17:31:34.628 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:31:34 np0005601226 nova_compute[239456]: 2026-01-29 17:31:34.628 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:31:34 np0005601226 nova_compute[239456]: 2026-01-29 17:31:34.628 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:31:34 np0005601226 nova_compute[239456]: 2026-01-29 17:31:34.629 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 29 12:31:34 np0005601226 nova_compute[239456]: 2026-01-29 17:31:34.629 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:31:34 np0005601226 nova_compute[239456]: 2026-01-29 17:31:34.987 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:31:35 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:31:35 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/944048394' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:31:35 np0005601226 nova_compute[239456]: 2026-01-29 17:31:35.139 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:31:35 np0005601226 nova_compute[239456]: 2026-01-29 17:31:35.211 239460 DEBUG nova.virt.libvirt.driver [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 29 12:31:35 np0005601226 nova_compute[239456]: 2026-01-29 17:31:35.211 239460 DEBUG nova.virt.libvirt.driver [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 29 12:31:35 np0005601226 nova_compute[239456]: 2026-01-29 17:31:35.327 239460 WARNING nova.virt.libvirt.driver [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 29 12:31:35 np0005601226 nova_compute[239456]: 2026-01-29 17:31:35.329 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4193MB free_disk=59.98803045228124GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 29 12:31:35 np0005601226 nova_compute[239456]: 2026-01-29 17:31:35.329 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:31:35 np0005601226 nova_compute[239456]: 2026-01-29 17:31:35.329 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:31:35 np0005601226 nova_compute[239456]: 2026-01-29 17:31:35.456 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Instance 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 29 12:31:35 np0005601226 nova_compute[239456]: 2026-01-29 17:31:35.456 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 29 12:31:35 np0005601226 nova_compute[239456]: 2026-01-29 17:31:35.456 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 29 12:31:35 np0005601226 nova_compute[239456]: 2026-01-29 17:31:35.495 239460 DEBUG nova.scheduler.client.report [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Refreshing inventories for resource provider 79259295-532c-4a51-8f50-027529735b0c _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 29 12:31:35 np0005601226 nova_compute[239456]: 2026-01-29 17:31:35.562 239460 DEBUG nova.scheduler.client.report [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Updating ProviderTree inventory for provider 79259295-532c-4a51-8f50-027529735b0c from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 29 12:31:35 np0005601226 nova_compute[239456]: 2026-01-29 17:31:35.563 239460 DEBUG nova.compute.provider_tree [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Updating inventory in ProviderTree for provider 79259295-532c-4a51-8f50-027529735b0c with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 29 12:31:35 np0005601226 nova_compute[239456]: 2026-01-29 17:31:35.587 239460 DEBUG nova.scheduler.client.report [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Refreshing aggregate associations for resource provider 79259295-532c-4a51-8f50-027529735b0c, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 29 12:31:35 np0005601226 nova_compute[239456]: 2026-01-29 17:31:35.633 239460 DEBUG nova.scheduler.client.report [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Refreshing trait associations for resource provider 79259295-532c-4a51-8f50-027529735b0c, traits: HW_CPU_X86_SSE4A,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NODE,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AMD_SVM,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_ABM,HW_CPU_X86_MMX,HW_CPU_X86_AVX,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SVM,COMPUTE_RESCUE_BFV,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_BMI2,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE41,HW_CPU_X86_F16C,COMPUTE_DEVICE_TAGGING,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_ISO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 29 12:31:35 np0005601226 nova_compute[239456]: 2026-01-29 17:31:35.689 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:31:35 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1671: 305 pgs: 305 active+clean; 169 MiB data, 450 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 40 KiB/s wr, 71 op/s
Jan 29 12:31:36 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:31:36 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3285451258' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:31:36 np0005601226 nova_compute[239456]: 2026-01-29 17:31:36.247 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:31:36 np0005601226 nova_compute[239456]: 2026-01-29 17:31:36.252 239460 DEBUG nova.compute.provider_tree [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:31:36 np0005601226 nova_compute[239456]: 2026-01-29 17:31:36.268 239460 DEBUG nova.scheduler.client.report [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:31:36 np0005601226 nova_compute[239456]: 2026-01-29 17:31:36.302 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 29 12:31:36 np0005601226 nova_compute[239456]: 2026-01-29 17:31:36.303 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.973s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:31:36 np0005601226 nova_compute[239456]: 2026-01-29 17:31:36.699 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:31:37 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:31:37 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1672: 305 pgs: 305 active+clean; 169 MiB data, 450 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 25 KiB/s wr, 64 op/s
Jan 29 12:31:38 np0005601226 nova_compute[239456]: 2026-01-29 17:31:38.303 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:31:38 np0005601226 nova_compute[239456]: 2026-01-29 17:31:38.304 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:31:38 np0005601226 nova_compute[239456]: 2026-01-29 17:31:38.604 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:31:38 np0005601226 nova_compute[239456]: 2026-01-29 17:31:38.604 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 29 12:31:38 np0005601226 nova_compute[239456]: 2026-01-29 17:31:38.604 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 29 12:31:38 np0005601226 nova_compute[239456]: 2026-01-29 17:31:38.965 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquiring lock "refresh_cache-58d0f64a-66be-4f3d-ba39-68b90ddf8c4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:31:38 np0005601226 nova_compute[239456]: 2026-01-29 17:31:38.965 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquired lock "refresh_cache-58d0f64a-66be-4f3d-ba39-68b90ddf8c4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:31:38 np0005601226 nova_compute[239456]: 2026-01-29 17:31:38.966 239460 DEBUG nova.network.neutron [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 29 12:31:38 np0005601226 nova_compute[239456]: 2026-01-29 17:31:38.966 239460 DEBUG nova.objects.instance [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:31:39 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1673: 305 pgs: 305 active+clean; 169 MiB data, 450 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 26 KiB/s wr, 64 op/s
Jan 29 12:31:39 np0005601226 nova_compute[239456]: 2026-01-29 17:31:39.990 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:31:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:40.292 155625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:31:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:40.292 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:31:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:40.293 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:31:40 np0005601226 nova_compute[239456]: 2026-01-29 17:31:40.330 239460 DEBUG nova.network.neutron [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Updating instance_info_cache with network_info: [{"id": "7c983110-cfa8-4df3-ac67-f5a430abcfc0", "address": "fa:16:3e:6d:60:e3", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c983110-cf", "ovs_interfaceid": "7c983110-cfa8-4df3-ac67-f5a430abcfc0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:31:40 np0005601226 nova_compute[239456]: 2026-01-29 17:31:40.345 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Releasing lock "refresh_cache-58d0f64a-66be-4f3d-ba39-68b90ddf8c4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:31:40 np0005601226 nova_compute[239456]: 2026-01-29 17:31:40.345 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 29 12:31:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:31:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:31:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:31:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:31:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:31:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:31:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2026-01-29_17:31:40
Jan 29 12:31:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 12:31:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] do_upmap
Jan 29 12:31:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] pools ['default.rgw.meta', 'vms', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.mgr', 'backups', 'images', 'default.rgw.control', 'volumes']
Jan 29 12:31:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 12:31:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 12:31:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:31:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 12:31:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:31:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:31:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:31:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:31:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:31:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:31:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:31:41 np0005601226 nova_compute[239456]: 2026-01-29 17:31:41.340 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:31:41 np0005601226 ovn_controller[145556]: 2026-01-29T17:31:41Z|00210|memory_trim|INFO|Detected inactivity (last active 30010 ms ago): trimming memory
Jan 29 12:31:41 np0005601226 nova_compute[239456]: 2026-01-29 17:31:41.603 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:31:41 np0005601226 nova_compute[239456]: 2026-01-29 17:31:41.701 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:31:41 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1674: 305 pgs: 305 active+clean; 169 MiB data, 450 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 25 KiB/s wr, 60 op/s
Jan 29 12:31:42 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:31:43 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1675: 305 pgs: 305 active+clean; 197 MiB data, 466 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 2.4 MiB/s wr, 53 op/s
Jan 29 12:31:44 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e422 do_prune osdmap full prune enabled
Jan 29 12:31:44 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e423 e423: 3 total, 3 up, 3 in
Jan 29 12:31:44 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e423: 3 total, 3 up, 3 in
Jan 29 12:31:44 np0005601226 nova_compute[239456]: 2026-01-29 17:31:44.992 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:31:45 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1677: 305 pgs: 305 active+clean; 283 MiB data, 550 MiB used, 59 GiB / 60 GiB avail; 205 KiB/s rd, 11 MiB/s wr, 71 op/s
Jan 29 12:31:46 np0005601226 nova_compute[239456]: 2026-01-29 17:31:46.737 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:31:47 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:31:47 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1678: 305 pgs: 305 active+clean; 283 MiB data, 550 MiB used, 59 GiB / 60 GiB avail; 205 KiB/s rd, 11 MiB/s wr, 71 op/s
Jan 29 12:31:48 np0005601226 nova_compute[239456]: 2026-01-29 17:31:48.169 239460 DEBUG oslo_concurrency.lockutils [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Acquiring lock "ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:31:48 np0005601226 nova_compute[239456]: 2026-01-29 17:31:48.169 239460 DEBUG oslo_concurrency.lockutils [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Lock "ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:31:48 np0005601226 nova_compute[239456]: 2026-01-29 17:31:48.188 239460 DEBUG nova.compute.manager [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 29 12:31:48 np0005601226 nova_compute[239456]: 2026-01-29 17:31:48.258 239460 DEBUG oslo_concurrency.lockutils [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:31:48 np0005601226 nova_compute[239456]: 2026-01-29 17:31:48.258 239460 DEBUG oslo_concurrency.lockutils [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:31:48 np0005601226 nova_compute[239456]: 2026-01-29 17:31:48.385 239460 DEBUG nova.virt.hardware [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 29 12:31:48 np0005601226 nova_compute[239456]: 2026-01-29 17:31:48.386 239460 INFO nova.compute.claims [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 29 12:31:48 np0005601226 nova_compute[239456]: 2026-01-29 17:31:48.518 239460 DEBUG oslo_concurrency.processutils [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:31:49 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:31:49 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3674297955' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:31:49 np0005601226 nova_compute[239456]: 2026-01-29 17:31:49.026 239460 DEBUG oslo_concurrency.processutils [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:31:49 np0005601226 nova_compute[239456]: 2026-01-29 17:31:49.031 239460 DEBUG nova.compute.provider_tree [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:31:49 np0005601226 nova_compute[239456]: 2026-01-29 17:31:49.057 239460 DEBUG nova.scheduler.client.report [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:31:49 np0005601226 nova_compute[239456]: 2026-01-29 17:31:49.086 239460 DEBUG oslo_concurrency.lockutils [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.828s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:31:49 np0005601226 nova_compute[239456]: 2026-01-29 17:31:49.087 239460 DEBUG nova.compute.manager [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 29 12:31:49 np0005601226 nova_compute[239456]: 2026-01-29 17:31:49.149 239460 DEBUG nova.compute.manager [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 29 12:31:49 np0005601226 nova_compute[239456]: 2026-01-29 17:31:49.149 239460 DEBUG nova.network.neutron [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 29 12:31:49 np0005601226 nova_compute[239456]: 2026-01-29 17:31:49.178 239460 INFO nova.virt.libvirt.driver [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 29 12:31:49 np0005601226 nova_compute[239456]: 2026-01-29 17:31:49.196 239460 DEBUG nova.compute.manager [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 29 12:31:49 np0005601226 nova_compute[239456]: 2026-01-29 17:31:49.240 239460 INFO nova.virt.block_device [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] Booting with volume 4000fad4-b5f6-4912-bea5-f20dff3f5ac9 at /dev/vda#033[00m
Jan 29 12:31:49 np0005601226 nova_compute[239456]: 2026-01-29 17:31:49.382 239460 DEBUG os_brick.utils [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 29 12:31:49 np0005601226 nova_compute[239456]: 2026-01-29 17:31:49.383 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:31:49 np0005601226 nova_compute[239456]: 2026-01-29 17:31:49.395 249612 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:31:49 np0005601226 nova_compute[239456]: 2026-01-29 17:31:49.396 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[79b1a6e3-0c39-4e9b-9562-24c2dfa3f006]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:31:49 np0005601226 nova_compute[239456]: 2026-01-29 17:31:49.397 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:31:49 np0005601226 nova_compute[239456]: 2026-01-29 17:31:49.405 249612 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:31:49 np0005601226 nova_compute[239456]: 2026-01-29 17:31:49.405 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[8cd42d49-9a18-4d80-ae62-7041ea95bea3]: (4, ('InitiatorName=iqn.1994-05.com.redhat:29fa340538f', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:31:49 np0005601226 nova_compute[239456]: 2026-01-29 17:31:49.406 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:31:49 np0005601226 nova_compute[239456]: 2026-01-29 17:31:49.414 249612 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:31:49 np0005601226 nova_compute[239456]: 2026-01-29 17:31:49.414 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[a808b049-c4eb-44ad-ba74-df9eeaf45535]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:31:49 np0005601226 nova_compute[239456]: 2026-01-29 17:31:49.415 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[bea355d7-4709-4ad8-87e4-e25eaebb2ef4]: (4, '3d58286e-1b14-486e-8cad-0bdb2d2969c4') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:31:49 np0005601226 nova_compute[239456]: 2026-01-29 17:31:49.415 239460 DEBUG oslo_concurrency.processutils [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:31:49 np0005601226 nova_compute[239456]: 2026-01-29 17:31:49.430 239460 DEBUG oslo_concurrency.processutils [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] CMD "nvme version" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:31:49 np0005601226 nova_compute[239456]: 2026-01-29 17:31:49.432 239460 DEBUG os_brick.initiator.connectors.lightos [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 29 12:31:49 np0005601226 nova_compute[239456]: 2026-01-29 17:31:49.432 239460 DEBUG os_brick.initiator.connectors.lightos [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 29 12:31:49 np0005601226 nova_compute[239456]: 2026-01-29 17:31:49.432 239460 DEBUG os_brick.initiator.connectors.lightos [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 29 12:31:49 np0005601226 nova_compute[239456]: 2026-01-29 17:31:49.432 239460 DEBUG os_brick.utils [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] <== get_connector_properties: return (49ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:29fa340538f', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '3d58286e-1b14-486e-8cad-0bdb2d2969c4', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 29 12:31:49 np0005601226 nova_compute[239456]: 2026-01-29 17:31:49.433 239460 DEBUG nova.virt.block_device [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] Updating existing volume attachment record: 69e8446c-472d-4799-8122-6d5b579b16c5 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 29 12:31:49 np0005601226 nova_compute[239456]: 2026-01-29 17:31:49.604 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:31:49 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1679: 305 pgs: 305 active+clean; 283 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 299 KiB/s rd, 11 MiB/s wr, 76 op/s
Jan 29 12:31:49 np0005601226 nova_compute[239456]: 2026-01-29 17:31:49.996 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:31:50 np0005601226 nova_compute[239456]: 2026-01-29 17:31:50.037 239460 DEBUG nova.policy [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4f278bc1afe946ca991a0203a74c5a7f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c74297072cc041019fc7ff4bff1a0f08', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 29 12:31:50 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:31:50 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/805931155' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:31:50 np0005601226 nova_compute[239456]: 2026-01-29 17:31:50.384 239460 DEBUG oslo_concurrency.lockutils [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Acquiring lock "09e74043-3065-4c0b-bffa-930cc1a7f21f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:31:50 np0005601226 nova_compute[239456]: 2026-01-29 17:31:50.385 239460 DEBUG oslo_concurrency.lockutils [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "09e74043-3065-4c0b-bffa-930cc1a7f21f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:31:50 np0005601226 nova_compute[239456]: 2026-01-29 17:31:50.415 239460 DEBUG nova.compute.manager [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 29 12:31:50 np0005601226 nova_compute[239456]: 2026-01-29 17:31:50.519 239460 DEBUG oslo_concurrency.lockutils [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:31:50 np0005601226 nova_compute[239456]: 2026-01-29 17:31:50.519 239460 DEBUG oslo_concurrency.lockutils [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:31:50 np0005601226 nova_compute[239456]: 2026-01-29 17:31:50.526 239460 DEBUG nova.virt.hardware [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 29 12:31:50 np0005601226 nova_compute[239456]: 2026-01-29 17:31:50.526 239460 INFO nova.compute.claims [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 29 12:31:50 np0005601226 nova_compute[239456]: 2026-01-29 17:31:50.553 239460 DEBUG nova.compute.manager [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 29 12:31:50 np0005601226 nova_compute[239456]: 2026-01-29 17:31:50.554 239460 DEBUG nova.virt.libvirt.driver [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 29 12:31:50 np0005601226 nova_compute[239456]: 2026-01-29 17:31:50.555 239460 INFO nova.virt.libvirt.driver [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] Creating image(s)#033[00m
Jan 29 12:31:50 np0005601226 nova_compute[239456]: 2026-01-29 17:31:50.555 239460 DEBUG nova.virt.libvirt.driver [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Jan 29 12:31:50 np0005601226 nova_compute[239456]: 2026-01-29 17:31:50.556 239460 DEBUG nova.virt.libvirt.driver [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] Ensure instance console log exists: /var/lib/nova/instances/ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 29 12:31:50 np0005601226 nova_compute[239456]: 2026-01-29 17:31:50.556 239460 DEBUG oslo_concurrency.lockutils [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:31:50 np0005601226 nova_compute[239456]: 2026-01-29 17:31:50.556 239460 DEBUG oslo_concurrency.lockutils [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:31:50 np0005601226 nova_compute[239456]: 2026-01-29 17:31:50.557 239460 DEBUG oslo_concurrency.lockutils [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:31:50 np0005601226 nova_compute[239456]: 2026-01-29 17:31:50.660 239460 DEBUG oslo_concurrency.processutils [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:31:50 np0005601226 nova_compute[239456]: 2026-01-29 17:31:50.761 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:31:50 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:50.761 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '7a:bb:ca', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ee:2a:91:86:08:da'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:31:50 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:50.762 155625 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 29 12:31:50 np0005601226 nova_compute[239456]: 2026-01-29 17:31:50.882 239460 DEBUG nova.network.neutron [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] Successfully created port: 1a3a9194-8658-4eaa-940b-a73151c9d5cb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 29 12:31:51 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:31:51 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3987172923' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:31:51 np0005601226 nova_compute[239456]: 2026-01-29 17:31:51.164 239460 DEBUG oslo_concurrency.processutils [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:31:51 np0005601226 nova_compute[239456]: 2026-01-29 17:31:51.168 239460 DEBUG nova.compute.provider_tree [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:31:51 np0005601226 nova_compute[239456]: 2026-01-29 17:31:51.184 239460 DEBUG nova.scheduler.client.report [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:31:51 np0005601226 nova_compute[239456]: 2026-01-29 17:31:51.208 239460 DEBUG oslo_concurrency.lockutils [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:31:51 np0005601226 nova_compute[239456]: 2026-01-29 17:31:51.208 239460 DEBUG nova.compute.manager [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 29 12:31:51 np0005601226 nova_compute[239456]: 2026-01-29 17:31:51.270 239460 DEBUG nova.compute.manager [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 29 12:31:51 np0005601226 nova_compute[239456]: 2026-01-29 17:31:51.270 239460 DEBUG nova.network.neutron [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 29 12:31:51 np0005601226 nova_compute[239456]: 2026-01-29 17:31:51.291 239460 INFO nova.virt.libvirt.driver [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 29 12:31:51 np0005601226 nova_compute[239456]: 2026-01-29 17:31:51.307 239460 DEBUG nova.compute.manager [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 29 12:31:51 np0005601226 nova_compute[239456]: 2026-01-29 17:31:51.364 239460 INFO nova.virt.block_device [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] Booting with volume 6b2672b1-9741-4acf-8227-c1aae3771a70 at /dev/vda
Jan 29 12:31:51 np0005601226 nova_compute[239456]: 2026-01-29 17:31:51.461 239460 DEBUG nova.policy [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3901089a059c4bdb8d0497398873d2f1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '420f46ae230d4c529afe366a1b780921', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 29 12:31:51 np0005601226 nova_compute[239456]: 2026-01-29 17:31:51.508 239460 DEBUG os_brick.utils [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 29 12:31:51 np0005601226 nova_compute[239456]: 2026-01-29 17:31:51.509 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 29 12:31:51 np0005601226 nova_compute[239456]: 2026-01-29 17:31:51.515 249612 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 29 12:31:51 np0005601226 nova_compute[239456]: 2026-01-29 17:31:51.515 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[4cbc43f3-a2c3-423a-983c-6b6a7a90669b]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 29 12:31:51 np0005601226 nova_compute[239456]: 2026-01-29 17:31:51.517 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 29 12:31:51 np0005601226 nova_compute[239456]: 2026-01-29 17:31:51.521 249612 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.004s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 29 12:31:51 np0005601226 nova_compute[239456]: 2026-01-29 17:31:51.521 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[6916caae-eb3a-4bcb-948f-e880b5b44fac]: (4, ('InitiatorName=iqn.1994-05.com.redhat:29fa340538f', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 29 12:31:51 np0005601226 nova_compute[239456]: 2026-01-29 17:31:51.523 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 29 12:31:51 np0005601226 nova_compute[239456]: 2026-01-29 17:31:51.530 249612 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 29 12:31:51 np0005601226 nova_compute[239456]: 2026-01-29 17:31:51.531 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[1d68fe16-11fa-454c-8cac-26d03b4b5194]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 29 12:31:51 np0005601226 nova_compute[239456]: 2026-01-29 17:31:51.532 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[2ec3289a-b6f6-4531-a43d-69f8e15c347c]: (4, '3d58286e-1b14-486e-8cad-0bdb2d2969c4') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 29 12:31:51 np0005601226 nova_compute[239456]: 2026-01-29 17:31:51.532 239460 DEBUG oslo_concurrency.processutils [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 29 12:31:51 np0005601226 nova_compute[239456]: 2026-01-29 17:31:51.551 239460 DEBUG oslo_concurrency.processutils [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] CMD "nvme version" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 29 12:31:51 np0005601226 nova_compute[239456]: 2026-01-29 17:31:51.553 239460 DEBUG os_brick.initiator.connectors.lightos [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 29 12:31:51 np0005601226 nova_compute[239456]: 2026-01-29 17:31:51.554 239460 DEBUG os_brick.initiator.connectors.lightos [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 29 12:31:51 np0005601226 nova_compute[239456]: 2026-01-29 17:31:51.554 239460 DEBUG os_brick.initiator.connectors.lightos [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 29 12:31:51 np0005601226 nova_compute[239456]: 2026-01-29 17:31:51.555 239460 DEBUG os_brick.utils [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] <== get_connector_properties: return (46ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:29fa340538f', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '3d58286e-1b14-486e-8cad-0bdb2d2969c4', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 29 12:31:51 np0005601226 nova_compute[239456]: 2026-01-29 17:31:51.555 239460 DEBUG nova.virt.block_device [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] Updating existing volume attachment record: 62747fb9-c76f-41dd-9796-6eddb957df4c _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 29 12:31:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 12:31:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:31:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 12:31:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:31:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 4.175018734816128e-06 of space, bias 1.0, pg target 0.0012525056204448384 quantized to 32 (current 32)
Jan 29 12:31:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:31:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0029478659967957913 of space, bias 1.0, pg target 0.8843597990387374 quantized to 32 (current 32)
Jan 29 12:31:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:31:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 3.523943658187736e-06 of space, bias 1.0, pg target 0.0010571830974563207 quantized to 32 (current 32)
Jan 29 12:31:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:31:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006670339223796241 of space, bias 1.0, pg target 0.20011017671388723 quantized to 32 (current 32)
Jan 29 12:31:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:31:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4475890572760234e-06 of space, bias 4.0, pg target 0.001737106868731228 quantized to 16 (current 16)
Jan 29 12:31:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:31:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:31:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:31:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 29 12:31:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:31:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 29 12:31:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:31:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:31:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:31:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 29 12:31:51 np0005601226 nova_compute[239456]: 2026-01-29 17:31:51.740 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:31:51 np0005601226 nova_compute[239456]: 2026-01-29 17:31:51.749 239460 DEBUG nova.network.neutron [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] Successfully updated port: 1a3a9194-8658-4eaa-940b-a73151c9d5cb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 29 12:31:51 np0005601226 nova_compute[239456]: 2026-01-29 17:31:51.770 239460 DEBUG oslo_concurrency.lockutils [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Acquiring lock "refresh_cache-ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 29 12:31:51 np0005601226 nova_compute[239456]: 2026-01-29 17:31:51.770 239460 DEBUG oslo_concurrency.lockutils [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Acquired lock "refresh_cache-ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 29 12:31:51 np0005601226 nova_compute[239456]: 2026-01-29 17:31:51.770 239460 DEBUG nova.network.neutron [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 29 12:31:51 np0005601226 nova_compute[239456]: 2026-01-29 17:31:51.876 239460 DEBUG nova.compute.manager [req-e95973ee-f5c8-4920-94fe-cb1b29ff76a5 req-fc0eecc8-6e6d-4c7f-ab73-7a03540d6d1f 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] Received event network-changed-1a3a9194-8658-4eaa-940b-a73151c9d5cb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 29 12:31:51 np0005601226 nova_compute[239456]: 2026-01-29 17:31:51.876 239460 DEBUG nova.compute.manager [req-e95973ee-f5c8-4920-94fe-cb1b29ff76a5 req-fc0eecc8-6e6d-4c7f-ab73-7a03540d6d1f 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] Refreshing instance network info cache due to event network-changed-1a3a9194-8658-4eaa-940b-a73151c9d5cb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 29 12:31:51 np0005601226 nova_compute[239456]: 2026-01-29 17:31:51.876 239460 DEBUG oslo_concurrency.lockutils [req-e95973ee-f5c8-4920-94fe-cb1b29ff76a5 req-fc0eecc8-6e6d-4c7f-ab73-7a03540d6d1f 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "refresh_cache-ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 29 12:31:51 np0005601226 nova_compute[239456]: 2026-01-29 17:31:51.909 239460 DEBUG nova.network.neutron [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 29 12:31:51 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1680: 305 pgs: 305 active+clean; 283 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 306 KiB/s rd, 11 MiB/s wr, 86 op/s
Jan 29 12:31:52 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:31:52 np0005601226 nova_compute[239456]: 2026-01-29 17:31:52.219 239460 DEBUG nova.network.neutron [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] Successfully created port: 266f0d91-fecd-4c22-a0ff-80edcaec94fd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 29 12:31:52 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:31:52 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1665163858' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:31:52 np0005601226 nova_compute[239456]: 2026-01-29 17:31:52.673 239460 DEBUG nova.network.neutron [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] Updating instance_info_cache with network_info: [{"id": "1a3a9194-8658-4eaa-940b-a73151c9d5cb", "address": "fa:16:3e:d5:9b:42", "network": {"id": "25cf1715-f178-4f65-be7c-cf203c28f072", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-516658087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c74297072cc041019fc7ff4bff1a0f08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a3a9194-86", "ovs_interfaceid": "1a3a9194-8658-4eaa-940b-a73151c9d5cb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 29 12:31:52 np0005601226 nova_compute[239456]: 2026-01-29 17:31:52.707 239460 DEBUG oslo_concurrency.lockutils [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Releasing lock "refresh_cache-ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 29 12:31:52 np0005601226 nova_compute[239456]: 2026-01-29 17:31:52.708 239460 DEBUG nova.compute.manager [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] Instance network_info: |[{"id": "1a3a9194-8658-4eaa-940b-a73151c9d5cb", "address": "fa:16:3e:d5:9b:42", "network": {"id": "25cf1715-f178-4f65-be7c-cf203c28f072", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-516658087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c74297072cc041019fc7ff4bff1a0f08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a3a9194-86", "ovs_interfaceid": "1a3a9194-8658-4eaa-940b-a73151c9d5cb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 29 12:31:52 np0005601226 nova_compute[239456]: 2026-01-29 17:31:52.708 239460 DEBUG oslo_concurrency.lockutils [req-e95973ee-f5c8-4920-94fe-cb1b29ff76a5 req-fc0eecc8-6e6d-4c7f-ab73-7a03540d6d1f 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquired lock "refresh_cache-ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 29 12:31:52 np0005601226 nova_compute[239456]: 2026-01-29 17:31:52.708 239460 DEBUG nova.network.neutron [req-e95973ee-f5c8-4920-94fe-cb1b29ff76a5 req-fc0eecc8-6e6d-4c7f-ab73-7a03540d6d1f 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] Refreshing network info cache for port 1a3a9194-8658-4eaa-940b-a73151c9d5cb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 29 12:31:52 np0005601226 nova_compute[239456]: 2026-01-29 17:31:52.711 239460 DEBUG nova.virt.libvirt.driver [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] Start _get_guest_xml network_info=[{"id": "1a3a9194-8658-4eaa-940b-a73151c9d5cb", "address": "fa:16:3e:d5:9b:42", "network": {"id": "25cf1715-f178-4f65-be7c-cf203c28f072", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-516658087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c74297072cc041019fc7ff4bff1a0f08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a3a9194-86", "ovs_interfaceid": "1a3a9194-8658-4eaa-940b-a73151c9d5cb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'guest_format': None, 'mount_device': '/dev/vda', 'device_type': 'disk', 'disk_bus': 'virtio', 'attachment_id': '69e8446c-472d-4799-8122-6d5b579b16c5', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-4000fad4-b5f6-4912-bea5-f20dff3f5ac9', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '4000fad4-b5f6-4912-bea5-f20dff3f5ac9', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48', 'attached_at': '', 'detached_at': '', 'volume_id': '4000fad4-b5f6-4912-bea5-f20dff3f5ac9', 'serial': '4000fad4-b5f6-4912-bea5-f20dff3f5ac9'}, 'delete_on_termination': False, 'boot_index': 0, 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 29 12:31:52 np0005601226 nova_compute[239456]: 2026-01-29 17:31:52.717 239460 DEBUG nova.compute.manager [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 29 12:31:52 np0005601226 nova_compute[239456]: 2026-01-29 17:31:52.718 239460 DEBUG nova.virt.libvirt.driver [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 29 12:31:52 np0005601226 nova_compute[239456]: 2026-01-29 17:31:52.718 239460 INFO nova.virt.libvirt.driver [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] Creating image(s)
Jan 29 12:31:52 np0005601226 nova_compute[239456]: 2026-01-29 17:31:52.719 239460 DEBUG nova.virt.libvirt.driver [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Jan 29 12:31:52 np0005601226 nova_compute[239456]: 2026-01-29 17:31:52.719 239460 DEBUG nova.virt.libvirt.driver [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] Ensure instance console log exists: /var/lib/nova/instances/09e74043-3065-4c0b-bffa-930cc1a7f21f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 29 12:31:52 np0005601226 nova_compute[239456]: 2026-01-29 17:31:52.719 239460 DEBUG oslo_concurrency.lockutils [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 12:31:52 np0005601226 nova_compute[239456]: 2026-01-29 17:31:52.719 239460 DEBUG oslo_concurrency.lockutils [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 12:31:52 np0005601226 nova_compute[239456]: 2026-01-29 17:31:52.720 239460 DEBUG oslo_concurrency.lockutils [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 12:31:52 np0005601226 nova_compute[239456]: 2026-01-29 17:31:52.720 239460 WARNING nova.virt.libvirt.driver [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 29 12:31:52 np0005601226 nova_compute[239456]: 2026-01-29 17:31:52.724 239460 DEBUG nova.virt.libvirt.host [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 29 12:31:52 np0005601226 nova_compute[239456]: 2026-01-29 17:31:52.725 239460 DEBUG nova.virt.libvirt.host [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 29 12:31:52 np0005601226 nova_compute[239456]: 2026-01-29 17:31:52.728 239460 DEBUG nova.virt.libvirt.host [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 29 12:31:52 np0005601226 nova_compute[239456]: 2026-01-29 17:31:52.728 239460 DEBUG nova.virt.libvirt.host [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 29 12:31:52 np0005601226 nova_compute[239456]: 2026-01-29 17:31:52.729 239460 DEBUG nova.virt.libvirt.driver [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 29 12:31:52 np0005601226 nova_compute[239456]: 2026-01-29 17:31:52.729 239460 DEBUG nova.virt.hardware [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-29T17:13:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='e367d44e-e23e-4b8e-90d1-56d09c8403b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 29 12:31:52 np0005601226 nova_compute[239456]: 2026-01-29 17:31:52.729 239460 DEBUG nova.virt.hardware [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 29 12:31:52 np0005601226 nova_compute[239456]: 2026-01-29 17:31:52.729 239460 DEBUG nova.virt.hardware [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 29 12:31:52 np0005601226 nova_compute[239456]: 2026-01-29 17:31:52.730 239460 DEBUG nova.virt.hardware [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 29 12:31:52 np0005601226 nova_compute[239456]: 2026-01-29 17:31:52.730 239460 DEBUG nova.virt.hardware [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 29 12:31:52 np0005601226 nova_compute[239456]: 2026-01-29 17:31:52.730 239460 DEBUG nova.virt.hardware [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 29 12:31:52 np0005601226 nova_compute[239456]: 2026-01-29 17:31:52.730 239460 DEBUG nova.virt.hardware [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 29 12:31:52 np0005601226 nova_compute[239456]: 2026-01-29 17:31:52.730 239460 DEBUG nova.virt.hardware [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 29 12:31:52 np0005601226 nova_compute[239456]: 2026-01-29 17:31:52.731 239460 DEBUG nova.virt.hardware [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 29 12:31:52 np0005601226 nova_compute[239456]: 2026-01-29 17:31:52.731 239460 DEBUG nova.virt.hardware [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 29 12:31:52 np0005601226 nova_compute[239456]: 2026-01-29 17:31:52.731 239460 DEBUG nova.virt.hardware [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 29 12:31:52 np0005601226 nova_compute[239456]: 2026-01-29 17:31:52.749 239460 DEBUG nova.storage.rbd_utils [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] rbd image ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 29 12:31:52 np0005601226 nova_compute[239456]: 2026-01-29 17:31:52.752 239460 DEBUG oslo_concurrency.processutils [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 29 12:31:53 np0005601226 nova_compute[239456]: 2026-01-29 17:31:53.166 239460 DEBUG nova.network.neutron [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] Successfully updated port: 266f0d91-fecd-4c22-a0ff-80edcaec94fd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 29 12:31:53 np0005601226 nova_compute[239456]: 2026-01-29 17:31:53.183 239460 DEBUG oslo_concurrency.lockutils [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Acquiring lock "refresh_cache-09e74043-3065-4c0b-bffa-930cc1a7f21f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:31:53 np0005601226 nova_compute[239456]: 2026-01-29 17:31:53.183 239460 DEBUG oslo_concurrency.lockutils [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Acquired lock "refresh_cache-09e74043-3065-4c0b-bffa-930cc1a7f21f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:31:53 np0005601226 nova_compute[239456]: 2026-01-29 17:31:53.183 239460 DEBUG nova.network.neutron [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 29 12:31:53 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:31:53 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3779275087' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:31:53 np0005601226 nova_compute[239456]: 2026-01-29 17:31:53.227 239460 DEBUG oslo_concurrency.processutils [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:31:53 np0005601226 nova_compute[239456]: 2026-01-29 17:31:53.372 239460 DEBUG nova.network.neutron [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 29 12:31:53 np0005601226 nova_compute[239456]: 2026-01-29 17:31:53.377 239460 DEBUG os_brick.encryptors [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Using volume encryption metadata '{'encryption_key_id': 'ba7ebdbb-ea7c-4233-9468-333195d85442', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-4000fad4-b5f6-4912-bea5-f20dff3f5ac9', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '4000fad4-b5f6-4912-bea5-f20dff3f5ac9', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48', 'attached_at': '', 'detached_at': '', 'volume_id': '4000fad4-b5f6-4912-bea5-f20dff3f5ac9', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Jan 29 12:31:53 np0005601226 nova_compute[239456]: 2026-01-29 17:31:53.379 239460 DEBUG barbicanclient.client [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163#033[00m
Jan 29 12:31:53 np0005601226 nova_compute[239456]: 2026-01-29 17:31:53.395 239460 DEBUG barbicanclient.v1.secrets [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/ba7ebdbb-ea7c-4233-9468-333195d85442 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514#033[00m
Jan 29 12:31:53 np0005601226 nova_compute[239456]: 2026-01-29 17:31:53.396 239460 INFO barbicanclient.base [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Calculated Secrets uuid ref: secrets/ba7ebdbb-ea7c-4233-9468-333195d85442#033[00m
Jan 29 12:31:53 np0005601226 nova_compute[239456]: 2026-01-29 17:31:53.434 239460 DEBUG barbicanclient.client [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:31:53 np0005601226 nova_compute[239456]: 2026-01-29 17:31:53.435 239460 INFO barbicanclient.base [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Calculated Secrets uuid ref: secrets/ba7ebdbb-ea7c-4233-9468-333195d85442#033[00m
Jan 29 12:31:53 np0005601226 nova_compute[239456]: 2026-01-29 17:31:53.453 239460 DEBUG barbicanclient.client [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:31:53 np0005601226 nova_compute[239456]: 2026-01-29 17:31:53.453 239460 INFO barbicanclient.base [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Calculated Secrets uuid ref: secrets/ba7ebdbb-ea7c-4233-9468-333195d85442#033[00m
Jan 29 12:31:53 np0005601226 nova_compute[239456]: 2026-01-29 17:31:53.477 239460 DEBUG barbicanclient.client [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:31:53 np0005601226 nova_compute[239456]: 2026-01-29 17:31:53.478 239460 INFO barbicanclient.base [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Calculated Secrets uuid ref: secrets/ba7ebdbb-ea7c-4233-9468-333195d85442#033[00m
Jan 29 12:31:53 np0005601226 nova_compute[239456]: 2026-01-29 17:31:53.506 239460 DEBUG barbicanclient.client [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:31:53 np0005601226 nova_compute[239456]: 2026-01-29 17:31:53.507 239460 INFO barbicanclient.base [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Calculated Secrets uuid ref: secrets/ba7ebdbb-ea7c-4233-9468-333195d85442#033[00m
Jan 29 12:31:53 np0005601226 nova_compute[239456]: 2026-01-29 17:31:53.531 239460 DEBUG barbicanclient.client [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:31:53 np0005601226 nova_compute[239456]: 2026-01-29 17:31:53.531 239460 INFO barbicanclient.base [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Calculated Secrets uuid ref: secrets/ba7ebdbb-ea7c-4233-9468-333195d85442#033[00m
Jan 29 12:31:53 np0005601226 nova_compute[239456]: 2026-01-29 17:31:53.557 239460 DEBUG barbicanclient.client [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:31:53 np0005601226 nova_compute[239456]: 2026-01-29 17:31:53.557 239460 INFO barbicanclient.base [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Calculated Secrets uuid ref: secrets/ba7ebdbb-ea7c-4233-9468-333195d85442#033[00m
Jan 29 12:31:53 np0005601226 nova_compute[239456]: 2026-01-29 17:31:53.593 239460 DEBUG barbicanclient.client [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:31:53 np0005601226 nova_compute[239456]: 2026-01-29 17:31:53.593 239460 INFO barbicanclient.base [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Calculated Secrets uuid ref: secrets/ba7ebdbb-ea7c-4233-9468-333195d85442#033[00m
Jan 29 12:31:53 np0005601226 nova_compute[239456]: 2026-01-29 17:31:53.616 239460 DEBUG barbicanclient.client [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:31:53 np0005601226 nova_compute[239456]: 2026-01-29 17:31:53.617 239460 INFO barbicanclient.base [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Calculated Secrets uuid ref: secrets/ba7ebdbb-ea7c-4233-9468-333195d85442#033[00m
Jan 29 12:31:53 np0005601226 nova_compute[239456]: 2026-01-29 17:31:53.635 239460 DEBUG barbicanclient.client [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:31:53 np0005601226 nova_compute[239456]: 2026-01-29 17:31:53.635 239460 INFO barbicanclient.base [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Calculated Secrets uuid ref: secrets/ba7ebdbb-ea7c-4233-9468-333195d85442#033[00m
Jan 29 12:31:53 np0005601226 nova_compute[239456]: 2026-01-29 17:31:53.662 239460 DEBUG barbicanclient.client [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:31:53 np0005601226 nova_compute[239456]: 2026-01-29 17:31:53.663 239460 INFO barbicanclient.base [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Calculated Secrets uuid ref: secrets/ba7ebdbb-ea7c-4233-9468-333195d85442#033[00m
Jan 29 12:31:53 np0005601226 nova_compute[239456]: 2026-01-29 17:31:53.685 239460 DEBUG barbicanclient.client [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:31:53 np0005601226 nova_compute[239456]: 2026-01-29 17:31:53.686 239460 INFO barbicanclient.base [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Calculated Secrets uuid ref: secrets/ba7ebdbb-ea7c-4233-9468-333195d85442#033[00m
Jan 29 12:31:53 np0005601226 nova_compute[239456]: 2026-01-29 17:31:53.708 239460 DEBUG barbicanclient.client [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:31:53 np0005601226 nova_compute[239456]: 2026-01-29 17:31:53.709 239460 INFO barbicanclient.base [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Calculated Secrets uuid ref: secrets/ba7ebdbb-ea7c-4233-9468-333195d85442#033[00m
Jan 29 12:31:53 np0005601226 nova_compute[239456]: 2026-01-29 17:31:53.738 239460 DEBUG barbicanclient.client [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:31:53 np0005601226 nova_compute[239456]: 2026-01-29 17:31:53.738 239460 INFO barbicanclient.base [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Calculated Secrets uuid ref: secrets/ba7ebdbb-ea7c-4233-9468-333195d85442#033[00m
Jan 29 12:31:53 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:53.764 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ea6bcc65-2563-4fe6-9039-bca7261f4cf7, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:31:53 np0005601226 nova_compute[239456]: 2026-01-29 17:31:53.774 239460 DEBUG barbicanclient.client [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:31:53 np0005601226 nova_compute[239456]: 2026-01-29 17:31:53.774 239460 INFO barbicanclient.base [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Calculated Secrets uuid ref: secrets/ba7ebdbb-ea7c-4233-9468-333195d85442#033[00m
Jan 29 12:31:53 np0005601226 nova_compute[239456]: 2026-01-29 17:31:53.800 239460 DEBUG barbicanclient.client [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:31:53 np0005601226 nova_compute[239456]: 2026-01-29 17:31:53.801 239460 DEBUG nova.virt.libvirt.host [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Secret XML: <secret ephemeral="no" private="no">
Jan 29 12:31:53 np0005601226 nova_compute[239456]:  <usage type="volume">
Jan 29 12:31:53 np0005601226 nova_compute[239456]:    <volume>4000fad4-b5f6-4912-bea5-f20dff3f5ac9</volume>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:  </usage>
Jan 29 12:31:53 np0005601226 nova_compute[239456]: </secret>
Jan 29 12:31:53 np0005601226 nova_compute[239456]: create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131#033[00m
Jan 29 12:31:53 np0005601226 nova_compute[239456]: 2026-01-29 17:31:53.833 239460 DEBUG nova.virt.libvirt.vif [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-29T17:31:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-2066484079',display_name='tempest-TransferEncryptedVolumeTest-server-2066484079',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-2066484079',id=23,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCuw1UleBgXrkrvixGGBn21sDEJH6+FrkgFvq6jv3D3khmeyc7tU6zH/hmJ8BmjXmJToJI+73AcA0H8QCIrilSaG34LfS65uhiBlMWUY7wThjQ0H0WSLw5MFEF4DjDh1dA==',key_name='tempest-TransferEncryptedVolumeTest-895765981',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c74297072cc041019fc7ff4bff1a0f08',ramdisk_id='',reservation_id='r-eje1y09i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1262552887',owner_user_name='tempest-TransferEncryptedVolumeTest-1262552887-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-29T17:31:49Z,user_data=None,user_id='4f278bc1afe946ca991a0203a74c5a7f',uuid=ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1a3a9194-8658-4eaa-940b-a73151c9d5cb", "address": "fa:16:3e:d5:9b:42", "network": {"id": "25cf1715-f178-4f65-be7c-cf203c28f072", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-516658087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "c74297072cc041019fc7ff4bff1a0f08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a3a9194-86", "ovs_interfaceid": "1a3a9194-8658-4eaa-940b-a73151c9d5cb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 29 12:31:53 np0005601226 nova_compute[239456]: 2026-01-29 17:31:53.834 239460 DEBUG nova.network.os_vif_util [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Converting VIF {"id": "1a3a9194-8658-4eaa-940b-a73151c9d5cb", "address": "fa:16:3e:d5:9b:42", "network": {"id": "25cf1715-f178-4f65-be7c-cf203c28f072", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-516658087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c74297072cc041019fc7ff4bff1a0f08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a3a9194-86", "ovs_interfaceid": "1a3a9194-8658-4eaa-940b-a73151c9d5cb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 29 12:31:53 np0005601226 nova_compute[239456]: 2026-01-29 17:31:53.835 239460 DEBUG nova.network.os_vif_util [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d5:9b:42,bridge_name='br-int',has_traffic_filtering=True,id=1a3a9194-8658-4eaa-940b-a73151c9d5cb,network=Network(25cf1715-f178-4f65-be7c-cf203c28f072),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a3a9194-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 29 12:31:53 np0005601226 nova_compute[239456]: 2026-01-29 17:31:53.837 239460 DEBUG nova.objects.instance [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Lazy-loading 'pci_devices' on Instance uuid ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:31:53 np0005601226 nova_compute[239456]: 2026-01-29 17:31:53.878 239460 DEBUG nova.virt.libvirt.driver [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] End _get_guest_xml xml=<domain type="kvm">
Jan 29 12:31:53 np0005601226 nova_compute[239456]:  <uuid>ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48</uuid>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:  <name>instance-00000017</name>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:  <memory>131072</memory>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:  <vcpu>1</vcpu>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:  <metadata>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 29 12:31:53 np0005601226 nova_compute[239456]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:      <nova:name>tempest-TransferEncryptedVolumeTest-server-2066484079</nova:name>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:      <nova:creationTime>2026-01-29 17:31:52</nova:creationTime>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:      <nova:flavor name="m1.nano">
Jan 29 12:31:53 np0005601226 nova_compute[239456]:        <nova:memory>128</nova:memory>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:        <nova:disk>1</nova:disk>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:        <nova:swap>0</nova:swap>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:        <nova:ephemeral>0</nova:ephemeral>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:        <nova:vcpus>1</nova:vcpus>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:      </nova:flavor>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:      <nova:owner>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:        <nova:user uuid="4f278bc1afe946ca991a0203a74c5a7f">tempest-TransferEncryptedVolumeTest-1262552887-project-member</nova:user>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:        <nova:project uuid="c74297072cc041019fc7ff4bff1a0f08">tempest-TransferEncryptedVolumeTest-1262552887</nova:project>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:      </nova:owner>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:      <nova:ports>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:        <nova:port uuid="1a3a9194-8658-4eaa-940b-a73151c9d5cb">
Jan 29 12:31:53 np0005601226 nova_compute[239456]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:        </nova:port>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:      </nova:ports>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:    </nova:instance>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:  </metadata>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:  <sysinfo type="smbios">
Jan 29 12:31:53 np0005601226 nova_compute[239456]:    <system>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:      <entry name="manufacturer">RDO</entry>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:      <entry name="product">OpenStack Compute</entry>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:      <entry name="serial">ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48</entry>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:      <entry name="uuid">ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48</entry>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:      <entry name="family">Virtual Machine</entry>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:    </system>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:  </sysinfo>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:  <os>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:    <boot dev="hd"/>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:    <smbios mode="sysinfo"/>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:  </os>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:  <features>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:    <acpi/>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:    <apic/>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:    <vmcoreinfo/>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:  </features>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:  <clock offset="utc">
Jan 29 12:31:53 np0005601226 nova_compute[239456]:    <timer name="pit" tickpolicy="delay"/>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:    <timer name="hpet" present="no"/>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:  </clock>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:  <cpu mode="host-model" match="exact">
Jan 29 12:31:53 np0005601226 nova_compute[239456]:    <topology sockets="1" cores="1" threads="1"/>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:  </cpu>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:  <devices>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:    <disk type="network" device="cdrom">
Jan 29 12:31:53 np0005601226 nova_compute[239456]:      <driver type="raw" cache="none"/>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:      <source protocol="rbd" name="vms/ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48_disk.config">
Jan 29 12:31:53 np0005601226 nova_compute[239456]:        <host name="192.168.122.100" port="6789"/>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:      </source>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:      <auth username="openstack">
Jan 29 12:31:53 np0005601226 nova_compute[239456]:        <secret type="ceph" uuid="cc5c72e3-31e0-58b9-8731-456117d38f4a"/>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:      </auth>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:      <target dev="sda" bus="sata"/>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:    </disk>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:    <disk type="network" device="disk">
Jan 29 12:31:53 np0005601226 nova_compute[239456]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:      <source protocol="rbd" name="volumes/volume-4000fad4-b5f6-4912-bea5-f20dff3f5ac9">
Jan 29 12:31:53 np0005601226 nova_compute[239456]:        <host name="192.168.122.100" port="6789"/>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:      </source>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:      <auth username="openstack">
Jan 29 12:31:53 np0005601226 nova_compute[239456]:        <secret type="ceph" uuid="cc5c72e3-31e0-58b9-8731-456117d38f4a"/>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:      </auth>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:      <target dev="vda" bus="virtio"/>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:      <serial>4000fad4-b5f6-4912-bea5-f20dff3f5ac9</serial>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:      <encryption format="luks">
Jan 29 12:31:53 np0005601226 nova_compute[239456]:        <secret type="passphrase" uuid="62f04f2a-2f24-45f0-9ec6-9557e0a15676"/>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:      </encryption>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:    </disk>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:    <interface type="ethernet">
Jan 29 12:31:53 np0005601226 nova_compute[239456]:      <mac address="fa:16:3e:d5:9b:42"/>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:      <model type="virtio"/>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:      <driver name="vhost" rx_queue_size="512"/>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:      <mtu size="1442"/>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:      <target dev="tap1a3a9194-86"/>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:    </interface>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:    <serial type="pty">
Jan 29 12:31:53 np0005601226 nova_compute[239456]:      <log file="/var/lib/nova/instances/ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48/console.log" append="off"/>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:    </serial>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:    <video>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:      <model type="virtio"/>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:    </video>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:    <input type="tablet" bus="usb"/>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:    <rng model="virtio">
Jan 29 12:31:53 np0005601226 nova_compute[239456]:      <backend model="random">/dev/urandom</backend>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:    </rng>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root"/>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:    <controller type="usb" index="0"/>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:    <memballoon model="virtio">
Jan 29 12:31:53 np0005601226 nova_compute[239456]:      <stats period="10"/>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:    </memballoon>
Jan 29 12:31:53 np0005601226 nova_compute[239456]:  </devices>
Jan 29 12:31:53 np0005601226 nova_compute[239456]: </domain>
Jan 29 12:31:53 np0005601226 nova_compute[239456]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
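The domain XML logged above can be inspected programmatically. Below is a minimal sketch, using a trimmed reconstruction of the encrypted-volume disk element from the log (not Nova's verbatim output), that pulls out the RBD source, target device, and LUKS secret with the Python standard library:

```python
# Sketch: extract disk details from a libvirt domain XML fragment.
# DISK_XML is a trimmed reconstruction of the <disk> element in the
# log above, not Nova's verbatim output.
import xml.etree.ElementTree as ET

DISK_XML = """
<disk type="network" device="disk">
  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
  <source protocol="rbd" name="volumes/volume-4000fad4-b5f6-4912-bea5-f20dff3f5ac9">
    <host name="192.168.122.100" port="6789"/>
  </source>
  <target dev="vda" bus="virtio"/>
  <encryption format="luks">
    <secret type="passphrase" uuid="62f04f2a-2f24-45f0-9ec6-9557e0a15676"/>
  </encryption>
</disk>
"""

def describe_disk(xml_text: str) -> dict:
    """Return source/target/encryption details from one <disk> element."""
    disk = ET.fromstring(xml_text)
    source = disk.find("source")
    target = disk.find("target")
    enc = disk.find("encryption")
    return {
        "protocol": source.get("protocol"),
        "rbd_image": source.get("name"),
        "monitor": source.find("host").get("name"),
        "target_dev": target.get("dev"),
        "bus": target.get("bus"),
        # None when the disk carries no <encryption> element
        "luks_secret": enc.find("secret").get("uuid") if enc is not None else None,
    }

info = describe_disk(DISK_XML)
print(info["target_dev"], info["rbd_image"])
```

The same approach works on a full domain dump (e.g. `virsh dumpxml`) by iterating `root.iter("disk")`.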
Jan 29 12:31:53 np0005601226 nova_compute[239456]: 2026-01-29 17:31:53.879 239460 DEBUG nova.compute.manager [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] Preparing to wait for external event network-vif-plugged-1a3a9194-8658-4eaa-940b-a73151c9d5cb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 29 12:31:53 np0005601226 nova_compute[239456]: 2026-01-29 17:31:53.879 239460 DEBUG oslo_concurrency.lockutils [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Acquiring lock "ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:31:53 np0005601226 nova_compute[239456]: 2026-01-29 17:31:53.880 239460 DEBUG oslo_concurrency.lockutils [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Lock "ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:31:53 np0005601226 nova_compute[239456]: 2026-01-29 17:31:53.880 239460 DEBUG oslo_concurrency.lockutils [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Lock "ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:31:53 np0005601226 nova_compute[239456]: 2026-01-29 17:31:53.880 239460 DEBUG nova.virt.libvirt.vif [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-29T17:31:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-2066484079',display_name='tempest-TransferEncryptedVolumeTest-server-2066484079',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-2066484079',id=23,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCuw1UleBgXrkrvixGGBn21sDEJH6+FrkgFvq6jv3D3khmeyc7tU6zH/hmJ8BmjXmJToJI+73AcA0H8QCIrilSaG34LfS65uhiBlMWUY7wThjQ0H0WSLw5MFEF4DjDh1dA==',key_name='tempest-TransferEncryptedVolumeTest-895765981',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c74297072cc041019fc7ff4bff1a0f08',ramdisk_id='',reservation_id='r-eje1y09i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1262552887',owner_user_name='tempest-TransferEncryptedVolumeTest-1262552887-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-29T17:31:49Z,user_data=None,user_id='4f278bc1afe946ca991a0203a74c5a7f',uuid=ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1a3a9194-8658-4eaa-940b-a73151c9d5cb", "address": "fa:16:3e:d5:9b:42", "network": {"id": "25cf1715-f178-4f65-be7c-cf203c28f072", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-516658087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c74297072cc041019fc7ff4bff1a0f08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a3a9194-86", "ovs_interfaceid": "1a3a9194-8658-4eaa-940b-a73151c9d5cb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 29 12:31:53 np0005601226 nova_compute[239456]: 2026-01-29 17:31:53.881 239460 DEBUG nova.network.os_vif_util [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Converting VIF {"id": "1a3a9194-8658-4eaa-940b-a73151c9d5cb", "address": "fa:16:3e:d5:9b:42", "network": {"id": "25cf1715-f178-4f65-be7c-cf203c28f072", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-516658087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c74297072cc041019fc7ff4bff1a0f08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a3a9194-86", "ovs_interfaceid": "1a3a9194-8658-4eaa-940b-a73151c9d5cb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 29 12:31:53 np0005601226 nova_compute[239456]: 2026-01-29 17:31:53.881 239460 DEBUG nova.network.os_vif_util [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d5:9b:42,bridge_name='br-int',has_traffic_filtering=True,id=1a3a9194-8658-4eaa-940b-a73151c9d5cb,network=Network(25cf1715-f178-4f65-be7c-cf203c28f072),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a3a9194-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 29 12:31:53 np0005601226 nova_compute[239456]: 2026-01-29 17:31:53.882 239460 DEBUG os_vif [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d5:9b:42,bridge_name='br-int',has_traffic_filtering=True,id=1a3a9194-8658-4eaa-940b-a73151c9d5cb,network=Network(25cf1715-f178-4f65-be7c-cf203c28f072),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a3a9194-86') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 29 12:31:53 np0005601226 nova_compute[239456]: 2026-01-29 17:31:53.883 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:31:53 np0005601226 nova_compute[239456]: 2026-01-29 17:31:53.883 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:31:53 np0005601226 nova_compute[239456]: 2026-01-29 17:31:53.884 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 29 12:31:53 np0005601226 nova_compute[239456]: 2026-01-29 17:31:53.886 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:31:53 np0005601226 nova_compute[239456]: 2026-01-29 17:31:53.886 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1a3a9194-86, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:31:53 np0005601226 nova_compute[239456]: 2026-01-29 17:31:53.886 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1a3a9194-86, col_values=(('external_ids', {'iface-id': '1a3a9194-8658-4eaa-940b-a73151c9d5cb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d5:9b:42', 'vm-uuid': 'ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:31:53 np0005601226 nova_compute[239456]: 2026-01-29 17:31:53.887 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:31:53 np0005601226 NetworkManager[49020]: <info>  [1769707913.8889] manager: (tap1a3a9194-86): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/116)
Jan 29 12:31:53 np0005601226 nova_compute[239456]: 2026-01-29 17:31:53.892 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 29 12:31:53 np0005601226 nova_compute[239456]: 2026-01-29 17:31:53.894 239460 INFO os_vif [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d5:9b:42,bridge_name='br-int',has_traffic_filtering=True,id=1a3a9194-8658-4eaa-940b-a73151c9d5cb,network=Network(25cf1715-f178-4f65-be7c-cf203c28f072),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a3a9194-86')#033[00m
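The os-vif plug sequence above (AddBridgeCommand, AddPortCommand, DbSetCommand) runs over the OVSDB protocol via ovsdbapp; it does not shell out. For illustration only, the following sketch assembles the roughly equivalent `ovs-vsctl` command lines for the same operations:

```python
# Sketch: the ovs-vsctl rough equivalent of the OVSDB transactions os-vif
# ran above. Illustrative only; os-vif talks OVSDB directly via ovsdbapp.

def ovs_plug_commands(bridge, port, iface_id, mac, vm_uuid):
    """Build ovs-vsctl command strings mirroring the logged transactions."""
    external_ids = {
        "iface-id": iface_id,      # Neutron port UUID
        "iface-status": "active",
        "attached-mac": mac,
        "vm-uuid": vm_uuid,        # Nova instance UUID
    }
    set_args = " ".join(f'external_ids:{k}="{v}"' for k, v in external_ids.items())
    return [
        # AddBridgeCommand(name=br-int, may_exist=True, datapath_type=system)
        f"ovs-vsctl --may-exist add-br {bridge} -- set Bridge {bridge} datapath_type=system",
        # AddPortCommand(bridge=br-int, port=tap..., may_exist=True)
        f"ovs-vsctl --may-exist add-port {bridge} {port}",
        # DbSetCommand(table=Interface, record=tap..., external_ids={...})
        f"ovs-vsctl set Interface {port} {set_args}",
    ]

for cmd in ovs_plug_commands(
    "br-int", "tap1a3a9194-86",
    "1a3a9194-8658-4eaa-940b-a73151c9d5cb",
    "fa:16:3e:d5:9b:42",
    "ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48",
):
    print(cmd)
```

The `iface-id` external_id is what OVN later matches against the Neutron port binding, which is why the "Transaction caused no change" on add-br is harmless: br-int already exists.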
Jan 29 12:31:53 np0005601226 nova_compute[239456]: 2026-01-29 17:31:53.945 239460 DEBUG nova.virt.libvirt.driver [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:31:53 np0005601226 nova_compute[239456]: 2026-01-29 17:31:53.945 239460 DEBUG nova.virt.libvirt.driver [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:31:53 np0005601226 nova_compute[239456]: 2026-01-29 17:31:53.946 239460 DEBUG nova.virt.libvirt.driver [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] No VIF found with MAC fa:16:3e:d5:9b:42, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 29 12:31:53 np0005601226 nova_compute[239456]: 2026-01-29 17:31:53.946 239460 INFO nova.virt.libvirt.driver [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] Using config drive#033[00m
Jan 29 12:31:53 np0005601226 nova_compute[239456]: 2026-01-29 17:31:53.970 239460 DEBUG nova.storage.rbd_utils [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] rbd image ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:31:53 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1681: 305 pgs: 305 active+clean; 283 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 296 KiB/s rd, 8.4 MiB/s wr, 73 op/s
Jan 29 12:31:54 np0005601226 nova_compute[239456]: 2026-01-29 17:31:54.002 239460 DEBUG nova.compute.manager [req-82702a2f-c62b-4374-8780-66c02ebd8955 req-083ac3ae-0deb-4647-bbe1-9806e095f397 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] Received event network-changed-266f0d91-fecd-4c22-a0ff-80edcaec94fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:31:54 np0005601226 nova_compute[239456]: 2026-01-29 17:31:54.002 239460 DEBUG nova.compute.manager [req-82702a2f-c62b-4374-8780-66c02ebd8955 req-083ac3ae-0deb-4647-bbe1-9806e095f397 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] Refreshing instance network info cache due to event network-changed-266f0d91-fecd-4c22-a0ff-80edcaec94fd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 29 12:31:54 np0005601226 nova_compute[239456]: 2026-01-29 17:31:54.003 239460 DEBUG oslo_concurrency.lockutils [req-82702a2f-c62b-4374-8780-66c02ebd8955 req-083ac3ae-0deb-4647-bbe1-9806e095f397 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "refresh_cache-09e74043-3065-4c0b-bffa-930cc1a7f21f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:31:55 np0005601226 nova_compute[239456]: 2026-01-29 17:31:55.066 239460 INFO nova.virt.libvirt.driver [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] Creating config drive at /var/lib/nova/instances/ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48/disk.config#033[00m
Jan 29 12:31:55 np0005601226 nova_compute[239456]: 2026-01-29 17:31:55.069 239460 DEBUG oslo_concurrency.processutils [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx11new1b execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:31:55 np0005601226 nova_compute[239456]: 2026-01-29 17:31:55.091 239460 DEBUG nova.network.neutron [req-e95973ee-f5c8-4920-94fe-cb1b29ff76a5 req-fc0eecc8-6e6d-4c7f-ab73-7a03540d6d1f 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] Updated VIF entry in instance network info cache for port 1a3a9194-8658-4eaa-940b-a73151c9d5cb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 29 12:31:55 np0005601226 nova_compute[239456]: 2026-01-29 17:31:55.092 239460 DEBUG nova.network.neutron [req-e95973ee-f5c8-4920-94fe-cb1b29ff76a5 req-fc0eecc8-6e6d-4c7f-ab73-7a03540d6d1f 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] Updating instance_info_cache with network_info: [{"id": "1a3a9194-8658-4eaa-940b-a73151c9d5cb", "address": "fa:16:3e:d5:9b:42", "network": {"id": "25cf1715-f178-4f65-be7c-cf203c28f072", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-516658087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c74297072cc041019fc7ff4bff1a0f08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a3a9194-86", "ovs_interfaceid": "1a3a9194-8658-4eaa-940b-a73151c9d5cb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:31:55 np0005601226 nova_compute[239456]: 2026-01-29 17:31:55.119 239460 DEBUG oslo_concurrency.lockutils [req-e95973ee-f5c8-4920-94fe-cb1b29ff76a5 req-fc0eecc8-6e6d-4c7f-ab73-7a03540d6d1f 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Releasing lock "refresh_cache-ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:31:55 np0005601226 nova_compute[239456]: 2026-01-29 17:31:55.168 239460 DEBUG nova.network.neutron [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] Updating instance_info_cache with network_info: [{"id": "266f0d91-fecd-4c22-a0ff-80edcaec94fd", "address": "fa:16:3e:f7:3d:76", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap266f0d91-fe", "ovs_interfaceid": "266f0d91-fecd-4c22-a0ff-80edcaec94fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:31:55 np0005601226 nova_compute[239456]: 2026-01-29 17:31:55.190 239460 DEBUG oslo_concurrency.lockutils [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Releasing lock "refresh_cache-09e74043-3065-4c0b-bffa-930cc1a7f21f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:31:55 np0005601226 nova_compute[239456]: 2026-01-29 17:31:55.190 239460 DEBUG nova.compute.manager [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] Instance network_info: |[{"id": "266f0d91-fecd-4c22-a0ff-80edcaec94fd", "address": "fa:16:3e:f7:3d:76", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap266f0d91-fe", "ovs_interfaceid": "266f0d91-fecd-4c22-a0ff-80edcaec94fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 29 12:31:55 np0005601226 nova_compute[239456]: 2026-01-29 17:31:55.191 239460 DEBUG oslo_concurrency.processutils [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx11new1b" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
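The config-drive build above shells out to mkisofs. This sketch reduces that invocation to its argv list, with the paths and publisher string taken from the log; it only constructs the command (building a real ISO would require mkisofs or genisoimage on the host):

```python
# Sketch: the config-drive ISO build Nova logged above, reduced to the
# argv it hands to oslo.concurrency's execute(). Constructs the command
# only; it does not run mkisofs.

def configdrive_argv(iso_path, staging_dir, publisher):
    """Assemble the mkisofs argv for a config-2 ISO from a staging dir."""
    return [
        "/usr/bin/mkisofs",
        "-o", iso_path,
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher", publisher,   # single argv element, despite spaces
        "-quiet", "-J", "-r",
        "-V", "config-2",          # the volume label cloud-init looks for
        staging_dir,
    ]

argv = configdrive_argv(
    "/var/lib/nova/instances/ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48/disk.config",
    "/tmp/tmpx11new1b",
    "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
)
print(" ".join(argv))
```

Note the publisher string is one argv element even though the flattened log line shows it unquoted.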
Jan 29 12:31:55 np0005601226 nova_compute[239456]: 2026-01-29 17:31:55.192 239460 DEBUG oslo_concurrency.lockutils [req-82702a2f-c62b-4374-8780-66c02ebd8955 req-083ac3ae-0deb-4647-bbe1-9806e095f397 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquired lock "refresh_cache-09e74043-3065-4c0b-bffa-930cc1a7f21f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:31:55 np0005601226 nova_compute[239456]: 2026-01-29 17:31:55.192 239460 DEBUG nova.network.neutron [req-82702a2f-c62b-4374-8780-66c02ebd8955 req-083ac3ae-0deb-4647-bbe1-9806e095f397 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] Refreshing network info cache for port 266f0d91-fecd-4c22-a0ff-80edcaec94fd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 29 12:31:55 np0005601226 nova_compute[239456]: 2026-01-29 17:31:55.200 239460 DEBUG nova.virt.libvirt.driver [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] Start _get_guest_xml network_info=[{"id": "266f0d91-fecd-4c22-a0ff-80edcaec94fd", "address": "fa:16:3e:f7:3d:76", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap266f0d91-fe", "ovs_interfaceid": "266f0d91-fecd-4c22-a0ff-80edcaec94fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'guest_format': None, 'mount_device': '/dev/vda', 'device_type': 'disk', 'disk_bus': 'virtio', 'attachment_id': '62747fb9-c76f-41dd-9796-6eddb957df4c', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-6b2672b1-9741-4acf-8227-c1aae3771a70', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '6b2672b1-9741-4acf-8227-c1aae3771a70', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '09e74043-3065-4c0b-bffa-930cc1a7f21f', 'attached_at': '', 'detached_at': '', 'volume_id': '6b2672b1-9741-4acf-8227-c1aae3771a70', 'serial': '6b2672b1-9741-4acf-8227-c1aae3771a70'}, 'delete_on_termination': False, 'boot_index': 0, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 29 12:31:55 np0005601226 nova_compute[239456]: 2026-01-29 17:31:55.229 239460 DEBUG nova.storage.rbd_utils [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] rbd image ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:31:55 np0005601226 nova_compute[239456]: 2026-01-29 17:31:55.233 239460 DEBUG oslo_concurrency.processutils [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48/disk.config ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:31:55 np0005601226 nova_compute[239456]: 2026-01-29 17:31:55.257 239460 WARNING nova.virt.libvirt.driver [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 29 12:31:55 np0005601226 nova_compute[239456]: 2026-01-29 17:31:55.270 239460 DEBUG nova.virt.libvirt.host [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 29 12:31:55 np0005601226 nova_compute[239456]: 2026-01-29 17:31:55.271 239460 DEBUG nova.virt.libvirt.host [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 29 12:31:55 np0005601226 nova_compute[239456]: 2026-01-29 17:31:55.274 239460 DEBUG nova.virt.libvirt.host [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 29 12:31:55 np0005601226 nova_compute[239456]: 2026-01-29 17:31:55.275 239460 DEBUG nova.virt.libvirt.host [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 29 12:31:55 np0005601226 nova_compute[239456]: 2026-01-29 17:31:55.275 239460 DEBUG nova.virt.libvirt.driver [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 29 12:31:55 np0005601226 nova_compute[239456]: 2026-01-29 17:31:55.276 239460 DEBUG nova.virt.hardware [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-29T17:13:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='e367d44e-e23e-4b8e-90d1-56d09c8403b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 29 12:31:55 np0005601226 nova_compute[239456]: 2026-01-29 17:31:55.276 239460 DEBUG nova.virt.hardware [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 29 12:31:55 np0005601226 nova_compute[239456]: 2026-01-29 17:31:55.277 239460 DEBUG nova.virt.hardware [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 29 12:31:55 np0005601226 nova_compute[239456]: 2026-01-29 17:31:55.277 239460 DEBUG nova.virt.hardware [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 29 12:31:55 np0005601226 nova_compute[239456]: 2026-01-29 17:31:55.277 239460 DEBUG nova.virt.hardware [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 29 12:31:55 np0005601226 nova_compute[239456]: 2026-01-29 17:31:55.278 239460 DEBUG nova.virt.hardware [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 29 12:31:55 np0005601226 nova_compute[239456]: 2026-01-29 17:31:55.278 239460 DEBUG nova.virt.hardware [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 29 12:31:55 np0005601226 nova_compute[239456]: 2026-01-29 17:31:55.278 239460 DEBUG nova.virt.hardware [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 29 12:31:55 np0005601226 nova_compute[239456]: 2026-01-29 17:31:55.279 239460 DEBUG nova.virt.hardware [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 29 12:31:55 np0005601226 nova_compute[239456]: 2026-01-29 17:31:55.279 239460 DEBUG nova.virt.hardware [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 29 12:31:55 np0005601226 nova_compute[239456]: 2026-01-29 17:31:55.279 239460 DEBUG nova.virt.hardware [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 29 12:31:55 np0005601226 nova_compute[239456]: 2026-01-29 17:31:55.313 239460 DEBUG nova.storage.rbd_utils [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] rbd image 09e74043-3065-4c0b-bffa-930cc1a7f21f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:31:55 np0005601226 nova_compute[239456]: 2026-01-29 17:31:55.319 239460 DEBUG oslo_concurrency.processutils [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:31:55 np0005601226 nova_compute[239456]: 2026-01-29 17:31:55.403 239460 DEBUG oslo_concurrency.processutils [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48/disk.config ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:31:55 np0005601226 nova_compute[239456]: 2026-01-29 17:31:55.404 239460 INFO nova.virt.libvirt.driver [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] Deleting local config drive /var/lib/nova/instances/ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48/disk.config because it was imported into RBD.#033[00m
Jan 29 12:31:55 np0005601226 kernel: tap1a3a9194-86: entered promiscuous mode
Jan 29 12:31:55 np0005601226 NetworkManager[49020]: <info>  [1769707915.4476] manager: (tap1a3a9194-86): new Tun device (/org/freedesktop/NetworkManager/Devices/117)
Jan 29 12:31:55 np0005601226 nova_compute[239456]: 2026-01-29 17:31:55.450 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:31:55 np0005601226 ovn_controller[145556]: 2026-01-29T17:31:55Z|00211|binding|INFO|Claiming lport 1a3a9194-8658-4eaa-940b-a73151c9d5cb for this chassis.
Jan 29 12:31:55 np0005601226 ovn_controller[145556]: 2026-01-29T17:31:55Z|00212|binding|INFO|1a3a9194-8658-4eaa-940b-a73151c9d5cb: Claiming fa:16:3e:d5:9b:42 10.100.0.10
Jan 29 12:31:55 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:55.460 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d5:9b:42 10.100.0.10'], port_security=['fa:16:3e:d5:9b:42 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-25cf1715-f178-4f65-be7c-cf203c28f072', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c74297072cc041019fc7ff4bff1a0f08', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b52d9814-61c4-42dd-84af-517b84e36907', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3018d9f1-e4b1-490d-94c7-3ffd5dd36627, chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], logical_port=1a3a9194-8658-4eaa-940b-a73151c9d5cb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:31:55 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:55.461 155625 INFO neutron.agent.ovn.metadata.agent [-] Port 1a3a9194-8658-4eaa-940b-a73151c9d5cb in datapath 25cf1715-f178-4f65-be7c-cf203c28f072 bound to our chassis#033[00m
Jan 29 12:31:55 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:55.464 155625 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 25cf1715-f178-4f65-be7c-cf203c28f072#033[00m
Jan 29 12:31:55 np0005601226 ovn_controller[145556]: 2026-01-29T17:31:55Z|00213|binding|INFO|Setting lport 1a3a9194-8658-4eaa-940b-a73151c9d5cb ovn-installed in OVS
Jan 29 12:31:55 np0005601226 ovn_controller[145556]: 2026-01-29T17:31:55Z|00214|binding|INFO|Setting lport 1a3a9194-8658-4eaa-940b-a73151c9d5cb up in Southbound
Jan 29 12:31:55 np0005601226 nova_compute[239456]: 2026-01-29 17:31:55.472 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:31:55 np0005601226 nova_compute[239456]: 2026-01-29 17:31:55.474 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:31:55 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:55.475 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[7d827e1c-c13c-41a1-9498-8f613a9081fc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:31:55 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:55.477 155625 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap25cf1715-f1 in ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 29 12:31:55 np0005601226 systemd-machined[207561]: New machine qemu-23-instance-00000017.
Jan 29 12:31:55 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:55.480 246354 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap25cf1715-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 29 12:31:55 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:55.480 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[3cfd9693-7b0f-4a36-9d23-842ac3d1aa20]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:31:55 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:55.482 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[c6ff2279-6138-4366-8f07-a39ae9e473fa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:31:55 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:55.489 156164 DEBUG oslo.privsep.daemon [-] privsep: reply[e86d9665-9599-43b8-ad1a-36f14cf7f59a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:31:55 np0005601226 systemd[1]: Started Virtual Machine qemu-23-instance-00000017.
Jan 29 12:31:55 np0005601226 systemd-udevd[269177]: Network interface NamePolicy= disabled on kernel command line.
Jan 29 12:31:55 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:55.511 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[43ff34ed-bc07-41d3-a558-82d386263365]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:31:55 np0005601226 NetworkManager[49020]: <info>  [1769707915.5147] device (tap1a3a9194-86): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 29 12:31:55 np0005601226 NetworkManager[49020]: <info>  [1769707915.5159] device (tap1a3a9194-86): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 29 12:31:55 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:55.535 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[9b68f2da-912e-456f-b6a0-a77c6a4717d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:31:55 np0005601226 NetworkManager[49020]: <info>  [1769707915.5398] manager: (tap25cf1715-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/118)
Jan 29 12:31:55 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:55.539 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[caef1160-da01-4594-9bfb-e85fd55d6c8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:31:55 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:55.570 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[6b70796a-e573-4030-813f-4205fccd3d51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:31:55 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:55.574 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[cc85a1fd-bb2b-45e0-beaa-4cccc5016f78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:31:55 np0005601226 NetworkManager[49020]: <info>  [1769707915.5929] device (tap25cf1715-f0): carrier: link connected
Jan 29 12:31:55 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:55.597 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[203a8327-b64b-4551-9f27-2d4a84decbcd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:31:55 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:55.616 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[b87c42aa-7c54-461f-916f-92d26d8351b6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap25cf1715-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:73:50:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 73], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 526909, 'reachable_time': 26812, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269208, 'error': None, 'target': 'ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:31:55 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:55.627 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[d3763121-696c-420d-8193-bcd99110afba]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe73:50ea'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 526909, 'tstamp': 526909}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 269209, 'error': None, 'target': 'ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:31:55 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:55.640 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[0fe5075d-c87f-472a-abe8-18bbd538b12c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap25cf1715-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:73:50:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 73], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 526909, 'reachable_time': 26812, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 269210, 'error': None, 'target': 'ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:31:55 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:55.661 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[bdd7bda6-a812-4bd1-9011-61dda6ee447d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:31:55 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:55.708 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[e998eca9-5d9c-4dbd-a609-596f35c49229]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:31:55 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:55.709 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap25cf1715-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:31:55 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:55.710 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 29 12:31:55 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:55.710 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap25cf1715-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:31:55 np0005601226 NetworkManager[49020]: <info>  [1769707915.7133] manager: (tap25cf1715-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/119)
Jan 29 12:31:55 np0005601226 kernel: tap25cf1715-f0: entered promiscuous mode
Jan 29 12:31:55 np0005601226 nova_compute[239456]: 2026-01-29 17:31:55.714 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:31:55 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:55.716 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap25cf1715-f0, col_values=(('external_ids', {'iface-id': '82a91bf5-9093-4cbd-bfe4-f5d4b5400077'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:31:55 np0005601226 ovn_controller[145556]: 2026-01-29T17:31:55Z|00215|binding|INFO|Releasing lport 82a91bf5-9093-4cbd-bfe4-f5d4b5400077 from this chassis (sb_readonly=0)
Jan 29 12:31:55 np0005601226 nova_compute[239456]: 2026-01-29 17:31:55.718 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:31:55 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:55.719 155625 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/25cf1715-f178-4f65-be7c-cf203c28f072.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/25cf1715-f178-4f65-be7c-cf203c28f072.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 29 12:31:55 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:55.721 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[955ac01d-d150-4058-8a24-adfc2d01edab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:31:55 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:55.722 155625 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 29 12:31:55 np0005601226 ovn_metadata_agent[155620]: global
Jan 29 12:31:55 np0005601226 ovn_metadata_agent[155620]:    log         /dev/log local0 debug
Jan 29 12:31:55 np0005601226 ovn_metadata_agent[155620]:    log-tag     haproxy-metadata-proxy-25cf1715-f178-4f65-be7c-cf203c28f072
Jan 29 12:31:55 np0005601226 ovn_metadata_agent[155620]:    user        root
Jan 29 12:31:55 np0005601226 ovn_metadata_agent[155620]:    group       root
Jan 29 12:31:55 np0005601226 ovn_metadata_agent[155620]:    maxconn     1024
Jan 29 12:31:55 np0005601226 ovn_metadata_agent[155620]:    pidfile     /var/lib/neutron/external/pids/25cf1715-f178-4f65-be7c-cf203c28f072.pid.haproxy
Jan 29 12:31:55 np0005601226 ovn_metadata_agent[155620]:    daemon
Jan 29 12:31:55 np0005601226 ovn_metadata_agent[155620]: 
Jan 29 12:31:55 np0005601226 ovn_metadata_agent[155620]: defaults
Jan 29 12:31:55 np0005601226 ovn_metadata_agent[155620]:    log global
Jan 29 12:31:55 np0005601226 ovn_metadata_agent[155620]:    mode http
Jan 29 12:31:55 np0005601226 ovn_metadata_agent[155620]:    option httplog
Jan 29 12:31:55 np0005601226 ovn_metadata_agent[155620]:    option dontlognull
Jan 29 12:31:55 np0005601226 ovn_metadata_agent[155620]:    option http-server-close
Jan 29 12:31:55 np0005601226 ovn_metadata_agent[155620]:    option forwardfor
Jan 29 12:31:55 np0005601226 ovn_metadata_agent[155620]:    retries                 3
Jan 29 12:31:55 np0005601226 ovn_metadata_agent[155620]:    timeout http-request    30s
Jan 29 12:31:55 np0005601226 ovn_metadata_agent[155620]:    timeout connect         30s
Jan 29 12:31:55 np0005601226 ovn_metadata_agent[155620]:    timeout client          32s
Jan 29 12:31:55 np0005601226 ovn_metadata_agent[155620]:    timeout server          32s
Jan 29 12:31:55 np0005601226 ovn_metadata_agent[155620]:    timeout http-keep-alive 30s
Jan 29 12:31:55 np0005601226 ovn_metadata_agent[155620]: 
Jan 29 12:31:55 np0005601226 ovn_metadata_agent[155620]: 
Jan 29 12:31:55 np0005601226 ovn_metadata_agent[155620]: listen listener
Jan 29 12:31:55 np0005601226 ovn_metadata_agent[155620]:    bind 169.254.169.254:80
Jan 29 12:31:55 np0005601226 ovn_metadata_agent[155620]:    server metadata /var/lib/neutron/metadata_proxy
Jan 29 12:31:55 np0005601226 ovn_metadata_agent[155620]:    http-request add-header X-OVN-Network-ID 25cf1715-f178-4f65-be7c-cf203c28f072
Jan 29 12:31:55 np0005601226 ovn_metadata_agent[155620]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 29 12:31:55 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:55.724 155625 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072', 'env', 'PROCESS_TAG=haproxy-25cf1715-f178-4f65-be7c-cf203c28f072', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/25cf1715-f178-4f65-be7c-cf203c28f072.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 29 12:31:55 np0005601226 nova_compute[239456]: 2026-01-29 17:31:55.725 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:31:55 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:31:55 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/39540756' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:31:55 np0005601226 nova_compute[239456]: 2026-01-29 17:31:55.909 239460 DEBUG oslo_concurrency.processutils [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:31:55 np0005601226 nova_compute[239456]: 2026-01-29 17:31:55.937 239460 DEBUG nova.virt.libvirt.vif [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-29T17:31:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-344724336',display_name='tempest-TestVolumeBootPattern-server-344724336',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-344724336',id=24,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEFfKI2YuGMM8acVCrFpefV0OhBXhmnc4Btak8xqfpJ+fgijtCxB67Cy872WiGoem7r2HhA8kwucFDoWkV8oHKl/YL0dPzqXRyF6oegJsH40c+wdyEi0ybB2qGD+38ijJg==',key_name='tempest-TestVolumeBootPattern-101947667',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='420f46ae230d4c529afe366a1b780921',ramdisk_id='',reservation_id='r-tw5521qc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1871389491',owner_user_name='tempest-TestVolumeBootPattern-1871389491-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-29T17:31:51Z,user_data=None,user_id='3901089a059c4bdb8d0497398873d2f1',uuid=09e74043-3065-4c0b-bffa-930cc1a7f21f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "266f0d91-fecd-4c22-a0ff-80edcaec94fd", "address": "fa:16:3e:f7:3d:76", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap266f0d91-fe", "ovs_interfaceid": "266f0d91-fecd-4c22-a0ff-80edcaec94fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 29 12:31:55 np0005601226 nova_compute[239456]: 2026-01-29 17:31:55.937 239460 DEBUG nova.network.os_vif_util [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Converting VIF {"id": "266f0d91-fecd-4c22-a0ff-80edcaec94fd", "address": "fa:16:3e:f7:3d:76", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap266f0d91-fe", "ovs_interfaceid": "266f0d91-fecd-4c22-a0ff-80edcaec94fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 29 12:31:55 np0005601226 nova_compute[239456]: 2026-01-29 17:31:55.938 239460 DEBUG nova.network.os_vif_util [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f7:3d:76,bridge_name='br-int',has_traffic_filtering=True,id=266f0d91-fecd-4c22-a0ff-80edcaec94fd,network=Network(3c08c304-2b32-4b44-ac2b-279bb8b2403b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap266f0d91-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 29 12:31:55 np0005601226 nova_compute[239456]: 2026-01-29 17:31:55.940 239460 DEBUG nova.objects.instance [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lazy-loading 'pci_devices' on Instance uuid 09e74043-3065-4c0b-bffa-930cc1a7f21f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:31:55 np0005601226 nova_compute[239456]: 2026-01-29 17:31:55.954 239460 DEBUG nova.virt.libvirt.driver [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] End _get_guest_xml xml=<domain type="kvm">
Jan 29 12:31:55 np0005601226 nova_compute[239456]:  <uuid>09e74043-3065-4c0b-bffa-930cc1a7f21f</uuid>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:  <name>instance-00000018</name>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:  <memory>131072</memory>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:  <vcpu>1</vcpu>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:  <metadata>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 29 12:31:55 np0005601226 nova_compute[239456]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:      <nova:name>tempest-TestVolumeBootPattern-server-344724336</nova:name>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:      <nova:creationTime>2026-01-29 17:31:55</nova:creationTime>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:      <nova:flavor name="m1.nano">
Jan 29 12:31:55 np0005601226 nova_compute[239456]:        <nova:memory>128</nova:memory>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:        <nova:disk>1</nova:disk>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:        <nova:swap>0</nova:swap>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:        <nova:ephemeral>0</nova:ephemeral>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:        <nova:vcpus>1</nova:vcpus>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:      </nova:flavor>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:      <nova:owner>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:        <nova:user uuid="3901089a059c4bdb8d0497398873d2f1">tempest-TestVolumeBootPattern-1871389491-project-member</nova:user>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:        <nova:project uuid="420f46ae230d4c529afe366a1b780921">tempest-TestVolumeBootPattern-1871389491</nova:project>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:      </nova:owner>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:      <nova:ports>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:        <nova:port uuid="266f0d91-fecd-4c22-a0ff-80edcaec94fd">
Jan 29 12:31:55 np0005601226 nova_compute[239456]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:        </nova:port>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:      </nova:ports>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:    </nova:instance>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:  </metadata>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:  <sysinfo type="smbios">
Jan 29 12:31:55 np0005601226 nova_compute[239456]:    <system>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:      <entry name="manufacturer">RDO</entry>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:      <entry name="product">OpenStack Compute</entry>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:      <entry name="serial">09e74043-3065-4c0b-bffa-930cc1a7f21f</entry>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:      <entry name="uuid">09e74043-3065-4c0b-bffa-930cc1a7f21f</entry>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:      <entry name="family">Virtual Machine</entry>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:    </system>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:  </sysinfo>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:  <os>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:    <boot dev="hd"/>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:    <smbios mode="sysinfo"/>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:  </os>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:  <features>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:    <acpi/>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:    <apic/>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:    <vmcoreinfo/>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:  </features>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:  <clock offset="utc">
Jan 29 12:31:55 np0005601226 nova_compute[239456]:    <timer name="pit" tickpolicy="delay"/>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:    <timer name="hpet" present="no"/>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:  </clock>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:  <cpu mode="host-model" match="exact">
Jan 29 12:31:55 np0005601226 nova_compute[239456]:    <topology sockets="1" cores="1" threads="1"/>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:  </cpu>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:  <devices>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:    <disk type="network" device="cdrom">
Jan 29 12:31:55 np0005601226 nova_compute[239456]:      <driver type="raw" cache="none"/>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:      <source protocol="rbd" name="vms/09e74043-3065-4c0b-bffa-930cc1a7f21f_disk.config">
Jan 29 12:31:55 np0005601226 nova_compute[239456]:        <host name="192.168.122.100" port="6789"/>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:      </source>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:      <auth username="openstack">
Jan 29 12:31:55 np0005601226 nova_compute[239456]:        <secret type="ceph" uuid="cc5c72e3-31e0-58b9-8731-456117d38f4a"/>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:      </auth>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:      <target dev="sda" bus="sata"/>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:    </disk>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:    <disk type="network" device="disk">
Jan 29 12:31:55 np0005601226 nova_compute[239456]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:      <source protocol="rbd" name="volumes/volume-6b2672b1-9741-4acf-8227-c1aae3771a70">
Jan 29 12:31:55 np0005601226 nova_compute[239456]:        <host name="192.168.122.100" port="6789"/>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:      </source>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:      <auth username="openstack">
Jan 29 12:31:55 np0005601226 nova_compute[239456]:        <secret type="ceph" uuid="cc5c72e3-31e0-58b9-8731-456117d38f4a"/>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:      </auth>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:      <target dev="vda" bus="virtio"/>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:      <serial>6b2672b1-9741-4acf-8227-c1aae3771a70</serial>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:    </disk>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:    <interface type="ethernet">
Jan 29 12:31:55 np0005601226 nova_compute[239456]:      <mac address="fa:16:3e:f7:3d:76"/>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:      <model type="virtio"/>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:      <driver name="vhost" rx_queue_size="512"/>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:      <mtu size="1442"/>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:      <target dev="tap266f0d91-fe"/>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:    </interface>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:    <serial type="pty">
Jan 29 12:31:55 np0005601226 nova_compute[239456]:      <log file="/var/lib/nova/instances/09e74043-3065-4c0b-bffa-930cc1a7f21f/console.log" append="off"/>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:    </serial>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:    <video>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:      <model type="virtio"/>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:    </video>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:    <input type="tablet" bus="usb"/>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:    <rng model="virtio">
Jan 29 12:31:55 np0005601226 nova_compute[239456]:      <backend model="random">/dev/urandom</backend>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:    </rng>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root"/>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:    <controller type="usb" index="0"/>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:    <memballoon model="virtio">
Jan 29 12:31:55 np0005601226 nova_compute[239456]:      <stats period="10"/>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:    </memballoon>
Jan 29 12:31:55 np0005601226 nova_compute[239456]:  </devices>
Jan 29 12:31:55 np0005601226 nova_compute[239456]: </domain>
Jan 29 12:31:55 np0005601226 nova_compute[239456]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 29 12:31:55 np0005601226 nova_compute[239456]: 2026-01-29 17:31:55.955 239460 DEBUG nova.compute.manager [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] Preparing to wait for external event network-vif-plugged-266f0d91-fecd-4c22-a0ff-80edcaec94fd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 29 12:31:55 np0005601226 nova_compute[239456]: 2026-01-29 17:31:55.955 239460 DEBUG oslo_concurrency.lockutils [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Acquiring lock "09e74043-3065-4c0b-bffa-930cc1a7f21f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:31:55 np0005601226 nova_compute[239456]: 2026-01-29 17:31:55.956 239460 DEBUG oslo_concurrency.lockutils [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "09e74043-3065-4c0b-bffa-930cc1a7f21f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:31:55 np0005601226 nova_compute[239456]: 2026-01-29 17:31:55.956 239460 DEBUG oslo_concurrency.lockutils [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "09e74043-3065-4c0b-bffa-930cc1a7f21f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:31:55 np0005601226 nova_compute[239456]: 2026-01-29 17:31:55.957 239460 DEBUG nova.virt.libvirt.vif [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-29T17:31:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-344724336',display_name='tempest-TestVolumeBootPattern-server-344724336',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-344724336',id=24,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEFfKI2YuGMM8acVCrFpefV0OhBXhmnc4Btak8xqfpJ+fgijtCxB67Cy872WiGoem7r2HhA8kwucFDoWkV8oHKl/YL0dPzqXRyF6oegJsH40c+wdyEi0ybB2qGD+38ijJg==',key_name='tempest-TestVolumeBootPattern-101947667',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='420f46ae230d4c529afe366a1b780921',ramdisk_id='',reservation_id='r-tw5521qc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1871389491',owner_user_name='tempest-TestVolumeBootPattern-1871389491-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-29T17:31:51Z,user_data=None,user_id='3901089a059c4bdb8d0497398873d2f1',uuid=09e74043-3065-4c0b-bffa-930cc1a7f21f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "266f0d91-fecd-4c22-a0ff-80edcaec94fd", "address": "fa:16:3e:f7:3d:76", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap266f0d91-fe", "ovs_interfaceid": "266f0d91-fecd-4c22-a0ff-80edcaec94fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 29 12:31:55 np0005601226 nova_compute[239456]: 2026-01-29 17:31:55.957 239460 DEBUG nova.network.os_vif_util [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Converting VIF {"id": "266f0d91-fecd-4c22-a0ff-80edcaec94fd", "address": "fa:16:3e:f7:3d:76", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap266f0d91-fe", "ovs_interfaceid": "266f0d91-fecd-4c22-a0ff-80edcaec94fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 29 12:31:55 np0005601226 nova_compute[239456]: 2026-01-29 17:31:55.958 239460 DEBUG nova.network.os_vif_util [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f7:3d:76,bridge_name='br-int',has_traffic_filtering=True,id=266f0d91-fecd-4c22-a0ff-80edcaec94fd,network=Network(3c08c304-2b32-4b44-ac2b-279bb8b2403b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap266f0d91-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 29 12:31:55 np0005601226 nova_compute[239456]: 2026-01-29 17:31:55.958 239460 DEBUG os_vif [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f7:3d:76,bridge_name='br-int',has_traffic_filtering=True,id=266f0d91-fecd-4c22-a0ff-80edcaec94fd,network=Network(3c08c304-2b32-4b44-ac2b-279bb8b2403b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap266f0d91-fe') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 29 12:31:55 np0005601226 nova_compute[239456]: 2026-01-29 17:31:55.959 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:31:55 np0005601226 nova_compute[239456]: 2026-01-29 17:31:55.959 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:31:55 np0005601226 nova_compute[239456]: 2026-01-29 17:31:55.959 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 29 12:31:55 np0005601226 nova_compute[239456]: 2026-01-29 17:31:55.962 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:31:55 np0005601226 nova_compute[239456]: 2026-01-29 17:31:55.962 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap266f0d91-fe, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:31:55 np0005601226 nova_compute[239456]: 2026-01-29 17:31:55.962 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap266f0d91-fe, col_values=(('external_ids', {'iface-id': '266f0d91-fecd-4c22-a0ff-80edcaec94fd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f7:3d:76', 'vm-uuid': '09e74043-3065-4c0b-bffa-930cc1a7f21f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:31:55 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1682: 305 pgs: 305 active+clean; 283 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 181 KiB/s rd, 3.6 MiB/s wr, 57 op/s
Jan 29 12:31:56 np0005601226 NetworkManager[49020]: <info>  [1769707916.0148] manager: (tap266f0d91-fe): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/120)
Jan 29 12:31:56 np0005601226 nova_compute[239456]: 2026-01-29 17:31:56.014 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:31:56 np0005601226 nova_compute[239456]: 2026-01-29 17:31:56.015 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 29 12:31:56 np0005601226 nova_compute[239456]: 2026-01-29 17:31:56.024 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:31:56 np0005601226 nova_compute[239456]: 2026-01-29 17:31:56.029 239460 INFO os_vif [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f7:3d:76,bridge_name='br-int',has_traffic_filtering=True,id=266f0d91-fecd-4c22-a0ff-80edcaec94fd,network=Network(3c08c304-2b32-4b44-ac2b-279bb8b2403b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap266f0d91-fe')#033[00m
Jan 29 12:31:56 np0005601226 podman[269279]: 2026-01-29 17:31:56.063479973 +0000 UTC m=+0.054081156 container create d076377caa1ff978837ae0ba0a0c89a6b01159f5290937bd71a52a4ac3e52156 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:31:56 np0005601226 nova_compute[239456]: 2026-01-29 17:31:56.080 239460 DEBUG nova.virt.libvirt.driver [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:31:56 np0005601226 nova_compute[239456]: 2026-01-29 17:31:56.083 239460 DEBUG nova.virt.libvirt.driver [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:31:56 np0005601226 nova_compute[239456]: 2026-01-29 17:31:56.083 239460 DEBUG nova.virt.libvirt.driver [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] No VIF found with MAC fa:16:3e:f7:3d:76, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 29 12:31:56 np0005601226 nova_compute[239456]: 2026-01-29 17:31:56.083 239460 INFO nova.virt.libvirt.driver [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] Using config drive#033[00m
Jan 29 12:31:56 np0005601226 systemd[1]: Started libpod-conmon-d076377caa1ff978837ae0ba0a0c89a6b01159f5290937bd71a52a4ac3e52156.scope.
Jan 29 12:31:56 np0005601226 nova_compute[239456]: 2026-01-29 17:31:56.102 239460 DEBUG nova.storage.rbd_utils [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] rbd image 09e74043-3065-4c0b-bffa-930cc1a7f21f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:31:56 np0005601226 nova_compute[239456]: 2026-01-29 17:31:56.113 239460 DEBUG nova.compute.manager [req-c0239b61-741b-4b13-b31a-e999ff8355a9 req-5f25dd94-adae-4539-9ba5-b74abf694fea 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] Received event network-vif-plugged-1a3a9194-8658-4eaa-940b-a73151c9d5cb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:31:56 np0005601226 nova_compute[239456]: 2026-01-29 17:31:56.113 239460 DEBUG oslo_concurrency.lockutils [req-c0239b61-741b-4b13-b31a-e999ff8355a9 req-5f25dd94-adae-4539-9ba5-b74abf694fea 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:31:56 np0005601226 nova_compute[239456]: 2026-01-29 17:31:56.113 239460 DEBUG oslo_concurrency.lockutils [req-c0239b61-741b-4b13-b31a-e999ff8355a9 req-5f25dd94-adae-4539-9ba5-b74abf694fea 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:31:56 np0005601226 nova_compute[239456]: 2026-01-29 17:31:56.114 239460 DEBUG oslo_concurrency.lockutils [req-c0239b61-741b-4b13-b31a-e999ff8355a9 req-5f25dd94-adae-4539-9ba5-b74abf694fea 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:31:56 np0005601226 nova_compute[239456]: 2026-01-29 17:31:56.114 239460 DEBUG nova.compute.manager [req-c0239b61-741b-4b13-b31a-e999ff8355a9 req-5f25dd94-adae-4539-9ba5-b74abf694fea 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] Processing event network-vif-plugged-1a3a9194-8658-4eaa-940b-a73151c9d5cb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 29 12:31:56 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:31:56 np0005601226 podman[269279]: 2026-01-29 17:31:56.030457375 +0000 UTC m=+0.021058578 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 29 12:31:56 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4497bb9622c9cce906f8b9ee76d8d7b897dcb89851d569c6fe4a371df771d297/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 29 12:31:56 np0005601226 podman[269279]: 2026-01-29 17:31:56.149560261 +0000 UTC m=+0.140161474 container init d076377caa1ff978837ae0ba0a0c89a6b01159f5290937bd71a52a4ac3e52156 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 29 12:31:56 np0005601226 podman[269279]: 2026-01-29 17:31:56.163736723 +0000 UTC m=+0.154337906 container start d076377caa1ff978837ae0ba0a0c89a6b01159f5290937bd71a52a4ac3e52156 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 29 12:31:56 np0005601226 neutron-haproxy-ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072[269310]: [NOTICE]   (269317) : New worker (269319) forked
Jan 29 12:31:56 np0005601226 neutron-haproxy-ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072[269310]: [NOTICE]   (269317) : Loading success.
Jan 29 12:31:56 np0005601226 nova_compute[239456]: 2026-01-29 17:31:56.459 239460 INFO nova.virt.libvirt.driver [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] Creating config drive at /var/lib/nova/instances/09e74043-3065-4c0b-bffa-930cc1a7f21f/disk.config#033[00m
Jan 29 12:31:56 np0005601226 nova_compute[239456]: 2026-01-29 17:31:56.462 239460 DEBUG oslo_concurrency.processutils [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/09e74043-3065-4c0b-bffa-930cc1a7f21f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcvwpc1kb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:31:56 np0005601226 nova_compute[239456]: 2026-01-29 17:31:56.592 239460 DEBUG oslo_concurrency.processutils [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/09e74043-3065-4c0b-bffa-930cc1a7f21f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcvwpc1kb" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:31:56 np0005601226 nova_compute[239456]: 2026-01-29 17:31:56.616 239460 DEBUG nova.storage.rbd_utils [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] rbd image 09e74043-3065-4c0b-bffa-930cc1a7f21f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:31:56 np0005601226 nova_compute[239456]: 2026-01-29 17:31:56.622 239460 DEBUG oslo_concurrency.processutils [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/09e74043-3065-4c0b-bffa-930cc1a7f21f/disk.config 09e74043-3065-4c0b-bffa-930cc1a7f21f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:31:56 np0005601226 nova_compute[239456]: 2026-01-29 17:31:56.742 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:31:56 np0005601226 nova_compute[239456]: 2026-01-29 17:31:56.767 239460 DEBUG oslo_concurrency.processutils [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/09e74043-3065-4c0b-bffa-930cc1a7f21f/disk.config 09e74043-3065-4c0b-bffa-930cc1a7f21f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:31:56 np0005601226 nova_compute[239456]: 2026-01-29 17:31:56.767 239460 INFO nova.virt.libvirt.driver [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] Deleting local config drive /var/lib/nova/instances/09e74043-3065-4c0b-bffa-930cc1a7f21f/disk.config because it was imported into RBD.#033[00m
Jan 29 12:31:56 np0005601226 NetworkManager[49020]: <info>  [1769707916.8110] manager: (tap266f0d91-fe): new Tun device (/org/freedesktop/NetworkManager/Devices/121)
Jan 29 12:31:56 np0005601226 kernel: tap266f0d91-fe: entered promiscuous mode
Jan 29 12:31:56 np0005601226 nova_compute[239456]: 2026-01-29 17:31:56.814 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:31:56 np0005601226 ovn_controller[145556]: 2026-01-29T17:31:56Z|00216|binding|INFO|Claiming lport 266f0d91-fecd-4c22-a0ff-80edcaec94fd for this chassis.
Jan 29 12:31:56 np0005601226 ovn_controller[145556]: 2026-01-29T17:31:56Z|00217|binding|INFO|266f0d91-fecd-4c22-a0ff-80edcaec94fd: Claiming fa:16:3e:f7:3d:76 10.100.0.9
Jan 29 12:31:56 np0005601226 ovn_controller[145556]: 2026-01-29T17:31:56Z|00218|binding|INFO|Setting lport 266f0d91-fecd-4c22-a0ff-80edcaec94fd ovn-installed in OVS
Jan 29 12:31:56 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:56.820 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f7:3d:76 10.100.0.9'], port_security=['fa:16:3e:f7:3d:76 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '09e74043-3065-4c0b-bffa-930cc1a7f21f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3c08c304-2b32-4b44-ac2b-279bb8b2403b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '420f46ae230d4c529afe366a1b780921', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9cfba344-fbfc-404d-872d-d297b528124f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=43b4da70-6867-4e05-b172-1e52c878ce1d, chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], logical_port=266f0d91-fecd-4c22-a0ff-80edcaec94fd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:31:56 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:56.823 155625 INFO neutron.agent.ovn.metadata.agent [-] Port 266f0d91-fecd-4c22-a0ff-80edcaec94fd in datapath 3c08c304-2b32-4b44-ac2b-279bb8b2403b bound to our chassis#033[00m
Jan 29 12:31:56 np0005601226 NetworkManager[49020]: <info>  [1769707916.8247] device (tap266f0d91-fe): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 29 12:31:56 np0005601226 NetworkManager[49020]: <info>  [1769707916.8253] device (tap266f0d91-fe): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 29 12:31:56 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:56.826 155625 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3c08c304-2b32-4b44-ac2b-279bb8b2403b#033[00m
Jan 29 12:31:56 np0005601226 nova_compute[239456]: 2026-01-29 17:31:56.827 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:31:56 np0005601226 ovn_controller[145556]: 2026-01-29T17:31:56Z|00219|binding|INFO|Setting lport 266f0d91-fecd-4c22-a0ff-80edcaec94fd up in Southbound
Jan 29 12:31:56 np0005601226 nova_compute[239456]: 2026-01-29 17:31:56.838 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:31:56 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:56.840 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[34eff1e3-01a5-45d0-88df-ae051bd79c4b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:31:56 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:56.858 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[5cd73303-31d7-40a3-b799-0cb5a7ce71c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:31:56 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:56.862 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[387563aa-e134-44e7-a598-4119f8303994]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:31:56 np0005601226 systemd-machined[207561]: New machine qemu-24-instance-00000018.
Jan 29 12:31:56 np0005601226 systemd[1]: Started Virtual Machine qemu-24-instance-00000018.
Jan 29 12:31:56 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:56.877 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[c0c2d368-0698-4f6f-8553-5af5bbc1fc1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:31:56 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:56.890 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[68fcdd53-b459-402d-8839-97d27304d093]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3c08c304-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:94:51:ac'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 70], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 522039, 'reachable_time': 26044, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269386, 'error': None, 'target': 'ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:31:56 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:56.901 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[a8e39fd2-3a4e-4e09-8cf4-a4c9133685f1]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3c08c304-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 522047, 'tstamp': 522047}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 269387, 'error': None, 'target': 'ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3c08c304-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 522049, 'tstamp': 522049}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 269387, 'error': None, 'target': 'ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:31:56 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:56.903 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c08c304-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:31:56 np0005601226 nova_compute[239456]: 2026-01-29 17:31:56.906 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:31:56 np0005601226 nova_compute[239456]: 2026-01-29 17:31:56.907 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:31:56 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:56.908 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3c08c304-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:31:56 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:56.908 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 29 12:31:56 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:56.909 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3c08c304-20, col_values=(('external_ids', {'iface-id': '4f9b16f1-6965-486d-bc02-ab1e4969963e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:31:56 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:31:56.910 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 29 12:31:57 np0005601226 nova_compute[239456]: 2026-01-29 17:31:57.025 239460 DEBUG nova.network.neutron [req-82702a2f-c62b-4374-8780-66c02ebd8955 req-083ac3ae-0deb-4647-bbe1-9806e095f397 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] Updated VIF entry in instance network info cache for port 266f0d91-fecd-4c22-a0ff-80edcaec94fd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 29 12:31:57 np0005601226 nova_compute[239456]: 2026-01-29 17:31:57.026 239460 DEBUG nova.network.neutron [req-82702a2f-c62b-4374-8780-66c02ebd8955 req-083ac3ae-0deb-4647-bbe1-9806e095f397 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] Updating instance_info_cache with network_info: [{"id": "266f0d91-fecd-4c22-a0ff-80edcaec94fd", "address": "fa:16:3e:f7:3d:76", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap266f0d91-fe", "ovs_interfaceid": "266f0d91-fecd-4c22-a0ff-80edcaec94fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:31:57 np0005601226 nova_compute[239456]: 2026-01-29 17:31:57.055 239460 DEBUG oslo_concurrency.lockutils [req-82702a2f-c62b-4374-8780-66c02ebd8955 req-083ac3ae-0deb-4647-bbe1-9806e095f397 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Releasing lock "refresh_cache-09e74043-3065-4c0b-bffa-930cc1a7f21f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:31:57 np0005601226 nova_compute[239456]: 2026-01-29 17:31:57.157 239460 DEBUG nova.compute.manager [req-6b2136ff-068e-436f-a125-9463341ae3a0 req-4ab1359b-3cc1-444c-bfb6-b67a5f2aa7ff 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] Received event network-vif-plugged-266f0d91-fecd-4c22-a0ff-80edcaec94fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:31:57 np0005601226 nova_compute[239456]: 2026-01-29 17:31:57.158 239460 DEBUG oslo_concurrency.lockutils [req-6b2136ff-068e-436f-a125-9463341ae3a0 req-4ab1359b-3cc1-444c-bfb6-b67a5f2aa7ff 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "09e74043-3065-4c0b-bffa-930cc1a7f21f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:31:57 np0005601226 nova_compute[239456]: 2026-01-29 17:31:57.158 239460 DEBUG oslo_concurrency.lockutils [req-6b2136ff-068e-436f-a125-9463341ae3a0 req-4ab1359b-3cc1-444c-bfb6-b67a5f2aa7ff 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "09e74043-3065-4c0b-bffa-930cc1a7f21f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:31:57 np0005601226 nova_compute[239456]: 2026-01-29 17:31:57.159 239460 DEBUG oslo_concurrency.lockutils [req-6b2136ff-068e-436f-a125-9463341ae3a0 req-4ab1359b-3cc1-444c-bfb6-b67a5f2aa7ff 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "09e74043-3065-4c0b-bffa-930cc1a7f21f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:31:57 np0005601226 nova_compute[239456]: 2026-01-29 17:31:57.159 239460 DEBUG nova.compute.manager [req-6b2136ff-068e-436f-a125-9463341ae3a0 req-4ab1359b-3cc1-444c-bfb6-b67a5f2aa7ff 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] Processing event network-vif-plugged-266f0d91-fecd-4c22-a0ff-80edcaec94fd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 29 12:31:57 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:31:57 np0005601226 nova_compute[239456]: 2026-01-29 17:31:57.828 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769707917.828273, 09e74043-3065-4c0b-bffa-930cc1a7f21f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 29 12:31:57 np0005601226 nova_compute[239456]: 2026-01-29 17:31:57.829 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] VM Started (Lifecycle Event)
Jan 29 12:31:57 np0005601226 nova_compute[239456]: 2026-01-29 17:31:57.831 239460 DEBUG nova.compute.manager [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 29 12:31:57 np0005601226 nova_compute[239456]: 2026-01-29 17:31:57.835 239460 DEBUG nova.virt.libvirt.driver [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 29 12:31:57 np0005601226 nova_compute[239456]: 2026-01-29 17:31:57.838 239460 INFO nova.virt.libvirt.driver [-] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] Instance spawned successfully.
Jan 29 12:31:57 np0005601226 nova_compute[239456]: 2026-01-29 17:31:57.839 239460 DEBUG nova.virt.libvirt.driver [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 29 12:31:57 np0005601226 nova_compute[239456]: 2026-01-29 17:31:57.855 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 29 12:31:57 np0005601226 nova_compute[239456]: 2026-01-29 17:31:57.857 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 29 12:31:57 np0005601226 nova_compute[239456]: 2026-01-29 17:31:57.865 239460 DEBUG nova.virt.libvirt.driver [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 29 12:31:57 np0005601226 nova_compute[239456]: 2026-01-29 17:31:57.866 239460 DEBUG nova.virt.libvirt.driver [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 29 12:31:57 np0005601226 nova_compute[239456]: 2026-01-29 17:31:57.866 239460 DEBUG nova.virt.libvirt.driver [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 29 12:31:57 np0005601226 nova_compute[239456]: 2026-01-29 17:31:57.867 239460 DEBUG nova.virt.libvirt.driver [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 29 12:31:57 np0005601226 nova_compute[239456]: 2026-01-29 17:31:57.867 239460 DEBUG nova.virt.libvirt.driver [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 29 12:31:57 np0005601226 nova_compute[239456]: 2026-01-29 17:31:57.868 239460 DEBUG nova.virt.libvirt.driver [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 29 12:31:57 np0005601226 nova_compute[239456]: 2026-01-29 17:31:57.901 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 29 12:31:57 np0005601226 nova_compute[239456]: 2026-01-29 17:31:57.901 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769707917.8283722, 09e74043-3065-4c0b-bffa-930cc1a7f21f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 29 12:31:57 np0005601226 nova_compute[239456]: 2026-01-29 17:31:57.901 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] VM Paused (Lifecycle Event)
Jan 29 12:31:57 np0005601226 nova_compute[239456]: 2026-01-29 17:31:57.943 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 29 12:31:57 np0005601226 nova_compute[239456]: 2026-01-29 17:31:57.946 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769707917.8345578, 09e74043-3065-4c0b-bffa-930cc1a7f21f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 29 12:31:57 np0005601226 nova_compute[239456]: 2026-01-29 17:31:57.946 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] VM Resumed (Lifecycle Event)
Jan 29 12:31:57 np0005601226 nova_compute[239456]: 2026-01-29 17:31:57.963 239460 INFO nova.compute.manager [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] Took 5.25 seconds to spawn the instance on the hypervisor.
Jan 29 12:31:57 np0005601226 nova_compute[239456]: 2026-01-29 17:31:57.964 239460 DEBUG nova.compute.manager [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 29 12:31:57 np0005601226 nova_compute[239456]: 2026-01-29 17:31:57.978 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 29 12:31:57 np0005601226 nova_compute[239456]: 2026-01-29 17:31:57.980 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 29 12:31:57 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1683: 305 pgs: 305 active+clean; 283 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 89 KiB/s rd, 6.0 KiB/s wr, 19 op/s
Jan 29 12:31:58 np0005601226 nova_compute[239456]: 2026-01-29 17:31:58.045 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 29 12:31:58 np0005601226 nova_compute[239456]: 2026-01-29 17:31:58.085 239460 INFO nova.compute.manager [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] Took 7.59 seconds to build instance.
Jan 29 12:31:58 np0005601226 nova_compute[239456]: 2026-01-29 17:31:58.112 239460 DEBUG oslo_concurrency.lockutils [None req-62069307-811a-4660-a2ab-731b7c633b90 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "09e74043-3065-4c0b-bffa-930cc1a7f21f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.727s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 12:31:58 np0005601226 nova_compute[239456]: 2026-01-29 17:31:58.134 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769707918.133467, ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 29 12:31:58 np0005601226 nova_compute[239456]: 2026-01-29 17:31:58.134 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] VM Started (Lifecycle Event)
Jan 29 12:31:58 np0005601226 nova_compute[239456]: 2026-01-29 17:31:58.136 239460 DEBUG nova.compute.manager [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 29 12:31:58 np0005601226 nova_compute[239456]: 2026-01-29 17:31:58.139 239460 DEBUG nova.virt.libvirt.driver [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 29 12:31:58 np0005601226 nova_compute[239456]: 2026-01-29 17:31:58.142 239460 INFO nova.virt.libvirt.driver [-] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] Instance spawned successfully.
Jan 29 12:31:58 np0005601226 nova_compute[239456]: 2026-01-29 17:31:58.142 239460 DEBUG nova.virt.libvirt.driver [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 29 12:31:58 np0005601226 nova_compute[239456]: 2026-01-29 17:31:58.154 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 29 12:31:58 np0005601226 nova_compute[239456]: 2026-01-29 17:31:58.158 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 29 12:31:58 np0005601226 nova_compute[239456]: 2026-01-29 17:31:58.180 239460 DEBUG nova.virt.libvirt.driver [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 29 12:31:58 np0005601226 nova_compute[239456]: 2026-01-29 17:31:58.180 239460 DEBUG nova.virt.libvirt.driver [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 29 12:31:58 np0005601226 nova_compute[239456]: 2026-01-29 17:31:58.181 239460 DEBUG nova.virt.libvirt.driver [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 29 12:31:58 np0005601226 nova_compute[239456]: 2026-01-29 17:31:58.181 239460 DEBUG nova.virt.libvirt.driver [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 29 12:31:58 np0005601226 nova_compute[239456]: 2026-01-29 17:31:58.181 239460 DEBUG nova.virt.libvirt.driver [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 29 12:31:58 np0005601226 nova_compute[239456]: 2026-01-29 17:31:58.182 239460 DEBUG nova.virt.libvirt.driver [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 29 12:31:58 np0005601226 nova_compute[239456]: 2026-01-29 17:31:58.226 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 29 12:31:58 np0005601226 nova_compute[239456]: 2026-01-29 17:31:58.226 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769707918.1335666, ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 29 12:31:58 np0005601226 nova_compute[239456]: 2026-01-29 17:31:58.227 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] VM Paused (Lifecycle Event)
Jan 29 12:31:58 np0005601226 nova_compute[239456]: 2026-01-29 17:31:58.256 239460 DEBUG nova.compute.manager [req-b7c12518-0c62-4302-a02a-89a9633821f2 req-acfdb835-4eb3-4822-ae6a-e10a3f17dcd4 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] Received event network-vif-plugged-1a3a9194-8658-4eaa-940b-a73151c9d5cb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 29 12:31:58 np0005601226 nova_compute[239456]: 2026-01-29 17:31:58.256 239460 DEBUG oslo_concurrency.lockutils [req-b7c12518-0c62-4302-a02a-89a9633821f2 req-acfdb835-4eb3-4822-ae6a-e10a3f17dcd4 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 12:31:58 np0005601226 nova_compute[239456]: 2026-01-29 17:31:58.257 239460 DEBUG oslo_concurrency.lockutils [req-b7c12518-0c62-4302-a02a-89a9633821f2 req-acfdb835-4eb3-4822-ae6a-e10a3f17dcd4 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 12:31:58 np0005601226 nova_compute[239456]: 2026-01-29 17:31:58.257 239460 DEBUG oslo_concurrency.lockutils [req-b7c12518-0c62-4302-a02a-89a9633821f2 req-acfdb835-4eb3-4822-ae6a-e10a3f17dcd4 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 12:31:58 np0005601226 nova_compute[239456]: 2026-01-29 17:31:58.257 239460 DEBUG nova.compute.manager [req-b7c12518-0c62-4302-a02a-89a9633821f2 req-acfdb835-4eb3-4822-ae6a-e10a3f17dcd4 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] No waiting events found dispatching network-vif-plugged-1a3a9194-8658-4eaa-940b-a73151c9d5cb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 29 12:31:58 np0005601226 nova_compute[239456]: 2026-01-29 17:31:58.257 239460 WARNING nova.compute.manager [req-b7c12518-0c62-4302-a02a-89a9633821f2 req-acfdb835-4eb3-4822-ae6a-e10a3f17dcd4 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] Received unexpected event network-vif-plugged-1a3a9194-8658-4eaa-940b-a73151c9d5cb for instance with vm_state building and task_state spawning.
Jan 29 12:31:58 np0005601226 nova_compute[239456]: 2026-01-29 17:31:58.277 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 29 12:31:58 np0005601226 nova_compute[239456]: 2026-01-29 17:31:58.281 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769707918.145305, ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 29 12:31:58 np0005601226 nova_compute[239456]: 2026-01-29 17:31:58.281 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] VM Resumed (Lifecycle Event)
Jan 29 12:31:58 np0005601226 nova_compute[239456]: 2026-01-29 17:31:58.295 239460 INFO nova.compute.manager [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] Took 7.74 seconds to spawn the instance on the hypervisor.
Jan 29 12:31:58 np0005601226 nova_compute[239456]: 2026-01-29 17:31:58.295 239460 DEBUG nova.compute.manager [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 29 12:31:58 np0005601226 nova_compute[239456]: 2026-01-29 17:31:58.306 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 29 12:31:58 np0005601226 nova_compute[239456]: 2026-01-29 17:31:58.310 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 29 12:31:58 np0005601226 nova_compute[239456]: 2026-01-29 17:31:58.365 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 29 12:31:58 np0005601226 nova_compute[239456]: 2026-01-29 17:31:58.421 239460 INFO nova.compute.manager [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] Took 10.19 seconds to build instance.
Jan 29 12:31:58 np0005601226 nova_compute[239456]: 2026-01-29 17:31:58.445 239460 DEBUG oslo_concurrency.lockutils [None req-1c44c2ec-e8fe-4ab4-9100-c8c585c4f0f7 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Lock "ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.276s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 12:31:59 np0005601226 nova_compute[239456]: 2026-01-29 17:31:59.253 239460 DEBUG nova.compute.manager [req-440bc13a-bde9-4477-a2ad-9f2100d25b7e req-9620e736-2d2f-4f56-8a9f-471f9ecf9ca5 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] Received event network-vif-plugged-266f0d91-fecd-4c22-a0ff-80edcaec94fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 29 12:31:59 np0005601226 nova_compute[239456]: 2026-01-29 17:31:59.253 239460 DEBUG oslo_concurrency.lockutils [req-440bc13a-bde9-4477-a2ad-9f2100d25b7e req-9620e736-2d2f-4f56-8a9f-471f9ecf9ca5 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "09e74043-3065-4c0b-bffa-930cc1a7f21f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 12:31:59 np0005601226 nova_compute[239456]: 2026-01-29 17:31:59.254 239460 DEBUG oslo_concurrency.lockutils [req-440bc13a-bde9-4477-a2ad-9f2100d25b7e req-9620e736-2d2f-4f56-8a9f-471f9ecf9ca5 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "09e74043-3065-4c0b-bffa-930cc1a7f21f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 12:31:59 np0005601226 nova_compute[239456]: 2026-01-29 17:31:59.254 239460 DEBUG oslo_concurrency.lockutils [req-440bc13a-bde9-4477-a2ad-9f2100d25b7e req-9620e736-2d2f-4f56-8a9f-471f9ecf9ca5 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "09e74043-3065-4c0b-bffa-930cc1a7f21f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 12:31:59 np0005601226 nova_compute[239456]: 2026-01-29 17:31:59.254 239460 DEBUG nova.compute.manager [req-440bc13a-bde9-4477-a2ad-9f2100d25b7e req-9620e736-2d2f-4f56-8a9f-471f9ecf9ca5 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] No waiting events found dispatching network-vif-plugged-266f0d91-fecd-4c22-a0ff-80edcaec94fd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 29 12:31:59 np0005601226 nova_compute[239456]: 2026-01-29 17:31:59.254 239460 WARNING nova.compute.manager [req-440bc13a-bde9-4477-a2ad-9f2100d25b7e req-9620e736-2d2f-4f56-8a9f-471f9ecf9ca5 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] Received unexpected event network-vif-plugged-266f0d91-fecd-4c22-a0ff-80edcaec94fd for instance with vm_state active and task_state None.
Jan 29 12:31:59 np0005601226 podman[269442]: 2026-01-29 17:31:59.908803502 +0000 UTC m=+0.081762283 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 29 12:31:59 np0005601226 podman[269443]: 2026-01-29 17:31:59.914486295 +0000 UTC m=+0.081679180 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 29 12:31:59 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1684: 305 pgs: 305 active+clean; 283 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 30 KiB/s wr, 68 op/s
Jan 29 12:32:01 np0005601226 nova_compute[239456]: 2026-01-29 17:32:01.049 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:32:01 np0005601226 nova_compute[239456]: 2026-01-29 17:32:01.745 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:32:01 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1685: 305 pgs: 305 active+clean; 283 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 26 KiB/s wr, 115 op/s
Jan 29 12:32:02 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:32:03 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1686: 305 pgs: 305 active+clean; 283 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 25 KiB/s wr, 140 op/s
Jan 29 12:32:04 np0005601226 nova_compute[239456]: 2026-01-29 17:32:04.249 239460 DEBUG nova.compute.manager [req-47551f8b-5399-4758-92e3-64fcf216b066 req-26346c0c-1e32-4924-abe0-9c0c2e25872f 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] Received event network-changed-1a3a9194-8658-4eaa-940b-a73151c9d5cb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 29 12:32:04 np0005601226 nova_compute[239456]: 2026-01-29 17:32:04.250 239460 DEBUG nova.compute.manager [req-47551f8b-5399-4758-92e3-64fcf216b066 req-26346c0c-1e32-4924-abe0-9c0c2e25872f 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] Refreshing instance network info cache due to event network-changed-1a3a9194-8658-4eaa-940b-a73151c9d5cb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 29 12:32:04 np0005601226 nova_compute[239456]: 2026-01-29 17:32:04.251 239460 DEBUG oslo_concurrency.lockutils [req-47551f8b-5399-4758-92e3-64fcf216b066 req-26346c0c-1e32-4924-abe0-9c0c2e25872f 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "refresh_cache-ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 29 12:32:04 np0005601226 nova_compute[239456]: 2026-01-29 17:32:04.251 239460 DEBUG oslo_concurrency.lockutils [req-47551f8b-5399-4758-92e3-64fcf216b066 req-26346c0c-1e32-4924-abe0-9c0c2e25872f 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquired lock "refresh_cache-ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 29 12:32:04 np0005601226 nova_compute[239456]: 2026-01-29 17:32:04.252 239460 DEBUG nova.network.neutron [req-47551f8b-5399-4758-92e3-64fcf216b066 req-26346c0c-1e32-4924-abe0-9c0c2e25872f 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] Refreshing network info cache for port 1a3a9194-8658-4eaa-940b-a73151c9d5cb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 29 12:32:05 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1687: 305 pgs: 305 active+clean; 283 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 25 KiB/s wr, 150 op/s
Jan 29 12:32:06 np0005601226 nova_compute[239456]: 2026-01-29 17:32:06.051 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:32:06 np0005601226 nova_compute[239456]: 2026-01-29 17:32:06.349 239460 DEBUG nova.compute.manager [req-09a5bf7d-0f3d-4d7e-ae60-1609fadab94e req-ff1e12d0-0871-4c74-b9f5-672a86658e65 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] Received event network-changed-266f0d91-fecd-4c22-a0ff-80edcaec94fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 29 12:32:06 np0005601226 nova_compute[239456]: 2026-01-29 17:32:06.349 239460 DEBUG nova.compute.manager [req-09a5bf7d-0f3d-4d7e-ae60-1609fadab94e req-ff1e12d0-0871-4c74-b9f5-672a86658e65 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] Refreshing instance network info cache due to event network-changed-266f0d91-fecd-4c22-a0ff-80edcaec94fd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 29 12:32:06 np0005601226 nova_compute[239456]: 2026-01-29 17:32:06.350 239460 DEBUG oslo_concurrency.lockutils [req-09a5bf7d-0f3d-4d7e-ae60-1609fadab94e req-ff1e12d0-0871-4c74-b9f5-672a86658e65 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "refresh_cache-09e74043-3065-4c0b-bffa-930cc1a7f21f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 29 12:32:06 np0005601226 nova_compute[239456]: 2026-01-29 17:32:06.350 239460 DEBUG oslo_concurrency.lockutils [req-09a5bf7d-0f3d-4d7e-ae60-1609fadab94e req-ff1e12d0-0871-4c74-b9f5-672a86658e65 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquired lock "refresh_cache-09e74043-3065-4c0b-bffa-930cc1a7f21f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 29 12:32:06 np0005601226 nova_compute[239456]: 2026-01-29 17:32:06.351 239460 DEBUG nova.network.neutron [req-09a5bf7d-0f3d-4d7e-ae60-1609fadab94e req-ff1e12d0-0871-4c74-b9f5-672a86658e65 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] Refreshing network info cache for port 266f0d91-fecd-4c22-a0ff-80edcaec94fd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 29 12:32:06 np0005601226 nova_compute[239456]: 2026-01-29 17:32:06.433 239460 DEBUG nova.network.neutron [req-47551f8b-5399-4758-92e3-64fcf216b066 req-26346c0c-1e32-4924-abe0-9c0c2e25872f 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] Updated VIF entry in instance network info cache for port 1a3a9194-8658-4eaa-940b-a73151c9d5cb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 29 12:32:06 np0005601226 nova_compute[239456]: 2026-01-29 17:32:06.434 239460 DEBUG nova.network.neutron [req-47551f8b-5399-4758-92e3-64fcf216b066 req-26346c0c-1e32-4924-abe0-9c0c2e25872f 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] Updating instance_info_cache with network_info: [{"id": "1a3a9194-8658-4eaa-940b-a73151c9d5cb", "address": "fa:16:3e:d5:9b:42", "network": {"id": "25cf1715-f178-4f65-be7c-cf203c28f072", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-516658087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c74297072cc041019fc7ff4bff1a0f08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a3a9194-86", "ovs_interfaceid": "1a3a9194-8658-4eaa-940b-a73151c9d5cb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 29 12:32:06 np0005601226 nova_compute[239456]: 2026-01-29 17:32:06.475 239460 DEBUG oslo_concurrency.lockutils [req-47551f8b-5399-4758-92e3-64fcf216b066 req-26346c0c-1e32-4924-abe0-9c0c2e25872f 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Releasing lock "refresh_cache-ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 29 12:32:06 np0005601226 nova_compute[239456]: 2026-01-29 17:32:06.747 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:32:07 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:32:07 np0005601226 nova_compute[239456]: 2026-01-29 17:32:07.984 239460 DEBUG nova.network.neutron [req-09a5bf7d-0f3d-4d7e-ae60-1609fadab94e req-ff1e12d0-0871-4c74-b9f5-672a86658e65 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] Updated VIF entry in instance network info cache for port 266f0d91-fecd-4c22-a0ff-80edcaec94fd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 29 12:32:07 np0005601226 nova_compute[239456]: 2026-01-29 17:32:07.985 239460 DEBUG nova.network.neutron [req-09a5bf7d-0f3d-4d7e-ae60-1609fadab94e req-ff1e12d0-0871-4c74-b9f5-672a86658e65 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] Updating instance_info_cache with network_info: [{"id": "266f0d91-fecd-4c22-a0ff-80edcaec94fd", "address": "fa:16:3e:f7:3d:76", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap266f0d91-fe", "ovs_interfaceid": "266f0d91-fecd-4c22-a0ff-80edcaec94fd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 29 12:32:07 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1688: 305 pgs: 305 active+clean; 283 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 25 KiB/s wr, 144 op/s
Jan 29 12:32:08 np0005601226 nova_compute[239456]: 2026-01-29 17:32:08.007 239460 DEBUG oslo_concurrency.lockutils [req-09a5bf7d-0f3d-4d7e-ae60-1609fadab94e req-ff1e12d0-0871-4c74-b9f5-672a86658e65 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Releasing lock "refresh_cache-09e74043-3065-4c0b-bffa-930cc1a7f21f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 29 12:32:09 np0005601226 ovn_controller[145556]: 2026-01-29T17:32:09Z|00052|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.11 does not match offer 10.100.0.9
Jan 29 12:32:09 np0005601226 ovn_controller[145556]: 2026-01-29T17:32:09Z|00053|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:f7:3d:76 10.100.0.9
Jan 29 12:32:10 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1689: 305 pgs: 305 active+clean; 292 MiB data, 585 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 592 KiB/s wr, 173 op/s
Jan 29 12:32:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:32:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:32:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:32:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:32:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:32:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:32:10 np0005601226 ovn_controller[145556]: 2026-01-29T17:32:10Z|00054|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d5:9b:42 10.100.0.10
Jan 29 12:32:10 np0005601226 ovn_controller[145556]: 2026-01-29T17:32:10Z|00055|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d5:9b:42 10.100.0.10
Jan 29 12:32:10 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 29 12:32:10 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 29 12:32:10 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 12:32:10 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 12:32:10 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 12:32:10 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 12:32:10 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 12:32:10 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:32:10 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 12:32:10 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 12:32:10 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 12:32:10 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 12:32:10 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 12:32:10 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 12:32:11 np0005601226 nova_compute[239456]: 2026-01-29 17:32:11.081 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:32:11 np0005601226 podman[269625]: 2026-01-29 17:32:11.387642149 +0000 UTC m=+0.078420322 container create 66502cbf2001f667cef2b54c287fc3a25251c33dc3c8d0297cfcb661be31dcee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_darwin, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:32:11 np0005601226 podman[269625]: 2026-01-29 17:32:11.327608003 +0000 UTC m=+0.018386196 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:32:11 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 29 12:32:11 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 12:32:11 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:32:11 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 12:32:11 np0005601226 systemd[1]: Started libpod-conmon-66502cbf2001f667cef2b54c287fc3a25251c33dc3c8d0297cfcb661be31dcee.scope.
Jan 29 12:32:11 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:32:11 np0005601226 podman[269625]: 2026-01-29 17:32:11.6220369 +0000 UTC m=+0.312815083 container init 66502cbf2001f667cef2b54c287fc3a25251c33dc3c8d0297cfcb661be31dcee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 29 12:32:11 np0005601226 podman[269625]: 2026-01-29 17:32:11.628027321 +0000 UTC m=+0.318805494 container start 66502cbf2001f667cef2b54c287fc3a25251c33dc3c8d0297cfcb661be31dcee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_darwin, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 29 12:32:11 np0005601226 podman[269625]: 2026-01-29 17:32:11.633338794 +0000 UTC m=+0.324116967 container attach 66502cbf2001f667cef2b54c287fc3a25251c33dc3c8d0297cfcb661be31dcee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_darwin, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 29 12:32:11 np0005601226 unruffled_darwin[269641]: 167 167
Jan 29 12:32:11 np0005601226 systemd[1]: libpod-66502cbf2001f667cef2b54c287fc3a25251c33dc3c8d0297cfcb661be31dcee.scope: Deactivated successfully.
Jan 29 12:32:11 np0005601226 podman[269646]: 2026-01-29 17:32:11.698178669 +0000 UTC m=+0.042005712 container died 66502cbf2001f667cef2b54c287fc3a25251c33dc3c8d0297cfcb661be31dcee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 29 12:32:11 np0005601226 systemd[1]: var-lib-containers-storage-overlay-3d782215ad77878540b6a5507b4e8ea53b8db4bcdd89e41ef25af6b21063f556-merged.mount: Deactivated successfully.
Jan 29 12:32:11 np0005601226 nova_compute[239456]: 2026-01-29 17:32:11.750 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:32:11 np0005601226 podman[269646]: 2026-01-29 17:32:11.77844659 +0000 UTC m=+0.122273623 container remove 66502cbf2001f667cef2b54c287fc3a25251c33dc3c8d0297cfcb661be31dcee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:32:11 np0005601226 systemd[1]: libpod-conmon-66502cbf2001f667cef2b54c287fc3a25251c33dc3c8d0297cfcb661be31dcee.scope: Deactivated successfully.
Jan 29 12:32:11 np0005601226 podman[269668]: 2026-01-29 17:32:11.953274166 +0000 UTC m=+0.063018977 container create 194b96b8a0878c90daea7be97817662f2c2ebe5ba3d1d0fb007b42a3781cc3b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_goldwasser, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 29 12:32:12 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1690: 305 pgs: 305 active+clean; 330 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.5 MiB/s wr, 171 op/s
Jan 29 12:32:12 np0005601226 podman[269668]: 2026-01-29 17:32:11.914468251 +0000 UTC m=+0.024213142 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:32:12 np0005601226 systemd[1]: Started libpod-conmon-194b96b8a0878c90daea7be97817662f2c2ebe5ba3d1d0fb007b42a3781cc3b6.scope.
Jan 29 12:32:12 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:32:12 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba6817a1b6356c07b543b9521f45655c399be7ccdab311c21e1714cb1d7b0768/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:32:12 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba6817a1b6356c07b543b9521f45655c399be7ccdab311c21e1714cb1d7b0768/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:32:12 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba6817a1b6356c07b543b9521f45655c399be7ccdab311c21e1714cb1d7b0768/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:32:12 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba6817a1b6356c07b543b9521f45655c399be7ccdab311c21e1714cb1d7b0768/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:32:12 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba6817a1b6356c07b543b9521f45655c399be7ccdab311c21e1714cb1d7b0768/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 12:32:12 np0005601226 podman[269668]: 2026-01-29 17:32:12.084277263 +0000 UTC m=+0.194022085 container init 194b96b8a0878c90daea7be97817662f2c2ebe5ba3d1d0fb007b42a3781cc3b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True)
Jan 29 12:32:12 np0005601226 podman[269668]: 2026-01-29 17:32:12.090281125 +0000 UTC m=+0.200025936 container start 194b96b8a0878c90daea7be97817662f2c2ebe5ba3d1d0fb007b42a3781cc3b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:32:12 np0005601226 podman[269668]: 2026-01-29 17:32:12.121171296 +0000 UTC m=+0.230916117 container attach 194b96b8a0878c90daea7be97817662f2c2ebe5ba3d1d0fb007b42a3781cc3b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_goldwasser, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:32:12 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:32:12 np0005601226 gallant_goldwasser[269685]: --> passed data devices: 0 physical, 3 LVM
Jan 29 12:32:12 np0005601226 gallant_goldwasser[269685]: --> All data devices are unavailable
Jan 29 12:32:12 np0005601226 systemd[1]: libpod-194b96b8a0878c90daea7be97817662f2c2ebe5ba3d1d0fb007b42a3781cc3b6.scope: Deactivated successfully.
Jan 29 12:32:12 np0005601226 podman[269668]: 2026-01-29 17:32:12.560683548 +0000 UTC m=+0.670428359 container died 194b96b8a0878c90daea7be97817662f2c2ebe5ba3d1d0fb007b42a3781cc3b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_goldwasser, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:32:12 np0005601226 systemd[1]: var-lib-containers-storage-overlay-ba6817a1b6356c07b543b9521f45655c399be7ccdab311c21e1714cb1d7b0768-merged.mount: Deactivated successfully.
Jan 29 12:32:12 np0005601226 podman[269668]: 2026-01-29 17:32:12.60829646 +0000 UTC m=+0.718041261 container remove 194b96b8a0878c90daea7be97817662f2c2ebe5ba3d1d0fb007b42a3781cc3b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:32:12 np0005601226 systemd[1]: libpod-conmon-194b96b8a0878c90daea7be97817662f2c2ebe5ba3d1d0fb007b42a3781cc3b6.scope: Deactivated successfully.
Jan 29 12:32:12 np0005601226 ovn_controller[145556]: 2026-01-29T17:32:12Z|00056|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.11 does not match offer 10.100.0.9
Jan 29 12:32:13 np0005601226 ovn_controller[145556]: 2026-01-29T17:32:12Z|00057|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:f7:3d:76 10.100.0.9
Jan 29 12:32:13 np0005601226 podman[269782]: 2026-01-29 17:32:13.032831759 +0000 UTC m=+0.045946098 container create ae49c243cca5ad31c549f47189201f207cf27d095bf0628e841ce60c5c8906c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_colden, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 29 12:32:13 np0005601226 systemd[1]: Started libpod-conmon-ae49c243cca5ad31c549f47189201f207cf27d095bf0628e841ce60c5c8906c5.scope.
Jan 29 12:32:13 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:32:13 np0005601226 podman[269782]: 2026-01-29 17:32:13.011485934 +0000 UTC m=+0.024600313 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:32:13 np0005601226 podman[269782]: 2026-01-29 17:32:13.113531522 +0000 UTC m=+0.126645921 container init ae49c243cca5ad31c549f47189201f207cf27d095bf0628e841ce60c5c8906c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 29 12:32:13 np0005601226 podman[269782]: 2026-01-29 17:32:13.118052513 +0000 UTC m=+0.131166842 container start ae49c243cca5ad31c549f47189201f207cf27d095bf0628e841ce60c5c8906c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_colden, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 29 12:32:13 np0005601226 hardcore_colden[269798]: 167 167
Jan 29 12:32:13 np0005601226 podman[269782]: 2026-01-29 17:32:13.123673674 +0000 UTC m=+0.136788093 container attach ae49c243cca5ad31c549f47189201f207cf27d095bf0628e841ce60c5c8906c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:32:13 np0005601226 systemd[1]: libpod-ae49c243cca5ad31c549f47189201f207cf27d095bf0628e841ce60c5c8906c5.scope: Deactivated successfully.
Jan 29 12:32:13 np0005601226 podman[269782]: 2026-01-29 17:32:13.124811645 +0000 UTC m=+0.137925984 container died ae49c243cca5ad31c549f47189201f207cf27d095bf0628e841ce60c5c8906c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_colden, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:32:13 np0005601226 systemd[1]: var-lib-containers-storage-overlay-dfd27790a716e9b42e6a6c80cf23a9170c16c6ed46ae3ac9387282fca6cfd8ee-merged.mount: Deactivated successfully.
Jan 29 12:32:13 np0005601226 podman[269782]: 2026-01-29 17:32:13.169478228 +0000 UTC m=+0.182592557 container remove ae49c243cca5ad31c549f47189201f207cf27d095bf0628e841ce60c5c8906c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_colden, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:32:13 np0005601226 systemd[1]: libpod-conmon-ae49c243cca5ad31c549f47189201f207cf27d095bf0628e841ce60c5c8906c5.scope: Deactivated successfully.
Jan 29 12:32:13 np0005601226 podman[269822]: 2026-01-29 17:32:13.350275445 +0000 UTC m=+0.066246794 container create e6fbdb4e7e874df29c21532b5a7dc48306aa9c6d5c2b007688e6739cc8956ad9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_yalow, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 29 12:32:13 np0005601226 systemd[1]: Started libpod-conmon-e6fbdb4e7e874df29c21532b5a7dc48306aa9c6d5c2b007688e6739cc8956ad9.scope.
Jan 29 12:32:13 np0005601226 podman[269822]: 2026-01-29 17:32:13.321230493 +0000 UTC m=+0.037201892 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:32:13 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:32:13 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f205ee91732093c16d9c7bc716c8bbb2c6f2530c457fbf1827cc51f15676244e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:32:13 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f205ee91732093c16d9c7bc716c8bbb2c6f2530c457fbf1827cc51f15676244e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:32:13 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f205ee91732093c16d9c7bc716c8bbb2c6f2530c457fbf1827cc51f15676244e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:32:13 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f205ee91732093c16d9c7bc716c8bbb2c6f2530c457fbf1827cc51f15676244e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:32:13 np0005601226 podman[269822]: 2026-01-29 17:32:13.459078444 +0000 UTC m=+0.175049853 container init e6fbdb4e7e874df29c21532b5a7dc48306aa9c6d5c2b007688e6739cc8956ad9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 29 12:32:13 np0005601226 podman[269822]: 2026-01-29 17:32:13.469294949 +0000 UTC m=+0.185266298 container start e6fbdb4e7e874df29c21532b5a7dc48306aa9c6d5c2b007688e6739cc8956ad9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:32:13 np0005601226 podman[269822]: 2026-01-29 17:32:13.473019689 +0000 UTC m=+0.188991028 container attach e6fbdb4e7e874df29c21532b5a7dc48306aa9c6d5c2b007688e6739cc8956ad9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_yalow, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:32:13 np0005601226 angry_yalow[269839]: {
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:    "0": [
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:        {
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:            "devices": [
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:                "/dev/loop3"
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:            ],
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:            "lv_name": "ceph_lv0",
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:            "lv_size": "21470642176",
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:            "lv_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:            "name": "ceph_lv0",
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:            "tags": {
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:                "ceph.block_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:                "ceph.cluster_name": "ceph",
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:                "ceph.crush_device_class": "",
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:                "ceph.encrypted": "0",
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:                "ceph.objectstore": "bluestore",
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:                "ceph.osd_fsid": "0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1",
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:                "ceph.osd_id": "0",
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:                "ceph.type": "block",
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:                "ceph.vdo": "0",
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:                "ceph.with_tpm": "0"
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:            },
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:            "type": "block",
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:            "vg_name": "ceph_vg0"
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:        }
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:    ],
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:    "1": [
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:        {
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:            "devices": [
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:                "/dev/loop4"
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:            ],
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:            "lv_name": "ceph_lv1",
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:            "lv_size": "21470642176",
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b59b9ee3-7bef-4274-a8bf-0f9cce011ae7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:            "lv_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:            "name": "ceph_lv1",
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:            "tags": {
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:                "ceph.block_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:                "ceph.cluster_name": "ceph",
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:                "ceph.crush_device_class": "",
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:                "ceph.encrypted": "0",
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:                "ceph.objectstore": "bluestore",
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:                "ceph.osd_fsid": "b59b9ee3-7bef-4274-a8bf-0f9cce011ae7",
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:                "ceph.osd_id": "1",
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:                "ceph.type": "block",
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:                "ceph.vdo": "0",
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:                "ceph.with_tpm": "0"
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:            },
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:            "type": "block",
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:            "vg_name": "ceph_vg1"
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:        }
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:    ],
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:    "2": [
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:        {
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:            "devices": [
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:                "/dev/loop5"
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:            ],
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:            "lv_name": "ceph_lv2",
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:            "lv_size": "21470642176",
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=791d3808-828d-4f85-a3de-28df49f6a6ef,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:            "lv_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:            "name": "ceph_lv2",
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:            "tags": {
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:                "ceph.block_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:                "ceph.cluster_name": "ceph",
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:                "ceph.crush_device_class": "",
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:                "ceph.encrypted": "0",
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:                "ceph.objectstore": "bluestore",
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:                "ceph.osd_fsid": "791d3808-828d-4f85-a3de-28df49f6a6ef",
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:                "ceph.osd_id": "2",
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:                "ceph.type": "block",
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:                "ceph.vdo": "0",
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:                "ceph.with_tpm": "0"
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:            },
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:            "type": "block",
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:            "vg_name": "ceph_vg2"
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:        }
Jan 29 12:32:13 np0005601226 angry_yalow[269839]:    ]
Jan 29 12:32:13 np0005601226 angry_yalow[269839]: }
Jan 29 12:32:13 np0005601226 systemd[1]: libpod-e6fbdb4e7e874df29c21532b5a7dc48306aa9c6d5c2b007688e6739cc8956ad9.scope: Deactivated successfully.
Jan 29 12:32:13 np0005601226 podman[269848]: 2026-01-29 17:32:13.855734302 +0000 UTC m=+0.044289704 container died e6fbdb4e7e874df29c21532b5a7dc48306aa9c6d5c2b007688e6739cc8956ad9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_yalow, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 29 12:32:13 np0005601226 systemd[1]: var-lib-containers-storage-overlay-f205ee91732093c16d9c7bc716c8bbb2c6f2530c457fbf1827cc51f15676244e-merged.mount: Deactivated successfully.
Jan 29 12:32:13 np0005601226 podman[269848]: 2026-01-29 17:32:13.901582657 +0000 UTC m=+0.090137969 container remove e6fbdb4e7e874df29c21532b5a7dc48306aa9c6d5c2b007688e6739cc8956ad9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 12:32:13 np0005601226 systemd[1]: libpod-conmon-e6fbdb4e7e874df29c21532b5a7dc48306aa9c6d5c2b007688e6739cc8956ad9.scope: Deactivated successfully.
Jan 29 12:32:14 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1691: 305 pgs: 305 active+clean; 344 MiB data, 597 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.5 MiB/s wr, 136 op/s
Jan 29 12:32:14 np0005601226 podman[269925]: 2026-01-29 17:32:14.358663051 +0000 UTC m=+0.037183991 container create 112237c194cdf171eae2b9494af8db74bbb8a100b095265f7be92caf18efcdb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_wilbur, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:32:14 np0005601226 systemd[1]: Started libpod-conmon-112237c194cdf171eae2b9494af8db74bbb8a100b095265f7be92caf18efcdb7.scope.
Jan 29 12:32:14 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:32:14 np0005601226 podman[269925]: 2026-01-29 17:32:14.425602483 +0000 UTC m=+0.104123463 container init 112237c194cdf171eae2b9494af8db74bbb8a100b095265f7be92caf18efcdb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:32:14 np0005601226 podman[269925]: 2026-01-29 17:32:14.430078684 +0000 UTC m=+0.108599634 container start 112237c194cdf171eae2b9494af8db74bbb8a100b095265f7be92caf18efcdb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:32:14 np0005601226 zen_wilbur[269942]: 167 167
Jan 29 12:32:14 np0005601226 systemd[1]: libpod-112237c194cdf171eae2b9494af8db74bbb8a100b095265f7be92caf18efcdb7.scope: Deactivated successfully.
Jan 29 12:32:14 np0005601226 podman[269925]: 2026-01-29 17:32:14.43362943 +0000 UTC m=+0.112150390 container attach 112237c194cdf171eae2b9494af8db74bbb8a100b095265f7be92caf18efcdb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_wilbur, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:32:14 np0005601226 podman[269925]: 2026-01-29 17:32:14.43402858 +0000 UTC m=+0.112549530 container died 112237c194cdf171eae2b9494af8db74bbb8a100b095265f7be92caf18efcdb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:32:14 np0005601226 podman[269925]: 2026-01-29 17:32:14.345455296 +0000 UTC m=+0.023976256 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:32:14 np0005601226 systemd[1]: var-lib-containers-storage-overlay-b4e387f7b0f36c271271b6277939d67377cdeb07efcd855aafbd644af2d6fd26-merged.mount: Deactivated successfully.
Jan 29 12:32:14 np0005601226 podman[269925]: 2026-01-29 17:32:14.466648768 +0000 UTC m=+0.145169698 container remove 112237c194cdf171eae2b9494af8db74bbb8a100b095265f7be92caf18efcdb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_wilbur, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 29 12:32:14 np0005601226 systemd[1]: libpod-conmon-112237c194cdf171eae2b9494af8db74bbb8a100b095265f7be92caf18efcdb7.scope: Deactivated successfully.
Jan 29 12:32:14 np0005601226 ovn_controller[145556]: 2026-01-29T17:32:14Z|00058|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f7:3d:76 10.100.0.9
Jan 29 12:32:14 np0005601226 ovn_controller[145556]: 2026-01-29T17:32:14Z|00059|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f7:3d:76 10.100.0.9
Jan 29 12:32:14 np0005601226 podman[269965]: 2026-01-29 17:32:14.625050642 +0000 UTC m=+0.072731809 container create ff75c1b819c26065aa8be40f66be3d1c16591f5f4c241880579b03cd93f661c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 29 12:32:14 np0005601226 systemd[1]: Started libpod-conmon-ff75c1b819c26065aa8be40f66be3d1c16591f5f4c241880579b03cd93f661c7.scope.
Jan 29 12:32:14 np0005601226 podman[269965]: 2026-01-29 17:32:14.572309503 +0000 UTC m=+0.019990690 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:32:14 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:32:14 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af6b9b38825994ff786ab946573598e5eb57d8092a11bca161a2f59be2eadac8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:32:14 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af6b9b38825994ff786ab946573598e5eb57d8092a11bca161a2f59be2eadac8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:32:14 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af6b9b38825994ff786ab946573598e5eb57d8092a11bca161a2f59be2eadac8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:32:14 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af6b9b38825994ff786ab946573598e5eb57d8092a11bca161a2f59be2eadac8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:32:14 np0005601226 podman[269965]: 2026-01-29 17:32:14.698452739 +0000 UTC m=+0.146133906 container init ff75c1b819c26065aa8be40f66be3d1c16591f5f4c241880579b03cd93f661c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_roentgen, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:32:14 np0005601226 podman[269965]: 2026-01-29 17:32:14.704387108 +0000 UTC m=+0.152068265 container start ff75c1b819c26065aa8be40f66be3d1c16591f5f4c241880579b03cd93f661c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_roentgen, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:32:14 np0005601226 podman[269965]: 2026-01-29 17:32:14.706865885 +0000 UTC m=+0.154547062 container attach ff75c1b819c26065aa8be40f66be3d1c16591f5f4c241880579b03cd93f661c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:32:15 np0005601226 lvm[270058]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 12:32:15 np0005601226 lvm[270058]: VG ceph_vg0 finished
Jan 29 12:32:15 np0005601226 lvm[270061]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 12:32:15 np0005601226 lvm[270061]: VG ceph_vg1 finished
Jan 29 12:32:15 np0005601226 lvm[270063]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 12:32:15 np0005601226 lvm[270063]: VG ceph_vg2 finished
Jan 29 12:32:15 np0005601226 recursing_roentgen[269982]: {}
Jan 29 12:32:15 np0005601226 systemd[1]: libpod-ff75c1b819c26065aa8be40f66be3d1c16591f5f4c241880579b03cd93f661c7.scope: Deactivated successfully.
Jan 29 12:32:15 np0005601226 systemd[1]: libpod-ff75c1b819c26065aa8be40f66be3d1c16591f5f4c241880579b03cd93f661c7.scope: Consumed 1.064s CPU time.
Jan 29 12:32:15 np0005601226 podman[269965]: 2026-01-29 17:32:15.506166963 +0000 UTC m=+0.953848210 container died ff75c1b819c26065aa8be40f66be3d1c16591f5f4c241880579b03cd93f661c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_roentgen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 29 12:32:15 np0005601226 systemd[1]: var-lib-containers-storage-overlay-af6b9b38825994ff786ab946573598e5eb57d8092a11bca161a2f59be2eadac8-merged.mount: Deactivated successfully.
Jan 29 12:32:15 np0005601226 podman[269965]: 2026-01-29 17:32:15.569703303 +0000 UTC m=+1.017384490 container remove ff75c1b819c26065aa8be40f66be3d1c16591f5f4c241880579b03cd93f661c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_roentgen, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:32:15 np0005601226 systemd[1]: libpod-conmon-ff75c1b819c26065aa8be40f66be3d1c16591f5f4c241880579b03cd93f661c7.scope: Deactivated successfully.
Jan 29 12:32:15 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 12:32:15 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:32:15 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 12:32:15 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:32:16 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1692: 305 pgs: 305 active+clean; 366 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 6.3 MiB/s wr, 134 op/s
Jan 29 12:32:16 np0005601226 nova_compute[239456]: 2026-01-29 17:32:16.083 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:32:16 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:32:16 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:32:16 np0005601226 nova_compute[239456]: 2026-01-29 17:32:16.753 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:32:17 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:32:18 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1693: 305 pgs: 305 active+clean; 366 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 6.3 MiB/s wr, 124 op/s
Jan 29 12:32:19 np0005601226 nova_compute[239456]: 2026-01-29 17:32:19.959 239460 DEBUG oslo_concurrency.lockutils [None req-30f87cdd-4b05-40b8-b2d2-9647cf9df7a8 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Acquiring lock "ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:32:19 np0005601226 nova_compute[239456]: 2026-01-29 17:32:19.959 239460 DEBUG oslo_concurrency.lockutils [None req-30f87cdd-4b05-40b8-b2d2-9647cf9df7a8 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Lock "ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:32:19 np0005601226 nova_compute[239456]: 2026-01-29 17:32:19.959 239460 DEBUG oslo_concurrency.lockutils [None req-30f87cdd-4b05-40b8-b2d2-9647cf9df7a8 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Acquiring lock "ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:32:19 np0005601226 nova_compute[239456]: 2026-01-29 17:32:19.959 239460 DEBUG oslo_concurrency.lockutils [None req-30f87cdd-4b05-40b8-b2d2-9647cf9df7a8 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Lock "ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:32:19 np0005601226 nova_compute[239456]: 2026-01-29 17:32:19.960 239460 DEBUG oslo_concurrency.lockutils [None req-30f87cdd-4b05-40b8-b2d2-9647cf9df7a8 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Lock "ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:32:19 np0005601226 nova_compute[239456]: 2026-01-29 17:32:19.960 239460 INFO nova.compute.manager [None req-30f87cdd-4b05-40b8-b2d2-9647cf9df7a8 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] Terminating instance#033[00m
Jan 29 12:32:19 np0005601226 nova_compute[239456]: 2026-01-29 17:32:19.961 239460 DEBUG nova.compute.manager [None req-30f87cdd-4b05-40b8-b2d2-9647cf9df7a8 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 29 12:32:20 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1694: 305 pgs: 305 active+clean; 370 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 6.4 MiB/s wr, 127 op/s
Jan 29 12:32:20 np0005601226 kernel: tap1a3a9194-86 (unregistering): left promiscuous mode
Jan 29 12:32:20 np0005601226 NetworkManager[49020]: <info>  [1769707940.0212] device (tap1a3a9194-86): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 29 12:32:20 np0005601226 nova_compute[239456]: 2026-01-29 17:32:20.039 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:32:20 np0005601226 ovn_controller[145556]: 2026-01-29T17:32:20Z|00220|binding|INFO|Releasing lport 1a3a9194-8658-4eaa-940b-a73151c9d5cb from this chassis (sb_readonly=0)
Jan 29 12:32:20 np0005601226 ovn_controller[145556]: 2026-01-29T17:32:20Z|00221|binding|INFO|Setting lport 1a3a9194-8658-4eaa-940b-a73151c9d5cb down in Southbound
Jan 29 12:32:20 np0005601226 ovn_controller[145556]: 2026-01-29T17:32:20Z|00222|binding|INFO|Removing iface tap1a3a9194-86 ovn-installed in OVS
Jan 29 12:32:20 np0005601226 nova_compute[239456]: 2026-01-29 17:32:20.043 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:32:20 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:32:20.049 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d5:9b:42 10.100.0.10'], port_security=['fa:16:3e:d5:9b:42 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-25cf1715-f178-4f65-be7c-cf203c28f072', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c74297072cc041019fc7ff4bff1a0f08', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b52d9814-61c4-42dd-84af-517b84e36907', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.238'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3018d9f1-e4b1-490d-94c7-3ffd5dd36627, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], logical_port=1a3a9194-8658-4eaa-940b-a73151c9d5cb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:32:20 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:32:20.052 155625 INFO neutron.agent.ovn.metadata.agent [-] Port 1a3a9194-8658-4eaa-940b-a73151c9d5cb in datapath 25cf1715-f178-4f65-be7c-cf203c28f072 unbound from our chassis#033[00m
Jan 29 12:32:20 np0005601226 nova_compute[239456]: 2026-01-29 17:32:20.054 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:32:20 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:32:20.056 155625 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 25cf1715-f178-4f65-be7c-cf203c28f072, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 29 12:32:20 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:32:20.057 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[8255d51d-a516-4ace-8512-9f329c08d437]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:32:20 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:32:20.058 155625 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072 namespace which is not needed anymore#033[00m
Jan 29 12:32:20 np0005601226 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000017.scope: Deactivated successfully.
Jan 29 12:32:20 np0005601226 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000017.scope: Consumed 14.564s CPU time.
Jan 29 12:32:20 np0005601226 systemd-machined[207561]: Machine qemu-23-instance-00000017 terminated.
Jan 29 12:32:20 np0005601226 nova_compute[239456]: 2026-01-29 17:32:20.199 239460 INFO nova.virt.libvirt.driver [-] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] Instance destroyed successfully.#033[00m
Jan 29 12:32:20 np0005601226 nova_compute[239456]: 2026-01-29 17:32:20.199 239460 DEBUG nova.objects.instance [None req-30f87cdd-4b05-40b8-b2d2-9647cf9df7a8 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Lazy-loading 'resources' on Instance uuid ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:32:20 np0005601226 neutron-haproxy-ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072[269310]: [NOTICE]   (269317) : haproxy version is 2.8.14-c23fe91
Jan 29 12:32:20 np0005601226 neutron-haproxy-ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072[269310]: [NOTICE]   (269317) : path to executable is /usr/sbin/haproxy
Jan 29 12:32:20 np0005601226 neutron-haproxy-ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072[269310]: [WARNING]  (269317) : Exiting Master process...
Jan 29 12:32:20 np0005601226 neutron-haproxy-ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072[269310]: [ALERT]    (269317) : Current worker (269319) exited with code 143 (Terminated)
Jan 29 12:32:20 np0005601226 neutron-haproxy-ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072[269310]: [WARNING]  (269317) : All workers exited. Exiting... (0)
Jan 29 12:32:20 np0005601226 systemd[1]: libpod-d076377caa1ff978837ae0ba0a0c89a6b01159f5290937bd71a52a4ac3e52156.scope: Deactivated successfully.
Jan 29 12:32:20 np0005601226 conmon[269310]: conmon d076377caa1ff978837a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d076377caa1ff978837ae0ba0a0c89a6b01159f5290937bd71a52a4ac3e52156.scope/container/memory.events
Jan 29 12:32:20 np0005601226 podman[270129]: 2026-01-29 17:32:20.215718145 +0000 UTC m=+0.054431146 container died d076377caa1ff978837ae0ba0a0c89a6b01159f5290937bd71a52a4ac3e52156 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202)
Jan 29 12:32:20 np0005601226 nova_compute[239456]: 2026-01-29 17:32:20.215 239460 DEBUG nova.virt.libvirt.vif [None req-30f87cdd-4b05-40b8-b2d2-9647cf9df7a8 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-29T17:31:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-2066484079',display_name='tempest-TransferEncryptedVolumeTest-server-2066484079',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-2066484079',id=23,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCuw1UleBgXrkrvixGGBn21sDEJH6+FrkgFvq6jv3D3khmeyc7tU6zH/hmJ8BmjXmJToJI+73AcA0H8QCIrilSaG34LfS65uhiBlMWUY7wThjQ0H0WSLw5MFEF4DjDh1dA==',key_name='tempest-TransferEncryptedVolumeTest-895765981',keypairs=<?>,launch_index=0,launched_at=2026-01-29T17:31:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c74297072cc041019fc7ff4bff1a0f08',ramdisk_id='',reservation_id='r-eje1y09i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TransferEncryptedVolumeTest-1262552887',owner_user_name='tempest-TransferEncryptedVolumeTest-1262552887-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-29T17:31:58Z,user_data=None,user_id='4f278bc1afe946ca991a0203a74c5a7f',uuid=ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1a3a9194-8658-4eaa-940b-a73151c9d5cb", "address": "fa:16:3e:d5:9b:42", "network": {"id": "25cf1715-f178-4f65-be7c-cf203c28f072", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-516658087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c74297072cc041019fc7ff4bff1a0f08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a3a9194-86", "ovs_interfaceid": "1a3a9194-8658-4eaa-940b-a73151c9d5cb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 29 12:32:20 np0005601226 nova_compute[239456]: 2026-01-29 17:32:20.216 239460 DEBUG nova.network.os_vif_util [None req-30f87cdd-4b05-40b8-b2d2-9647cf9df7a8 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Converting VIF {"id": "1a3a9194-8658-4eaa-940b-a73151c9d5cb", "address": "fa:16:3e:d5:9b:42", "network": {"id": "25cf1715-f178-4f65-be7c-cf203c28f072", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-516658087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c74297072cc041019fc7ff4bff1a0f08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a3a9194-86", "ovs_interfaceid": "1a3a9194-8658-4eaa-940b-a73151c9d5cb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 29 12:32:20 np0005601226 nova_compute[239456]: 2026-01-29 17:32:20.216 239460 DEBUG nova.network.os_vif_util [None req-30f87cdd-4b05-40b8-b2d2-9647cf9df7a8 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d5:9b:42,bridge_name='br-int',has_traffic_filtering=True,id=1a3a9194-8658-4eaa-940b-a73151c9d5cb,network=Network(25cf1715-f178-4f65-be7c-cf203c28f072),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a3a9194-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 29 12:32:20 np0005601226 nova_compute[239456]: 2026-01-29 17:32:20.216 239460 DEBUG os_vif [None req-30f87cdd-4b05-40b8-b2d2-9647cf9df7a8 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d5:9b:42,bridge_name='br-int',has_traffic_filtering=True,id=1a3a9194-8658-4eaa-940b-a73151c9d5cb,network=Network(25cf1715-f178-4f65-be7c-cf203c28f072),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a3a9194-86') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 29 12:32:20 np0005601226 nova_compute[239456]: 2026-01-29 17:32:20.218 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:32:20 np0005601226 nova_compute[239456]: 2026-01-29 17:32:20.218 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1a3a9194-86, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:32:20 np0005601226 nova_compute[239456]: 2026-01-29 17:32:20.266 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:32:20 np0005601226 nova_compute[239456]: 2026-01-29 17:32:20.268 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:32:20 np0005601226 nova_compute[239456]: 2026-01-29 17:32:20.270 239460 INFO os_vif [None req-30f87cdd-4b05-40b8-b2d2-9647cf9df7a8 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d5:9b:42,bridge_name='br-int',has_traffic_filtering=True,id=1a3a9194-8658-4eaa-940b-a73151c9d5cb,network=Network(25cf1715-f178-4f65-be7c-cf203c28f072),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a3a9194-86')#033[00m
Jan 29 12:32:20 np0005601226 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d076377caa1ff978837ae0ba0a0c89a6b01159f5290937bd71a52a4ac3e52156-userdata-shm.mount: Deactivated successfully.
Jan 29 12:32:20 np0005601226 systemd[1]: var-lib-containers-storage-overlay-4497bb9622c9cce906f8b9ee76d8d7b897dcb89851d569c6fe4a371df771d297-merged.mount: Deactivated successfully.
Jan 29 12:32:20 np0005601226 podman[270129]: 2026-01-29 17:32:20.291012162 +0000 UTC m=+0.129725163 container cleanup d076377caa1ff978837ae0ba0a0c89a6b01159f5290937bd71a52a4ac3e52156 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team)
Jan 29 12:32:20 np0005601226 systemd[1]: libpod-conmon-d076377caa1ff978837ae0ba0a0c89a6b01159f5290937bd71a52a4ac3e52156.scope: Deactivated successfully.
Jan 29 12:32:20 np0005601226 podman[270186]: 2026-01-29 17:32:20.350946526 +0000 UTC m=+0.045277940 container remove d076377caa1ff978837ae0ba0a0c89a6b01159f5290937bd71a52a4ac3e52156 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072, tcib_managed=true, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 29 12:32:20 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:32:20.355 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[66e78d24-d905-4290-9996-3a916a644493]: (4, ('Thu Jan 29 05:32:20 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072 (d076377caa1ff978837ae0ba0a0c89a6b01159f5290937bd71a52a4ac3e52156)\nd076377caa1ff978837ae0ba0a0c89a6b01159f5290937bd71a52a4ac3e52156\nThu Jan 29 05:32:20 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072 (d076377caa1ff978837ae0ba0a0c89a6b01159f5290937bd71a52a4ac3e52156)\nd076377caa1ff978837ae0ba0a0c89a6b01159f5290937bd71a52a4ac3e52156\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:32:20 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:32:20.357 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[2493d0f9-4d04-4a85-895d-f6fbec8e7a58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:32:20 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:32:20.358 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap25cf1715-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:32:20 np0005601226 nova_compute[239456]: 2026-01-29 17:32:20.360 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:32:20 np0005601226 kernel: tap25cf1715-f0: left promiscuous mode
Jan 29 12:32:20 np0005601226 nova_compute[239456]: 2026-01-29 17:32:20.368 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:32:20 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:32:20.371 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[4600cabd-8d20-45dd-bdbe-cee92ea048c5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:32:20 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:32:20.386 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[02b2bce2-ce0a-477a-849b-0e6e779888b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:32:20 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:32:20.387 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[e7ead614-2ed0-4593-a62b-6b5b6bd9f602]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:32:20 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:32:20.399 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[72fc98ff-cd10-402a-87ff-56b18defbb11]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 526903, 'reachable_time': 23310, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270204, 'error': None, 'target': 'ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:32:20 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:32:20.402 156164 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 29 12:32:20 np0005601226 systemd[1]: run-netns-ovnmeta\x2d25cf1715\x2df178\x2d4f65\x2dbe7c\x2dcf203c28f072.mount: Deactivated successfully.
Jan 29 12:32:20 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:32:20.402 156164 DEBUG oslo.privsep.daemon [-] privsep: reply[e0db2f25-293e-427a-a477-afa0d24e7cd6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:32:20 np0005601226 nova_compute[239456]: 2026-01-29 17:32:20.414 239460 DEBUG nova.compute.manager [req-fe0524a1-00b4-4107-8fc9-35f90a47b3bf req-3c8374de-f809-4bdb-bd55-1c73ea75a43b 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] Received event network-vif-unplugged-1a3a9194-8658-4eaa-940b-a73151c9d5cb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:32:20 np0005601226 nova_compute[239456]: 2026-01-29 17:32:20.415 239460 DEBUG oslo_concurrency.lockutils [req-fe0524a1-00b4-4107-8fc9-35f90a47b3bf req-3c8374de-f809-4bdb-bd55-1c73ea75a43b 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:32:20 np0005601226 nova_compute[239456]: 2026-01-29 17:32:20.415 239460 DEBUG oslo_concurrency.lockutils [req-fe0524a1-00b4-4107-8fc9-35f90a47b3bf req-3c8374de-f809-4bdb-bd55-1c73ea75a43b 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:32:20 np0005601226 nova_compute[239456]: 2026-01-29 17:32:20.416 239460 DEBUG oslo_concurrency.lockutils [req-fe0524a1-00b4-4107-8fc9-35f90a47b3bf req-3c8374de-f809-4bdb-bd55-1c73ea75a43b 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:32:20 np0005601226 nova_compute[239456]: 2026-01-29 17:32:20.416 239460 DEBUG nova.compute.manager [req-fe0524a1-00b4-4107-8fc9-35f90a47b3bf req-3c8374de-f809-4bdb-bd55-1c73ea75a43b 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] No waiting events found dispatching network-vif-unplugged-1a3a9194-8658-4eaa-940b-a73151c9d5cb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 29 12:32:20 np0005601226 nova_compute[239456]: 2026-01-29 17:32:20.416 239460 DEBUG nova.compute.manager [req-fe0524a1-00b4-4107-8fc9-35f90a47b3bf req-3c8374de-f809-4bdb-bd55-1c73ea75a43b 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] Received event network-vif-unplugged-1a3a9194-8658-4eaa-940b-a73151c9d5cb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 29 12:32:20 np0005601226 nova_compute[239456]: 2026-01-29 17:32:20.435 239460 INFO nova.virt.libvirt.driver [None req-30f87cdd-4b05-40b8-b2d2-9647cf9df7a8 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] Deleting instance files /var/lib/nova/instances/ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48_del#033[00m
Jan 29 12:32:20 np0005601226 nova_compute[239456]: 2026-01-29 17:32:20.436 239460 INFO nova.virt.libvirt.driver [None req-30f87cdd-4b05-40b8-b2d2-9647cf9df7a8 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] Deletion of /var/lib/nova/instances/ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48_del complete#033[00m
Jan 29 12:32:20 np0005601226 nova_compute[239456]: 2026-01-29 17:32:20.493 239460 INFO nova.compute.manager [None req-30f87cdd-4b05-40b8-b2d2-9647cf9df7a8 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] Took 0.53 seconds to destroy the instance on the hypervisor.#033[00m
Jan 29 12:32:20 np0005601226 nova_compute[239456]: 2026-01-29 17:32:20.494 239460 DEBUG oslo.service.loopingcall [None req-30f87cdd-4b05-40b8-b2d2-9647cf9df7a8 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 29 12:32:20 np0005601226 nova_compute[239456]: 2026-01-29 17:32:20.494 239460 DEBUG nova.compute.manager [-] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 29 12:32:20 np0005601226 nova_compute[239456]: 2026-01-29 17:32:20.494 239460 DEBUG nova.network.neutron [-] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 29 12:32:20 np0005601226 nova_compute[239456]: 2026-01-29 17:32:20.916 239460 DEBUG oslo_concurrency.lockutils [None req-783e2694-6b90-4c38-becd-6f1890ed62a2 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Acquiring lock "09e74043-3065-4c0b-bffa-930cc1a7f21f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:32:20 np0005601226 nova_compute[239456]: 2026-01-29 17:32:20.917 239460 DEBUG oslo_concurrency.lockutils [None req-783e2694-6b90-4c38-becd-6f1890ed62a2 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "09e74043-3065-4c0b-bffa-930cc1a7f21f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:32:20 np0005601226 nova_compute[239456]: 2026-01-29 17:32:20.917 239460 DEBUG oslo_concurrency.lockutils [None req-783e2694-6b90-4c38-becd-6f1890ed62a2 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Acquiring lock "09e74043-3065-4c0b-bffa-930cc1a7f21f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:32:20 np0005601226 nova_compute[239456]: 2026-01-29 17:32:20.917 239460 DEBUG oslo_concurrency.lockutils [None req-783e2694-6b90-4c38-becd-6f1890ed62a2 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "09e74043-3065-4c0b-bffa-930cc1a7f21f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:32:20 np0005601226 nova_compute[239456]: 2026-01-29 17:32:20.917 239460 DEBUG oslo_concurrency.lockutils [None req-783e2694-6b90-4c38-becd-6f1890ed62a2 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "09e74043-3065-4c0b-bffa-930cc1a7f21f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:32:20 np0005601226 nova_compute[239456]: 2026-01-29 17:32:20.918 239460 INFO nova.compute.manager [None req-783e2694-6b90-4c38-becd-6f1890ed62a2 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] Terminating instance#033[00m
Jan 29 12:32:20 np0005601226 nova_compute[239456]: 2026-01-29 17:32:20.919 239460 DEBUG nova.compute.manager [None req-783e2694-6b90-4c38-becd-6f1890ed62a2 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 29 12:32:20 np0005601226 kernel: tap266f0d91-fe (unregistering): left promiscuous mode
Jan 29 12:32:20 np0005601226 NetworkManager[49020]: <info>  [1769707940.9736] device (tap266f0d91-fe): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 29 12:32:20 np0005601226 ovn_controller[145556]: 2026-01-29T17:32:20Z|00223|binding|INFO|Releasing lport 266f0d91-fecd-4c22-a0ff-80edcaec94fd from this chassis (sb_readonly=0)
Jan 29 12:32:20 np0005601226 nova_compute[239456]: 2026-01-29 17:32:20.978 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:32:20 np0005601226 ovn_controller[145556]: 2026-01-29T17:32:20Z|00224|binding|INFO|Setting lport 266f0d91-fecd-4c22-a0ff-80edcaec94fd down in Southbound
Jan 29 12:32:20 np0005601226 ovn_controller[145556]: 2026-01-29T17:32:20Z|00225|binding|INFO|Removing iface tap266f0d91-fe ovn-installed in OVS
Jan 29 12:32:20 np0005601226 nova_compute[239456]: 2026-01-29 17:32:20.980 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:32:20 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:32:20.985 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f7:3d:76 10.100.0.9'], port_security=['fa:16:3e:f7:3d:76 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '09e74043-3065-4c0b-bffa-930cc1a7f21f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3c08c304-2b32-4b44-ac2b-279bb8b2403b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '420f46ae230d4c529afe366a1b780921', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9cfba344-fbfc-404d-872d-d297b528124f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=43b4da70-6867-4e05-b172-1e52c878ce1d, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], logical_port=266f0d91-fecd-4c22-a0ff-80edcaec94fd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:32:20 np0005601226 nova_compute[239456]: 2026-01-29 17:32:20.986 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:32:20 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:32:20.986 155625 INFO neutron.agent.ovn.metadata.agent [-] Port 266f0d91-fecd-4c22-a0ff-80edcaec94fd in datapath 3c08c304-2b32-4b44-ac2b-279bb8b2403b unbound from our chassis#033[00m
Jan 29 12:32:20 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:32:20.987 155625 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3c08c304-2b32-4b44-ac2b-279bb8b2403b#033[00m
Jan 29 12:32:20 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:32:20.996 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[5d4817da-3ec6-4e14-9ed4-4125e84f77cc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:32:21 np0005601226 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000018.scope: Deactivated successfully.
Jan 29 12:32:21 np0005601226 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000018.scope: Consumed 13.247s CPU time.
Jan 29 12:32:21 np0005601226 systemd-machined[207561]: Machine qemu-24-instance-00000018 terminated.
Jan 29 12:32:21 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:32:21.025 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[18afc143-db63-485a-aa9c-4a95e0ed812f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:32:21 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:32:21.029 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[0fd4850e-d192-4688-a9e7-21bd1cf0fed4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:32:21 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:32:21.053 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[fd1857bc-205f-4c78-a510-f41aeaa09659]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:32:21 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:32:21.069 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[515008e6-17b5-4294-82c6-dd427bceda10]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3c08c304-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:94:51:ac'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 70], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 522039, 'reachable_time': 26044, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270214, 'error': None, 'target': 'ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:32:21 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:32:21.083 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[0a93e52a-4e60-405e-b00b-22924bc3306b]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3c08c304-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 522047, 'tstamp': 522047}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 270215, 'error': None, 'target': 'ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3c08c304-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 522049, 'tstamp': 522049}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 270215, 'error': None, 'target': 'ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:32:21 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:32:21.084 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c08c304-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:32:21 np0005601226 nova_compute[239456]: 2026-01-29 17:32:21.086 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:32:21 np0005601226 nova_compute[239456]: 2026-01-29 17:32:21.089 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:32:21 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:32:21.089 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3c08c304-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:32:21 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:32:21.090 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 29 12:32:21 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:32:21.090 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3c08c304-20, col_values=(('external_ids', {'iface-id': '4f9b16f1-6965-486d-bc02-ab1e4969963e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:32:21 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:32:21.090 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 29 12:32:21 np0005601226 nova_compute[239456]: 2026-01-29 17:32:21.149 239460 INFO nova.virt.libvirt.driver [-] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] Instance destroyed successfully.#033[00m
Jan 29 12:32:21 np0005601226 nova_compute[239456]: 2026-01-29 17:32:21.150 239460 DEBUG nova.objects.instance [None req-783e2694-6b90-4c38-becd-6f1890ed62a2 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lazy-loading 'resources' on Instance uuid 09e74043-3065-4c0b-bffa-930cc1a7f21f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:32:21 np0005601226 nova_compute[239456]: 2026-01-29 17:32:21.178 239460 DEBUG nova.virt.libvirt.vif [None req-783e2694-6b90-4c38-becd-6f1890ed62a2 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-29T17:31:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-344724336',display_name='tempest-TestVolumeBootPattern-server-344724336',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-344724336',id=24,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEFfKI2YuGMM8acVCrFpefV0OhBXhmnc4Btak8xqfpJ+fgijtCxB67Cy872WiGoem7r2HhA8kwucFDoWkV8oHKl/YL0dPzqXRyF6oegJsH40c+wdyEi0ybB2qGD+38ijJg==',key_name='tempest-TestVolumeBootPattern-101947667',keypairs=<?>,launch_index=0,launched_at=2026-01-29T17:31:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='420f46ae230d4c529afe366a1b780921',ramdisk_id='',reservation_id='r-tw5521qc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-1871389491',owner_user_name='tempest-TestVolumeBootPattern-1871389491-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-29T17:31:58Z,user_data=None,user_id='3901089a059c4bdb8d0497398873d2f1',uuid=09e74043-3065-4c0b-bffa-930cc1a7f21f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "266f0d91-fecd-4c22-a0ff-80edcaec94fd", "address": "fa:16:3e:f7:3d:76", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": 
[{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap266f0d91-fe", "ovs_interfaceid": "266f0d91-fecd-4c22-a0ff-80edcaec94fd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 29 12:32:21 np0005601226 nova_compute[239456]: 2026-01-29 17:32:21.179 239460 DEBUG nova.network.os_vif_util [None req-783e2694-6b90-4c38-becd-6f1890ed62a2 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Converting VIF {"id": "266f0d91-fecd-4c22-a0ff-80edcaec94fd", "address": "fa:16:3e:f7:3d:76", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap266f0d91-fe", "ovs_interfaceid": "266f0d91-fecd-4c22-a0ff-80edcaec94fd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 29 12:32:21 np0005601226 nova_compute[239456]: 2026-01-29 17:32:21.179 239460 DEBUG nova.network.os_vif_util [None req-783e2694-6b90-4c38-becd-6f1890ed62a2 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f7:3d:76,bridge_name='br-int',has_traffic_filtering=True,id=266f0d91-fecd-4c22-a0ff-80edcaec94fd,network=Network(3c08c304-2b32-4b44-ac2b-279bb8b2403b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap266f0d91-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 29 12:32:21 np0005601226 nova_compute[239456]: 2026-01-29 17:32:21.179 239460 DEBUG os_vif [None req-783e2694-6b90-4c38-becd-6f1890ed62a2 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f7:3d:76,bridge_name='br-int',has_traffic_filtering=True,id=266f0d91-fecd-4c22-a0ff-80edcaec94fd,network=Network(3c08c304-2b32-4b44-ac2b-279bb8b2403b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap266f0d91-fe') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 29 12:32:21 np0005601226 nova_compute[239456]: 2026-01-29 17:32:21.180 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:32:21 np0005601226 nova_compute[239456]: 2026-01-29 17:32:21.181 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap266f0d91-fe, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:32:21 np0005601226 nova_compute[239456]: 2026-01-29 17:32:21.182 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:32:21 np0005601226 nova_compute[239456]: 2026-01-29 17:32:21.184 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 29 12:32:21 np0005601226 nova_compute[239456]: 2026-01-29 17:32:21.185 239460 INFO os_vif [None req-783e2694-6b90-4c38-becd-6f1890ed62a2 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f7:3d:76,bridge_name='br-int',has_traffic_filtering=True,id=266f0d91-fecd-4c22-a0ff-80edcaec94fd,network=Network(3c08c304-2b32-4b44-ac2b-279bb8b2403b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap266f0d91-fe')#033[00m
Jan 29 12:32:21 np0005601226 nova_compute[239456]: 2026-01-29 17:32:21.215 239460 DEBUG nova.compute.manager [req-15e87f85-0a2d-412a-9283-ca9ccdc76267 req-fc02e9f8-46d3-4b7a-b6a6-029b9c04b3fc 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] Received event network-vif-unplugged-266f0d91-fecd-4c22-a0ff-80edcaec94fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:32:21 np0005601226 nova_compute[239456]: 2026-01-29 17:32:21.215 239460 DEBUG oslo_concurrency.lockutils [req-15e87f85-0a2d-412a-9283-ca9ccdc76267 req-fc02e9f8-46d3-4b7a-b6a6-029b9c04b3fc 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "09e74043-3065-4c0b-bffa-930cc1a7f21f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:32:21 np0005601226 nova_compute[239456]: 2026-01-29 17:32:21.215 239460 DEBUG oslo_concurrency.lockutils [req-15e87f85-0a2d-412a-9283-ca9ccdc76267 req-fc02e9f8-46d3-4b7a-b6a6-029b9c04b3fc 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "09e74043-3065-4c0b-bffa-930cc1a7f21f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:32:21 np0005601226 nova_compute[239456]: 2026-01-29 17:32:21.216 239460 DEBUG oslo_concurrency.lockutils [req-15e87f85-0a2d-412a-9283-ca9ccdc76267 req-fc02e9f8-46d3-4b7a-b6a6-029b9c04b3fc 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "09e74043-3065-4c0b-bffa-930cc1a7f21f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:32:21 np0005601226 nova_compute[239456]: 2026-01-29 17:32:21.216 239460 DEBUG nova.compute.manager [req-15e87f85-0a2d-412a-9283-ca9ccdc76267 req-fc02e9f8-46d3-4b7a-b6a6-029b9c04b3fc 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] No waiting events found dispatching network-vif-unplugged-266f0d91-fecd-4c22-a0ff-80edcaec94fd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 29 12:32:21 np0005601226 nova_compute[239456]: 2026-01-29 17:32:21.216 239460 DEBUG nova.compute.manager [req-15e87f85-0a2d-412a-9283-ca9ccdc76267 req-fc02e9f8-46d3-4b7a-b6a6-029b9c04b3fc 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] Received event network-vif-unplugged-266f0d91-fecd-4c22-a0ff-80edcaec94fd for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 29 12:32:21 np0005601226 nova_compute[239456]: 2026-01-29 17:32:21.323 239460 INFO nova.virt.libvirt.driver [None req-783e2694-6b90-4c38-becd-6f1890ed62a2 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] Deleting instance files /var/lib/nova/instances/09e74043-3065-4c0b-bffa-930cc1a7f21f_del#033[00m
Jan 29 12:32:21 np0005601226 nova_compute[239456]: 2026-01-29 17:32:21.324 239460 INFO nova.virt.libvirt.driver [None req-783e2694-6b90-4c38-becd-6f1890ed62a2 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] Deletion of /var/lib/nova/instances/09e74043-3065-4c0b-bffa-930cc1a7f21f_del complete#033[00m
Jan 29 12:32:21 np0005601226 nova_compute[239456]: 2026-01-29 17:32:21.394 239460 INFO nova.compute.manager [None req-783e2694-6b90-4c38-becd-6f1890ed62a2 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] Took 0.47 seconds to destroy the instance on the hypervisor.#033[00m
Jan 29 12:32:21 np0005601226 nova_compute[239456]: 2026-01-29 17:32:21.395 239460 DEBUG oslo.service.loopingcall [None req-783e2694-6b90-4c38-becd-6f1890ed62a2 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 29 12:32:21 np0005601226 nova_compute[239456]: 2026-01-29 17:32:21.395 239460 DEBUG nova.compute.manager [-] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 29 12:32:21 np0005601226 nova_compute[239456]: 2026-01-29 17:32:21.396 239460 DEBUG nova.network.neutron [-] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 29 12:32:21 np0005601226 nova_compute[239456]: 2026-01-29 17:32:21.757 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:32:21 np0005601226 nova_compute[239456]: 2026-01-29 17:32:21.859 239460 DEBUG nova.network.neutron [-] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:32:21 np0005601226 nova_compute[239456]: 2026-01-29 17:32:21.880 239460 INFO nova.compute.manager [-] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] Took 1.39 seconds to deallocate network for instance.#033[00m
Jan 29 12:32:22 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1695: 305 pgs: 305 active+clean; 370 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 5.8 MiB/s wr, 114 op/s
Jan 29 12:32:22 np0005601226 nova_compute[239456]: 2026-01-29 17:32:22.160 239460 INFO nova.compute.manager [None req-30f87cdd-4b05-40b8-b2d2-9647cf9df7a8 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] Took 0.28 seconds to detach 1 volumes for instance.#033[00m
Jan 29 12:32:22 np0005601226 nova_compute[239456]: 2026-01-29 17:32:22.167 239460 DEBUG nova.network.neutron [-] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:32:22 np0005601226 nova_compute[239456]: 2026-01-29 17:32:22.200 239460 INFO nova.compute.manager [-] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] Took 0.80 seconds to deallocate network for instance.#033[00m
Jan 29 12:32:22 np0005601226 nova_compute[239456]: 2026-01-29 17:32:22.204 239460 DEBUG oslo_concurrency.lockutils [None req-30f87cdd-4b05-40b8-b2d2-9647cf9df7a8 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:32:22 np0005601226 nova_compute[239456]: 2026-01-29 17:32:22.205 239460 DEBUG oslo_concurrency.lockutils [None req-30f87cdd-4b05-40b8-b2d2-9647cf9df7a8 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:32:22 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:32:22 np0005601226 ceph-mon[75233]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Jan 29 12:32:22 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:32:22.289433) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 29 12:32:22 np0005601226 ceph-mon[75233]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Jan 29 12:32:22 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769707942289491, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 1939, "num_deletes": 253, "total_data_size": 3084080, "memory_usage": 3137024, "flush_reason": "Manual Compaction"}
Jan 29 12:32:22 np0005601226 ceph-mon[75233]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Jan 29 12:32:22 np0005601226 nova_compute[239456]: 2026-01-29 17:32:22.295 239460 DEBUG oslo_concurrency.processutils [None req-30f87cdd-4b05-40b8-b2d2-9647cf9df7a8 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:32:22 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769707942303590, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 1846320, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31814, "largest_seqno": 33752, "table_properties": {"data_size": 1839713, "index_size": 3425, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 17655, "raw_average_key_size": 21, "raw_value_size": 1824918, "raw_average_value_size": 2190, "num_data_blocks": 155, "num_entries": 833, "num_filter_entries": 833, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769707758, "oldest_key_time": 1769707758, "file_creation_time": 1769707942, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "affa2982-d59d-4189-b5dd-817a80fada55", "db_session_id": "3LVBT2JQJ5HZ0LRVKGW6", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Jan 29 12:32:22 np0005601226 ceph-mon[75233]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 14221 microseconds, and 6131 cpu microseconds.
Jan 29 12:32:22 np0005601226 ceph-mon[75233]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 29 12:32:22 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:32:22.303651) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 1846320 bytes OK
Jan 29 12:32:22 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:32:22.303673) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Jan 29 12:32:22 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:32:22.312375) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Jan 29 12:32:22 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:32:22.312397) EVENT_LOG_v1 {"time_micros": 1769707942312390, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 29 12:32:22 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:32:22.312417) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 29 12:32:22 np0005601226 ceph-mon[75233]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 3075774, prev total WAL file size 3075774, number of live WAL files 2.
Jan 29 12:32:22 np0005601226 ceph-mon[75233]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 12:32:22 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:32:22.313239) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303035' seq:72057594037927935, type:22 .. '6D6772737461740031323536' seq:0, type:0; will stop at (end)
Jan 29 12:32:22 np0005601226 ceph-mon[75233]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 29 12:32:22 np0005601226 ceph-mon[75233]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(1803KB)], [65(10MB)]
Jan 29 12:32:22 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769707942313296, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 13189648, "oldest_snapshot_seqno": -1}
Jan 29 12:32:22 np0005601226 ceph-mon[75233]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 6580 keys, 10960908 bytes, temperature: kUnknown
Jan 29 12:32:22 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769707942388547, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 10960908, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10911685, "index_size": 31639, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16517, "raw_key_size": 164515, "raw_average_key_size": 25, "raw_value_size": 10788369, "raw_average_value_size": 1639, "num_data_blocks": 1279, "num_entries": 6580, "num_filter_entries": 6580, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769705351, "oldest_key_time": 0, "file_creation_time": 1769707942, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "affa2982-d59d-4189-b5dd-817a80fada55", "db_session_id": "3LVBT2JQJ5HZ0LRVKGW6", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Jan 29 12:32:22 np0005601226 ceph-mon[75233]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 29 12:32:22 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:32:22.388759) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 10960908 bytes
Jan 29 12:32:22 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:32:22.395297) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 175.1 rd, 145.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 10.8 +0.0 blob) out(10.5 +0.0 blob), read-write-amplify(13.1) write-amplify(5.9) OK, records in: 7015, records dropped: 435 output_compression: NoCompression
Jan 29 12:32:22 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:32:22.395311) EVENT_LOG_v1 {"time_micros": 1769707942395305, "job": 36, "event": "compaction_finished", "compaction_time_micros": 75329, "compaction_time_cpu_micros": 17290, "output_level": 6, "num_output_files": 1, "total_output_size": 10960908, "num_input_records": 7015, "num_output_records": 6580, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 29 12:32:22 np0005601226 ceph-mon[75233]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 12:32:22 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769707942395512, "job": 36, "event": "table_file_deletion", "file_number": 67}
Jan 29 12:32:22 np0005601226 ceph-mon[75233]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 12:32:22 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769707942396302, "job": 36, "event": "table_file_deletion", "file_number": 65}
Jan 29 12:32:22 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:32:22.313108) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:32:22 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:32:22.396409) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:32:22 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:32:22.396417) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:32:22 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:32:22.396421) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:32:22 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:32:22.396424) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:32:22 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:32:22.396427) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:32:22 np0005601226 nova_compute[239456]: 2026-01-29 17:32:22.426 239460 INFO nova.compute.manager [None req-783e2694-6b90-4c38-becd-6f1890ed62a2 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] Took 0.22 seconds to detach 1 volumes for instance.#033[00m
Jan 29 12:32:22 np0005601226 nova_compute[239456]: 2026-01-29 17:32:22.494 239460 DEBUG nova.compute.manager [req-047b228b-62ff-409a-a92b-ed00550fea23 req-96c41ccb-b199-4280-a5a7-2c3270c8a083 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] Received event network-vif-plugged-1a3a9194-8658-4eaa-940b-a73151c9d5cb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:32:22 np0005601226 nova_compute[239456]: 2026-01-29 17:32:22.494 239460 DEBUG oslo_concurrency.lockutils [req-047b228b-62ff-409a-a92b-ed00550fea23 req-96c41ccb-b199-4280-a5a7-2c3270c8a083 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:32:22 np0005601226 nova_compute[239456]: 2026-01-29 17:32:22.495 239460 DEBUG oslo_concurrency.lockutils [req-047b228b-62ff-409a-a92b-ed00550fea23 req-96c41ccb-b199-4280-a5a7-2c3270c8a083 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:32:22 np0005601226 nova_compute[239456]: 2026-01-29 17:32:22.495 239460 DEBUG oslo_concurrency.lockutils [req-047b228b-62ff-409a-a92b-ed00550fea23 req-96c41ccb-b199-4280-a5a7-2c3270c8a083 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:32:22 np0005601226 nova_compute[239456]: 2026-01-29 17:32:22.496 239460 DEBUG nova.compute.manager [req-047b228b-62ff-409a-a92b-ed00550fea23 req-96c41ccb-b199-4280-a5a7-2c3270c8a083 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] No waiting events found dispatching network-vif-plugged-1a3a9194-8658-4eaa-940b-a73151c9d5cb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 29 12:32:22 np0005601226 nova_compute[239456]: 2026-01-29 17:32:22.497 239460 WARNING nova.compute.manager [req-047b228b-62ff-409a-a92b-ed00550fea23 req-96c41ccb-b199-4280-a5a7-2c3270c8a083 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] Received unexpected event network-vif-plugged-1a3a9194-8658-4eaa-940b-a73151c9d5cb for instance with vm_state deleted and task_state None.#033[00m
Jan 29 12:32:22 np0005601226 nova_compute[239456]: 2026-01-29 17:32:22.497 239460 DEBUG nova.compute.manager [req-047b228b-62ff-409a-a92b-ed00550fea23 req-96c41ccb-b199-4280-a5a7-2c3270c8a083 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] Received event network-changed-266f0d91-fecd-4c22-a0ff-80edcaec94fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:32:22 np0005601226 nova_compute[239456]: 2026-01-29 17:32:22.498 239460 DEBUG nova.compute.manager [req-047b228b-62ff-409a-a92b-ed00550fea23 req-96c41ccb-b199-4280-a5a7-2c3270c8a083 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] Refreshing instance network info cache due to event network-changed-266f0d91-fecd-4c22-a0ff-80edcaec94fd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 29 12:32:22 np0005601226 nova_compute[239456]: 2026-01-29 17:32:22.498 239460 DEBUG oslo_concurrency.lockutils [req-047b228b-62ff-409a-a92b-ed00550fea23 req-96c41ccb-b199-4280-a5a7-2c3270c8a083 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "refresh_cache-09e74043-3065-4c0b-bffa-930cc1a7f21f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:32:22 np0005601226 nova_compute[239456]: 2026-01-29 17:32:22.499 239460 DEBUG oslo_concurrency.lockutils [req-047b228b-62ff-409a-a92b-ed00550fea23 req-96c41ccb-b199-4280-a5a7-2c3270c8a083 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquired lock "refresh_cache-09e74043-3065-4c0b-bffa-930cc1a7f21f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:32:22 np0005601226 nova_compute[239456]: 2026-01-29 17:32:22.499 239460 DEBUG nova.network.neutron [req-047b228b-62ff-409a-a92b-ed00550fea23 req-96c41ccb-b199-4280-a5a7-2c3270c8a083 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] Refreshing network info cache for port 266f0d91-fecd-4c22-a0ff-80edcaec94fd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 29 12:32:22 np0005601226 nova_compute[239456]: 2026-01-29 17:32:22.504 239460 DEBUG oslo_concurrency.lockutils [None req-783e2694-6b90-4c38-becd-6f1890ed62a2 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:32:22 np0005601226 nova_compute[239456]: 2026-01-29 17:32:22.647 239460 DEBUG nova.network.neutron [req-047b228b-62ff-409a-a92b-ed00550fea23 req-96c41ccb-b199-4280-a5a7-2c3270c8a083 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 29 12:32:22 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:32:22 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1754558745' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:32:22 np0005601226 nova_compute[239456]: 2026-01-29 17:32:22.808 239460 DEBUG oslo_concurrency.processutils [None req-30f87cdd-4b05-40b8-b2d2-9647cf9df7a8 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:32:22 np0005601226 nova_compute[239456]: 2026-01-29 17:32:22.815 239460 DEBUG nova.compute.provider_tree [None req-30f87cdd-4b05-40b8-b2d2-9647cf9df7a8 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:32:22 np0005601226 nova_compute[239456]: 2026-01-29 17:32:22.839 239460 DEBUG nova.scheduler.client.report [None req-30f87cdd-4b05-40b8-b2d2-9647cf9df7a8 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:32:22 np0005601226 nova_compute[239456]: 2026-01-29 17:32:22.858 239460 DEBUG oslo_concurrency.lockutils [None req-30f87cdd-4b05-40b8-b2d2-9647cf9df7a8 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:32:22 np0005601226 nova_compute[239456]: 2026-01-29 17:32:22.860 239460 DEBUG oslo_concurrency.lockutils [None req-783e2694-6b90-4c38-becd-6f1890ed62a2 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.356s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:32:22 np0005601226 nova_compute[239456]: 2026-01-29 17:32:22.879 239460 INFO nova.scheduler.client.report [None req-30f87cdd-4b05-40b8-b2d2-9647cf9df7a8 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Deleted allocations for instance ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48#033[00m
Jan 29 12:32:22 np0005601226 nova_compute[239456]: 2026-01-29 17:32:22.935 239460 DEBUG oslo_concurrency.processutils [None req-783e2694-6b90-4c38-becd-6f1890ed62a2 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:32:22 np0005601226 nova_compute[239456]: 2026-01-29 17:32:22.954 239460 DEBUG oslo_concurrency.lockutils [None req-30f87cdd-4b05-40b8-b2d2-9647cf9df7a8 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Lock "ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.995s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:32:22 np0005601226 nova_compute[239456]: 2026-01-29 17:32:22.983 239460 DEBUG nova.network.neutron [req-047b228b-62ff-409a-a92b-ed00550fea23 req-96c41ccb-b199-4280-a5a7-2c3270c8a083 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:32:22 np0005601226 nova_compute[239456]: 2026-01-29 17:32:22.996 239460 DEBUG oslo_concurrency.lockutils [req-047b228b-62ff-409a-a92b-ed00550fea23 req-96c41ccb-b199-4280-a5a7-2c3270c8a083 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Releasing lock "refresh_cache-09e74043-3065-4c0b-bffa-930cc1a7f21f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:32:22 np0005601226 nova_compute[239456]: 2026-01-29 17:32:22.996 239460 DEBUG nova.compute.manager [req-047b228b-62ff-409a-a92b-ed00550fea23 req-96c41ccb-b199-4280-a5a7-2c3270c8a083 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] Received event network-vif-deleted-1a3a9194-8658-4eaa-940b-a73151c9d5cb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:32:22 np0005601226 nova_compute[239456]: 2026-01-29 17:32:22.997 239460 DEBUG nova.compute.manager [req-047b228b-62ff-409a-a92b-ed00550fea23 req-96c41ccb-b199-4280-a5a7-2c3270c8a083 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] Received event network-vif-deleted-266f0d91-fecd-4c22-a0ff-80edcaec94fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:32:23 np0005601226 nova_compute[239456]: 2026-01-29 17:32:23.302 239460 DEBUG nova.compute.manager [req-b0346fb1-2151-4656-adc8-e0fdf1acb6a5 req-4db837ee-2b84-46b4-9fd8-c52ce0b4e240 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] Received event network-vif-plugged-266f0d91-fecd-4c22-a0ff-80edcaec94fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:32:23 np0005601226 nova_compute[239456]: 2026-01-29 17:32:23.303 239460 DEBUG oslo_concurrency.lockutils [req-b0346fb1-2151-4656-adc8-e0fdf1acb6a5 req-4db837ee-2b84-46b4-9fd8-c52ce0b4e240 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "09e74043-3065-4c0b-bffa-930cc1a7f21f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:32:23 np0005601226 nova_compute[239456]: 2026-01-29 17:32:23.303 239460 DEBUG oslo_concurrency.lockutils [req-b0346fb1-2151-4656-adc8-e0fdf1acb6a5 req-4db837ee-2b84-46b4-9fd8-c52ce0b4e240 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "09e74043-3065-4c0b-bffa-930cc1a7f21f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:32:23 np0005601226 nova_compute[239456]: 2026-01-29 17:32:23.304 239460 DEBUG oslo_concurrency.lockutils [req-b0346fb1-2151-4656-adc8-e0fdf1acb6a5 req-4db837ee-2b84-46b4-9fd8-c52ce0b4e240 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "09e74043-3065-4c0b-bffa-930cc1a7f21f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:32:23 np0005601226 nova_compute[239456]: 2026-01-29 17:32:23.304 239460 DEBUG nova.compute.manager [req-b0346fb1-2151-4656-adc8-e0fdf1acb6a5 req-4db837ee-2b84-46b4-9fd8-c52ce0b4e240 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] No waiting events found dispatching network-vif-plugged-266f0d91-fecd-4c22-a0ff-80edcaec94fd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 29 12:32:23 np0005601226 nova_compute[239456]: 2026-01-29 17:32:23.305 239460 WARNING nova.compute.manager [req-b0346fb1-2151-4656-adc8-e0fdf1acb6a5 req-4db837ee-2b84-46b4-9fd8-c52ce0b4e240 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] Received unexpected event network-vif-plugged-266f0d91-fecd-4c22-a0ff-80edcaec94fd for instance with vm_state deleted and task_state None.#033[00m
Jan 29 12:32:23 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:32:23 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/975959583' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:32:23 np0005601226 nova_compute[239456]: 2026-01-29 17:32:23.421 239460 DEBUG oslo_concurrency.processutils [None req-783e2694-6b90-4c38-becd-6f1890ed62a2 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:32:23 np0005601226 nova_compute[239456]: 2026-01-29 17:32:23.426 239460 DEBUG nova.compute.provider_tree [None req-783e2694-6b90-4c38-becd-6f1890ed62a2 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:32:23 np0005601226 nova_compute[239456]: 2026-01-29 17:32:23.447 239460 DEBUG nova.scheduler.client.report [None req-783e2694-6b90-4c38-becd-6f1890ed62a2 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:32:23 np0005601226 nova_compute[239456]: 2026-01-29 17:32:23.472 239460 DEBUG oslo_concurrency.lockutils [None req-783e2694-6b90-4c38-becd-6f1890ed62a2 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.612s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:32:23 np0005601226 nova_compute[239456]: 2026-01-29 17:32:23.496 239460 INFO nova.scheduler.client.report [None req-783e2694-6b90-4c38-becd-6f1890ed62a2 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Deleted allocations for instance 09e74043-3065-4c0b-bffa-930cc1a7f21f#033[00m
Jan 29 12:32:23 np0005601226 nova_compute[239456]: 2026-01-29 17:32:23.570 239460 DEBUG oslo_concurrency.lockutils [None req-783e2694-6b90-4c38-becd-6f1890ed62a2 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "09e74043-3065-4c0b-bffa-930cc1a7f21f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:32:24 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1696: 305 pgs: 305 active+clean; 370 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 592 KiB/s rd, 2.9 MiB/s wr, 71 op/s
Jan 29 12:32:25 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:32:25 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3292214683' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:32:25 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:32:25 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3292214683' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:32:25 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e423 do_prune osdmap full prune enabled
Jan 29 12:32:26 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1697: 305 pgs: 305 active+clean; 365 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 608 KiB/s rd, 1.9 MiB/s wr, 92 op/s
Jan 29 12:32:26 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e424 e424: 3 total, 3 up, 3 in
Jan 29 12:32:26 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e424: 3 total, 3 up, 3 in
Jan 29 12:32:26 np0005601226 nova_compute[239456]: 2026-01-29 17:32:26.182 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:32:26 np0005601226 nova_compute[239456]: 2026-01-29 17:32:26.759 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:32:27 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:32:27 np0005601226 nova_compute[239456]: 2026-01-29 17:32:27.405 239460 DEBUG nova.compute.manager [req-ed0fd07a-2d8a-4814-a34a-881cc5e02ad1 req-f508c1b8-1a37-4485-8ae9-87a5cdd0db4d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Received event network-changed-7c983110-cfa8-4df3-ac67-f5a430abcfc0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:32:27 np0005601226 nova_compute[239456]: 2026-01-29 17:32:27.405 239460 DEBUG nova.compute.manager [req-ed0fd07a-2d8a-4814-a34a-881cc5e02ad1 req-f508c1b8-1a37-4485-8ae9-87a5cdd0db4d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Refreshing instance network info cache due to event network-changed-7c983110-cfa8-4df3-ac67-f5a430abcfc0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 29 12:32:27 np0005601226 nova_compute[239456]: 2026-01-29 17:32:27.405 239460 DEBUG oslo_concurrency.lockutils [req-ed0fd07a-2d8a-4814-a34a-881cc5e02ad1 req-f508c1b8-1a37-4485-8ae9-87a5cdd0db4d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "refresh_cache-58d0f64a-66be-4f3d-ba39-68b90ddf8c4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:32:27 np0005601226 nova_compute[239456]: 2026-01-29 17:32:27.406 239460 DEBUG oslo_concurrency.lockutils [req-ed0fd07a-2d8a-4814-a34a-881cc5e02ad1 req-f508c1b8-1a37-4485-8ae9-87a5cdd0db4d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquired lock "refresh_cache-58d0f64a-66be-4f3d-ba39-68b90ddf8c4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:32:27 np0005601226 nova_compute[239456]: 2026-01-29 17:32:27.406 239460 DEBUG nova.network.neutron [req-ed0fd07a-2d8a-4814-a34a-881cc5e02ad1 req-f508c1b8-1a37-4485-8ae9-87a5cdd0db4d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Refreshing network info cache for port 7c983110-cfa8-4df3-ac67-f5a430abcfc0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 29 12:32:27 np0005601226 nova_compute[239456]: 2026-01-29 17:32:27.508 239460 DEBUG oslo_concurrency.lockutils [None req-b0803e9f-fdcf-469c-b436-0f01754a84f8 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Acquiring lock "58d0f64a-66be-4f3d-ba39-68b90ddf8c4f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:32:27 np0005601226 nova_compute[239456]: 2026-01-29 17:32:27.509 239460 DEBUG oslo_concurrency.lockutils [None req-b0803e9f-fdcf-469c-b436-0f01754a84f8 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "58d0f64a-66be-4f3d-ba39-68b90ddf8c4f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:32:27 np0005601226 nova_compute[239456]: 2026-01-29 17:32:27.509 239460 DEBUG oslo_concurrency.lockutils [None req-b0803e9f-fdcf-469c-b436-0f01754a84f8 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Acquiring lock "58d0f64a-66be-4f3d-ba39-68b90ddf8c4f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:32:27 np0005601226 nova_compute[239456]: 2026-01-29 17:32:27.509 239460 DEBUG oslo_concurrency.lockutils [None req-b0803e9f-fdcf-469c-b436-0f01754a84f8 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "58d0f64a-66be-4f3d-ba39-68b90ddf8c4f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:32:27 np0005601226 nova_compute[239456]: 2026-01-29 17:32:27.509 239460 DEBUG oslo_concurrency.lockutils [None req-b0803e9f-fdcf-469c-b436-0f01754a84f8 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "58d0f64a-66be-4f3d-ba39-68b90ddf8c4f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:32:27 np0005601226 nova_compute[239456]: 2026-01-29 17:32:27.510 239460 INFO nova.compute.manager [None req-b0803e9f-fdcf-469c-b436-0f01754a84f8 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Terminating instance#033[00m
Jan 29 12:32:27 np0005601226 nova_compute[239456]: 2026-01-29 17:32:27.511 239460 DEBUG nova.compute.manager [None req-b0803e9f-fdcf-469c-b436-0f01754a84f8 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 29 12:32:27 np0005601226 kernel: tap7c983110-cf (unregistering): left promiscuous mode
Jan 29 12:32:27 np0005601226 NetworkManager[49020]: <info>  [1769707947.5846] device (tap7c983110-cf): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 29 12:32:27 np0005601226 nova_compute[239456]: 2026-01-29 17:32:27.586 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:32:27 np0005601226 nova_compute[239456]: 2026-01-29 17:32:27.590 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:32:27 np0005601226 ovn_controller[145556]: 2026-01-29T17:32:27Z|00226|binding|INFO|Releasing lport 7c983110-cfa8-4df3-ac67-f5a430abcfc0 from this chassis (sb_readonly=0)
Jan 29 12:32:27 np0005601226 ovn_controller[145556]: 2026-01-29T17:32:27Z|00227|binding|INFO|Setting lport 7c983110-cfa8-4df3-ac67-f5a430abcfc0 down in Southbound
Jan 29 12:32:27 np0005601226 ovn_controller[145556]: 2026-01-29T17:32:27Z|00228|binding|INFO|Removing iface tap7c983110-cf ovn-installed in OVS
Jan 29 12:32:27 np0005601226 nova_compute[239456]: 2026-01-29 17:32:27.591 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:32:27 np0005601226 nova_compute[239456]: 2026-01-29 17:32:27.596 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:32:27 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:32:27.598 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6d:60:e3 10.100.0.11'], port_security=['fa:16:3e:6d:60:e3 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '58d0f64a-66be-4f3d-ba39-68b90ddf8c4f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3c08c304-2b32-4b44-ac2b-279bb8b2403b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '420f46ae230d4c529afe366a1b780921', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9cfba344-fbfc-404d-872d-d297b528124f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=43b4da70-6867-4e05-b172-1e52c878ce1d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], logical_port=7c983110-cfa8-4df3-ac67-f5a430abcfc0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:32:27 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:32:27.600 155625 INFO neutron.agent.ovn.metadata.agent [-] Port 7c983110-cfa8-4df3-ac67-f5a430abcfc0 in datapath 3c08c304-2b32-4b44-ac2b-279bb8b2403b unbound from our chassis#033[00m
Jan 29 12:32:27 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:32:27.601 155625 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3c08c304-2b32-4b44-ac2b-279bb8b2403b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 29 12:32:27 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:32:27.602 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[76415f52-2dc3-47b4-9767-27b2272a0fe1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:32:27 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:32:27.602 155625 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b namespace which is not needed anymore#033[00m
Jan 29 12:32:27 np0005601226 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000016.scope: Deactivated successfully.
Jan 29 12:32:27 np0005601226 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000016.scope: Consumed 14.255s CPU time.
Jan 29 12:32:27 np0005601226 systemd-machined[207561]: Machine qemu-22-instance-00000016 terminated.
Jan 29 12:32:27 np0005601226 neutron-haproxy-ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b[268484]: [NOTICE]   (268495) : haproxy version is 2.8.14-c23fe91
Jan 29 12:32:27 np0005601226 neutron-haproxy-ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b[268484]: [NOTICE]   (268495) : path to executable is /usr/sbin/haproxy
Jan 29 12:32:27 np0005601226 neutron-haproxy-ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b[268484]: [WARNING]  (268495) : Exiting Master process...
Jan 29 12:32:27 np0005601226 neutron-haproxy-ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b[268484]: [ALERT]    (268495) : Current worker (268503) exited with code 143 (Terminated)
Jan 29 12:32:27 np0005601226 neutron-haproxy-ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b[268484]: [WARNING]  (268495) : All workers exited. Exiting... (0)
Jan 29 12:32:27 np0005601226 systemd[1]: libpod-fee16ec35ca78519fff67d603ae347991f6658278a6749dc246f8adec87cca64.scope: Deactivated successfully.
Jan 29 12:32:27 np0005601226 conmon[268484]: conmon fee16ec35ca78519fff6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fee16ec35ca78519fff67d603ae347991f6658278a6749dc246f8adec87cca64.scope/container/memory.events
Jan 29 12:32:27 np0005601226 podman[270315]: 2026-01-29 17:32:27.717101428 +0000 UTC m=+0.045534958 container died fee16ec35ca78519fff67d603ae347991f6658278a6749dc246f8adec87cca64 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 29 12:32:27 np0005601226 nova_compute[239456]: 2026-01-29 17:32:27.747 239460 INFO nova.virt.libvirt.driver [-] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Instance destroyed successfully.#033[00m
Jan 29 12:32:27 np0005601226 nova_compute[239456]: 2026-01-29 17:32:27.747 239460 DEBUG nova.objects.instance [None req-b0803e9f-fdcf-469c-b436-0f01754a84f8 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lazy-loading 'resources' on Instance uuid 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:32:27 np0005601226 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fee16ec35ca78519fff67d603ae347991f6658278a6749dc246f8adec87cca64-userdata-shm.mount: Deactivated successfully.
Jan 29 12:32:27 np0005601226 systemd[1]: var-lib-containers-storage-overlay-c6907652b9ab969a6cb6d496f0ee63fa4d1810df89a5d718a6364c4588dd8f8a-merged.mount: Deactivated successfully.
Jan 29 12:32:27 np0005601226 podman[270315]: 2026-01-29 17:32:27.759031667 +0000 UTC m=+0.087465187 container cleanup fee16ec35ca78519fff67d603ae347991f6658278a6749dc246f8adec87cca64 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 29 12:32:27 np0005601226 nova_compute[239456]: 2026-01-29 17:32:27.761 239460 DEBUG nova.virt.libvirt.vif [None req-b0803e9f-fdcf-469c-b436-0f01754a84f8 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-29T17:30:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-240445960',display_name='tempest-TestVolumeBootPattern-server-240445960',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-240445960',id=22,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEFfKI2YuGMM8acVCrFpefV0OhBXhmnc4Btak8xqfpJ+fgijtCxB67Cy872WiGoem7r2HhA8kwucFDoWkV8oHKl/YL0dPzqXRyF6oegJsH40c+wdyEi0ybB2qGD+38ijJg==',key_name='tempest-TestVolumeBootPattern-101947667',keypairs=<?>,launch_index=0,launched_at=2026-01-29T17:31:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='420f46ae230d4c529afe366a1b780921',ramdisk_id='',reservation_id='r-79907se8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-1871389491',owner_user_name='tempest-TestVolumeBootPattern-1871389491-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-29T17:31:07Z,user_data=None,user_id='3901089a059c4bdb8d0497398873d2f1',uuid=58d0f64a-66be-4f3d-ba39-68b90ddf8c4f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7c983110-cfa8-4df3-ac67-f5a430abcfc0", "address": "fa:16:3e:6d:60:e3", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c983110-cf", "ovs_interfaceid": "7c983110-cfa8-4df3-ac67-f5a430abcfc0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 29 12:32:27 np0005601226 systemd[1]: libpod-conmon-fee16ec35ca78519fff67d603ae347991f6658278a6749dc246f8adec87cca64.scope: Deactivated successfully.
Jan 29 12:32:27 np0005601226 nova_compute[239456]: 2026-01-29 17:32:27.763 239460 DEBUG nova.network.os_vif_util [None req-b0803e9f-fdcf-469c-b436-0f01754a84f8 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Converting VIF {"id": "7c983110-cfa8-4df3-ac67-f5a430abcfc0", "address": "fa:16:3e:6d:60:e3", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c983110-cf", "ovs_interfaceid": "7c983110-cfa8-4df3-ac67-f5a430abcfc0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 29 12:32:27 np0005601226 nova_compute[239456]: 2026-01-29 17:32:27.765 239460 DEBUG nova.network.os_vif_util [None req-b0803e9f-fdcf-469c-b436-0f01754a84f8 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6d:60:e3,bridge_name='br-int',has_traffic_filtering=True,id=7c983110-cfa8-4df3-ac67-f5a430abcfc0,network=Network(3c08c304-2b32-4b44-ac2b-279bb8b2403b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7c983110-cf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 29 12:32:27 np0005601226 nova_compute[239456]: 2026-01-29 17:32:27.765 239460 DEBUG os_vif [None req-b0803e9f-fdcf-469c-b436-0f01754a84f8 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6d:60:e3,bridge_name='br-int',has_traffic_filtering=True,id=7c983110-cfa8-4df3-ac67-f5a430abcfc0,network=Network(3c08c304-2b32-4b44-ac2b-279bb8b2403b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7c983110-cf') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 29 12:32:27 np0005601226 nova_compute[239456]: 2026-01-29 17:32:27.767 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:32:27 np0005601226 nova_compute[239456]: 2026-01-29 17:32:27.768 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c983110-cf, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:32:27 np0005601226 nova_compute[239456]: 2026-01-29 17:32:27.769 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:32:27 np0005601226 nova_compute[239456]: 2026-01-29 17:32:27.770 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:32:27 np0005601226 nova_compute[239456]: 2026-01-29 17:32:27.772 239460 INFO os_vif [None req-b0803e9f-fdcf-469c-b436-0f01754a84f8 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6d:60:e3,bridge_name='br-int',has_traffic_filtering=True,id=7c983110-cfa8-4df3-ac67-f5a430abcfc0,network=Network(3c08c304-2b32-4b44-ac2b-279bb8b2403b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7c983110-cf')#033[00m
Jan 29 12:32:27 np0005601226 nova_compute[239456]: 2026-01-29 17:32:27.827 239460 DEBUG nova.compute.manager [req-db4994a0-ab94-4bb5-b5c8-1df893d8a4ab req-c706936b-e45c-4152-9925-9ac3eca37df0 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Received event network-vif-unplugged-7c983110-cfa8-4df3-ac67-f5a430abcfc0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:32:27 np0005601226 nova_compute[239456]: 2026-01-29 17:32:27.827 239460 DEBUG oslo_concurrency.lockutils [req-db4994a0-ab94-4bb5-b5c8-1df893d8a4ab req-c706936b-e45c-4152-9925-9ac3eca37df0 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "58d0f64a-66be-4f3d-ba39-68b90ddf8c4f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:32:27 np0005601226 nova_compute[239456]: 2026-01-29 17:32:27.828 239460 DEBUG oslo_concurrency.lockutils [req-db4994a0-ab94-4bb5-b5c8-1df893d8a4ab req-c706936b-e45c-4152-9925-9ac3eca37df0 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "58d0f64a-66be-4f3d-ba39-68b90ddf8c4f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:32:27 np0005601226 nova_compute[239456]: 2026-01-29 17:32:27.828 239460 DEBUG oslo_concurrency.lockutils [req-db4994a0-ab94-4bb5-b5c8-1df893d8a4ab req-c706936b-e45c-4152-9925-9ac3eca37df0 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "58d0f64a-66be-4f3d-ba39-68b90ddf8c4f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:32:27 np0005601226 nova_compute[239456]: 2026-01-29 17:32:27.828 239460 DEBUG nova.compute.manager [req-db4994a0-ab94-4bb5-b5c8-1df893d8a4ab req-c706936b-e45c-4152-9925-9ac3eca37df0 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] No waiting events found dispatching network-vif-unplugged-7c983110-cfa8-4df3-ac67-f5a430abcfc0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 29 12:32:27 np0005601226 nova_compute[239456]: 2026-01-29 17:32:27.829 239460 DEBUG nova.compute.manager [req-db4994a0-ab94-4bb5-b5c8-1df893d8a4ab req-c706936b-e45c-4152-9925-9ac3eca37df0 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Received event network-vif-unplugged-7c983110-cfa8-4df3-ac67-f5a430abcfc0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 29 12:32:27 np0005601226 podman[270354]: 2026-01-29 17:32:27.830559932 +0000 UTC m=+0.057919211 container remove fee16ec35ca78519fff67d603ae347991f6658278a6749dc246f8adec87cca64 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 29 12:32:27 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:32:27.833 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[3013e5d3-9d95-4f6c-8ab3-04050c30035b]: (4, ('Thu Jan 29 05:32:27 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b (fee16ec35ca78519fff67d603ae347991f6658278a6749dc246f8adec87cca64)\nfee16ec35ca78519fff67d603ae347991f6658278a6749dc246f8adec87cca64\nThu Jan 29 05:32:27 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b (fee16ec35ca78519fff67d603ae347991f6658278a6749dc246f8adec87cca64)\nfee16ec35ca78519fff67d603ae347991f6658278a6749dc246f8adec87cca64\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:32:27 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:32:27.835 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[d38ceafb-28de-4cfe-b516-7523015a5ebb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:32:27 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:32:27.836 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c08c304-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:32:27 np0005601226 nova_compute[239456]: 2026-01-29 17:32:27.837 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:32:27 np0005601226 kernel: tap3c08c304-20: left promiscuous mode
Jan 29 12:32:27 np0005601226 nova_compute[239456]: 2026-01-29 17:32:27.842 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:32:27 np0005601226 nova_compute[239456]: 2026-01-29 17:32:27.842 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:32:27 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:32:27.844 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[15f6bcc2-dc05-4a6e-b6d4-de6fcfcdf26f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:32:27 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:32:27.857 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[2333947e-1f00-40e6-a2f3-8656971b9777]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:32:27 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:32:27.858 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[37746c9b-e0e8-4370-8652-94ee4c06069d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:32:27 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:32:27.868 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[296d5465-7305-4f7b-a14a-a1b720bb9c10]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 522034, 'reachable_time': 35242, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270386, 'error': None, 'target': 'ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:32:27 np0005601226 systemd[1]: run-netns-ovnmeta\x2d3c08c304\x2d2b32\x2d4b44\x2dac2b\x2d279bb8b2403b.mount: Deactivated successfully.
Jan 29 12:32:27 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:32:27.870 156164 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3c08c304-2b32-4b44-ac2b-279bb8b2403b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 29 12:32:27 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:32:27.871 156164 DEBUG oslo.privsep.daemon [-] privsep: reply[5ab2a3e2-0bbf-4d3f-b8b3-5bc25d0505e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:32:28 np0005601226 nova_compute[239456]: 2026-01-29 17:32:28.006 239460 INFO nova.virt.libvirt.driver [None req-b0803e9f-fdcf-469c-b436-0f01754a84f8 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Deleting instance files /var/lib/nova/instances/58d0f64a-66be-4f3d-ba39-68b90ddf8c4f_del#033[00m
Jan 29 12:32:28 np0005601226 nova_compute[239456]: 2026-01-29 17:32:28.007 239460 INFO nova.virt.libvirt.driver [None req-b0803e9f-fdcf-469c-b436-0f01754a84f8 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Deletion of /var/lib/nova/instances/58d0f64a-66be-4f3d-ba39-68b90ddf8c4f_del complete#033[00m
Jan 29 12:32:28 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1699: 305 pgs: 305 active+clean; 365 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 380 KiB/s rd, 125 KiB/s wr, 74 op/s
Jan 29 12:32:28 np0005601226 nova_compute[239456]: 2026-01-29 17:32:28.059 239460 INFO nova.compute.manager [None req-b0803e9f-fdcf-469c-b436-0f01754a84f8 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Took 0.55 seconds to destroy the instance on the hypervisor.#033[00m
Jan 29 12:32:28 np0005601226 nova_compute[239456]: 2026-01-29 17:32:28.060 239460 DEBUG oslo.service.loopingcall [None req-b0803e9f-fdcf-469c-b436-0f01754a84f8 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 29 12:32:28 np0005601226 nova_compute[239456]: 2026-01-29 17:32:28.060 239460 DEBUG nova.compute.manager [-] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 29 12:32:28 np0005601226 nova_compute[239456]: 2026-01-29 17:32:28.060 239460 DEBUG nova.network.neutron [-] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 29 12:32:28 np0005601226 nova_compute[239456]: 2026-01-29 17:32:28.585 239460 DEBUG nova.network.neutron [-] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:32:28 np0005601226 nova_compute[239456]: 2026-01-29 17:32:28.601 239460 INFO nova.compute.manager [-] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Took 0.54 seconds to deallocate network for instance.#033[00m
Jan 29 12:32:28 np0005601226 nova_compute[239456]: 2026-01-29 17:32:28.766 239460 DEBUG nova.network.neutron [req-ed0fd07a-2d8a-4814-a34a-881cc5e02ad1 req-f508c1b8-1a37-4485-8ae9-87a5cdd0db4d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Updated VIF entry in instance network info cache for port 7c983110-cfa8-4df3-ac67-f5a430abcfc0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 29 12:32:28 np0005601226 nova_compute[239456]: 2026-01-29 17:32:28.767 239460 DEBUG nova.network.neutron [req-ed0fd07a-2d8a-4814-a34a-881cc5e02ad1 req-f508c1b8-1a37-4485-8ae9-87a5cdd0db4d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Updating instance_info_cache with network_info: [{"id": "7c983110-cfa8-4df3-ac67-f5a430abcfc0", "address": "fa:16:3e:6d:60:e3", "network": {"id": "3c08c304-2b32-4b44-ac2b-279bb8b2403b", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1886893199-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "420f46ae230d4c529afe366a1b780921", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c983110-cf", "ovs_interfaceid": "7c983110-cfa8-4df3-ac67-f5a430abcfc0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:32:28 np0005601226 nova_compute[239456]: 2026-01-29 17:32:28.773 239460 INFO nova.compute.manager [None req-b0803e9f-fdcf-469c-b436-0f01754a84f8 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Took 0.17 seconds to detach 1 volumes for instance.#033[00m
Jan 29 12:32:28 np0005601226 nova_compute[239456]: 2026-01-29 17:32:28.790 239460 DEBUG oslo_concurrency.lockutils [req-ed0fd07a-2d8a-4814-a34a-881cc5e02ad1 req-f508c1b8-1a37-4485-8ae9-87a5cdd0db4d 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Releasing lock "refresh_cache-58d0f64a-66be-4f3d-ba39-68b90ddf8c4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:32:28 np0005601226 nova_compute[239456]: 2026-01-29 17:32:28.811 239460 DEBUG oslo_concurrency.lockutils [None req-b0803e9f-fdcf-469c-b436-0f01754a84f8 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:32:28 np0005601226 nova_compute[239456]: 2026-01-29 17:32:28.811 239460 DEBUG oslo_concurrency.lockutils [None req-b0803e9f-fdcf-469c-b436-0f01754a84f8 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:32:28 np0005601226 nova_compute[239456]: 2026-01-29 17:32:28.858 239460 DEBUG oslo_concurrency.processutils [None req-b0803e9f-fdcf-469c-b436-0f01754a84f8 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:32:29 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:32:29 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2263589179' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:32:29 np0005601226 nova_compute[239456]: 2026-01-29 17:32:29.367 239460 DEBUG oslo_concurrency.processutils [None req-b0803e9f-fdcf-469c-b436-0f01754a84f8 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:32:29 np0005601226 nova_compute[239456]: 2026-01-29 17:32:29.372 239460 DEBUG nova.compute.provider_tree [None req-b0803e9f-fdcf-469c-b436-0f01754a84f8 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:32:29 np0005601226 nova_compute[239456]: 2026-01-29 17:32:29.391 239460 DEBUG nova.scheduler.client.report [None req-b0803e9f-fdcf-469c-b436-0f01754a84f8 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:32:29 np0005601226 nova_compute[239456]: 2026-01-29 17:32:29.414 239460 DEBUG oslo_concurrency.lockutils [None req-b0803e9f-fdcf-469c-b436-0f01754a84f8 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.603s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:32:29 np0005601226 nova_compute[239456]: 2026-01-29 17:32:29.436 239460 INFO nova.scheduler.client.report [None req-b0803e9f-fdcf-469c-b436-0f01754a84f8 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Deleted allocations for instance 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f#033[00m
Jan 29 12:32:29 np0005601226 nova_compute[239456]: 2026-01-29 17:32:29.498 239460 DEBUG oslo_concurrency.lockutils [None req-b0803e9f-fdcf-469c-b436-0f01754a84f8 3901089a059c4bdb8d0497398873d2f1 420f46ae230d4c529afe366a1b780921 - - default default] Lock "58d0f64a-66be-4f3d-ba39-68b90ddf8c4f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.990s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:32:29 np0005601226 nova_compute[239456]: 2026-01-29 17:32:29.554 239460 DEBUG nova.compute.manager [req-4a6a3597-be51-4190-9e33-040367b82075 req-2754241f-4d4a-4165-98bf-7f908c62817a 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Received event network-vif-deleted-7c983110-cfa8-4df3-ac67-f5a430abcfc0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:32:29 np0005601226 nova_compute[239456]: 2026-01-29 17:32:29.554 239460 INFO nova.compute.manager [req-4a6a3597-be51-4190-9e33-040367b82075 req-2754241f-4d4a-4165-98bf-7f908c62817a 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Neutron deleted interface 7c983110-cfa8-4df3-ac67-f5a430abcfc0; detaching it from the instance and deleting it from the info cache#033[00m
Jan 29 12:32:29 np0005601226 nova_compute[239456]: 2026-01-29 17:32:29.554 239460 DEBUG nova.network.neutron [req-4a6a3597-be51-4190-9e33-040367b82075 req-2754241f-4d4a-4165-98bf-7f908c62817a 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106#033[00m
Jan 29 12:32:29 np0005601226 nova_compute[239456]: 2026-01-29 17:32:29.557 239460 DEBUG nova.compute.manager [req-4a6a3597-be51-4190-9e33-040367b82075 req-2754241f-4d4a-4165-98bf-7f908c62817a 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Detach interface failed, port_id=7c983110-cfa8-4df3-ac67-f5a430abcfc0, reason: Instance 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Jan 29 12:32:29 np0005601226 nova_compute[239456]: 2026-01-29 17:32:29.604 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:32:29 np0005601226 nova_compute[239456]: 2026-01-29 17:32:29.923 239460 DEBUG nova.compute.manager [req-7d212ba0-9cdd-4990-a522-3f077caf1ffc req-0587e82a-3cc8-432e-88fc-745b9a921e05 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Received event network-vif-plugged-7c983110-cfa8-4df3-ac67-f5a430abcfc0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:32:29 np0005601226 nova_compute[239456]: 2026-01-29 17:32:29.923 239460 DEBUG oslo_concurrency.lockutils [req-7d212ba0-9cdd-4990-a522-3f077caf1ffc req-0587e82a-3cc8-432e-88fc-745b9a921e05 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "58d0f64a-66be-4f3d-ba39-68b90ddf8c4f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:32:29 np0005601226 nova_compute[239456]: 2026-01-29 17:32:29.923 239460 DEBUG oslo_concurrency.lockutils [req-7d212ba0-9cdd-4990-a522-3f077caf1ffc req-0587e82a-3cc8-432e-88fc-745b9a921e05 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "58d0f64a-66be-4f3d-ba39-68b90ddf8c4f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:32:29 np0005601226 nova_compute[239456]: 2026-01-29 17:32:29.923 239460 DEBUG oslo_concurrency.lockutils [req-7d212ba0-9cdd-4990-a522-3f077caf1ffc req-0587e82a-3cc8-432e-88fc-745b9a921e05 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "58d0f64a-66be-4f3d-ba39-68b90ddf8c4f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:32:29 np0005601226 nova_compute[239456]: 2026-01-29 17:32:29.924 239460 DEBUG nova.compute.manager [req-7d212ba0-9cdd-4990-a522-3f077caf1ffc req-0587e82a-3cc8-432e-88fc-745b9a921e05 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] No waiting events found dispatching network-vif-plugged-7c983110-cfa8-4df3-ac67-f5a430abcfc0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 29 12:32:29 np0005601226 nova_compute[239456]: 2026-01-29 17:32:29.924 239460 WARNING nova.compute.manager [req-7d212ba0-9cdd-4990-a522-3f077caf1ffc req-0587e82a-3cc8-432e-88fc-745b9a921e05 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Received unexpected event network-vif-plugged-7c983110-cfa8-4df3-ac67-f5a430abcfc0 for instance with vm_state deleted and task_state None.#033[00m
Jan 29 12:32:30 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1700: 305 pgs: 305 active+clean; 351 MiB data, 624 MiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 29 KiB/s wr, 79 op/s
Jan 29 12:32:30 np0005601226 nova_compute[239456]: 2026-01-29 17:32:30.378 239460 DEBUG oslo_concurrency.lockutils [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Acquiring lock "a63c1430-ea41-4d52-8ba3-4122d88a6621" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:32:30 np0005601226 nova_compute[239456]: 2026-01-29 17:32:30.378 239460 DEBUG oslo_concurrency.lockutils [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Lock "a63c1430-ea41-4d52-8ba3-4122d88a6621" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:32:30 np0005601226 nova_compute[239456]: 2026-01-29 17:32:30.396 239460 DEBUG nova.compute.manager [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 29 12:32:30 np0005601226 nova_compute[239456]: 2026-01-29 17:32:30.450 239460 DEBUG oslo_concurrency.lockutils [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:32:30 np0005601226 nova_compute[239456]: 2026-01-29 17:32:30.451 239460 DEBUG oslo_concurrency.lockutils [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:32:30 np0005601226 nova_compute[239456]: 2026-01-29 17:32:30.458 239460 DEBUG nova.virt.hardware [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 29 12:32:30 np0005601226 nova_compute[239456]: 2026-01-29 17:32:30.459 239460 INFO nova.compute.claims [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 29 12:32:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:32:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1428108669' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:32:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:32:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1428108669' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:32:30 np0005601226 nova_compute[239456]: 2026-01-29 17:32:30.559 239460 DEBUG oslo_concurrency.processutils [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:32:30 np0005601226 podman[270430]: 2026-01-29 17:32:30.886804918 +0000 UTC m=+0.055977998 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true)
Jan 29 12:32:30 np0005601226 podman[270431]: 2026-01-29 17:32:30.939494686 +0000 UTC m=+0.105572113 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 29 12:32:31 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:32:31 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3815634668' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:32:31 np0005601226 nova_compute[239456]: 2026-01-29 17:32:31.065 239460 DEBUG oslo_concurrency.processutils [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:32:31 np0005601226 nova_compute[239456]: 2026-01-29 17:32:31.070 239460 DEBUG nova.compute.provider_tree [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:32:31 np0005601226 nova_compute[239456]: 2026-01-29 17:32:31.097 239460 DEBUG nova.scheduler.client.report [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:32:31 np0005601226 nova_compute[239456]: 2026-01-29 17:32:31.117 239460 DEBUG oslo_concurrency.lockutils [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:32:31 np0005601226 nova_compute[239456]: 2026-01-29 17:32:31.117 239460 DEBUG nova.compute.manager [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 29 12:32:31 np0005601226 nova_compute[239456]: 2026-01-29 17:32:31.164 239460 DEBUG nova.compute.manager [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 29 12:32:31 np0005601226 nova_compute[239456]: 2026-01-29 17:32:31.165 239460 DEBUG nova.network.neutron [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 29 12:32:31 np0005601226 nova_compute[239456]: 2026-01-29 17:32:31.191 239460 INFO nova.virt.libvirt.driver [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 29 12:32:31 np0005601226 nova_compute[239456]: 2026-01-29 17:32:31.233 239460 DEBUG nova.compute.manager [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 29 12:32:31 np0005601226 nova_compute[239456]: 2026-01-29 17:32:31.281 239460 INFO nova.virt.block_device [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] Booting with volume 4000fad4-b5f6-4912-bea5-f20dff3f5ac9 at /dev/vda#033[00m
Jan 29 12:32:31 np0005601226 nova_compute[239456]: 2026-01-29 17:32:31.437 239460 DEBUG os_brick.utils [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 29 12:32:31 np0005601226 nova_compute[239456]: 2026-01-29 17:32:31.438 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:32:31 np0005601226 nova_compute[239456]: 2026-01-29 17:32:31.449 249612 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:32:31 np0005601226 nova_compute[239456]: 2026-01-29 17:32:31.449 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[202d312d-ab14-49d3-914c-6319b5d38fe2]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:32:31 np0005601226 nova_compute[239456]: 2026-01-29 17:32:31.451 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:32:31 np0005601226 nova_compute[239456]: 2026-01-29 17:32:31.458 249612 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:32:31 np0005601226 nova_compute[239456]: 2026-01-29 17:32:31.459 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[5c7e62dc-010b-43fe-9fcd-2d80341e4e0d]: (4, ('InitiatorName=iqn.1994-05.com.redhat:29fa340538f', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:32:31 np0005601226 nova_compute[239456]: 2026-01-29 17:32:31.461 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:32:31 np0005601226 nova_compute[239456]: 2026-01-29 17:32:31.467 249612 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:32:31 np0005601226 nova_compute[239456]: 2026-01-29 17:32:31.468 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[6653c300-f317-45c3-8c9f-afefadbf6d1e]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:32:31 np0005601226 nova_compute[239456]: 2026-01-29 17:32:31.470 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[468e99bc-1a17-4d45-b191-ae7d7eb2ff85]: (4, '3d58286e-1b14-486e-8cad-0bdb2d2969c4') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:32:31 np0005601226 nova_compute[239456]: 2026-01-29 17:32:31.471 239460 DEBUG oslo_concurrency.processutils [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:32:31 np0005601226 nova_compute[239456]: 2026-01-29 17:32:31.490 239460 DEBUG nova.policy [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4f278bc1afe946ca991a0203a74c5a7f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c74297072cc041019fc7ff4bff1a0f08', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 29 12:32:31 np0005601226 nova_compute[239456]: 2026-01-29 17:32:31.497 239460 DEBUG oslo_concurrency.processutils [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] CMD "nvme version" returned: 0 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:32:31 np0005601226 nova_compute[239456]: 2026-01-29 17:32:31.500 239460 DEBUG os_brick.initiator.connectors.lightos [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 29 12:32:31 np0005601226 nova_compute[239456]: 2026-01-29 17:32:31.502 239460 DEBUG os_brick.initiator.connectors.lightos [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 29 12:32:31 np0005601226 nova_compute[239456]: 2026-01-29 17:32:31.503 239460 DEBUG os_brick.initiator.connectors.lightos [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 29 12:32:31 np0005601226 nova_compute[239456]: 2026-01-29 17:32:31.503 239460 DEBUG os_brick.utils [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] <== get_connector_properties: return (66ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:29fa340538f', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '3d58286e-1b14-486e-8cad-0bdb2d2969c4', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 29 12:32:31 np0005601226 nova_compute[239456]: 2026-01-29 17:32:31.504 239460 DEBUG nova.virt.block_device [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] Updating existing volume attachment record: 3accd562-14be-4197-bdd5-6274c7c53748 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 29 12:32:31 np0005601226 nova_compute[239456]: 2026-01-29 17:32:31.623 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:32:31 np0005601226 nova_compute[239456]: 2026-01-29 17:32:31.624 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 29 12:32:31 np0005601226 nova_compute[239456]: 2026-01-29 17:32:31.760 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:32:31 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:32:31 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4206844871' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:32:31 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:32:31 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4206844871' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:32:32 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1701: 305 pgs: 305 active+clean; 351 MiB data, 624 MiB used, 59 GiB / 60 GiB avail; 227 KiB/s rd, 28 KiB/s wr, 76 op/s
Jan 29 12:32:32 np0005601226 nova_compute[239456]: 2026-01-29 17:32:32.156 239460 DEBUG nova.network.neutron [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] Successfully created port: e3ded0c8-e7b9-4534-8420-a68a252cbfce _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 29 12:32:32 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:32:32 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3476492862' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:32:32 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:32:32 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e424 do_prune osdmap full prune enabled
Jan 29 12:32:32 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e425 e425: 3 total, 3 up, 3 in
Jan 29 12:32:32 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e425: 3 total, 3 up, 3 in
Jan 29 12:32:32 np0005601226 nova_compute[239456]: 2026-01-29 17:32:32.635 239460 DEBUG nova.compute.manager [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 29 12:32:32 np0005601226 nova_compute[239456]: 2026-01-29 17:32:32.636 239460 DEBUG nova.virt.libvirt.driver [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 29 12:32:32 np0005601226 nova_compute[239456]: 2026-01-29 17:32:32.637 239460 INFO nova.virt.libvirt.driver [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] Creating image(s)#033[00m
Jan 29 12:32:32 np0005601226 nova_compute[239456]: 2026-01-29 17:32:32.637 239460 DEBUG nova.virt.libvirt.driver [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Jan 29 12:32:32 np0005601226 nova_compute[239456]: 2026-01-29 17:32:32.638 239460 DEBUG nova.virt.libvirt.driver [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] Ensure instance console log exists: /var/lib/nova/instances/a63c1430-ea41-4d52-8ba3-4122d88a6621/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 29 12:32:32 np0005601226 nova_compute[239456]: 2026-01-29 17:32:32.638 239460 DEBUG oslo_concurrency.lockutils [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:32:32 np0005601226 nova_compute[239456]: 2026-01-29 17:32:32.638 239460 DEBUG oslo_concurrency.lockutils [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:32:32 np0005601226 nova_compute[239456]: 2026-01-29 17:32:32.638 239460 DEBUG oslo_concurrency.lockutils [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:32:32 np0005601226 nova_compute[239456]: 2026-01-29 17:32:32.771 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:32:32 np0005601226 nova_compute[239456]: 2026-01-29 17:32:32.992 239460 DEBUG nova.network.neutron [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] Successfully updated port: e3ded0c8-e7b9-4534-8420-a68a252cbfce _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 29 12:32:33 np0005601226 nova_compute[239456]: 2026-01-29 17:32:33.008 239460 DEBUG oslo_concurrency.lockutils [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Acquiring lock "refresh_cache-a63c1430-ea41-4d52-8ba3-4122d88a6621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:32:33 np0005601226 nova_compute[239456]: 2026-01-29 17:32:33.009 239460 DEBUG oslo_concurrency.lockutils [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Acquired lock "refresh_cache-a63c1430-ea41-4d52-8ba3-4122d88a6621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:32:33 np0005601226 nova_compute[239456]: 2026-01-29 17:32:33.009 239460 DEBUG nova.network.neutron [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 29 12:32:33 np0005601226 nova_compute[239456]: 2026-01-29 17:32:33.103 239460 DEBUG nova.compute.manager [req-cbca61e2-4523-4b0e-8279-d69a9b2f23db req-03b0adca-8648-4891-8589-ed40008ca367 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] Received event network-changed-e3ded0c8-e7b9-4534-8420-a68a252cbfce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:32:33 np0005601226 nova_compute[239456]: 2026-01-29 17:32:33.103 239460 DEBUG nova.compute.manager [req-cbca61e2-4523-4b0e-8279-d69a9b2f23db req-03b0adca-8648-4891-8589-ed40008ca367 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] Refreshing instance network info cache due to event network-changed-e3ded0c8-e7b9-4534-8420-a68a252cbfce. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 29 12:32:33 np0005601226 nova_compute[239456]: 2026-01-29 17:32:33.104 239460 DEBUG oslo_concurrency.lockutils [req-cbca61e2-4523-4b0e-8279-d69a9b2f23db req-03b0adca-8648-4891-8589-ed40008ca367 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "refresh_cache-a63c1430-ea41-4d52-8ba3-4122d88a6621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:32:33 np0005601226 nova_compute[239456]: 2026-01-29 17:32:33.190 239460 DEBUG nova.network.neutron [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 29 12:32:33 np0005601226 nova_compute[239456]: 2026-01-29 17:32:33.944 239460 DEBUG nova.network.neutron [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] Updating instance_info_cache with network_info: [{"id": "e3ded0c8-e7b9-4534-8420-a68a252cbfce", "address": "fa:16:3e:05:cb:1d", "network": {"id": "25cf1715-f178-4f65-be7c-cf203c28f072", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-516658087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c74297072cc041019fc7ff4bff1a0f08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3ded0c8-e7", "ovs_interfaceid": "e3ded0c8-e7b9-4534-8420-a68a252cbfce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:32:33 np0005601226 nova_compute[239456]: 2026-01-29 17:32:33.998 239460 DEBUG oslo_concurrency.lockutils [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Releasing lock "refresh_cache-a63c1430-ea41-4d52-8ba3-4122d88a6621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:32:33 np0005601226 nova_compute[239456]: 2026-01-29 17:32:33.999 239460 DEBUG nova.compute.manager [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] Instance network_info: |[{"id": "e3ded0c8-e7b9-4534-8420-a68a252cbfce", "address": "fa:16:3e:05:cb:1d", "network": {"id": "25cf1715-f178-4f65-be7c-cf203c28f072", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-516658087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c74297072cc041019fc7ff4bff1a0f08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3ded0c8-e7", "ovs_interfaceid": "e3ded0c8-e7b9-4534-8420-a68a252cbfce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 29 12:32:34 np0005601226 nova_compute[239456]: 2026-01-29 17:32:33.999 239460 DEBUG oslo_concurrency.lockutils [req-cbca61e2-4523-4b0e-8279-d69a9b2f23db req-03b0adca-8648-4891-8589-ed40008ca367 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquired lock "refresh_cache-a63c1430-ea41-4d52-8ba3-4122d88a6621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:32:34 np0005601226 nova_compute[239456]: 2026-01-29 17:32:34.000 239460 DEBUG nova.network.neutron [req-cbca61e2-4523-4b0e-8279-d69a9b2f23db req-03b0adca-8648-4891-8589-ed40008ca367 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] Refreshing network info cache for port e3ded0c8-e7b9-4534-8420-a68a252cbfce _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 29 12:32:34 np0005601226 nova_compute[239456]: 2026-01-29 17:32:34.006 239460 DEBUG nova.virt.libvirt.driver [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] Start _get_guest_xml network_info=[{"id": "e3ded0c8-e7b9-4534-8420-a68a252cbfce", "address": "fa:16:3e:05:cb:1d", "network": {"id": "25cf1715-f178-4f65-be7c-cf203c28f072", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-516658087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c74297072cc041019fc7ff4bff1a0f08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3ded0c8-e7", "ovs_interfaceid": "e3ded0c8-e7b9-4534-8420-a68a252cbfce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None 
block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'guest_format': None, 'mount_device': '/dev/vda', 'device_type': 'disk', 'disk_bus': 'virtio', 'attachment_id': '3accd562-14be-4197-bdd5-6274c7c53748', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-4000fad4-b5f6-4912-bea5-f20dff3f5ac9', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '4000fad4-b5f6-4912-bea5-f20dff3f5ac9', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'a63c1430-ea41-4d52-8ba3-4122d88a6621', 'attached_at': '', 'detached_at': '', 'volume_id': '4000fad4-b5f6-4912-bea5-f20dff3f5ac9', 'serial': '4000fad4-b5f6-4912-bea5-f20dff3f5ac9'}, 'delete_on_termination': False, 'boot_index': 0, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 29 12:32:34 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1703: 305 pgs: 305 active+clean; 332 MiB data, 611 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.5 KiB/s wr, 32 op/s
Jan 29 12:32:34 np0005601226 nova_compute[239456]: 2026-01-29 17:32:34.011 239460 WARNING nova.virt.libvirt.driver [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 29 12:32:34 np0005601226 nova_compute[239456]: 2026-01-29 17:32:34.018 239460 DEBUG nova.virt.libvirt.host [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 29 12:32:34 np0005601226 nova_compute[239456]: 2026-01-29 17:32:34.019 239460 DEBUG nova.virt.libvirt.host [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 29 12:32:34 np0005601226 nova_compute[239456]: 2026-01-29 17:32:34.022 239460 DEBUG nova.virt.libvirt.host [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 29 12:32:34 np0005601226 nova_compute[239456]: 2026-01-29 17:32:34.023 239460 DEBUG nova.virt.libvirt.host [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 29 12:32:34 np0005601226 nova_compute[239456]: 2026-01-29 17:32:34.024 239460 DEBUG nova.virt.libvirt.driver [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 29 12:32:34 np0005601226 nova_compute[239456]: 2026-01-29 17:32:34.024 239460 DEBUG nova.virt.hardware [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-29T17:13:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='e367d44e-e23e-4b8e-90d1-56d09c8403b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 29 12:32:34 np0005601226 nova_compute[239456]: 2026-01-29 17:32:34.025 239460 DEBUG nova.virt.hardware [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 29 12:32:34 np0005601226 nova_compute[239456]: 2026-01-29 17:32:34.025 239460 DEBUG nova.virt.hardware [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 29 12:32:34 np0005601226 nova_compute[239456]: 2026-01-29 17:32:34.026 239460 DEBUG nova.virt.hardware [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 29 12:32:34 np0005601226 nova_compute[239456]: 2026-01-29 17:32:34.026 239460 DEBUG nova.virt.hardware [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 29 12:32:34 np0005601226 nova_compute[239456]: 2026-01-29 17:32:34.027 239460 DEBUG nova.virt.hardware [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 29 12:32:34 np0005601226 nova_compute[239456]: 2026-01-29 17:32:34.027 239460 DEBUG nova.virt.hardware [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 29 12:32:34 np0005601226 nova_compute[239456]: 2026-01-29 17:32:34.027 239460 DEBUG nova.virt.hardware [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 29 12:32:34 np0005601226 nova_compute[239456]: 2026-01-29 17:32:34.028 239460 DEBUG nova.virt.hardware [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 29 12:32:34 np0005601226 nova_compute[239456]: 2026-01-29 17:32:34.028 239460 DEBUG nova.virt.hardware [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 29 12:32:34 np0005601226 nova_compute[239456]: 2026-01-29 17:32:34.029 239460 DEBUG nova.virt.hardware [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 29 12:32:34 np0005601226 nova_compute[239456]: 2026-01-29 17:32:34.061 239460 DEBUG nova.storage.rbd_utils [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] rbd image a63c1430-ea41-4d52-8ba3-4122d88a6621_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:32:34 np0005601226 nova_compute[239456]: 2026-01-29 17:32:34.066 239460 DEBUG oslo_concurrency.processutils [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:32:34 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:32:34 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/170043468' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:32:34 np0005601226 nova_compute[239456]: 2026-01-29 17:32:34.619 239460 DEBUG oslo_concurrency.processutils [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:32:34 np0005601226 nova_compute[239456]: 2026-01-29 17:32:34.722 239460 DEBUG os_brick.encryptors [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Using volume encryption metadata '{'encryption_key_id': '9091ace7-7105-4688-a876-f06841abd9d8', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-4000fad4-b5f6-4912-bea5-f20dff3f5ac9', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '4000fad4-b5f6-4912-bea5-f20dff3f5ac9', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'a63c1430-ea41-4d52-8ba3-4122d88a6621', 'attached_at': '', 'detached_at': '', 'volume_id': '4000fad4-b5f6-4912-bea5-f20dff3f5ac9', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Jan 29 12:32:34 np0005601226 nova_compute[239456]: 2026-01-29 17:32:34.726 239460 DEBUG barbicanclient.client [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163#033[00m
Jan 29 12:32:34 np0005601226 nova_compute[239456]: 2026-01-29 17:32:34.746 239460 DEBUG barbicanclient.v1.secrets [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/9091ace7-7105-4688-a876-f06841abd9d8 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514#033[00m
Jan 29 12:32:34 np0005601226 nova_compute[239456]: 2026-01-29 17:32:34.746 239460 INFO barbicanclient.base [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Calculated Secrets uuid ref: secrets/9091ace7-7105-4688-a876-f06841abd9d8#033[00m
Jan 29 12:32:34 np0005601226 nova_compute[239456]: 2026-01-29 17:32:34.775 239460 DEBUG barbicanclient.client [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:32:34 np0005601226 nova_compute[239456]: 2026-01-29 17:32:34.776 239460 INFO barbicanclient.base [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Calculated Secrets uuid ref: secrets/9091ace7-7105-4688-a876-f06841abd9d8#033[00m
Jan 29 12:32:34 np0005601226 nova_compute[239456]: 2026-01-29 17:32:34.800 239460 DEBUG barbicanclient.client [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:32:34 np0005601226 nova_compute[239456]: 2026-01-29 17:32:34.801 239460 INFO barbicanclient.base [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Calculated Secrets uuid ref: secrets/9091ace7-7105-4688-a876-f06841abd9d8#033[00m
Jan 29 12:32:34 np0005601226 nova_compute[239456]: 2026-01-29 17:32:34.831 239460 DEBUG barbicanclient.client [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:32:34 np0005601226 nova_compute[239456]: 2026-01-29 17:32:34.832 239460 INFO barbicanclient.base [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Calculated Secrets uuid ref: secrets/9091ace7-7105-4688-a876-f06841abd9d8#033[00m
Jan 29 12:32:34 np0005601226 nova_compute[239456]: 2026-01-29 17:32:34.858 239460 DEBUG barbicanclient.client [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:32:34 np0005601226 nova_compute[239456]: 2026-01-29 17:32:34.859 239460 INFO barbicanclient.base [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Calculated Secrets uuid ref: secrets/9091ace7-7105-4688-a876-f06841abd9d8#033[00m
Jan 29 12:32:34 np0005601226 nova_compute[239456]: 2026-01-29 17:32:34.878 239460 DEBUG barbicanclient.client [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:32:34 np0005601226 nova_compute[239456]: 2026-01-29 17:32:34.879 239460 INFO barbicanclient.base [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Calculated Secrets uuid ref: secrets/9091ace7-7105-4688-a876-f06841abd9d8#033[00m
Jan 29 12:32:34 np0005601226 nova_compute[239456]: 2026-01-29 17:32:34.901 239460 DEBUG barbicanclient.client [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:32:34 np0005601226 nova_compute[239456]: 2026-01-29 17:32:34.902 239460 INFO barbicanclient.base [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Calculated Secrets uuid ref: secrets/9091ace7-7105-4688-a876-f06841abd9d8#033[00m
Jan 29 12:32:34 np0005601226 nova_compute[239456]: 2026-01-29 17:32:34.940 239460 DEBUG barbicanclient.client [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:32:34 np0005601226 nova_compute[239456]: 2026-01-29 17:32:34.940 239460 INFO barbicanclient.base [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Calculated Secrets uuid ref: secrets/9091ace7-7105-4688-a876-f06841abd9d8#033[00m
Jan 29 12:32:34 np0005601226 nova_compute[239456]: 2026-01-29 17:32:34.972 239460 DEBUG barbicanclient.client [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:32:34 np0005601226 nova_compute[239456]: 2026-01-29 17:32:34.972 239460 INFO barbicanclient.base [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Calculated Secrets uuid ref: secrets/9091ace7-7105-4688-a876-f06841abd9d8#033[00m
Jan 29 12:32:35 np0005601226 nova_compute[239456]: 2026-01-29 17:32:35.001 239460 DEBUG barbicanclient.client [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:32:35 np0005601226 nova_compute[239456]: 2026-01-29 17:32:35.002 239460 INFO barbicanclient.base [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Calculated Secrets uuid ref: secrets/9091ace7-7105-4688-a876-f06841abd9d8#033[00m
Jan 29 12:32:35 np0005601226 nova_compute[239456]: 2026-01-29 17:32:35.021 239460 DEBUG barbicanclient.client [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:32:35 np0005601226 nova_compute[239456]: 2026-01-29 17:32:35.021 239460 INFO barbicanclient.base [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Calculated Secrets uuid ref: secrets/9091ace7-7105-4688-a876-f06841abd9d8#033[00m
Jan 29 12:32:35 np0005601226 nova_compute[239456]: 2026-01-29 17:32:35.048 239460 DEBUG barbicanclient.client [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:32:35 np0005601226 nova_compute[239456]: 2026-01-29 17:32:35.049 239460 INFO barbicanclient.base [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Calculated Secrets uuid ref: secrets/9091ace7-7105-4688-a876-f06841abd9d8#033[00m
Jan 29 12:32:35 np0005601226 nova_compute[239456]: 2026-01-29 17:32:35.071 239460 DEBUG barbicanclient.client [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:32:35 np0005601226 nova_compute[239456]: 2026-01-29 17:32:35.071 239460 INFO barbicanclient.base [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Calculated Secrets uuid ref: secrets/9091ace7-7105-4688-a876-f06841abd9d8#033[00m
Jan 29 12:32:35 np0005601226 nova_compute[239456]: 2026-01-29 17:32:35.099 239460 DEBUG barbicanclient.client [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:32:35 np0005601226 nova_compute[239456]: 2026-01-29 17:32:35.100 239460 INFO barbicanclient.base [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Calculated Secrets uuid ref: secrets/9091ace7-7105-4688-a876-f06841abd9d8#033[00m
Jan 29 12:32:35 np0005601226 nova_compute[239456]: 2026-01-29 17:32:35.122 239460 DEBUG barbicanclient.client [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:32:35 np0005601226 nova_compute[239456]: 2026-01-29 17:32:35.123 239460 INFO barbicanclient.base [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Calculated Secrets uuid ref: secrets/9091ace7-7105-4688-a876-f06841abd9d8#033[00m
Jan 29 12:32:35 np0005601226 nova_compute[239456]: 2026-01-29 17:32:35.157 239460 DEBUG barbicanclient.client [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:32:35 np0005601226 nova_compute[239456]: 2026-01-29 17:32:35.158 239460 DEBUG nova.virt.libvirt.host [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Secret XML: <secret ephemeral="no" private="no">
Jan 29 12:32:35 np0005601226 nova_compute[239456]:  <usage type="volume">
Jan 29 12:32:35 np0005601226 nova_compute[239456]:    <volume>4000fad4-b5f6-4912-bea5-f20dff3f5ac9</volume>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:  </usage>
Jan 29 12:32:35 np0005601226 nova_compute[239456]: </secret>
Jan 29 12:32:35 np0005601226 nova_compute[239456]: create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131#033[00m
Jan 29 12:32:35 np0005601226 nova_compute[239456]: 2026-01-29 17:32:35.184 239460 DEBUG nova.virt.libvirt.vif [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-29T17:32:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1713674902',display_name='tempest-TransferEncryptedVolumeTest-server-1713674902',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1713674902',id=25,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCuw1UleBgXrkrvixGGBn21sDEJH6+FrkgFvq6jv3D3khmeyc7tU6zH/hmJ8BmjXmJToJI+73AcA0H8QCIrilSaG34LfS65uhiBlMWUY7wThjQ0H0WSLw5MFEF4DjDh1dA==',key_name='tempest-TransferEncryptedVolumeTest-895765981',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c74297072cc041019fc7ff4bff1a0f08',ramdisk_id='',reservation_id='r-rileg2pv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1262552887',owner_user_name='tempest-TransferEncryptedVolumeTest-1262552887-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-29T17:32:31Z,user_data=None,user_id='4f278bc1afe946ca991a0203a74c5a7f',uuid=a63c1430-ea41-4d52-8ba3-4122d88a6621,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e3ded0c8-e7b9-4534-8420-a68a252cbfce", "address": "fa:16:3e:05:cb:1d", "network": {"id": "25cf1715-f178-4f65-be7c-cf203c28f072", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-516658087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "c74297072cc041019fc7ff4bff1a0f08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3ded0c8-e7", "ovs_interfaceid": "e3ded0c8-e7b9-4534-8420-a68a252cbfce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 29 12:32:35 np0005601226 nova_compute[239456]: 2026-01-29 17:32:35.184 239460 DEBUG nova.network.os_vif_util [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Converting VIF {"id": "e3ded0c8-e7b9-4534-8420-a68a252cbfce", "address": "fa:16:3e:05:cb:1d", "network": {"id": "25cf1715-f178-4f65-be7c-cf203c28f072", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-516658087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c74297072cc041019fc7ff4bff1a0f08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3ded0c8-e7", "ovs_interfaceid": "e3ded0c8-e7b9-4534-8420-a68a252cbfce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 29 12:32:35 np0005601226 nova_compute[239456]: 2026-01-29 17:32:35.185 239460 DEBUG nova.network.os_vif_util [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:05:cb:1d,bridge_name='br-int',has_traffic_filtering=True,id=e3ded0c8-e7b9-4534-8420-a68a252cbfce,network=Network(25cf1715-f178-4f65-be7c-cf203c28f072),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape3ded0c8-e7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 29 12:32:35 np0005601226 nova_compute[239456]: 2026-01-29 17:32:35.187 239460 DEBUG nova.objects.instance [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Lazy-loading 'pci_devices' on Instance uuid a63c1430-ea41-4d52-8ba3-4122d88a6621 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:32:35 np0005601226 nova_compute[239456]: 2026-01-29 17:32:35.198 239460 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769707940.1968985, ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:32:35 np0005601226 nova_compute[239456]: 2026-01-29 17:32:35.198 239460 INFO nova.compute.manager [-] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] VM Stopped (Lifecycle Event)#033[00m
Jan 29 12:32:35 np0005601226 nova_compute[239456]: 2026-01-29 17:32:35.202 239460 DEBUG nova.virt.libvirt.driver [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] End _get_guest_xml xml=<domain type="kvm">
Jan 29 12:32:35 np0005601226 nova_compute[239456]:  <uuid>a63c1430-ea41-4d52-8ba3-4122d88a6621</uuid>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:  <name>instance-00000019</name>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:  <memory>131072</memory>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:  <vcpu>1</vcpu>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:  <metadata>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 29 12:32:35 np0005601226 nova_compute[239456]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:      <nova:name>tempest-TransferEncryptedVolumeTest-server-1713674902</nova:name>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:      <nova:creationTime>2026-01-29 17:32:34</nova:creationTime>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:      <nova:flavor name="m1.nano">
Jan 29 12:32:35 np0005601226 nova_compute[239456]:        <nova:memory>128</nova:memory>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:        <nova:disk>1</nova:disk>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:        <nova:swap>0</nova:swap>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:        <nova:ephemeral>0</nova:ephemeral>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:        <nova:vcpus>1</nova:vcpus>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:      </nova:flavor>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:      <nova:owner>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:        <nova:user uuid="4f278bc1afe946ca991a0203a74c5a7f">tempest-TransferEncryptedVolumeTest-1262552887-project-member</nova:user>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:        <nova:project uuid="c74297072cc041019fc7ff4bff1a0f08">tempest-TransferEncryptedVolumeTest-1262552887</nova:project>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:      </nova:owner>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:      <nova:ports>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:        <nova:port uuid="e3ded0c8-e7b9-4534-8420-a68a252cbfce">
Jan 29 12:32:35 np0005601226 nova_compute[239456]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:        </nova:port>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:      </nova:ports>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:    </nova:instance>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:  </metadata>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:  <sysinfo type="smbios">
Jan 29 12:32:35 np0005601226 nova_compute[239456]:    <system>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:      <entry name="manufacturer">RDO</entry>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:      <entry name="product">OpenStack Compute</entry>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:      <entry name="serial">a63c1430-ea41-4d52-8ba3-4122d88a6621</entry>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:      <entry name="uuid">a63c1430-ea41-4d52-8ba3-4122d88a6621</entry>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:      <entry name="family">Virtual Machine</entry>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:    </system>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:  </sysinfo>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:  <os>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:    <boot dev="hd"/>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:    <smbios mode="sysinfo"/>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:  </os>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:  <features>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:    <acpi/>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:    <apic/>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:    <vmcoreinfo/>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:  </features>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:  <clock offset="utc">
Jan 29 12:32:35 np0005601226 nova_compute[239456]:    <timer name="pit" tickpolicy="delay"/>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:    <timer name="hpet" present="no"/>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:  </clock>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:  <cpu mode="host-model" match="exact">
Jan 29 12:32:35 np0005601226 nova_compute[239456]:    <topology sockets="1" cores="1" threads="1"/>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:  </cpu>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:  <devices>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:    <disk type="network" device="cdrom">
Jan 29 12:32:35 np0005601226 nova_compute[239456]:      <driver type="raw" cache="none"/>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:      <source protocol="rbd" name="vms/a63c1430-ea41-4d52-8ba3-4122d88a6621_disk.config">
Jan 29 12:32:35 np0005601226 nova_compute[239456]:        <host name="192.168.122.100" port="6789"/>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:      </source>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:      <auth username="openstack">
Jan 29 12:32:35 np0005601226 nova_compute[239456]:        <secret type="ceph" uuid="cc5c72e3-31e0-58b9-8731-456117d38f4a"/>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:      </auth>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:      <target dev="sda" bus="sata"/>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:    </disk>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:    <disk type="network" device="disk">
Jan 29 12:32:35 np0005601226 nova_compute[239456]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:      <source protocol="rbd" name="volumes/volume-4000fad4-b5f6-4912-bea5-f20dff3f5ac9">
Jan 29 12:32:35 np0005601226 nova_compute[239456]:        <host name="192.168.122.100" port="6789"/>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:      </source>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:      <auth username="openstack">
Jan 29 12:32:35 np0005601226 nova_compute[239456]:        <secret type="ceph" uuid="cc5c72e3-31e0-58b9-8731-456117d38f4a"/>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:      </auth>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:      <target dev="vda" bus="virtio"/>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:      <serial>4000fad4-b5f6-4912-bea5-f20dff3f5ac9</serial>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:      <encryption format="luks">
Jan 29 12:32:35 np0005601226 nova_compute[239456]:        <secret type="passphrase" uuid="9e3bcb34-6563-40fb-9e72-d9f8f81ee70b"/>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:      </encryption>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:    </disk>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:    <interface type="ethernet">
Jan 29 12:32:35 np0005601226 nova_compute[239456]:      <mac address="fa:16:3e:05:cb:1d"/>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:      <model type="virtio"/>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:      <driver name="vhost" rx_queue_size="512"/>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:      <mtu size="1442"/>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:      <target dev="tape3ded0c8-e7"/>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:    </interface>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:    <serial type="pty">
Jan 29 12:32:35 np0005601226 nova_compute[239456]:      <log file="/var/lib/nova/instances/a63c1430-ea41-4d52-8ba3-4122d88a6621/console.log" append="off"/>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:    </serial>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:    <video>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:      <model type="virtio"/>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:    </video>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:    <input type="tablet" bus="usb"/>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:    <rng model="virtio">
Jan 29 12:32:35 np0005601226 nova_compute[239456]:      <backend model="random">/dev/urandom</backend>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:    </rng>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root"/>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:    <controller type="usb" index="0"/>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:    <memballoon model="virtio">
Jan 29 12:32:35 np0005601226 nova_compute[239456]:      <stats period="10"/>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:    </memballoon>
Jan 29 12:32:35 np0005601226 nova_compute[239456]:  </devices>
Jan 29 12:32:35 np0005601226 nova_compute[239456]: </domain>
Jan 29 12:32:35 np0005601226 nova_compute[239456]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 29 12:32:35 np0005601226 nova_compute[239456]: 2026-01-29 17:32:35.203 239460 DEBUG nova.compute.manager [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] Preparing to wait for external event network-vif-plugged-e3ded0c8-e7b9-4534-8420-a68a252cbfce prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 29 12:32:35 np0005601226 nova_compute[239456]: 2026-01-29 17:32:35.203 239460 DEBUG oslo_concurrency.lockutils [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Acquiring lock "a63c1430-ea41-4d52-8ba3-4122d88a6621-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:32:35 np0005601226 nova_compute[239456]: 2026-01-29 17:32:35.204 239460 DEBUG oslo_concurrency.lockutils [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Lock "a63c1430-ea41-4d52-8ba3-4122d88a6621-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:32:35 np0005601226 nova_compute[239456]: 2026-01-29 17:32:35.204 239460 DEBUG oslo_concurrency.lockutils [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Lock "a63c1430-ea41-4d52-8ba3-4122d88a6621-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:32:35 np0005601226 nova_compute[239456]: 2026-01-29 17:32:35.205 239460 DEBUG nova.virt.libvirt.vif [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-29T17:32:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1713674902',display_name='tempest-TransferEncryptedVolumeTest-server-1713674902',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1713674902',id=25,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCuw1UleBgXrkrvixGGBn21sDEJH6+FrkgFvq6jv3D3khmeyc7tU6zH/hmJ8BmjXmJToJI+73AcA0H8QCIrilSaG34LfS65uhiBlMWUY7wThjQ0H0WSLw5MFEF4DjDh1dA==',key_name='tempest-TransferEncryptedVolumeTest-895765981',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c74297072cc041019fc7ff4bff1a0f08',ramdisk_id='',reservation_id='r-rileg2pv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1262552887',owner_user_name='tempest-TransferEncryptedVolumeTest-1262552887-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-29T17:32:31Z,user_data=None,user_id='4f278bc1afe946ca991a0203a74c5a7f',uuid=a63c1430-ea41-4d52-8ba3-4122d88a6621,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e3ded0c8-e7b9-4534-8420-a68a252cbfce", "address": "fa:16:3e:05:cb:1d", "network": {"id": "25cf1715-f178-4f65-be7c-cf203c28f072", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-516658087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c74297072cc041019fc7ff4bff1a0f08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3ded0c8-e7", "ovs_interfaceid": "e3ded0c8-e7b9-4534-8420-a68a252cbfce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 29 12:32:35 np0005601226 nova_compute[239456]: 2026-01-29 17:32:35.205 239460 DEBUG nova.network.os_vif_util [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Converting VIF {"id": "e3ded0c8-e7b9-4534-8420-a68a252cbfce", "address": "fa:16:3e:05:cb:1d", "network": {"id": "25cf1715-f178-4f65-be7c-cf203c28f072", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-516658087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c74297072cc041019fc7ff4bff1a0f08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3ded0c8-e7", "ovs_interfaceid": "e3ded0c8-e7b9-4534-8420-a68a252cbfce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 29 12:32:35 np0005601226 nova_compute[239456]: 2026-01-29 17:32:35.206 239460 DEBUG nova.network.os_vif_util [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:05:cb:1d,bridge_name='br-int',has_traffic_filtering=True,id=e3ded0c8-e7b9-4534-8420-a68a252cbfce,network=Network(25cf1715-f178-4f65-be7c-cf203c28f072),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape3ded0c8-e7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 29 12:32:35 np0005601226 nova_compute[239456]: 2026-01-29 17:32:35.206 239460 DEBUG os_vif [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:05:cb:1d,bridge_name='br-int',has_traffic_filtering=True,id=e3ded0c8-e7b9-4534-8420-a68a252cbfce,network=Network(25cf1715-f178-4f65-be7c-cf203c28f072),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape3ded0c8-e7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 29 12:32:35 np0005601226 nova_compute[239456]: 2026-01-29 17:32:35.207 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:32:35 np0005601226 nova_compute[239456]: 2026-01-29 17:32:35.208 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:32:35 np0005601226 nova_compute[239456]: 2026-01-29 17:32:35.208 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 29 12:32:35 np0005601226 nova_compute[239456]: 2026-01-29 17:32:35.210 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:32:35 np0005601226 nova_compute[239456]: 2026-01-29 17:32:35.211 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape3ded0c8-e7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:32:35 np0005601226 nova_compute[239456]: 2026-01-29 17:32:35.211 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape3ded0c8-e7, col_values=(('external_ids', {'iface-id': 'e3ded0c8-e7b9-4534-8420-a68a252cbfce', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:05:cb:1d', 'vm-uuid': 'a63c1430-ea41-4d52-8ba3-4122d88a6621'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:32:35 np0005601226 nova_compute[239456]: 2026-01-29 17:32:35.219 239460 DEBUG nova.compute.manager [None req-0f4db80f-0c8e-4b59-82a6-2a33d3f8b055 - - - - - -] [instance: ef5e6eb6-164e-4b2f-93b1-eb72a0f79e48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:32:35 np0005601226 nova_compute[239456]: 2026-01-29 17:32:35.256 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:32:35 np0005601226 NetworkManager[49020]: <info>  [1769707955.2584] manager: (tape3ded0c8-e7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/122)
Jan 29 12:32:35 np0005601226 nova_compute[239456]: 2026-01-29 17:32:35.260 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 29 12:32:35 np0005601226 nova_compute[239456]: 2026-01-29 17:32:35.262 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:32:35 np0005601226 nova_compute[239456]: 2026-01-29 17:32:35.263 239460 INFO os_vif [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:05:cb:1d,bridge_name='br-int',has_traffic_filtering=True,id=e3ded0c8-e7b9-4534-8420-a68a252cbfce,network=Network(25cf1715-f178-4f65-be7c-cf203c28f072),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape3ded0c8-e7')#033[00m
Jan 29 12:32:35 np0005601226 nova_compute[239456]: 2026-01-29 17:32:35.321 239460 DEBUG nova.virt.libvirt.driver [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:32:35 np0005601226 nova_compute[239456]: 2026-01-29 17:32:35.321 239460 DEBUG nova.virt.libvirt.driver [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:32:35 np0005601226 nova_compute[239456]: 2026-01-29 17:32:35.322 239460 DEBUG nova.virt.libvirt.driver [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] No VIF found with MAC fa:16:3e:05:cb:1d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 29 12:32:35 np0005601226 nova_compute[239456]: 2026-01-29 17:32:35.322 239460 INFO nova.virt.libvirt.driver [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] Using config drive#033[00m
Jan 29 12:32:35 np0005601226 nova_compute[239456]: 2026-01-29 17:32:35.349 239460 DEBUG nova.storage.rbd_utils [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] rbd image a63c1430-ea41-4d52-8ba3-4122d88a6621_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:32:35 np0005601226 nova_compute[239456]: 2026-01-29 17:32:35.485 239460 DEBUG nova.network.neutron [req-cbca61e2-4523-4b0e-8279-d69a9b2f23db req-03b0adca-8648-4891-8589-ed40008ca367 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] Updated VIF entry in instance network info cache for port e3ded0c8-e7b9-4534-8420-a68a252cbfce. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 29 12:32:35 np0005601226 nova_compute[239456]: 2026-01-29 17:32:35.486 239460 DEBUG nova.network.neutron [req-cbca61e2-4523-4b0e-8279-d69a9b2f23db req-03b0adca-8648-4891-8589-ed40008ca367 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] Updating instance_info_cache with network_info: [{"id": "e3ded0c8-e7b9-4534-8420-a68a252cbfce", "address": "fa:16:3e:05:cb:1d", "network": {"id": "25cf1715-f178-4f65-be7c-cf203c28f072", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-516658087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c74297072cc041019fc7ff4bff1a0f08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3ded0c8-e7", "ovs_interfaceid": "e3ded0c8-e7b9-4534-8420-a68a252cbfce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:32:35 np0005601226 nova_compute[239456]: 2026-01-29 17:32:35.501 239460 DEBUG oslo_concurrency.lockutils [req-cbca61e2-4523-4b0e-8279-d69a9b2f23db req-03b0adca-8648-4891-8589-ed40008ca367 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Releasing lock "refresh_cache-a63c1430-ea41-4d52-8ba3-4122d88a6621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:32:35 np0005601226 nova_compute[239456]: 2026-01-29 17:32:35.604 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:32:36 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1704: 305 pgs: 305 active+clean; 270 MiB data, 577 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 1.9 KiB/s wr, 45 op/s
Jan 29 12:32:36 np0005601226 nova_compute[239456]: 2026-01-29 17:32:36.144 239460 INFO nova.virt.libvirt.driver [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] Creating config drive at /var/lib/nova/instances/a63c1430-ea41-4d52-8ba3-4122d88a6621/disk.config#033[00m
Jan 29 12:32:36 np0005601226 nova_compute[239456]: 2026-01-29 17:32:36.150 239460 DEBUG oslo_concurrency.processutils [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a63c1430-ea41-4d52-8ba3-4122d88a6621/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu45kk1lv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:32:36 np0005601226 nova_compute[239456]: 2026-01-29 17:32:36.166 239460 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769707941.1471689, 09e74043-3065-4c0b-bffa-930cc1a7f21f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:32:36 np0005601226 nova_compute[239456]: 2026-01-29 17:32:36.167 239460 INFO nova.compute.manager [-] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] VM Stopped (Lifecycle Event)#033[00m
Jan 29 12:32:36 np0005601226 nova_compute[239456]: 2026-01-29 17:32:36.193 239460 DEBUG nova.compute.manager [None req-055b9348-3f08-4af5-9ef3-04c271583931 - - - - - -] [instance: 09e74043-3065-4c0b-bffa-930cc1a7f21f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:32:36 np0005601226 nova_compute[239456]: 2026-01-29 17:32:36.275 239460 DEBUG oslo_concurrency.processutils [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a63c1430-ea41-4d52-8ba3-4122d88a6621/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu45kk1lv" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:32:36 np0005601226 nova_compute[239456]: 2026-01-29 17:32:36.355 239460 DEBUG nova.storage.rbd_utils [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] rbd image a63c1430-ea41-4d52-8ba3-4122d88a6621_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:32:36 np0005601226 nova_compute[239456]: 2026-01-29 17:32:36.358 239460 DEBUG oslo_concurrency.processutils [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a63c1430-ea41-4d52-8ba3-4122d88a6621/disk.config a63c1430-ea41-4d52-8ba3-4122d88a6621_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:32:36 np0005601226 nova_compute[239456]: 2026-01-29 17:32:36.374 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:32:36 np0005601226 nova_compute[239456]: 2026-01-29 17:32:36.390 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:32:36 np0005601226 nova_compute[239456]: 2026-01-29 17:32:36.484 239460 DEBUG oslo_concurrency.processutils [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a63c1430-ea41-4d52-8ba3-4122d88a6621/disk.config a63c1430-ea41-4d52-8ba3-4122d88a6621_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:32:36 np0005601226 nova_compute[239456]: 2026-01-29 17:32:36.485 239460 INFO nova.virt.libvirt.driver [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] Deleting local config drive /var/lib/nova/instances/a63c1430-ea41-4d52-8ba3-4122d88a6621/disk.config because it was imported into RBD.#033[00m
Jan 29 12:32:36 np0005601226 NetworkManager[49020]: <info>  [1769707956.5229] manager: (tape3ded0c8-e7): new Tun device (/org/freedesktop/NetworkManager/Devices/123)
Jan 29 12:32:36 np0005601226 kernel: tape3ded0c8-e7: entered promiscuous mode
Jan 29 12:32:36 np0005601226 nova_compute[239456]: 2026-01-29 17:32:36.524 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:32:36 np0005601226 ovn_controller[145556]: 2026-01-29T17:32:36Z|00229|binding|INFO|Claiming lport e3ded0c8-e7b9-4534-8420-a68a252cbfce for this chassis.
Jan 29 12:32:36 np0005601226 ovn_controller[145556]: 2026-01-29T17:32:36Z|00230|binding|INFO|e3ded0c8-e7b9-4534-8420-a68a252cbfce: Claiming fa:16:3e:05:cb:1d 10.100.0.12
Jan 29 12:32:36 np0005601226 nova_compute[239456]: 2026-01-29 17:32:36.531 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:32:36 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:32:36.537 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:05:cb:1d 10.100.0.12'], port_security=['fa:16:3e:05:cb:1d 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'a63c1430-ea41-4d52-8ba3-4122d88a6621', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-25cf1715-f178-4f65-be7c-cf203c28f072', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c74297072cc041019fc7ff4bff1a0f08', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b52d9814-61c4-42dd-84af-517b84e36907', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3018d9f1-e4b1-490d-94c7-3ffd5dd36627, chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], logical_port=e3ded0c8-e7b9-4534-8420-a68a252cbfce) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:32:36 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:32:36.539 155625 INFO neutron.agent.ovn.metadata.agent [-] Port e3ded0c8-e7b9-4534-8420-a68a252cbfce in datapath 25cf1715-f178-4f65-be7c-cf203c28f072 bound to our chassis#033[00m
Jan 29 12:32:36 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:32:36.541 155625 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 25cf1715-f178-4f65-be7c-cf203c28f072#033[00m
Jan 29 12:32:36 np0005601226 systemd-machined[207561]: New machine qemu-25-instance-00000019.
Jan 29 12:32:36 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:32:36.548 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[b8beab63-4152-4cde-9134-d5a0bd4a4195]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:32:36 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:32:36.549 155625 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap25cf1715-f1 in ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 29 12:32:36 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:32:36.551 246354 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap25cf1715-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 29 12:32:36 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:32:36.551 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[e12edd65-245b-41fc-ad1e-46a3a8a8a24a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:32:36 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:32:36.552 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[bf2ac58d-579c-4ff9-9080-fe46a5f5b01e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:32:36 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:32:36.560 156164 DEBUG oslo.privsep.daemon [-] privsep: reply[423b90c0-71df-437d-aa3b-b4a171c0d0d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:32:36 np0005601226 ovn_controller[145556]: 2026-01-29T17:32:36Z|00231|binding|INFO|Setting lport e3ded0c8-e7b9-4534-8420-a68a252cbfce ovn-installed in OVS
Jan 29 12:32:36 np0005601226 ovn_controller[145556]: 2026-01-29T17:32:36Z|00232|binding|INFO|Setting lport e3ded0c8-e7b9-4534-8420-a68a252cbfce up in Southbound
Jan 29 12:32:36 np0005601226 systemd[1]: Started Virtual Machine qemu-25-instance-00000019.
Jan 29 12:32:36 np0005601226 nova_compute[239456]: 2026-01-29 17:32:36.563 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:32:36 np0005601226 systemd-udevd[270603]: Network interface NamePolicy= disabled on kernel command line.
Jan 29 12:32:36 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:32:36.574 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[681714d9-ea6a-47a8-911a-e689ec9c95a7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:32:36 np0005601226 NetworkManager[49020]: <info>  [1769707956.5843] device (tape3ded0c8-e7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 29 12:32:36 np0005601226 NetworkManager[49020]: <info>  [1769707956.5849] device (tape3ded0c8-e7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 29 12:32:36 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:32:36.597 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[a76568f6-223f-4777-8056-c8d05627a6fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:32:36 np0005601226 NetworkManager[49020]: <info>  [1769707956.6011] manager: (tap25cf1715-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/124)
Jan 29 12:32:36 np0005601226 systemd-udevd[270607]: Network interface NamePolicy= disabled on kernel command line.
Jan 29 12:32:36 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:32:36.600 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[3087c0c7-3e69-44d6-81a8-87d098c2b7de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:32:36 np0005601226 nova_compute[239456]: 2026-01-29 17:32:36.604 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:32:36 np0005601226 nova_compute[239456]: 2026-01-29 17:32:36.604 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:32:36 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:32:36.623 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[71aecc0e-755b-42d4-b11c-05e9cb98ce27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:32:36 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:32:36.630 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[15d25e50-f09b-4323-9c6a-b7590f46c4cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:32:36 np0005601226 nova_compute[239456]: 2026-01-29 17:32:36.628 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:32:36 np0005601226 nova_compute[239456]: 2026-01-29 17:32:36.629 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:32:36 np0005601226 nova_compute[239456]: 2026-01-29 17:32:36.629 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:32:36 np0005601226 nova_compute[239456]: 2026-01-29 17:32:36.629 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 29 12:32:36 np0005601226 nova_compute[239456]: 2026-01-29 17:32:36.630 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:32:36 np0005601226 NetworkManager[49020]: <info>  [1769707956.6459] device (tap25cf1715-f0): carrier: link connected
Jan 29 12:32:36 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:32:36.649 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[3716fd07-c7b7-47b0-99a3-52d7c3088d79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:32:36 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:32:36.662 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[3a74c175-1ed4-437c-8820-22f171bcf049]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap25cf1715-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:73:50:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 79], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 531014, 'reachable_time': 24525, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270634, 'error': None, 'target': 'ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:32:36 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:32:36.672 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[232efeff-6cf1-43b3-942e-7f0d95daf120]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe73:50ea'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 531014, 'tstamp': 531014}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 270635, 'error': None, 'target': 'ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:32:36 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:32:36.689 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[a5191b9c-858a-4ee4-b851-3d3e1d314c9f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap25cf1715-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:73:50:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 79], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 531014, 'reachable_time': 24525, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 270636, 'error': None, 'target': 'ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:32:36 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:32:36.714 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[535138ab-96dd-4e4d-b4c3-4420a2a4014e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:32:36 np0005601226 nova_compute[239456]: 2026-01-29 17:32:36.760 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:32:36 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:32:36.765 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[fdb08ba8-743b-4ed4-a0cd-11d66e4ac9a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:32:36 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:32:36.767 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap25cf1715-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:32:36 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:32:36.767 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 29 12:32:36 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:32:36.768 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap25cf1715-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:32:36 np0005601226 nova_compute[239456]: 2026-01-29 17:32:36.769 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:32:36 np0005601226 NetworkManager[49020]: <info>  [1769707956.7698] manager: (tap25cf1715-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/125)
Jan 29 12:32:36 np0005601226 kernel: tap25cf1715-f0: entered promiscuous mode
Jan 29 12:32:36 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:32:36.770 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap25cf1715-f0, col_values=(('external_ids', {'iface-id': '82a91bf5-9093-4cbd-bfe4-f5d4b5400077'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:32:36 np0005601226 ovn_controller[145556]: 2026-01-29T17:32:36Z|00233|binding|INFO|Releasing lport 82a91bf5-9093-4cbd-bfe4-f5d4b5400077 from this chassis (sb_readonly=0)
Jan 29 12:32:36 np0005601226 nova_compute[239456]: 2026-01-29 17:32:36.771 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:32:36 np0005601226 nova_compute[239456]: 2026-01-29 17:32:36.775 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:32:36 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:32:36.776 155625 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/25cf1715-f178-4f65-be7c-cf203c28f072.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/25cf1715-f178-4f65-be7c-cf203c28f072.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 29 12:32:36 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:32:36.776 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[abd4b9c1-cb73-4ff5-a9d9-248fc8f1da29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:32:36 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:32:36.777 155625 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 29 12:32:36 np0005601226 ovn_metadata_agent[155620]: global
Jan 29 12:32:36 np0005601226 ovn_metadata_agent[155620]:    log         /dev/log local0 debug
Jan 29 12:32:36 np0005601226 ovn_metadata_agent[155620]:    log-tag     haproxy-metadata-proxy-25cf1715-f178-4f65-be7c-cf203c28f072
Jan 29 12:32:36 np0005601226 ovn_metadata_agent[155620]:    user        root
Jan 29 12:32:36 np0005601226 ovn_metadata_agent[155620]:    group       root
Jan 29 12:32:36 np0005601226 ovn_metadata_agent[155620]:    maxconn     1024
Jan 29 12:32:36 np0005601226 ovn_metadata_agent[155620]:    pidfile     /var/lib/neutron/external/pids/25cf1715-f178-4f65-be7c-cf203c28f072.pid.haproxy
Jan 29 12:32:36 np0005601226 ovn_metadata_agent[155620]:    daemon
Jan 29 12:32:36 np0005601226 ovn_metadata_agent[155620]: 
Jan 29 12:32:36 np0005601226 ovn_metadata_agent[155620]: defaults
Jan 29 12:32:36 np0005601226 ovn_metadata_agent[155620]:    log global
Jan 29 12:32:36 np0005601226 ovn_metadata_agent[155620]:    mode http
Jan 29 12:32:36 np0005601226 ovn_metadata_agent[155620]:    option httplog
Jan 29 12:32:36 np0005601226 ovn_metadata_agent[155620]:    option dontlognull
Jan 29 12:32:36 np0005601226 ovn_metadata_agent[155620]:    option http-server-close
Jan 29 12:32:36 np0005601226 ovn_metadata_agent[155620]:    option forwardfor
Jan 29 12:32:36 np0005601226 ovn_metadata_agent[155620]:    retries                 3
Jan 29 12:32:36 np0005601226 ovn_metadata_agent[155620]:    timeout http-request    30s
Jan 29 12:32:36 np0005601226 ovn_metadata_agent[155620]:    timeout connect         30s
Jan 29 12:32:36 np0005601226 ovn_metadata_agent[155620]:    timeout client          32s
Jan 29 12:32:36 np0005601226 ovn_metadata_agent[155620]:    timeout server          32s
Jan 29 12:32:36 np0005601226 ovn_metadata_agent[155620]:    timeout http-keep-alive 30s
Jan 29 12:32:36 np0005601226 ovn_metadata_agent[155620]: 
Jan 29 12:32:36 np0005601226 ovn_metadata_agent[155620]: 
Jan 29 12:32:36 np0005601226 ovn_metadata_agent[155620]: listen listener
Jan 29 12:32:36 np0005601226 ovn_metadata_agent[155620]:    bind 169.254.169.254:80
Jan 29 12:32:36 np0005601226 ovn_metadata_agent[155620]:    server metadata /var/lib/neutron/metadata_proxy
Jan 29 12:32:36 np0005601226 ovn_metadata_agent[155620]:    http-request add-header X-OVN-Network-ID 25cf1715-f178-4f65-be7c-cf203c28f072
Jan 29 12:32:36 np0005601226 ovn_metadata_agent[155620]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 29 12:32:36 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:32:36.778 155625 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072', 'env', 'PROCESS_TAG=haproxy-25cf1715-f178-4f65-be7c-cf203c28f072', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/25cf1715-f178-4f65-be7c-cf203c28f072.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 29 12:32:37 np0005601226 podman[270720]: 2026-01-29 17:32:37.127824081 +0000 UTC m=+0.068438083 container create cc0f72dcf2b17b3a6cf3c0b7a942a919b30915295e9a3e0f428cfaf7a9faf525 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 29 12:32:37 np0005601226 nova_compute[239456]: 2026-01-29 17:32:37.139 239460 DEBUG nova.compute.manager [req-a720713f-4774-463c-ac8e-d0137197b319 req-8baf5bb9-5944-409d-90e2-91aa04aefbe2 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] Received event network-vif-plugged-e3ded0c8-e7b9-4534-8420-a68a252cbfce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:32:37 np0005601226 nova_compute[239456]: 2026-01-29 17:32:37.140 239460 DEBUG oslo_concurrency.lockutils [req-a720713f-4774-463c-ac8e-d0137197b319 req-8baf5bb9-5944-409d-90e2-91aa04aefbe2 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "a63c1430-ea41-4d52-8ba3-4122d88a6621-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:32:37 np0005601226 nova_compute[239456]: 2026-01-29 17:32:37.140 239460 DEBUG oslo_concurrency.lockutils [req-a720713f-4774-463c-ac8e-d0137197b319 req-8baf5bb9-5944-409d-90e2-91aa04aefbe2 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "a63c1430-ea41-4d52-8ba3-4122d88a6621-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:32:37 np0005601226 nova_compute[239456]: 2026-01-29 17:32:37.141 239460 DEBUG oslo_concurrency.lockutils [req-a720713f-4774-463c-ac8e-d0137197b319 req-8baf5bb9-5944-409d-90e2-91aa04aefbe2 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "a63c1430-ea41-4d52-8ba3-4122d88a6621-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:32:37 np0005601226 nova_compute[239456]: 2026-01-29 17:32:37.141 239460 DEBUG nova.compute.manager [req-a720713f-4774-463c-ac8e-d0137197b319 req-8baf5bb9-5944-409d-90e2-91aa04aefbe2 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] Processing event network-vif-plugged-e3ded0c8-e7b9-4534-8420-a68a252cbfce _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 29 12:32:37 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:32:37 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/717901389' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:32:37 np0005601226 nova_compute[239456]: 2026-01-29 17:32:37.166 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:32:37 np0005601226 podman[270720]: 2026-01-29 17:32:37.080512827 +0000 UTC m=+0.021126839 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 29 12:32:37 np0005601226 systemd[1]: Started libpod-conmon-cc0f72dcf2b17b3a6cf3c0b7a942a919b30915295e9a3e0f428cfaf7a9faf525.scope.
Jan 29 12:32:37 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:32:37 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24c2e97b2448cdfcde66996d64f8eb3954020cdfe21b672eef794b9ab25ecb23/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 29 12:32:37 np0005601226 podman[270720]: 2026-01-29 17:32:37.218440811 +0000 UTC m=+0.159054813 container init cc0f72dcf2b17b3a6cf3c0b7a942a919b30915295e9a3e0f428cfaf7a9faf525 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 29 12:32:37 np0005601226 podman[270720]: 2026-01-29 17:32:37.226455657 +0000 UTC m=+0.167069659 container start cc0f72dcf2b17b3a6cf3c0b7a942a919b30915295e9a3e0f428cfaf7a9faf525 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 29 12:32:37 np0005601226 nova_compute[239456]: 2026-01-29 17:32:37.230 239460 DEBUG nova.virt.libvirt.driver [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] skipping disk for instance-00000019 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 29 12:32:37 np0005601226 nova_compute[239456]: 2026-01-29 17:32:37.231 239460 DEBUG nova.virt.libvirt.driver [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] skipping disk for instance-00000019 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 29 12:32:37 np0005601226 neutron-haproxy-ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072[270738]: [NOTICE]   (270742) : New worker (270744) forked
Jan 29 12:32:37 np0005601226 neutron-haproxy-ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072[270738]: [NOTICE]   (270742) : Loading success.
Jan 29 12:32:37 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:32:37 np0005601226 nova_compute[239456]: 2026-01-29 17:32:37.429 239460 WARNING nova.virt.libvirt.driver [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 29 12:32:37 np0005601226 nova_compute[239456]: 2026-01-29 17:32:37.430 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4266MB free_disk=59.988162863999605GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 29 12:32:37 np0005601226 nova_compute[239456]: 2026-01-29 17:32:37.430 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:32:37 np0005601226 nova_compute[239456]: 2026-01-29 17:32:37.431 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:32:37 np0005601226 nova_compute[239456]: 2026-01-29 17:32:37.504 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Instance a63c1430-ea41-4d52-8ba3-4122d88a6621 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 29 12:32:37 np0005601226 nova_compute[239456]: 2026-01-29 17:32:37.504 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 29 12:32:37 np0005601226 nova_compute[239456]: 2026-01-29 17:32:37.505 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 29 12:32:37 np0005601226 nova_compute[239456]: 2026-01-29 17:32:37.551 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:32:38 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1705: 305 pgs: 305 active+clean; 270 MiB data, 577 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 1.9 KiB/s wr, 45 op/s
Jan 29 12:32:38 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:32:38 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1149769756' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:32:38 np0005601226 nova_compute[239456]: 2026-01-29 17:32:38.079 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:32:38 np0005601226 nova_compute[239456]: 2026-01-29 17:32:38.089 239460 DEBUG nova.compute.provider_tree [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:32:38 np0005601226 nova_compute[239456]: 2026-01-29 17:32:38.108 239460 DEBUG nova.scheduler.client.report [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:32:38 np0005601226 nova_compute[239456]: 2026-01-29 17:32:38.135 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 29 12:32:38 np0005601226 nova_compute[239456]: 2026-01-29 17:32:38.137 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.706s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:32:39 np0005601226 nova_compute[239456]: 2026-01-29 17:32:39.137 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:32:39 np0005601226 nova_compute[239456]: 2026-01-29 17:32:39.188 239460 DEBUG nova.compute.manager [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 29 12:32:39 np0005601226 nova_compute[239456]: 2026-01-29 17:32:39.190 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769707959.189611, a63c1430-ea41-4d52-8ba3-4122d88a6621 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:32:39 np0005601226 nova_compute[239456]: 2026-01-29 17:32:39.191 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] VM Started (Lifecycle Event)#033[00m
Jan 29 12:32:39 np0005601226 nova_compute[239456]: 2026-01-29 17:32:39.195 239460 DEBUG nova.virt.libvirt.driver [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 29 12:32:39 np0005601226 nova_compute[239456]: 2026-01-29 17:32:39.201 239460 INFO nova.virt.libvirt.driver [-] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] Instance spawned successfully.#033[00m
Jan 29 12:32:39 np0005601226 nova_compute[239456]: 2026-01-29 17:32:39.201 239460 DEBUG nova.virt.libvirt.driver [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 29 12:32:39 np0005601226 nova_compute[239456]: 2026-01-29 17:32:39.227 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:32:39 np0005601226 nova_compute[239456]: 2026-01-29 17:32:39.236 239460 DEBUG nova.virt.libvirt.driver [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:32:39 np0005601226 nova_compute[239456]: 2026-01-29 17:32:39.237 239460 DEBUG nova.virt.libvirt.driver [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:32:39 np0005601226 nova_compute[239456]: 2026-01-29 17:32:39.238 239460 DEBUG nova.virt.libvirt.driver [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:32:39 np0005601226 nova_compute[239456]: 2026-01-29 17:32:39.239 239460 DEBUG nova.virt.libvirt.driver [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:32:39 np0005601226 nova_compute[239456]: 2026-01-29 17:32:39.240 239460 DEBUG nova.virt.libvirt.driver [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:32:39 np0005601226 nova_compute[239456]: 2026-01-29 17:32:39.240 239460 DEBUG nova.virt.libvirt.driver [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:32:39 np0005601226 nova_compute[239456]: 2026-01-29 17:32:39.249 239460 DEBUG nova.compute.manager [req-8dcec0c5-859c-447c-90cf-434855fb5bae req-3e0cb091-dfd3-4a63-8667-31b3170d6555 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] Received event network-vif-plugged-e3ded0c8-e7b9-4534-8420-a68a252cbfce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:32:39 np0005601226 nova_compute[239456]: 2026-01-29 17:32:39.249 239460 DEBUG oslo_concurrency.lockutils [req-8dcec0c5-859c-447c-90cf-434855fb5bae req-3e0cb091-dfd3-4a63-8667-31b3170d6555 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "a63c1430-ea41-4d52-8ba3-4122d88a6621-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:32:39 np0005601226 nova_compute[239456]: 2026-01-29 17:32:39.250 239460 DEBUG oslo_concurrency.lockutils [req-8dcec0c5-859c-447c-90cf-434855fb5bae req-3e0cb091-dfd3-4a63-8667-31b3170d6555 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "a63c1430-ea41-4d52-8ba3-4122d88a6621-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:32:39 np0005601226 nova_compute[239456]: 2026-01-29 17:32:39.251 239460 DEBUG oslo_concurrency.lockutils [req-8dcec0c5-859c-447c-90cf-434855fb5bae req-3e0cb091-dfd3-4a63-8667-31b3170d6555 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "a63c1430-ea41-4d52-8ba3-4122d88a6621-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:32:39 np0005601226 nova_compute[239456]: 2026-01-29 17:32:39.251 239460 DEBUG nova.compute.manager [req-8dcec0c5-859c-447c-90cf-434855fb5bae req-3e0cb091-dfd3-4a63-8667-31b3170d6555 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] No waiting events found dispatching network-vif-plugged-e3ded0c8-e7b9-4534-8420-a68a252cbfce pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 29 12:32:39 np0005601226 nova_compute[239456]: 2026-01-29 17:32:39.251 239460 WARNING nova.compute.manager [req-8dcec0c5-859c-447c-90cf-434855fb5bae req-3e0cb091-dfd3-4a63-8667-31b3170d6555 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] Received unexpected event network-vif-plugged-e3ded0c8-e7b9-4534-8420-a68a252cbfce for instance with vm_state building and task_state spawning.#033[00m
Jan 29 12:32:39 np0005601226 nova_compute[239456]: 2026-01-29 17:32:39.253 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 29 12:32:39 np0005601226 nova_compute[239456]: 2026-01-29 17:32:39.283 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 29 12:32:39 np0005601226 nova_compute[239456]: 2026-01-29 17:32:39.284 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769707959.189748, a63c1430-ea41-4d52-8ba3-4122d88a6621 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:32:39 np0005601226 nova_compute[239456]: 2026-01-29 17:32:39.285 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] VM Paused (Lifecycle Event)#033[00m
Jan 29 12:32:39 np0005601226 nova_compute[239456]: 2026-01-29 17:32:39.320 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:32:39 np0005601226 nova_compute[239456]: 2026-01-29 17:32:39.326 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769707959.1947832, a63c1430-ea41-4d52-8ba3-4122d88a6621 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:32:39 np0005601226 nova_compute[239456]: 2026-01-29 17:32:39.326 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] VM Resumed (Lifecycle Event)#033[00m
Jan 29 12:32:39 np0005601226 nova_compute[239456]: 2026-01-29 17:32:39.330 239460 INFO nova.compute.manager [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] Took 6.69 seconds to spawn the instance on the hypervisor.#033[00m
Jan 29 12:32:39 np0005601226 nova_compute[239456]: 2026-01-29 17:32:39.331 239460 DEBUG nova.compute.manager [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:32:39 np0005601226 nova_compute[239456]: 2026-01-29 17:32:39.345 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:32:39 np0005601226 nova_compute[239456]: 2026-01-29 17:32:39.350 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 29 12:32:39 np0005601226 nova_compute[239456]: 2026-01-29 17:32:39.377 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 29 12:32:39 np0005601226 nova_compute[239456]: 2026-01-29 17:32:39.393 239460 INFO nova.compute.manager [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] Took 8.96 seconds to build instance.#033[00m
Jan 29 12:32:39 np0005601226 nova_compute[239456]: 2026-01-29 17:32:39.414 239460 DEBUG oslo_concurrency.lockutils [None req-1bbd0840-e490-49c3-88b4-dd759d74daed 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Lock "a63c1430-ea41-4d52-8ba3-4122d88a6621" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.035s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:32:39 np0005601226 ceph-mon[75233]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Jan 29 12:32:39 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:32:39.604314) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 29 12:32:39 np0005601226 ceph-mon[75233]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Jan 29 12:32:39 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769707959604379, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 445, "num_deletes": 252, "total_data_size": 321568, "memory_usage": 330904, "flush_reason": "Manual Compaction"}
Jan 29 12:32:39 np0005601226 ceph-mon[75233]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Jan 29 12:32:39 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769707959609000, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 316984, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33753, "largest_seqno": 34197, "table_properties": {"data_size": 314405, "index_size": 615, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6571, "raw_average_key_size": 19, "raw_value_size": 309150, "raw_average_value_size": 906, "num_data_blocks": 27, "num_entries": 341, "num_filter_entries": 341, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769707942, "oldest_key_time": 1769707942, "file_creation_time": 1769707959, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "affa2982-d59d-4189-b5dd-817a80fada55", "db_session_id": "3LVBT2JQJ5HZ0LRVKGW6", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Jan 29 12:32:39 np0005601226 ceph-mon[75233]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 4751 microseconds, and 2402 cpu microseconds.
Jan 29 12:32:39 np0005601226 ceph-mon[75233]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 29 12:32:39 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:32:39.609061) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 316984 bytes OK
Jan 29 12:32:39 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:32:39.609087) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Jan 29 12:32:39 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:32:39.611719) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Jan 29 12:32:39 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:32:39.611746) EVENT_LOG_v1 {"time_micros": 1769707959611737, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 29 12:32:39 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:32:39.611777) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 29 12:32:39 np0005601226 ceph-mon[75233]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 318825, prev total WAL file size 318825, number of live WAL files 2.
Jan 29 12:32:39 np0005601226 ceph-mon[75233]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 12:32:39 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:32:39.612578) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Jan 29 12:32:39 np0005601226 ceph-mon[75233]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 29 12:32:39 np0005601226 ceph-mon[75233]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(309KB)], [68(10MB)]
Jan 29 12:32:39 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769707959612672, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 11277892, "oldest_snapshot_seqno": -1}
Jan 29 12:32:39 np0005601226 ceph-mon[75233]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 6406 keys, 9510653 bytes, temperature: kUnknown
Jan 29 12:32:39 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769707959675351, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 9510653, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9464395, "index_size": 29124, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16069, "raw_key_size": 161665, "raw_average_key_size": 25, "raw_value_size": 9345882, "raw_average_value_size": 1458, "num_data_blocks": 1163, "num_entries": 6406, "num_filter_entries": 6406, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769705351, "oldest_key_time": 0, "file_creation_time": 1769707959, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "affa2982-d59d-4189-b5dd-817a80fada55", "db_session_id": "3LVBT2JQJ5HZ0LRVKGW6", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Jan 29 12:32:39 np0005601226 ceph-mon[75233]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 29 12:32:39 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:32:39.675725) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 9510653 bytes
Jan 29 12:32:39 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:32:39.678186) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 179.6 rd, 151.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 10.5 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(65.6) write-amplify(30.0) OK, records in: 6921, records dropped: 515 output_compression: NoCompression
Jan 29 12:32:39 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:32:39.678238) EVENT_LOG_v1 {"time_micros": 1769707959678224, "job": 38, "event": "compaction_finished", "compaction_time_micros": 62781, "compaction_time_cpu_micros": 20961, "output_level": 6, "num_output_files": 1, "total_output_size": 9510653, "num_input_records": 6921, "num_output_records": 6406, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 29 12:32:39 np0005601226 ceph-mon[75233]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 12:32:39 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769707959678436, "job": 38, "event": "table_file_deletion", "file_number": 70}
Jan 29 12:32:39 np0005601226 ceph-mon[75233]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 12:32:39 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769707959679889, "job": 38, "event": "table_file_deletion", "file_number": 68}
Jan 29 12:32:39 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:32:39.612403) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:32:39 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:32:39.679965) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:32:39 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:32:39.679974) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:32:39 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:32:39.679976) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:32:39 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:32:39.679977) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:32:39 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:32:39.679979) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:32:40 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1706: 305 pgs: 305 active+clean; 270 MiB data, 577 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 1.4 KiB/s wr, 46 op/s
Jan 29 12:32:40 np0005601226 nova_compute[239456]: 2026-01-29 17:32:40.290 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:32:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:32:40.293 155625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:32:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:32:40.294 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:32:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:32:40.295 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:32:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:32:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:32:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:32:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:32:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:32:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:32:40 np0005601226 nova_compute[239456]: 2026-01-29 17:32:40.599 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:32:40 np0005601226 nova_compute[239456]: 2026-01-29 17:32:40.603 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:32:40 np0005601226 nova_compute[239456]: 2026-01-29 17:32:40.604 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 29 12:32:40 np0005601226 nova_compute[239456]: 2026-01-29 17:32:40.604 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 29 12:32:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2026-01-29_17:32:40
Jan 29 12:32:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 12:32:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] do_upmap
Jan 29 12:32:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] pools ['.mgr', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.meta', 'volumes', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.control', 'backups', 'images', 'vms']
Jan 29 12:32:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 12:32:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 12:32:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:32:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 12:32:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:32:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:32:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:32:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:32:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:32:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:32:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:32:40 np0005601226 nova_compute[239456]: 2026-01-29 17:32:40.972 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquiring lock "refresh_cache-a63c1430-ea41-4d52-8ba3-4122d88a6621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:32:40 np0005601226 nova_compute[239456]: 2026-01-29 17:32:40.973 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquired lock "refresh_cache-a63c1430-ea41-4d52-8ba3-4122d88a6621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:32:40 np0005601226 nova_compute[239456]: 2026-01-29 17:32:40.973 239460 DEBUG nova.network.neutron [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 29 12:32:40 np0005601226 nova_compute[239456]: 2026-01-29 17:32:40.974 239460 DEBUG nova.objects.instance [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lazy-loading 'info_cache' on Instance uuid a63c1430-ea41-4d52-8ba3-4122d88a6621 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:32:41 np0005601226 nova_compute[239456]: 2026-01-29 17:32:41.763 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:32:42 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1707: 305 pgs: 305 active+clean; 270 MiB data, 577 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 16 KiB/s wr, 63 op/s
Jan 29 12:32:42 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:32:42 np0005601226 nova_compute[239456]: 2026-01-29 17:32:42.467 239460 DEBUG nova.network.neutron [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] Updating instance_info_cache with network_info: [{"id": "e3ded0c8-e7b9-4534-8420-a68a252cbfce", "address": "fa:16:3e:05:cb:1d", "network": {"id": "25cf1715-f178-4f65-be7c-cf203c28f072", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-516658087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c74297072cc041019fc7ff4bff1a0f08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3ded0c8-e7", "ovs_interfaceid": "e3ded0c8-e7b9-4534-8420-a68a252cbfce", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:32:42 np0005601226 nova_compute[239456]: 2026-01-29 17:32:42.525 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Releasing lock "refresh_cache-a63c1430-ea41-4d52-8ba3-4122d88a6621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:32:42 np0005601226 nova_compute[239456]: 2026-01-29 17:32:42.526 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 29 12:32:42 np0005601226 nova_compute[239456]: 2026-01-29 17:32:42.747 239460 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769707947.7458837, 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:32:42 np0005601226 nova_compute[239456]: 2026-01-29 17:32:42.747 239460 INFO nova.compute.manager [-] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] VM Stopped (Lifecycle Event)#033[00m
Jan 29 12:32:42 np0005601226 nova_compute[239456]: 2026-01-29 17:32:42.816 239460 DEBUG nova.compute.manager [None req-17f93271-0f91-4d48-8b05-06ea5bc15b77 - - - - - -] [instance: 58d0f64a-66be-4f3d-ba39-68b90ddf8c4f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:32:43 np0005601226 nova_compute[239456]: 2026-01-29 17:32:43.603 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:32:43 np0005601226 NetworkManager[49020]: <info>  [1769707963.7308] manager: (patch-provnet-87abb107-3ab5-4304-ac4c-e4e3c79e221f-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/126)
Jan 29 12:32:43 np0005601226 nova_compute[239456]: 2026-01-29 17:32:43.730 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:32:43 np0005601226 NetworkManager[49020]: <info>  [1769707963.7319] manager: (patch-br-int-to-provnet-87abb107-3ab5-4304-ac4c-e4e3c79e221f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/127)
Jan 29 12:32:43 np0005601226 nova_compute[239456]: 2026-01-29 17:32:43.732 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:32:43 np0005601226 ovn_controller[145556]: 2026-01-29T17:32:43Z|00234|binding|INFO|Releasing lport 82a91bf5-9093-4cbd-bfe4-f5d4b5400077 from this chassis (sb_readonly=0)
Jan 29 12:32:43 np0005601226 nova_compute[239456]: 2026-01-29 17:32:43.744 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:32:44 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1708: 305 pgs: 305 active+clean; 270 MiB data, 577 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 13 KiB/s wr, 73 op/s
Jan 29 12:32:44 np0005601226 nova_compute[239456]: 2026-01-29 17:32:44.307 239460 DEBUG nova.compute.manager [req-4da9c790-dfda-486c-a94c-51fbe7e2273d req-7cce5e01-7bba-4603-a448-0ec9f7a3675b 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] Received event network-changed-e3ded0c8-e7b9-4534-8420-a68a252cbfce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:32:44 np0005601226 nova_compute[239456]: 2026-01-29 17:32:44.307 239460 DEBUG nova.compute.manager [req-4da9c790-dfda-486c-a94c-51fbe7e2273d req-7cce5e01-7bba-4603-a448-0ec9f7a3675b 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] Refreshing instance network info cache due to event network-changed-e3ded0c8-e7b9-4534-8420-a68a252cbfce. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 29 12:32:44 np0005601226 nova_compute[239456]: 2026-01-29 17:32:44.308 239460 DEBUG oslo_concurrency.lockutils [req-4da9c790-dfda-486c-a94c-51fbe7e2273d req-7cce5e01-7bba-4603-a448-0ec9f7a3675b 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "refresh_cache-a63c1430-ea41-4d52-8ba3-4122d88a6621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:32:44 np0005601226 nova_compute[239456]: 2026-01-29 17:32:44.308 239460 DEBUG oslo_concurrency.lockutils [req-4da9c790-dfda-486c-a94c-51fbe7e2273d req-7cce5e01-7bba-4603-a448-0ec9f7a3675b 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquired lock "refresh_cache-a63c1430-ea41-4d52-8ba3-4122d88a6621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:32:44 np0005601226 nova_compute[239456]: 2026-01-29 17:32:44.309 239460 DEBUG nova.network.neutron [req-4da9c790-dfda-486c-a94c-51fbe7e2273d req-7cce5e01-7bba-4603-a448-0ec9f7a3675b 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] Refreshing network info cache for port e3ded0c8-e7b9-4534-8420-a68a252cbfce _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 29 12:32:45 np0005601226 nova_compute[239456]: 2026-01-29 17:32:45.293 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:32:45 np0005601226 nova_compute[239456]: 2026-01-29 17:32:45.634 239460 DEBUG nova.network.neutron [req-4da9c790-dfda-486c-a94c-51fbe7e2273d req-7cce5e01-7bba-4603-a448-0ec9f7a3675b 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] Updated VIF entry in instance network info cache for port e3ded0c8-e7b9-4534-8420-a68a252cbfce. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 29 12:32:45 np0005601226 nova_compute[239456]: 2026-01-29 17:32:45.635 239460 DEBUG nova.network.neutron [req-4da9c790-dfda-486c-a94c-51fbe7e2273d req-7cce5e01-7bba-4603-a448-0ec9f7a3675b 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] Updating instance_info_cache with network_info: [{"id": "e3ded0c8-e7b9-4534-8420-a68a252cbfce", "address": "fa:16:3e:05:cb:1d", "network": {"id": "25cf1715-f178-4f65-be7c-cf203c28f072", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-516658087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c74297072cc041019fc7ff4bff1a0f08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3ded0c8-e7", "ovs_interfaceid": "e3ded0c8-e7b9-4534-8420-a68a252cbfce", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:32:45 np0005601226 nova_compute[239456]: 2026-01-29 17:32:45.655 239460 DEBUG oslo_concurrency.lockutils [req-4da9c790-dfda-486c-a94c-51fbe7e2273d req-7cce5e01-7bba-4603-a448-0ec9f7a3675b 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Releasing lock "refresh_cache-a63c1430-ea41-4d52-8ba3-4122d88a6621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:32:46 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1709: 305 pgs: 305 active+clean; 270 MiB data, 577 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 13 KiB/s wr, 90 op/s
Jan 29 12:32:46 np0005601226 nova_compute[239456]: 2026-01-29 17:32:46.765 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:32:47 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:32:47 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e425 do_prune osdmap full prune enabled
Jan 29 12:32:47 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e426 e426: 3 total, 3 up, 3 in
Jan 29 12:32:47 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e426: 3 total, 3 up, 3 in
Jan 29 12:32:48 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1711: 305 pgs: 305 active+clean; 270 MiB data, 577 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 89 op/s
Jan 29 12:32:50 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1712: 305 pgs: 305 active+clean; 270 MiB data, 577 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 82 op/s
Jan 29 12:32:50 np0005601226 nova_compute[239456]: 2026-01-29 17:32:50.342 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:32:50 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:32:50 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4236457121' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:32:50 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:32:50 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4236457121' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:32:51 np0005601226 ovn_controller[145556]: 2026-01-29T17:32:51Z|00060|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.10 does not match offer 10.100.0.12
Jan 29 12:32:51 np0005601226 ovn_controller[145556]: 2026-01-29T17:32:51Z|00061|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:05:cb:1d 10.100.0.12
Jan 29 12:32:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 12:32:51 np0005601226 nova_compute[239456]: 2026-01-29 17:32:51.606 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:32:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:32:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 12:32:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:32:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 4.389513173109232e-06 of space, bias 1.0, pg target 0.0013168539519327696 quantized to 32 (current 32)
Jan 29 12:32:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:32:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002903829757656647 of space, bias 1.0, pg target 0.8711489272969941 quantized to 32 (current 32)
Jan 29 12:32:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:32:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 3.543272376751352e-06 of space, bias 1.0, pg target 0.0010629817130254056 quantized to 32 (current 32)
Jan 29 12:32:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:32:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006670324940727102 of space, bias 1.0, pg target 0.20010974822181304 quantized to 32 (current 32)
Jan 29 12:32:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:32:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.445648422882086e-06 of space, bias 4.0, pg target 0.0017347781074585032 quantized to 16 (current 16)
Jan 29 12:32:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:32:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:32:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:32:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 29 12:32:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:32:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 29 12:32:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:32:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:32:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:32:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 29 12:32:51 np0005601226 nova_compute[239456]: 2026-01-29 17:32:51.767 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:32:51 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:32:51 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2372409197' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:32:52 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:32:52 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2372409197' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:32:52 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1713: 305 pgs: 305 active+clean; 270 MiB data, 577 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.1 KiB/s wr, 88 op/s
Jan 29 12:32:52 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:32:52 np0005601226 nova_compute[239456]: 2026-01-29 17:32:52.604 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:32:52 np0005601226 nova_compute[239456]: 2026-01-29 17:32:52.604 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 29 12:32:52 np0005601226 nova_compute[239456]: 2026-01-29 17:32:52.687 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 29 12:32:53 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:32:53 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3466904500' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:32:53 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:32:53 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3466904500' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:32:54 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1714: 305 pgs: 305 active+clean; 270 MiB data, 577 MiB used, 59 GiB / 60 GiB avail; 963 KiB/s rd, 2.7 KiB/s wr, 88 op/s
Jan 29 12:32:54 np0005601226 ovn_controller[145556]: 2026-01-29T17:32:54Z|00062|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.10 does not match offer 10.100.0.12
Jan 29 12:32:54 np0005601226 ovn_controller[145556]: 2026-01-29T17:32:54Z|00063|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:05:cb:1d 10.100.0.12
Jan 29 12:32:54 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:32:54 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3943040106' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:32:54 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:32:54 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3943040106' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:32:55 np0005601226 nova_compute[239456]: 2026-01-29 17:32:55.345 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:32:55 np0005601226 nova_compute[239456]: 2026-01-29 17:32:55.604 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:32:55 np0005601226 nova_compute[239456]: 2026-01-29 17:32:55.604 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 29 12:32:56 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1715: 305 pgs: 305 active+clean; 270 MiB data, 577 MiB used, 59 GiB / 60 GiB avail; 770 KiB/s rd, 13 KiB/s wr, 118 op/s
Jan 29 12:32:56 np0005601226 ovn_controller[145556]: 2026-01-29T17:32:56Z|00064|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:05:cb:1d 10.100.0.12
Jan 29 12:32:56 np0005601226 ovn_controller[145556]: 2026-01-29T17:32:56Z|00065|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:05:cb:1d 10.100.0.12
Jan 29 12:32:56 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:32:56 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3506030667' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:32:56 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:32:56 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3506030667' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:32:56 np0005601226 nova_compute[239456]: 2026-01-29 17:32:56.769 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:32:57 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:32:58 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1716: 305 pgs: 305 active+clean; 270 MiB data, 577 MiB used, 59 GiB / 60 GiB avail; 744 KiB/s rd, 12 KiB/s wr, 114 op/s
Jan 29 12:32:58 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e426 do_prune osdmap full prune enabled
Jan 29 12:32:58 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e427 e427: 3 total, 3 up, 3 in
Jan 29 12:32:58 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e427: 3 total, 3 up, 3 in
Jan 29 12:33:00 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1718: 305 pgs: 305 active+clean; 270 MiB data, 577 MiB used, 59 GiB / 60 GiB avail; 781 KiB/s rd, 29 KiB/s wr, 133 op/s
Jan 29 12:33:00 np0005601226 nova_compute[239456]: 2026-01-29 17:33:00.401 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:33:00 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:33:00 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2811152251' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:33:00 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:33:00 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2811152251' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:33:01 np0005601226 nova_compute[239456]: 2026-01-29 17:33:01.734 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:33:01 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:33:01.734 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '7a:bb:ca', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ee:2a:91:86:08:da'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:33:01 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:33:01.736 155625 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 29 12:33:01 np0005601226 nova_compute[239456]: 2026-01-29 17:33:01.772 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:33:01 np0005601226 podman[270782]: 2026-01-29 17:33:01.908002127 +0000 UTC m=+0.069603684 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 29 12:33:01 np0005601226 podman[270783]: 2026-01-29 17:33:01.950708017 +0000 UTC m=+0.112214451 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, 
org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 29 12:33:02 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1719: 305 pgs: 305 active+clean; 270 MiB data, 577 MiB used, 59 GiB / 60 GiB avail; 660 KiB/s rd, 28 KiB/s wr, 132 op/s
Jan 29 12:33:02 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:33:02 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:33:02.739 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ea6bcc65-2563-4fe6-9039-bca7261f4cf7, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:33:04 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1720: 305 pgs: 305 active+clean; 270 MiB data, 577 MiB used, 59 GiB / 60 GiB avail; 518 KiB/s rd, 28 KiB/s wr, 110 op/s
Jan 29 12:33:05 np0005601226 nova_compute[239456]: 2026-01-29 17:33:05.404 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:33:06 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1721: 305 pgs: 305 active+clean; 270 MiB data, 577 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 22 KiB/s wr, 57 op/s
Jan 29 12:33:06 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e427 do_prune osdmap full prune enabled
Jan 29 12:33:06 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e428 e428: 3 total, 3 up, 3 in
Jan 29 12:33:06 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e428: 3 total, 3 up, 3 in
Jan 29 12:33:06 np0005601226 nova_compute[239456]: 2026-01-29 17:33:06.774 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:33:07 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:33:07 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e428 do_prune osdmap full prune enabled
Jan 29 12:33:08 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1723: 305 pgs: 305 active+clean; 270 MiB data, 577 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 22 KiB/s wr, 58 op/s
Jan 29 12:33:10 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1724: 305 pgs: 305 active+clean; 270 MiB data, 577 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 15 KiB/s wr, 48 op/s
Jan 29 12:33:10 np0005601226 nova_compute[239456]: 2026-01-29 17:33:10.449 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:33:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:33:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:33:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:33:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:33:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:33:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:33:11 np0005601226 nova_compute[239456]: 2026-01-29 17:33:11.775 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:33:12 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1725: 305 pgs: 305 active+clean; 270 MiB data, 577 MiB used, 59 GiB / 60 GiB avail; 6.3 KiB/s rd, 14 KiB/s wr, 10 op/s
Jan 29 12:33:12 np0005601226 ceph-mds[96568]: mds.beacon.cephfs.compute-0.cflubi missed beacon ack from the monitors
Jan 29 12:33:13 np0005601226 ovn_controller[145556]: 2026-01-29T17:33:13Z|00235|memory_trim|INFO|Detected inactivity (last active 30011 ms ago): trimming memory
Jan 29 12:33:14 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1726: 305 pgs: 305 active+clean; 270 MiB data, 577 MiB used, 59 GiB / 60 GiB avail; 6.3 KiB/s rd, 14 KiB/s wr, 10 op/s
Jan 29 12:33:15 np0005601226 nova_compute[239456]: 2026-01-29 17:33:15.453 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:33:16 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1727: 305 pgs: 305 active+clean; 270 MiB data, 577 MiB used, 59 GiB / 60 GiB avail; 5.7 KiB/s rd, 9.7 KiB/s wr, 8 op/s
Jan 29 12:33:16 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_commit, latency = 9.491985321s
Jan 29 12:33:16 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_sync, latency = 9.491985321s
Jan 29 12:33:16 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.492147446s, txc = 0x55f5170e0f00, txc bytes = 1501, txc ios = 1, txc cost = 671501, txc onodes = 1, DB updates = 4, DB bytes = 1337, cost max = 110262598 on 2026-01-29T17:26:12.361871+0000, txc max = 104 on 2026-01-29T16:51:50.108713+0000
Jan 29 12:33:16 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.135297775s, txc = 0x55f519465500, txc bytes = 34895, txc ios = 1, txc cost = 704895, txc onodes = 1, DB updates = 6, DB bytes = 35233, cost max = 110262598 on 2026-01-29T17:26:12.361871+0000, txc max = 104 on 2026-01-29T16:51:50.108713+0000
Jan 29 12:33:16 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e429 e429: 3 total, 3 up, 3 in
Jan 29 12:33:16 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e429: 3 total, 3 up, 3 in
Jan 29 12:33:16 np0005601226 podman[270919]: 2026-01-29 17:33:16.483601609 +0000 UTC m=+0.184054555 container exec 79fb58d438a044b744a5a22031968fb38c458a504a88d1d9ce361ec7f4a99a0b (image=quay.io/ceph/ceph:v20, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:33:16 np0005601226 podman[270919]: 2026-01-29 17:33:16.575431382 +0000 UTC m=+0.275884328 container exec_died 79fb58d438a044b744a5a22031968fb38c458a504a88d1d9ce361ec7f4a99a0b (image=quay.io/ceph/ceph:v20, name=ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-mon-compute-0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 12:33:16 np0005601226 nova_compute[239456]: 2026-01-29 17:33:16.773 239460 DEBUG oslo_concurrency.lockutils [None req-f6ebc12a-f2d6-4866-b0cc-21703adccc59 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Acquiring lock "a63c1430-ea41-4d52-8ba3-4122d88a6621" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:33:16 np0005601226 nova_compute[239456]: 2026-01-29 17:33:16.774 239460 DEBUG oslo_concurrency.lockutils [None req-f6ebc12a-f2d6-4866-b0cc-21703adccc59 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Lock "a63c1430-ea41-4d52-8ba3-4122d88a6621" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:33:16 np0005601226 nova_compute[239456]: 2026-01-29 17:33:16.775 239460 DEBUG oslo_concurrency.lockutils [None req-f6ebc12a-f2d6-4866-b0cc-21703adccc59 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Acquiring lock "a63c1430-ea41-4d52-8ba3-4122d88a6621-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:33:16 np0005601226 nova_compute[239456]: 2026-01-29 17:33:16.775 239460 DEBUG oslo_concurrency.lockutils [None req-f6ebc12a-f2d6-4866-b0cc-21703adccc59 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Lock "a63c1430-ea41-4d52-8ba3-4122d88a6621-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:33:16 np0005601226 nova_compute[239456]: 2026-01-29 17:33:16.775 239460 DEBUG oslo_concurrency.lockutils [None req-f6ebc12a-f2d6-4866-b0cc-21703adccc59 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Lock "a63c1430-ea41-4d52-8ba3-4122d88a6621-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:33:16 np0005601226 nova_compute[239456]: 2026-01-29 17:33:16.776 239460 INFO nova.compute.manager [None req-f6ebc12a-f2d6-4866-b0cc-21703adccc59 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] Terminating instance#033[00m
Jan 29 12:33:16 np0005601226 nova_compute[239456]: 2026-01-29 17:33:16.777 239460 DEBUG nova.compute.manager [None req-f6ebc12a-f2d6-4866-b0cc-21703adccc59 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 29 12:33:16 np0005601226 nova_compute[239456]: 2026-01-29 17:33:16.822 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:33:16 np0005601226 kernel: tape3ded0c8-e7 (unregistering): left promiscuous mode
Jan 29 12:33:16 np0005601226 NetworkManager[49020]: <info>  [1769707996.8806] device (tape3ded0c8-e7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 29 12:33:16 np0005601226 nova_compute[239456]: 2026-01-29 17:33:16.886 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:33:16 np0005601226 ovn_controller[145556]: 2026-01-29T17:33:16Z|00236|binding|INFO|Releasing lport e3ded0c8-e7b9-4534-8420-a68a252cbfce from this chassis (sb_readonly=0)
Jan 29 12:33:16 np0005601226 ovn_controller[145556]: 2026-01-29T17:33:16Z|00237|binding|INFO|Setting lport e3ded0c8-e7b9-4534-8420-a68a252cbfce down in Southbound
Jan 29 12:33:16 np0005601226 ovn_controller[145556]: 2026-01-29T17:33:16Z|00238|binding|INFO|Removing iface tape3ded0c8-e7 ovn-installed in OVS
Jan 29 12:33:16 np0005601226 nova_compute[239456]: 2026-01-29 17:33:16.891 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:33:16 np0005601226 nova_compute[239456]: 2026-01-29 17:33:16.897 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:33:16 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:33:16.899 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:05:cb:1d 10.100.0.12'], port_security=['fa:16:3e:05:cb:1d 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'a63c1430-ea41-4d52-8ba3-4122d88a6621', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-25cf1715-f178-4f65-be7c-cf203c28f072', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c74297072cc041019fc7ff4bff1a0f08', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b52d9814-61c4-42dd-84af-517b84e36907', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.243'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3018d9f1-e4b1-490d-94c7-3ffd5dd36627, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], logical_port=e3ded0c8-e7b9-4534-8420-a68a252cbfce) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:33:16 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:33:16.900 155625 INFO neutron.agent.ovn.metadata.agent [-] Port e3ded0c8-e7b9-4534-8420-a68a252cbfce in datapath 25cf1715-f178-4f65-be7c-cf203c28f072 unbound from our chassis#033[00m
Jan 29 12:33:16 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:33:16.901 155625 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 25cf1715-f178-4f65-be7c-cf203c28f072, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 29 12:33:16 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:33:16.903 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[865631b1-0be2-438a-9cba-ec0584b16abf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:33:16 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:33:16.903 155625 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072 namespace which is not needed anymore#033[00m
Jan 29 12:33:16 np0005601226 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000019.scope: Deactivated successfully.
Jan 29 12:33:16 np0005601226 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000019.scope: Consumed 15.648s CPU time.
Jan 29 12:33:16 np0005601226 systemd-machined[207561]: Machine qemu-25-instance-00000019 terminated.
Jan 29 12:33:17 np0005601226 neutron-haproxy-ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072[270738]: [NOTICE]   (270742) : haproxy version is 2.8.14-c23fe91
Jan 29 12:33:17 np0005601226 neutron-haproxy-ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072[270738]: [NOTICE]   (270742) : path to executable is /usr/sbin/haproxy
Jan 29 12:33:17 np0005601226 neutron-haproxy-ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072[270738]: [ALERT]    (270742) : Current worker (270744) exited with code 143 (Terminated)
Jan 29 12:33:17 np0005601226 neutron-haproxy-ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072[270738]: [WARNING]  (270742) : All workers exited. Exiting... (0)
Jan 29 12:33:17 np0005601226 systemd[1]: libpod-cc0f72dcf2b17b3a6cf3c0b7a942a919b30915295e9a3e0f428cfaf7a9faf525.scope: Deactivated successfully.
Jan 29 12:33:17 np0005601226 podman[271067]: 2026-01-29 17:33:17.029872836 +0000 UTC m=+0.041151000 container died cc0f72dcf2b17b3a6cf3c0b7a942a919b30915295e9a3e0f428cfaf7a9faf525 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 29 12:33:17 np0005601226 nova_compute[239456]: 2026-01-29 17:33:17.037 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:33:17 np0005601226 nova_compute[239456]: 2026-01-29 17:33:17.041 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:33:17 np0005601226 nova_compute[239456]: 2026-01-29 17:33:17.053 239460 INFO nova.virt.libvirt.driver [-] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] Instance destroyed successfully.#033[00m
Jan 29 12:33:17 np0005601226 nova_compute[239456]: 2026-01-29 17:33:17.053 239460 DEBUG nova.objects.instance [None req-f6ebc12a-f2d6-4866-b0cc-21703adccc59 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Lazy-loading 'resources' on Instance uuid a63c1430-ea41-4d52-8ba3-4122d88a6621 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:33:17 np0005601226 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cc0f72dcf2b17b3a6cf3c0b7a942a919b30915295e9a3e0f428cfaf7a9faf525-userdata-shm.mount: Deactivated successfully.
Jan 29 12:33:17 np0005601226 systemd[1]: var-lib-containers-storage-overlay-24c2e97b2448cdfcde66996d64f8eb3954020cdfe21b672eef794b9ab25ecb23-merged.mount: Deactivated successfully.
Jan 29 12:33:17 np0005601226 nova_compute[239456]: 2026-01-29 17:33:17.075 239460 DEBUG nova.virt.libvirt.vif [None req-f6ebc12a-f2d6-4866-b0cc-21703adccc59 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-29T17:32:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1713674902',display_name='tempest-TransferEncryptedVolumeTest-server-1713674902',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1713674902',id=25,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCuw1UleBgXrkrvixGGBn21sDEJH6+FrkgFvq6jv3D3khmeyc7tU6zH/hmJ8BmjXmJToJI+73AcA0H8QCIrilSaG34LfS65uhiBlMWUY7wThjQ0H0WSLw5MFEF4DjDh1dA==',key_name='tempest-TransferEncryptedVolumeTest-895765981',keypairs=<?>,launch_index=0,launched_at=2026-01-29T17:32:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c74297072cc041019fc7ff4bff1a0f08',ramdisk_id='',reservation_id='r-rileg2pv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TransferEncryptedVolumeTest-1262552887',owner_user_name='tempest-TransferEncryptedVolumeTest-1262552887-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-29T17:32:39Z,user_data=None,user_id='4f278bc1afe946ca991a0203a74c5a7f',uuid=a63c1430-ea41-4d52-8ba3-4122d88a6621,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e3ded0c8-e7b9-4534-8420-a68a252cbfce", "address": "fa:16:3e:05:cb:1d", "network": {"id": "25cf1715-f178-4f65-be7c-cf203c28f072", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-516658087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c74297072cc041019fc7ff4bff1a0f08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3ded0c8-e7", "ovs_interfaceid": "e3ded0c8-e7b9-4534-8420-a68a252cbfce", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 29 12:33:17 np0005601226 nova_compute[239456]: 2026-01-29 17:33:17.075 239460 DEBUG nova.network.os_vif_util [None req-f6ebc12a-f2d6-4866-b0cc-21703adccc59 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Converting VIF {"id": "e3ded0c8-e7b9-4534-8420-a68a252cbfce", "address": "fa:16:3e:05:cb:1d", "network": {"id": "25cf1715-f178-4f65-be7c-cf203c28f072", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-516658087-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c74297072cc041019fc7ff4bff1a0f08", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3ded0c8-e7", "ovs_interfaceid": "e3ded0c8-e7b9-4534-8420-a68a252cbfce", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 29 12:33:17 np0005601226 nova_compute[239456]: 2026-01-29 17:33:17.076 239460 DEBUG nova.network.os_vif_util [None req-f6ebc12a-f2d6-4866-b0cc-21703adccc59 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:05:cb:1d,bridge_name='br-int',has_traffic_filtering=True,id=e3ded0c8-e7b9-4534-8420-a68a252cbfce,network=Network(25cf1715-f178-4f65-be7c-cf203c28f072),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape3ded0c8-e7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 29 12:33:17 np0005601226 nova_compute[239456]: 2026-01-29 17:33:17.076 239460 DEBUG os_vif [None req-f6ebc12a-f2d6-4866-b0cc-21703adccc59 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:05:cb:1d,bridge_name='br-int',has_traffic_filtering=True,id=e3ded0c8-e7b9-4534-8420-a68a252cbfce,network=Network(25cf1715-f178-4f65-be7c-cf203c28f072),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape3ded0c8-e7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 29 12:33:17 np0005601226 nova_compute[239456]: 2026-01-29 17:33:17.077 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:33:17 np0005601226 nova_compute[239456]: 2026-01-29 17:33:17.078 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape3ded0c8-e7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:33:17 np0005601226 podman[271067]: 2026-01-29 17:33:17.07831847 +0000 UTC m=+0.089596634 container cleanup cc0f72dcf2b17b3a6cf3c0b7a942a919b30915295e9a3e0f428cfaf7a9faf525 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0)
Jan 29 12:33:17 np0005601226 nova_compute[239456]: 2026-01-29 17:33:17.079 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:33:17 np0005601226 nova_compute[239456]: 2026-01-29 17:33:17.081 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 29 12:33:17 np0005601226 systemd[1]: libpod-conmon-cc0f72dcf2b17b3a6cf3c0b7a942a919b30915295e9a3e0f428cfaf7a9faf525.scope: Deactivated successfully.
Jan 29 12:33:17 np0005601226 nova_compute[239456]: 2026-01-29 17:33:17.084 239460 INFO os_vif [None req-f6ebc12a-f2d6-4866-b0cc-21703adccc59 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:05:cb:1d,bridge_name='br-int',has_traffic_filtering=True,id=e3ded0c8-e7b9-4534-8420-a68a252cbfce,network=Network(25cf1715-f178-4f65-be7c-cf203c28f072),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape3ded0c8-e7')#033[00m
Jan 29 12:33:17 np0005601226 nova_compute[239456]: 2026-01-29 17:33:17.108 239460 DEBUG nova.compute.manager [req-d18029e2-bf90-4102-b141-dc842c4eac21 req-c6ab36ae-b8bc-4ee8-a1b6-ffb0fbab03e6 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] Received event network-vif-unplugged-e3ded0c8-e7b9-4534-8420-a68a252cbfce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:33:17 np0005601226 nova_compute[239456]: 2026-01-29 17:33:17.109 239460 DEBUG oslo_concurrency.lockutils [req-d18029e2-bf90-4102-b141-dc842c4eac21 req-c6ab36ae-b8bc-4ee8-a1b6-ffb0fbab03e6 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "a63c1430-ea41-4d52-8ba3-4122d88a6621-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:33:17 np0005601226 nova_compute[239456]: 2026-01-29 17:33:17.109 239460 DEBUG oslo_concurrency.lockutils [req-d18029e2-bf90-4102-b141-dc842c4eac21 req-c6ab36ae-b8bc-4ee8-a1b6-ffb0fbab03e6 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "a63c1430-ea41-4d52-8ba3-4122d88a6621-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:33:17 np0005601226 nova_compute[239456]: 2026-01-29 17:33:17.109 239460 DEBUG oslo_concurrency.lockutils [req-d18029e2-bf90-4102-b141-dc842c4eac21 req-c6ab36ae-b8bc-4ee8-a1b6-ffb0fbab03e6 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "a63c1430-ea41-4d52-8ba3-4122d88a6621-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:33:17 np0005601226 nova_compute[239456]: 2026-01-29 17:33:17.109 239460 DEBUG nova.compute.manager [req-d18029e2-bf90-4102-b141-dc842c4eac21 req-c6ab36ae-b8bc-4ee8-a1b6-ffb0fbab03e6 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] No waiting events found dispatching network-vif-unplugged-e3ded0c8-e7b9-4534-8420-a68a252cbfce pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 29 12:33:17 np0005601226 nova_compute[239456]: 2026-01-29 17:33:17.110 239460 DEBUG nova.compute.manager [req-d18029e2-bf90-4102-b141-dc842c4eac21 req-c6ab36ae-b8bc-4ee8-a1b6-ffb0fbab03e6 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] Received event network-vif-unplugged-e3ded0c8-e7b9-4534-8420-a68a252cbfce for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 29 12:33:17 np0005601226 podman[271129]: 2026-01-29 17:33:17.139807785 +0000 UTC m=+0.043218504 container remove cc0f72dcf2b17b3a6cf3c0b7a942a919b30915295e9a3e0f428cfaf7a9faf525 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:33:17 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:33:17.142 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[1004ee85-95b2-4e41-b6cb-0fdd25cdd807]: (4, ('Thu Jan 29 05:33:16 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072 (cc0f72dcf2b17b3a6cf3c0b7a942a919b30915295e9a3e0f428cfaf7a9faf525)\ncc0f72dcf2b17b3a6cf3c0b7a942a919b30915295e9a3e0f428cfaf7a9faf525\nThu Jan 29 05:33:17 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072 (cc0f72dcf2b17b3a6cf3c0b7a942a919b30915295e9a3e0f428cfaf7a9faf525)\ncc0f72dcf2b17b3a6cf3c0b7a942a919b30915295e9a3e0f428cfaf7a9faf525\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:33:17 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:33:17.144 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[85c537b8-c106-483a-b588-ed0e09c7d9f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:33:17 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:33:17.145 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap25cf1715-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:33:17 np0005601226 nova_compute[239456]: 2026-01-29 17:33:17.148 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:33:17 np0005601226 kernel: tap25cf1715-f0: left promiscuous mode
Jan 29 12:33:17 np0005601226 nova_compute[239456]: 2026-01-29 17:33:17.152 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:33:17 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:33:17.155 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[1e60caca-5509-46a8-bb10-40eb20d85f29]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:33:17 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:33:17.168 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[e2d6395a-ce5d-4fed-b1ad-6b8b93c4d3ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:33:17 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:33:17.169 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[7fe59ee0-a114-4450-93db-44960a1d93fe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:33:17 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:33:17.181 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[41861cdf-13f0-4d01-a02f-307fba94b467]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 531009, 'reachable_time': 26557, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271177, 'error': None, 'target': 'ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:33:17 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:33:17.183 156164 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-25cf1715-f178-4f65-be7c-cf203c28f072 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 29 12:33:17 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:33:17.183 156164 DEBUG oslo.privsep.daemon [-] privsep: reply[bb726cd7-7b8d-4a74-9727-1cacc08e1f78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:33:17 np0005601226 systemd[1]: run-netns-ovnmeta\x2d25cf1715\x2df178\x2d4f65\x2dbe7c\x2dcf203c28f072.mount: Deactivated successfully.
Jan 29 12:33:17 np0005601226 nova_compute[239456]: 2026-01-29 17:33:17.250 239460 INFO nova.virt.libvirt.driver [None req-f6ebc12a-f2d6-4866-b0cc-21703adccc59 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] Deleting instance files /var/lib/nova/instances/a63c1430-ea41-4d52-8ba3-4122d88a6621_del#033[00m
Jan 29 12:33:17 np0005601226 nova_compute[239456]: 2026-01-29 17:33:17.251 239460 INFO nova.virt.libvirt.driver [None req-f6ebc12a-f2d6-4866-b0cc-21703adccc59 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] Deletion of /var/lib/nova/instances/a63c1430-ea41-4d52-8ba3-4122d88a6621_del complete#033[00m
Jan 29 12:33:17 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 12:33:17 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:33:17 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 12:33:17 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:33:17 np0005601226 nova_compute[239456]: 2026-01-29 17:33:17.347 239460 INFO nova.compute.manager [None req-f6ebc12a-f2d6-4866-b0cc-21703adccc59 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] Took 0.57 seconds to destroy the instance on the hypervisor.#033[00m
Jan 29 12:33:17 np0005601226 nova_compute[239456]: 2026-01-29 17:33:17.348 239460 DEBUG oslo.service.loopingcall [None req-f6ebc12a-f2d6-4866-b0cc-21703adccc59 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 29 12:33:17 np0005601226 nova_compute[239456]: 2026-01-29 17:33:17.348 239460 DEBUG nova.compute.manager [-] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 29 12:33:17 np0005601226 nova_compute[239456]: 2026-01-29 17:33:17.348 239460 DEBUG nova.network.neutron [-] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 29 12:33:17 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:33:17 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:33:17 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 12:33:17 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 12:33:17 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 12:33:17 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 12:33:17 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 12:33:17 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:33:17 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 12:33:17 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 12:33:17 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 12:33:17 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 12:33:17 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 12:33:17 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 12:33:18 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1729: 305 pgs: 305 active+clean; 270 MiB data, 577 MiB used, 59 GiB / 60 GiB avail; 5.7 KiB/s rd, 9.7 KiB/s wr, 8 op/s
Jan 29 12:33:18 np0005601226 nova_compute[239456]: 2026-01-29 17:33:18.070 239460 DEBUG nova.network.neutron [-] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:33:18 np0005601226 nova_compute[239456]: 2026-01-29 17:33:18.103 239460 INFO nova.compute.manager [-] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] Took 0.76 seconds to deallocate network for instance.#033[00m
Jan 29 12:33:18 np0005601226 podman[271338]: 2026-01-29 17:33:18.107617369 +0000 UTC m=+0.039346500 container create 3724cbe1e1ec05f6737c20eba905ef59454395b607795cbb069d57d876924019 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_lumiere, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:33:18 np0005601226 systemd[1]: Started libpod-conmon-3724cbe1e1ec05f6737c20eba905ef59454395b607795cbb069d57d876924019.scope.
Jan 29 12:33:18 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:33:18 np0005601226 podman[271338]: 2026-01-29 17:33:18.174266833 +0000 UTC m=+0.105995965 container init 3724cbe1e1ec05f6737c20eba905ef59454395b607795cbb069d57d876924019 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_lumiere, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:33:18 np0005601226 podman[271338]: 2026-01-29 17:33:18.178522738 +0000 UTC m=+0.110251869 container start 3724cbe1e1ec05f6737c20eba905ef59454395b607795cbb069d57d876924019 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_lumiere, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 29 12:33:18 np0005601226 nifty_lumiere[271355]: 167 167
Jan 29 12:33:18 np0005601226 podman[271338]: 2026-01-29 17:33:18.182597248 +0000 UTC m=+0.114326399 container attach 3724cbe1e1ec05f6737c20eba905ef59454395b607795cbb069d57d876924019 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_lumiere, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:33:18 np0005601226 systemd[1]: libpod-3724cbe1e1ec05f6737c20eba905ef59454395b607795cbb069d57d876924019.scope: Deactivated successfully.
Jan 29 12:33:18 np0005601226 podman[271338]: 2026-01-29 17:33:18.184483768 +0000 UTC m=+0.116212889 container died 3724cbe1e1ec05f6737c20eba905ef59454395b607795cbb069d57d876924019 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_lumiere, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 29 12:33:18 np0005601226 podman[271338]: 2026-01-29 17:33:18.093097038 +0000 UTC m=+0.024826189 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:33:18 np0005601226 systemd[1]: var-lib-containers-storage-overlay-82412e7d3685256ba70b57be4cba3dc0fdd78f086afdb2d237cd17fea230f3c2-merged.mount: Deactivated successfully.
Jan 29 12:33:18 np0005601226 podman[271338]: 2026-01-29 17:33:18.224960448 +0000 UTC m=+0.156689569 container remove 3724cbe1e1ec05f6737c20eba905ef59454395b607795cbb069d57d876924019 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_lumiere, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 29 12:33:18 np0005601226 systemd[1]: libpod-conmon-3724cbe1e1ec05f6737c20eba905ef59454395b607795cbb069d57d876924019.scope: Deactivated successfully.
Jan 29 12:33:18 np0005601226 podman[271379]: 2026-01-29 17:33:18.339516842 +0000 UTC m=+0.035860497 container create 2b18eb24d6241b2ca031c27dd1d43b48b578725e96606f3b379e9167300b87b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_mclaren, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:33:18 np0005601226 nova_compute[239456]: 2026-01-29 17:33:18.338 239460 INFO nova.compute.manager [None req-f6ebc12a-f2d6-4866-b0cc-21703adccc59 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] Took 0.23 seconds to detach 1 volumes for instance.#033[00m
Jan 29 12:33:18 np0005601226 systemd[1]: Started libpod-conmon-2b18eb24d6241b2ca031c27dd1d43b48b578725e96606f3b379e9167300b87b4.scope.
Jan 29 12:33:18 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:33:18 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b462822994a14ee87ebeb8c43c4b0dede6590445229d9f54789ad0229498c7b4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:33:18 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b462822994a14ee87ebeb8c43c4b0dede6590445229d9f54789ad0229498c7b4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:33:18 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b462822994a14ee87ebeb8c43c4b0dede6590445229d9f54789ad0229498c7b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:33:18 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b462822994a14ee87ebeb8c43c4b0dede6590445229d9f54789ad0229498c7b4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:33:18 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b462822994a14ee87ebeb8c43c4b0dede6590445229d9f54789ad0229498c7b4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 12:33:18 np0005601226 nova_compute[239456]: 2026-01-29 17:33:18.414 239460 DEBUG oslo_concurrency.lockutils [None req-f6ebc12a-f2d6-4866-b0cc-21703adccc59 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:33:18 np0005601226 nova_compute[239456]: 2026-01-29 17:33:18.415 239460 DEBUG oslo_concurrency.lockutils [None req-f6ebc12a-f2d6-4866-b0cc-21703adccc59 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:33:18 np0005601226 podman[271379]: 2026-01-29 17:33:18.325917146 +0000 UTC m=+0.022260791 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:33:18 np0005601226 podman[271379]: 2026-01-29 17:33:18.423817471 +0000 UTC m=+0.120161196 container init 2b18eb24d6241b2ca031c27dd1d43b48b578725e96606f3b379e9167300b87b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_mclaren, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Jan 29 12:33:18 np0005601226 podman[271379]: 2026-01-29 17:33:18.439126844 +0000 UTC m=+0.135470519 container start 2b18eb24d6241b2ca031c27dd1d43b48b578725e96606f3b379e9167300b87b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_mclaren, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 29 12:33:18 np0005601226 podman[271379]: 2026-01-29 17:33:18.443927073 +0000 UTC m=+0.140270748 container attach 2b18eb24d6241b2ca031c27dd1d43b48b578725e96606f3b379e9167300b87b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 29 12:33:18 np0005601226 nova_compute[239456]: 2026-01-29 17:33:18.471 239460 DEBUG oslo_concurrency.processutils [None req-f6ebc12a-f2d6-4866-b0cc-21703adccc59 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:33:18 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 12:33:18 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:33:18 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 12:33:18 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e429 do_prune osdmap full prune enabled
Jan 29 12:33:18 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e430 e430: 3 total, 3 up, 3 in
Jan 29 12:33:18 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e430: 3 total, 3 up, 3 in
Jan 29 12:33:18 np0005601226 happy_mclaren[271395]: --> passed data devices: 0 physical, 3 LVM
Jan 29 12:33:18 np0005601226 happy_mclaren[271395]: --> All data devices are unavailable
Jan 29 12:33:18 np0005601226 systemd[1]: libpod-2b18eb24d6241b2ca031c27dd1d43b48b578725e96606f3b379e9167300b87b4.scope: Deactivated successfully.
Jan 29 12:33:18 np0005601226 podman[271379]: 2026-01-29 17:33:18.918144039 +0000 UTC m=+0.614487684 container died 2b18eb24d6241b2ca031c27dd1d43b48b578725e96606f3b379e9167300b87b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_mclaren, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 29 12:33:18 np0005601226 systemd[1]: var-lib-containers-storage-overlay-b462822994a14ee87ebeb8c43c4b0dede6590445229d9f54789ad0229498c7b4-merged.mount: Deactivated successfully.
Jan 29 12:33:18 np0005601226 podman[271379]: 2026-01-29 17:33:18.98058669 +0000 UTC m=+0.676930365 container remove 2b18eb24d6241b2ca031c27dd1d43b48b578725e96606f3b379e9167300b87b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:33:19 np0005601226 systemd[1]: libpod-conmon-2b18eb24d6241b2ca031c27dd1d43b48b578725e96606f3b379e9167300b87b4.scope: Deactivated successfully.
Jan 29 12:33:19 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:33:19 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2815366720' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:33:19 np0005601226 nova_compute[239456]: 2026-01-29 17:33:19.044 239460 DEBUG oslo_concurrency.processutils [None req-f6ebc12a-f2d6-4866-b0cc-21703adccc59 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:33:19 np0005601226 nova_compute[239456]: 2026-01-29 17:33:19.050 239460 DEBUG nova.compute.provider_tree [None req-f6ebc12a-f2d6-4866-b0cc-21703adccc59 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:33:19 np0005601226 nova_compute[239456]: 2026-01-29 17:33:19.072 239460 DEBUG nova.scheduler.client.report [None req-f6ebc12a-f2d6-4866-b0cc-21703adccc59 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:33:19 np0005601226 nova_compute[239456]: 2026-01-29 17:33:19.103 239460 DEBUG oslo_concurrency.lockutils [None req-f6ebc12a-f2d6-4866-b0cc-21703adccc59 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:33:19 np0005601226 nova_compute[239456]: 2026-01-29 17:33:19.151 239460 INFO nova.scheduler.client.report [None req-f6ebc12a-f2d6-4866-b0cc-21703adccc59 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Deleted allocations for instance a63c1430-ea41-4d52-8ba3-4122d88a6621#033[00m
Jan 29 12:33:19 np0005601226 nova_compute[239456]: 2026-01-29 17:33:19.236 239460 DEBUG nova.compute.manager [req-bd0b4dfc-edb5-4f24-b3a0-4b48be8cfb35 req-78751284-d4d7-4f31-aff8-6d5216fbe9aa 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] Received event network-vif-plugged-e3ded0c8-e7b9-4534-8420-a68a252cbfce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:33:19 np0005601226 nova_compute[239456]: 2026-01-29 17:33:19.237 239460 DEBUG oslo_concurrency.lockutils [req-bd0b4dfc-edb5-4f24-b3a0-4b48be8cfb35 req-78751284-d4d7-4f31-aff8-6d5216fbe9aa 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "a63c1430-ea41-4d52-8ba3-4122d88a6621-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:33:19 np0005601226 nova_compute[239456]: 2026-01-29 17:33:19.237 239460 DEBUG oslo_concurrency.lockutils [req-bd0b4dfc-edb5-4f24-b3a0-4b48be8cfb35 req-78751284-d4d7-4f31-aff8-6d5216fbe9aa 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "a63c1430-ea41-4d52-8ba3-4122d88a6621-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:33:19 np0005601226 nova_compute[239456]: 2026-01-29 17:33:19.237 239460 DEBUG oslo_concurrency.lockutils [req-bd0b4dfc-edb5-4f24-b3a0-4b48be8cfb35 req-78751284-d4d7-4f31-aff8-6d5216fbe9aa 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "a63c1430-ea41-4d52-8ba3-4122d88a6621-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:33:19 np0005601226 nova_compute[239456]: 2026-01-29 17:33:19.237 239460 DEBUG nova.compute.manager [req-bd0b4dfc-edb5-4f24-b3a0-4b48be8cfb35 req-78751284-d4d7-4f31-aff8-6d5216fbe9aa 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] No waiting events found dispatching network-vif-plugged-e3ded0c8-e7b9-4534-8420-a68a252cbfce pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 29 12:33:19 np0005601226 nova_compute[239456]: 2026-01-29 17:33:19.237 239460 WARNING nova.compute.manager [req-bd0b4dfc-edb5-4f24-b3a0-4b48be8cfb35 req-78751284-d4d7-4f31-aff8-6d5216fbe9aa 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] Received unexpected event network-vif-plugged-e3ded0c8-e7b9-4534-8420-a68a252cbfce for instance with vm_state deleted and task_state None.#033[00m
Jan 29 12:33:19 np0005601226 nova_compute[239456]: 2026-01-29 17:33:19.238 239460 DEBUG nova.compute.manager [req-bd0b4dfc-edb5-4f24-b3a0-4b48be8cfb35 req-78751284-d4d7-4f31-aff8-6d5216fbe9aa 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] Received event network-vif-deleted-e3ded0c8-e7b9-4534-8420-a68a252cbfce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:33:19 np0005601226 nova_compute[239456]: 2026-01-29 17:33:19.318 239460 DEBUG oslo_concurrency.lockutils [None req-f6ebc12a-f2d6-4866-b0cc-21703adccc59 4f278bc1afe946ca991a0203a74c5a7f c74297072cc041019fc7ff4bff1a0f08 - - default default] Lock "a63c1430-ea41-4d52-8ba3-4122d88a6621" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.544s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:33:19 np0005601226 podman[271512]: 2026-01-29 17:33:19.34684645 +0000 UTC m=+0.036057971 container create c200aa13f1d3a2b398d9b40b80649ed4c1782ac3260fed87696f625ee11faff5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_kirch, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 29 12:33:19 np0005601226 systemd[1]: Started libpod-conmon-c200aa13f1d3a2b398d9b40b80649ed4c1782ac3260fed87696f625ee11faff5.scope.
Jan 29 12:33:19 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:33:19 np0005601226 podman[271512]: 2026-01-29 17:33:19.406593388 +0000 UTC m=+0.095804899 container init c200aa13f1d3a2b398d9b40b80649ed4c1782ac3260fed87696f625ee11faff5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_kirch, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:33:19 np0005601226 podman[271512]: 2026-01-29 17:33:19.411024588 +0000 UTC m=+0.100236089 container start c200aa13f1d3a2b398d9b40b80649ed4c1782ac3260fed87696f625ee11faff5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_kirch, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:33:19 np0005601226 romantic_kirch[271530]: 167 167
Jan 29 12:33:19 np0005601226 podman[271512]: 2026-01-29 17:33:19.414358258 +0000 UTC m=+0.103569759 container attach c200aa13f1d3a2b398d9b40b80649ed4c1782ac3260fed87696f625ee11faff5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_kirch, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0)
Jan 29 12:33:19 np0005601226 systemd[1]: libpod-c200aa13f1d3a2b398d9b40b80649ed4c1782ac3260fed87696f625ee11faff5.scope: Deactivated successfully.
Jan 29 12:33:19 np0005601226 podman[271512]: 2026-01-29 17:33:19.415073567 +0000 UTC m=+0.104285068 container died c200aa13f1d3a2b398d9b40b80649ed4c1782ac3260fed87696f625ee11faff5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:33:19 np0005601226 podman[271512]: 2026-01-29 17:33:19.329874143 +0000 UTC m=+0.019085664 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:33:19 np0005601226 systemd[1]: var-lib-containers-storage-overlay-186728f5d730e2d7b2e4093cd3930bcddef4c9fb5dc592b6e2dbc0f817ad7df7-merged.mount: Deactivated successfully.
Jan 29 12:33:19 np0005601226 podman[271512]: 2026-01-29 17:33:19.455990628 +0000 UTC m=+0.145202129 container remove c200aa13f1d3a2b398d9b40b80649ed4c1782ac3260fed87696f625ee11faff5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_kirch, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:33:19 np0005601226 systemd[1]: libpod-conmon-c200aa13f1d3a2b398d9b40b80649ed4c1782ac3260fed87696f625ee11faff5.scope: Deactivated successfully.
Jan 29 12:33:19 np0005601226 podman[271555]: 2026-01-29 17:33:19.562337831 +0000 UTC m=+0.037442358 container create 45ca88a80b2a4ba21a63cee3f60fb48fc37a78bb345764f95adc412909f45ae2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_diffie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:33:19 np0005601226 systemd[1]: Started libpod-conmon-45ca88a80b2a4ba21a63cee3f60fb48fc37a78bb345764f95adc412909f45ae2.scope.
Jan 29 12:33:19 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:33:19 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdeecce308db4d34a68a5bc7f23046135879368bc0436d085a90a50c55a70b79/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:33:19 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdeecce308db4d34a68a5bc7f23046135879368bc0436d085a90a50c55a70b79/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:33:19 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdeecce308db4d34a68a5bc7f23046135879368bc0436d085a90a50c55a70b79/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:33:19 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdeecce308db4d34a68a5bc7f23046135879368bc0436d085a90a50c55a70b79/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:33:19 np0005601226 podman[271555]: 2026-01-29 17:33:19.54260459 +0000 UTC m=+0.017709137 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:33:19 np0005601226 podman[271555]: 2026-01-29 17:33:19.640780833 +0000 UTC m=+0.115885410 container init 45ca88a80b2a4ba21a63cee3f60fb48fc37a78bb345764f95adc412909f45ae2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default)
Jan 29 12:33:19 np0005601226 podman[271555]: 2026-01-29 17:33:19.649445727 +0000 UTC m=+0.124550254 container start 45ca88a80b2a4ba21a63cee3f60fb48fc37a78bb345764f95adc412909f45ae2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:33:19 np0005601226 podman[271555]: 2026-01-29 17:33:19.652891799 +0000 UTC m=+0.127996546 container attach 45ca88a80b2a4ba21a63cee3f60fb48fc37a78bb345764f95adc412909f45ae2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_diffie, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True)
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]: {
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:    "0": [
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:        {
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:            "devices": [
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:                "/dev/loop3"
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:            ],
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:            "lv_name": "ceph_lv0",
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:            "lv_size": "21470642176",
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:            "lv_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:            "name": "ceph_lv0",
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:            "tags": {
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:                "ceph.block_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:                "ceph.cluster_name": "ceph",
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:                "ceph.crush_device_class": "",
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:                "ceph.encrypted": "0",
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:                "ceph.objectstore": "bluestore",
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:                "ceph.osd_fsid": "0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1",
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:                "ceph.osd_id": "0",
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:                "ceph.type": "block",
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:                "ceph.vdo": "0",
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:                "ceph.with_tpm": "0"
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:            },
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:            "type": "block",
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:            "vg_name": "ceph_vg0"
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:        }
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:    ],
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:    "1": [
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:        {
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:            "devices": [
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:                "/dev/loop4"
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:            ],
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:            "lv_name": "ceph_lv1",
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:            "lv_size": "21470642176",
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b59b9ee3-7bef-4274-a8bf-0f9cce011ae7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:            "lv_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:            "name": "ceph_lv1",
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:            "tags": {
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:                "ceph.block_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:                "ceph.cluster_name": "ceph",
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:                "ceph.crush_device_class": "",
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:                "ceph.encrypted": "0",
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:                "ceph.objectstore": "bluestore",
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:                "ceph.osd_fsid": "b59b9ee3-7bef-4274-a8bf-0f9cce011ae7",
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:                "ceph.osd_id": "1",
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:                "ceph.type": "block",
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:                "ceph.vdo": "0",
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:                "ceph.with_tpm": "0"
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:            },
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:            "type": "block",
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:            "vg_name": "ceph_vg1"
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:        }
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:    ],
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:    "2": [
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:        {
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:            "devices": [
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:                "/dev/loop5"
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:            ],
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:            "lv_name": "ceph_lv2",
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:            "lv_size": "21470642176",
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=791d3808-828d-4f85-a3de-28df49f6a6ef,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:            "lv_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:            "name": "ceph_lv2",
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:            "tags": {
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:                "ceph.block_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:                "ceph.cluster_name": "ceph",
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:                "ceph.crush_device_class": "",
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:                "ceph.encrypted": "0",
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:                "ceph.objectstore": "bluestore",
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:                "ceph.osd_fsid": "791d3808-828d-4f85-a3de-28df49f6a6ef",
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:                "ceph.osd_id": "2",
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:                "ceph.type": "block",
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:                "ceph.vdo": "0",
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:                "ceph.with_tpm": "0"
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:            },
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:            "type": "block",
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:            "vg_name": "ceph_vg2"
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:        }
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]:    ]
Jan 29 12:33:19 np0005601226 hardcore_diffie[271571]: }
Jan 29 12:33:19 np0005601226 podman[271555]: 2026-01-29 17:33:19.925346114 +0000 UTC m=+0.400450661 container died 45ca88a80b2a4ba21a63cee3f60fb48fc37a78bb345764f95adc412909f45ae2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_diffie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 29 12:33:19 np0005601226 systemd[1]: libpod-45ca88a80b2a4ba21a63cee3f60fb48fc37a78bb345764f95adc412909f45ae2.scope: Deactivated successfully.
Jan 29 12:33:19 np0005601226 systemd[1]: var-lib-containers-storage-overlay-cdeecce308db4d34a68a5bc7f23046135879368bc0436d085a90a50c55a70b79-merged.mount: Deactivated successfully.
Jan 29 12:33:19 np0005601226 podman[271555]: 2026-01-29 17:33:19.971940048 +0000 UTC m=+0.447044575 container remove 45ca88a80b2a4ba21a63cee3f60fb48fc37a78bb345764f95adc412909f45ae2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_diffie, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 29 12:33:19 np0005601226 systemd[1]: libpod-conmon-45ca88a80b2a4ba21a63cee3f60fb48fc37a78bb345764f95adc412909f45ae2.scope: Deactivated successfully.
Jan 29 12:33:20 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1731: 305 pgs: 305 active+clean; 270 MiB data, 577 MiB used, 59 GiB / 60 GiB avail; 300 KiB/s rd, 5.6 KiB/s wr, 35 op/s
Jan 29 12:33:20 np0005601226 podman[271656]: 2026-01-29 17:33:20.402469948 +0000 UTC m=+0.041604811 container create d10608c6505200a62e00ebf7e7850a7c3b03fc701ec7c58f1d4e96e16ba3ff12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 29 12:33:20 np0005601226 systemd[1]: Started libpod-conmon-d10608c6505200a62e00ebf7e7850a7c3b03fc701ec7c58f1d4e96e16ba3ff12.scope.
Jan 29 12:33:20 np0005601226 podman[271656]: 2026-01-29 17:33:20.387185636 +0000 UTC m=+0.026320519 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:33:20 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:33:20 np0005601226 podman[271656]: 2026-01-29 17:33:20.498855773 +0000 UTC m=+0.137990716 container init d10608c6505200a62e00ebf7e7850a7c3b03fc701ec7c58f1d4e96e16ba3ff12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_goldwasser, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:33:20 np0005601226 podman[271656]: 2026-01-29 17:33:20.505282256 +0000 UTC m=+0.144417119 container start d10608c6505200a62e00ebf7e7850a7c3b03fc701ec7c58f1d4e96e16ba3ff12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_goldwasser, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:33:20 np0005601226 podman[271656]: 2026-01-29 17:33:20.508505113 +0000 UTC m=+0.147639996 container attach d10608c6505200a62e00ebf7e7850a7c3b03fc701ec7c58f1d4e96e16ba3ff12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:33:20 np0005601226 adoring_goldwasser[271674]: 167 167
Jan 29 12:33:20 np0005601226 systemd[1]: libpod-d10608c6505200a62e00ebf7e7850a7c3b03fc701ec7c58f1d4e96e16ba3ff12.scope: Deactivated successfully.
Jan 29 12:33:20 np0005601226 podman[271656]: 2026-01-29 17:33:20.511641237 +0000 UTC m=+0.150776120 container died d10608c6505200a62e00ebf7e7850a7c3b03fc701ec7c58f1d4e96e16ba3ff12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 29 12:33:20 np0005601226 systemd[1]: var-lib-containers-storage-overlay-0c969dd818f32f7aa198f2d45ba65889a1cc6dd241ef86b37b358d6579a12000-merged.mount: Deactivated successfully.
Jan 29 12:33:20 np0005601226 podman[271656]: 2026-01-29 17:33:20.550277897 +0000 UTC m=+0.189412760 container remove d10608c6505200a62e00ebf7e7850a7c3b03fc701ec7c58f1d4e96e16ba3ff12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_goldwasser, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 29 12:33:20 np0005601226 systemd[1]: libpod-conmon-d10608c6505200a62e00ebf7e7850a7c3b03fc701ec7c58f1d4e96e16ba3ff12.scope: Deactivated successfully.
Jan 29 12:33:20 np0005601226 podman[271698]: 2026-01-29 17:33:20.703826511 +0000 UTC m=+0.047009077 container create fb107668788444f4d7c2983cb565c886f35260fcd2afec5428468bda289d93af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_hoover, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 12:33:20 np0005601226 systemd[1]: Started libpod-conmon-fb107668788444f4d7c2983cb565c886f35260fcd2afec5428468bda289d93af.scope.
Jan 29 12:33:20 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:33:20 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d16c454f60f7075e3251a58c746452280b2eb05533c71b920b5e939a17bdc1cd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:33:20 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d16c454f60f7075e3251a58c746452280b2eb05533c71b920b5e939a17bdc1cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:33:20 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d16c454f60f7075e3251a58c746452280b2eb05533c71b920b5e939a17bdc1cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:33:20 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d16c454f60f7075e3251a58c746452280b2eb05533c71b920b5e939a17bdc1cd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:33:20 np0005601226 podman[271698]: 2026-01-29 17:33:20.67928313 +0000 UTC m=+0.022465716 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:33:21 np0005601226 ceph-mon[75233]: log_channel(cluster) log [WRN] : Health check update: 2 OSD(s) experiencing slow operations in BlueStore (BLUESTORE_SLOW_OP_ALERT)
Jan 29 12:33:21 np0005601226 podman[271698]: 2026-01-29 17:33:21.112541863 +0000 UTC m=+0.455724459 container init fb107668788444f4d7c2983cb565c886f35260fcd2afec5428468bda289d93af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_hoover, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 29 12:33:21 np0005601226 podman[271698]: 2026-01-29 17:33:21.121645489 +0000 UTC m=+0.464828055 container start fb107668788444f4d7c2983cb565c886f35260fcd2afec5428468bda289d93af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_hoover, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 29 12:33:21 np0005601226 podman[271698]: 2026-01-29 17:33:21.203280906 +0000 UTC m=+0.546463482 container attach fb107668788444f4d7c2983cb565c886f35260fcd2afec5428468bda289d93af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_hoover, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 29 12:33:21 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).mds e5 check_health: resetting beacon timeouts due to mon delay (slow election?) of 1e+01 seconds
Jan 29 12:33:21 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:33:21 np0005601226 lvm[271794]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 12:33:21 np0005601226 lvm[271791]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 12:33:21 np0005601226 lvm[271794]: VG ceph_vg1 finished
Jan 29 12:33:21 np0005601226 lvm[271791]: VG ceph_vg0 finished
Jan 29 12:33:21 np0005601226 nova_compute[239456]: 2026-01-29 17:33:21.854 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:33:21 np0005601226 lvm[271796]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 12:33:21 np0005601226 lvm[271796]: VG ceph_vg2 finished
Jan 29 12:33:21 np0005601226 jolly_hoover[271715]: {}
Jan 29 12:33:22 np0005601226 systemd[1]: libpod-fb107668788444f4d7c2983cb565c886f35260fcd2afec5428468bda289d93af.scope: Deactivated successfully.
Jan 29 12:33:22 np0005601226 podman[271698]: 2026-01-29 17:33:22.003564191 +0000 UTC m=+1.346746747 container died fb107668788444f4d7c2983cb565c886f35260fcd2afec5428468bda289d93af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_hoover, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 29 12:33:22 np0005601226 systemd[1]: libpod-fb107668788444f4d7c2983cb565c886f35260fcd2afec5428468bda289d93af.scope: Consumed 1.314s CPU time.
Jan 29 12:33:22 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1732: 305 pgs: 305 active+clean; 270 MiB data, 577 MiB used, 59 GiB / 60 GiB avail; 365 KiB/s rd, 8.5 KiB/s wr, 75 op/s
Jan 29 12:33:22 np0005601226 systemd[1]: var-lib-containers-storage-overlay-d16c454f60f7075e3251a58c746452280b2eb05533c71b920b5e939a17bdc1cd-merged.mount: Deactivated successfully.
Jan 29 12:33:22 np0005601226 podman[271698]: 2026-01-29 17:33:22.067020289 +0000 UTC m=+1.410202825 container remove fb107668788444f4d7c2983cb565c886f35260fcd2afec5428468bda289d93af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_hoover, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:33:22 np0005601226 systemd[1]: libpod-conmon-fb107668788444f4d7c2983cb565c886f35260fcd2afec5428468bda289d93af.scope: Deactivated successfully.
Jan 29 12:33:22 np0005601226 nova_compute[239456]: 2026-01-29 17:33:22.079 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:33:22 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 12:33:22 np0005601226 ceph-mon[75233]: Health check update: 2 OSD(s) experiencing slow operations in BlueStore (BLUESTORE_SLOW_OP_ALERT)
Jan 29 12:33:22 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:33:22 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 12:33:22 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:33:22 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e430 do_prune osdmap full prune enabled
Jan 29 12:33:22 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e431 e431: 3 total, 3 up, 3 in
Jan 29 12:33:22 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e431: 3 total, 3 up, 3 in
Jan 29 12:33:22 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:33:22 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2190505378' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:33:22 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:33:22 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2190505378' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:33:23 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:33:23 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:33:24 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1734: 305 pgs: 305 active+clean; 229 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 385 KiB/s rd, 9.2 KiB/s wr, 82 op/s
Jan 29 12:33:26 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1735: 305 pgs: 305 active+clean; 88 MiB data, 429 MiB used, 60 GiB / 60 GiB avail; 452 KiB/s rd, 12 KiB/s wr, 191 op/s
Jan 29 12:33:26 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e431 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:33:26 np0005601226 nova_compute[239456]: 2026-01-29 17:33:26.773 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:33:26 np0005601226 nova_compute[239456]: 2026-01-29 17:33:26.779 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:33:26 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:33:26 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/424558209' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:33:26 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:33:26 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/424558209' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:33:26 np0005601226 nova_compute[239456]: 2026-01-29 17:33:26.856 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:33:27 np0005601226 nova_compute[239456]: 2026-01-29 17:33:27.080 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:33:28 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1736: 305 pgs: 305 active+clean; 88 MiB data, 429 MiB used, 60 GiB / 60 GiB avail; 272 KiB/s rd, 11 KiB/s wr, 153 op/s
Jan 29 12:33:28 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e431 do_prune osdmap full prune enabled
Jan 29 12:33:28 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e432 e432: 3 total, 3 up, 3 in
Jan 29 12:33:28 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e432: 3 total, 3 up, 3 in
Jan 29 12:33:28 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:33:28 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2172827630' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:33:28 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:33:28 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2172827630' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:33:30 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1738: 305 pgs: 305 active+clean; 88 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 137 KiB/s rd, 4.4 KiB/s wr, 174 op/s
Jan 29 12:33:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e432 do_prune osdmap full prune enabled
Jan 29 12:33:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e433 e433: 3 total, 3 up, 3 in
Jan 29 12:33:30 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e433: 3 total, 3 up, 3 in
Jan 29 12:33:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:33:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/374619435' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:33:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:33:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/374619435' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:33:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:33:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/432516455' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:33:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:33:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/432516455' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:33:31 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:33:31 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:33:31 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1471079797' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:33:31 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:33:31 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1471079797' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:33:31 np0005601226 nova_compute[239456]: 2026-01-29 17:33:31.862 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:33:32 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1740: 305 pgs: 305 active+clean; 88 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 198 KiB/s rd, 5.7 KiB/s wr, 253 op/s
Jan 29 12:33:32 np0005601226 nova_compute[239456]: 2026-01-29 17:33:32.048 239460 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769707997.046914, a63c1430-ea41-4d52-8ba3-4122d88a6621 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:33:32 np0005601226 nova_compute[239456]: 2026-01-29 17:33:32.048 239460 INFO nova.compute.manager [-] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] VM Stopped (Lifecycle Event)#033[00m
Jan 29 12:33:32 np0005601226 nova_compute[239456]: 2026-01-29 17:33:32.077 239460 DEBUG nova.compute.manager [None req-7268ab16-5f5c-4967-962a-74b82b2756a7 - - - - - -] [instance: a63c1430-ea41-4d52-8ba3-4122d88a6621] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:33:32 np0005601226 nova_compute[239456]: 2026-01-29 17:33:32.083 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:33:32 np0005601226 nova_compute[239456]: 2026-01-29 17:33:32.647 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:33:32 np0005601226 nova_compute[239456]: 2026-01-29 17:33:32.648 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 29 12:33:32 np0005601226 podman[271836]: 2026-01-29 17:33:32.917968499 +0000 UTC m=+0.071626635 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 29 12:33:32 np0005601226 podman[271837]: 2026-01-29 17:33:32.992296656 +0000 UTC m=+0.144210035 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251202, 
org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 29 12:33:34 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1741: 305 pgs: 305 active+clean; 88 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 112 KiB/s rd, 2.2 KiB/s wr, 141 op/s
Jan 29 12:33:34 np0005601226 nova_compute[239456]: 2026-01-29 17:33:34.600 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:33:35 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e433 do_prune osdmap full prune enabled
Jan 29 12:33:35 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e434 e434: 3 total, 3 up, 3 in
Jan 29 12:33:35 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e434: 3 total, 3 up, 3 in
Jan 29 12:33:36 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1743: 305 pgs: 305 active+clean; 88 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 137 KiB/s rd, 3.4 KiB/s wr, 174 op/s
Jan 29 12:33:36 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:33:36 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e434 do_prune osdmap full prune enabled
Jan 29 12:33:36 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e435 e435: 3 total, 3 up, 3 in
Jan 29 12:33:36 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e435: 3 total, 3 up, 3 in
Jan 29 12:33:36 np0005601226 nova_compute[239456]: 2026-01-29 17:33:36.604 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:33:36 np0005601226 nova_compute[239456]: 2026-01-29 17:33:36.903 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:33:37 np0005601226 nova_compute[239456]: 2026-01-29 17:33:37.085 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:33:37 np0005601226 nova_compute[239456]: 2026-01-29 17:33:37.603 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:33:37 np0005601226 nova_compute[239456]: 2026-01-29 17:33:37.604 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:33:37 np0005601226 nova_compute[239456]: 2026-01-29 17:33:37.604 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:33:37 np0005601226 nova_compute[239456]: 2026-01-29 17:33:37.637 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:33:37 np0005601226 nova_compute[239456]: 2026-01-29 17:33:37.637 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:33:37 np0005601226 nova_compute[239456]: 2026-01-29 17:33:37.637 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:33:37 np0005601226 nova_compute[239456]: 2026-01-29 17:33:37.638 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 29 12:33:37 np0005601226 nova_compute[239456]: 2026-01-29 17:33:37.638 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:33:38 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1745: 305 pgs: 305 active+clean; 88 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 87 KiB/s rd, 2.8 KiB/s wr, 115 op/s
Jan 29 12:33:38 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:33:38 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4423803' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:33:38 np0005601226 nova_compute[239456]: 2026-01-29 17:33:38.139 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:33:38 np0005601226 nova_compute[239456]: 2026-01-29 17:33:38.331 239460 WARNING nova.virt.libvirt.driver [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 29 12:33:38 np0005601226 nova_compute[239456]: 2026-01-29 17:33:38.332 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4372MB free_disk=59.988158830441535GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 29 12:33:38 np0005601226 nova_compute[239456]: 2026-01-29 17:33:38.332 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:33:38 np0005601226 nova_compute[239456]: 2026-01-29 17:33:38.332 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:33:38 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e435 do_prune osdmap full prune enabled
Jan 29 12:33:38 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e436 e436: 3 total, 3 up, 3 in
Jan 29 12:33:38 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e436: 3 total, 3 up, 3 in
Jan 29 12:33:38 np0005601226 nova_compute[239456]: 2026-01-29 17:33:38.640 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 29 12:33:38 np0005601226 nova_compute[239456]: 2026-01-29 17:33:38.640 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 29 12:33:38 np0005601226 nova_compute[239456]: 2026-01-29 17:33:38.692 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:33:39 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:33:39 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1548041199' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:33:39 np0005601226 nova_compute[239456]: 2026-01-29 17:33:39.178 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:33:39 np0005601226 nova_compute[239456]: 2026-01-29 17:33:39.185 239460 DEBUG nova.compute.provider_tree [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:33:39 np0005601226 nova_compute[239456]: 2026-01-29 17:33:39.234 239460 DEBUG nova.scheduler.client.report [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:33:39 np0005601226 nova_compute[239456]: 2026-01-29 17:33:39.266 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 29 12:33:39 np0005601226 nova_compute[239456]: 2026-01-29 17:33:39.267 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.935s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:33:39 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:33:39.301 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '7a:bb:ca', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ee:2a:91:86:08:da'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:33:39 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:33:39.303 155625 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 29 12:33:39 np0005601226 nova_compute[239456]: 2026-01-29 17:33:39.339 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:33:40 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1747: 305 pgs: 305 active+clean; 88 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 3.2 KiB/s wr, 74 op/s
Jan 29 12:33:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:33:40.294 155625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:33:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:33:40.295 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:33:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:33:40.295 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:33:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:33:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:33:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:33:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:33:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:33:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:33:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2026-01-29_17:33:40
Jan 29 12:33:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 12:33:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] do_upmap
Jan 29 12:33:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] pools ['default.rgw.log', 'backups', 'default.rgw.meta', 'images', 'cephfs.cephfs.data', 'vms', 'cephfs.cephfs.meta', '.rgw.root', '.mgr', 'volumes', 'default.rgw.control']
Jan 29 12:33:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 12:33:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 12:33:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:33:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 12:33:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:33:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:33:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:33:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:33:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:33:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:33:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:33:41 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:33:41 np0005601226 nova_compute[239456]: 2026-01-29 17:33:41.943 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:33:42 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1748: 305 pgs: 305 active+clean; 88 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 4.1 KiB/s wr, 68 op/s
Jan 29 12:33:42 np0005601226 nova_compute[239456]: 2026-01-29 17:33:42.087 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:33:44 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1749: 305 pgs: 305 active+clean; 88 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 3.2 KiB/s wr, 54 op/s
Jan 29 12:33:44 np0005601226 nova_compute[239456]: 2026-01-29 17:33:44.263 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:33:44 np0005601226 nova_compute[239456]: 2026-01-29 17:33:44.264 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:33:44 np0005601226 nova_compute[239456]: 2026-01-29 17:33:44.264 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 29 12:33:44 np0005601226 nova_compute[239456]: 2026-01-29 17:33:44.265 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 29 12:33:44 np0005601226 nova_compute[239456]: 2026-01-29 17:33:44.281 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 29 12:33:44 np0005601226 nova_compute[239456]: 2026-01-29 17:33:44.281 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:33:44 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e436 do_prune osdmap full prune enabled
Jan 29 12:33:44 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e437 e437: 3 total, 3 up, 3 in
Jan 29 12:33:44 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e437: 3 total, 3 up, 3 in
Jan 29 12:33:44 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:33:44 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/926910498' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:33:44 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:33:44 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/926910498' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:33:46 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1751: 305 pgs: 305 active+clean; 88 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 114 KiB/s rd, 5.5 KiB/s wr, 145 op/s
Jan 29 12:33:46 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:33:46 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e437 do_prune osdmap full prune enabled
Jan 29 12:33:46 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e438 e438: 3 total, 3 up, 3 in
Jan 29 12:33:46 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e438: 3 total, 3 up, 3 in
Jan 29 12:33:46 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:33:46 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/299950390' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:33:46 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:33:46 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/299950390' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:33:46 np0005601226 nova_compute[239456]: 2026-01-29 17:33:46.964 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:33:47 np0005601226 nova_compute[239456]: 2026-01-29 17:33:47.089 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:33:47 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:33:47 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1899805998' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:33:47 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:33:47 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1899805998' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:33:48 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1753: 305 pgs: 305 active+clean; 88 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 94 KiB/s rd, 4.2 KiB/s wr, 119 op/s
Jan 29 12:33:48 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:33:48.306 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ea6bcc65-2563-4fe6-9039-bca7261f4cf7, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:33:50 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1754: 305 pgs: 305 active+clean; 88 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 110 KiB/s rd, 3.4 KiB/s wr, 141 op/s
Jan 29 12:33:50 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e438 do_prune osdmap full prune enabled
Jan 29 12:33:50 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e439 e439: 3 total, 3 up, 3 in
Jan 29 12:33:50 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e439: 3 total, 3 up, 3 in
Jan 29 12:33:51 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:33:51 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e439 do_prune osdmap full prune enabled
Jan 29 12:33:51 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e440 e440: 3 total, 3 up, 3 in
Jan 29 12:33:51 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e440: 3 total, 3 up, 3 in
Jan 29 12:33:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 12:33:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:33:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 12:33:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:33:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 2.0378524145359965e-06 of space, bias 1.0, pg target 0.0006113557243607989 quantized to 32 (current 32)
Jan 29 12:33:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:33:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00036208915424787837 of space, bias 1.0, pg target 0.10862674627436351 quantized to 32 (current 32)
Jan 29 12:33:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:33:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 3.595514254636144e-06 of space, bias 1.0, pg target 0.0010786542763908432 quantized to 32 (current 32)
Jan 29 12:33:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:33:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006669626157094533 of space, bias 1.0, pg target 0.20008878471283598 quantized to 32 (current 32)
Jan 29 12:33:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:33:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.437389082901489e-06 of space, bias 4.0, pg target 0.0017248668994817866 quantized to 16 (current 16)
Jan 29 12:33:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:33:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:33:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:33:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 29 12:33:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:33:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 29 12:33:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:33:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:33:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:33:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 29 12:33:51 np0005601226 nova_compute[239456]: 2026-01-29 17:33:51.996 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:33:52 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1757: 305 pgs: 305 active+clean; 88 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 82 KiB/s rd, 2.5 KiB/s wr, 108 op/s
Jan 29 12:33:52 np0005601226 nova_compute[239456]: 2026-01-29 17:33:52.090 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:33:53 np0005601226 nova_compute[239456]: 2026-01-29 17:33:53.604 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:33:54 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1758: 305 pgs: 305 active+clean; 88 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 65 KiB/s rd, 2.2 KiB/s wr, 87 op/s
Jan 29 12:33:54 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e440 do_prune osdmap full prune enabled
Jan 29 12:33:54 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e441 e441: 3 total, 3 up, 3 in
Jan 29 12:33:54 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e441: 3 total, 3 up, 3 in
Jan 29 12:33:55 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:33:55 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3415921203' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:33:55 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:33:55 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3415921203' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:33:56 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1760: 305 pgs: 305 active+clean; 88 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 82 KiB/s rd, 5.7 KiB/s wr, 110 op/s
Jan 29 12:33:56 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e441 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:33:56 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e441 do_prune osdmap full prune enabled
Jan 29 12:33:56 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e442 e442: 3 total, 3 up, 3 in
Jan 29 12:33:56 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e442: 3 total, 3 up, 3 in
Jan 29 12:33:57 np0005601226 nova_compute[239456]: 2026-01-29 17:33:57.054 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:33:57 np0005601226 nova_compute[239456]: 2026-01-29 17:33:57.092 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:33:57 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e442 do_prune osdmap full prune enabled
Jan 29 12:33:57 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e443 e443: 3 total, 3 up, 3 in
Jan 29 12:33:57 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e443: 3 total, 3 up, 3 in
Jan 29 12:33:58 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1763: 305 pgs: 305 active+clean; 88 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 46 KiB/s rd, 4.0 KiB/s wr, 64 op/s
Jan 29 12:33:58 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:33:58 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1488900053' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:33:58 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:33:58 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1488900053' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:34:00 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1764: 305 pgs: 305 active+clean; 88 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 125 KiB/s rd, 6.7 KiB/s wr, 162 op/s
Jan 29 12:34:00 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:34:00 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1121156740' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:34:00 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:34:00 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1121156740' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:34:01 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e443 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:34:01 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e443 do_prune osdmap full prune enabled
Jan 29 12:34:01 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e444 e444: 3 total, 3 up, 3 in
Jan 29 12:34:01 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e444: 3 total, 3 up, 3 in
Jan 29 12:34:02 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:34:02 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2733918086' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:34:02 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:34:02 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2733918086' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:34:02 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1766: 305 pgs: 305 active+clean; 88 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 101 KiB/s rd, 4.3 KiB/s wr, 128 op/s
Jan 29 12:34:02 np0005601226 nova_compute[239456]: 2026-01-29 17:34:02.091 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:34:03 np0005601226 podman[271924]: 2026-01-29 17:34:03.880016637 +0000 UTC m=+0.047627683 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 29 12:34:03 np0005601226 podman[271925]: 2026-01-29 17:34:03.949234366 +0000 UTC m=+0.107060238 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 29 12:34:04 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1767: 305 pgs: 305 active+clean; 88 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 85 KiB/s rd, 3.7 KiB/s wr, 109 op/s
Jan 29 12:34:06 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1768: 305 pgs: 305 active+clean; 88 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 93 KiB/s rd, 34 KiB/s wr, 120 op/s
Jan 29 12:34:06 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e444 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:34:07 np0005601226 nova_compute[239456]: 2026-01-29 17:34:07.094 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 29 12:34:07 np0005601226 nova_compute[239456]: 2026-01-29 17:34:07.096 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 29 12:34:07 np0005601226 nova_compute[239456]: 2026-01-29 17:34:07.096 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Jan 29 12:34:07 np0005601226 nova_compute[239456]: 2026-01-29 17:34:07.096 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 29 12:34:07 np0005601226 nova_compute[239456]: 2026-01-29 17:34:07.144 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:34:07 np0005601226 nova_compute[239456]: 2026-01-29 17:34:07.145 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 29 12:34:07 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e444 do_prune osdmap full prune enabled
Jan 29 12:34:07 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e445 e445: 3 total, 3 up, 3 in
Jan 29 12:34:07 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e445: 3 total, 3 up, 3 in
Jan 29 12:34:08 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1770: 305 pgs: 305 active+clean; 88 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 34 KiB/s wr, 53 op/s
Jan 29 12:34:09 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:34:09 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2011380862' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:34:10 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1771: 305 pgs: 305 active+clean; 88 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 32 KiB/s wr, 55 op/s
Jan 29 12:34:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:34:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:34:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:34:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:34:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:34:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:34:10 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e445 do_prune osdmap full prune enabled
Jan 29 12:34:10 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e446 e446: 3 total, 3 up, 3 in
Jan 29 12:34:10 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e446: 3 total, 3 up, 3 in
Jan 29 12:34:11 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e446 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:34:12 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1773: 305 pgs: 305 active+clean; 88 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 35 KiB/s wr, 63 op/s
Jan 29 12:34:12 np0005601226 nova_compute[239456]: 2026-01-29 17:34:12.145 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 29 12:34:14 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1774: 305 pgs: 305 active+clean; 88 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 2.5 KiB/s wr, 40 op/s
Jan 29 12:34:15 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e446 do_prune osdmap full prune enabled
Jan 29 12:34:15 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e447 e447: 3 total, 3 up, 3 in
Jan 29 12:34:15 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e447: 3 total, 3 up, 3 in
Jan 29 12:34:15 np0005601226 ovn_controller[145556]: 2026-01-29T17:34:15Z|00239|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Jan 29 12:34:16 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1776: 305 pgs: 305 active+clean; 88 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 4.4 KiB/s wr, 71 op/s
Jan 29 12:34:16 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e447 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:34:17 np0005601226 nova_compute[239456]: 2026-01-29 17:34:17.149 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:34:17 np0005601226 nova_compute[239456]: 2026-01-29 17:34:17.491 239460 DEBUG oslo_concurrency.lockutils [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Acquiring lock "30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:34:17 np0005601226 nova_compute[239456]: 2026-01-29 17:34:17.492 239460 DEBUG oslo_concurrency.lockutils [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Lock "30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:34:17 np0005601226 nova_compute[239456]: 2026-01-29 17:34:17.510 239460 DEBUG nova.compute.manager [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 29 12:34:17 np0005601226 nova_compute[239456]: 2026-01-29 17:34:17.749 239460 DEBUG oslo_concurrency.lockutils [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:34:17 np0005601226 nova_compute[239456]: 2026-01-29 17:34:17.750 239460 DEBUG oslo_concurrency.lockutils [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:34:17 np0005601226 nova_compute[239456]: 2026-01-29 17:34:17.759 239460 DEBUG nova.virt.hardware [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 29 12:34:17 np0005601226 nova_compute[239456]: 2026-01-29 17:34:17.760 239460 INFO nova.compute.claims [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 29 12:34:17 np0005601226 nova_compute[239456]: 2026-01-29 17:34:17.902 239460 DEBUG oslo_concurrency.processutils [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:34:18 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1777: 305 pgs: 305 active+clean; 88 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 2.6 KiB/s wr, 43 op/s
Jan 29 12:34:18 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e447 do_prune osdmap full prune enabled
Jan 29 12:34:18 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e448 e448: 3 total, 3 up, 3 in
Jan 29 12:34:18 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e448: 3 total, 3 up, 3 in
Jan 29 12:34:18 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:34:18 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1197369120' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:34:18 np0005601226 nova_compute[239456]: 2026-01-29 17:34:18.442 239460 DEBUG oslo_concurrency.processutils [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:34:18 np0005601226 nova_compute[239456]: 2026-01-29 17:34:18.449 239460 DEBUG nova.compute.provider_tree [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:34:18 np0005601226 nova_compute[239456]: 2026-01-29 17:34:18.469 239460 DEBUG nova.scheduler.client.report [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:34:18 np0005601226 nova_compute[239456]: 2026-01-29 17:34:18.501 239460 DEBUG oslo_concurrency.lockutils [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.751s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:34:18 np0005601226 nova_compute[239456]: 2026-01-29 17:34:18.502 239460 DEBUG nova.compute.manager [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 29 12:34:18 np0005601226 nova_compute[239456]: 2026-01-29 17:34:18.563 239460 DEBUG nova.compute.manager [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 29 12:34:18 np0005601226 nova_compute[239456]: 2026-01-29 17:34:18.564 239460 DEBUG nova.network.neutron [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 29 12:34:18 np0005601226 nova_compute[239456]: 2026-01-29 17:34:18.583 239460 INFO nova.virt.libvirt.driver [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 29 12:34:18 np0005601226 nova_compute[239456]: 2026-01-29 17:34:18.607 239460 DEBUG nova.compute.manager [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 29 12:34:18 np0005601226 nova_compute[239456]: 2026-01-29 17:34:18.740 239460 DEBUG nova.compute.manager [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 29 12:34:18 np0005601226 nova_compute[239456]: 2026-01-29 17:34:18.742 239460 DEBUG nova.virt.libvirt.driver [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 29 12:34:18 np0005601226 nova_compute[239456]: 2026-01-29 17:34:18.743 239460 INFO nova.virt.libvirt.driver [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] Creating image(s)#033[00m
Jan 29 12:34:18 np0005601226 nova_compute[239456]: 2026-01-29 17:34:18.778 239460 DEBUG nova.storage.rbd_utils [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] rbd image 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:34:18 np0005601226 nova_compute[239456]: 2026-01-29 17:34:18.821 239460 DEBUG nova.storage.rbd_utils [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] rbd image 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:34:18 np0005601226 nova_compute[239456]: 2026-01-29 17:34:18.862 239460 DEBUG nova.storage.rbd_utils [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] rbd image 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:34:18 np0005601226 nova_compute[239456]: 2026-01-29 17:34:18.869 239460 DEBUG oslo_concurrency.processutils [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/619359f3f53a439a222a6a2a89408201f4394e5d --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:34:18 np0005601226 nova_compute[239456]: 2026-01-29 17:34:18.901 239460 DEBUG nova.policy [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '90bbb3ba09534f74aedaab7650ed5ba4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9c3315c8b4c543a38f07ec0c509f03c1', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 29 12:34:18 np0005601226 nova_compute[239456]: 2026-01-29 17:34:18.959 239460 DEBUG oslo_concurrency.processutils [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/619359f3f53a439a222a6a2a89408201f4394e5d --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:34:18 np0005601226 nova_compute[239456]: 2026-01-29 17:34:18.961 239460 DEBUG oslo_concurrency.lockutils [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Acquiring lock "619359f3f53a439a222a6a2a89408201f4394e5d" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:34:18 np0005601226 nova_compute[239456]: 2026-01-29 17:34:18.962 239460 DEBUG oslo_concurrency.lockutils [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Lock "619359f3f53a439a222a6a2a89408201f4394e5d" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:34:18 np0005601226 nova_compute[239456]: 2026-01-29 17:34:18.963 239460 DEBUG oslo_concurrency.lockutils [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Lock "619359f3f53a439a222a6a2a89408201f4394e5d" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:34:18 np0005601226 nova_compute[239456]: 2026-01-29 17:34:18.992 239460 DEBUG nova.storage.rbd_utils [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] rbd image 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:34:19 np0005601226 nova_compute[239456]: 2026-01-29 17:34:18.999 239460 DEBUG oslo_concurrency.processutils [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/619359f3f53a439a222a6a2a89408201f4394e5d 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:34:19 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:34:19 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4141804173' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:34:19 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:34:19 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4141804173' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:34:19 np0005601226 nova_compute[239456]: 2026-01-29 17:34:19.321 239460 DEBUG oslo_concurrency.processutils [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/619359f3f53a439a222a6a2a89408201f4394e5d 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.322s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:34:19 np0005601226 nova_compute[239456]: 2026-01-29 17:34:19.405 239460 DEBUG nova.storage.rbd_utils [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] resizing rbd image 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 29 12:34:19 np0005601226 nova_compute[239456]: 2026-01-29 17:34:19.494 239460 DEBUG nova.objects.instance [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Lazy-loading 'migration_context' on Instance uuid 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:34:19 np0005601226 nova_compute[239456]: 2026-01-29 17:34:19.566 239460 DEBUG nova.virt.libvirt.driver [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 29 12:34:19 np0005601226 nova_compute[239456]: 2026-01-29 17:34:19.566 239460 DEBUG nova.virt.libvirt.driver [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] Ensure instance console log exists: /var/lib/nova/instances/30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 29 12:34:19 np0005601226 nova_compute[239456]: 2026-01-29 17:34:19.567 239460 DEBUG oslo_concurrency.lockutils [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:34:19 np0005601226 nova_compute[239456]: 2026-01-29 17:34:19.568 239460 DEBUG oslo_concurrency.lockutils [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:34:19 np0005601226 nova_compute[239456]: 2026-01-29 17:34:19.568 239460 DEBUG oslo_concurrency.lockutils [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:34:19 np0005601226 nova_compute[239456]: 2026-01-29 17:34:19.773 239460 DEBUG nova.network.neutron [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] Successfully created port: 01cb5c50-a219-4070-87d9-991256087701 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 29 12:34:20 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1779: 305 pgs: 305 active+clean; 88 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 2.0 KiB/s wr, 33 op/s
Jan 29 12:34:20 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e448 do_prune osdmap full prune enabled
Jan 29 12:34:20 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e449 e449: 3 total, 3 up, 3 in
Jan 29 12:34:20 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e449: 3 total, 3 up, 3 in
Jan 29 12:34:20 np0005601226 nova_compute[239456]: 2026-01-29 17:34:20.605 239460 DEBUG nova.network.neutron [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] Successfully updated port: 01cb5c50-a219-4070-87d9-991256087701 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 29 12:34:20 np0005601226 nova_compute[239456]: 2026-01-29 17:34:20.620 239460 DEBUG oslo_concurrency.lockutils [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Acquiring lock "refresh_cache-30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:34:20 np0005601226 nova_compute[239456]: 2026-01-29 17:34:20.621 239460 DEBUG oslo_concurrency.lockutils [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Acquired lock "refresh_cache-30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:34:20 np0005601226 nova_compute[239456]: 2026-01-29 17:34:20.621 239460 DEBUG nova.network.neutron [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 29 12:34:20 np0005601226 nova_compute[239456]: 2026-01-29 17:34:20.738 239460 DEBUG nova.compute.manager [req-adf843b2-b625-4cb1-abd2-0f0e5e2f6e4d req-2bec759a-2f9f-47b1-9f54-5953b63f9cff 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] Received event network-changed-01cb5c50-a219-4070-87d9-991256087701 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:34:20 np0005601226 nova_compute[239456]: 2026-01-29 17:34:20.739 239460 DEBUG nova.compute.manager [req-adf843b2-b625-4cb1-abd2-0f0e5e2f6e4d req-2bec759a-2f9f-47b1-9f54-5953b63f9cff 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] Refreshing instance network info cache due to event network-changed-01cb5c50-a219-4070-87d9-991256087701. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 29 12:34:20 np0005601226 nova_compute[239456]: 2026-01-29 17:34:20.739 239460 DEBUG oslo_concurrency.lockutils [req-adf843b2-b625-4cb1-abd2-0f0e5e2f6e4d req-2bec759a-2f9f-47b1-9f54-5953b63f9cff 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "refresh_cache-30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:34:20 np0005601226 nova_compute[239456]: 2026-01-29 17:34:20.771 239460 DEBUG nova.network.neutron [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 29 12:34:21 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e449 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:34:21 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:34:21 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/885409779' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:34:21 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:34:21 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/885409779' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:34:21 np0005601226 nova_compute[239456]: 2026-01-29 17:34:21.615 239460 DEBUG nova.network.neutron [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] Updating instance_info_cache with network_info: [{"id": "01cb5c50-a219-4070-87d9-991256087701", "address": "fa:16:3e:f6:b7:e6", "network": {"id": "9275d605-e314-4c83-a4e8-f4ba085f6358", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1160921638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c3315c8b4c543a38f07ec0c509f03c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01cb5c50-a2", "ovs_interfaceid": "01cb5c50-a219-4070-87d9-991256087701", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:34:21 np0005601226 nova_compute[239456]: 2026-01-29 17:34:21.634 239460 DEBUG oslo_concurrency.lockutils [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Releasing lock "refresh_cache-30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:34:21 np0005601226 nova_compute[239456]: 2026-01-29 17:34:21.634 239460 DEBUG nova.compute.manager [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] Instance network_info: |[{"id": "01cb5c50-a219-4070-87d9-991256087701", "address": "fa:16:3e:f6:b7:e6", "network": {"id": "9275d605-e314-4c83-a4e8-f4ba085f6358", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1160921638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c3315c8b4c543a38f07ec0c509f03c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01cb5c50-a2", "ovs_interfaceid": "01cb5c50-a219-4070-87d9-991256087701", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 29 12:34:21 np0005601226 nova_compute[239456]: 2026-01-29 17:34:21.635 239460 DEBUG oslo_concurrency.lockutils [req-adf843b2-b625-4cb1-abd2-0f0e5e2f6e4d req-2bec759a-2f9f-47b1-9f54-5953b63f9cff 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquired lock "refresh_cache-30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:34:21 np0005601226 nova_compute[239456]: 2026-01-29 17:34:21.635 239460 DEBUG nova.network.neutron [req-adf843b2-b625-4cb1-abd2-0f0e5e2f6e4d req-2bec759a-2f9f-47b1-9f54-5953b63f9cff 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] Refreshing network info cache for port 01cb5c50-a219-4070-87d9-991256087701 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 29 12:34:21 np0005601226 nova_compute[239456]: 2026-01-29 17:34:21.640 239460 DEBUG nova.virt.libvirt.driver [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] Start _get_guest_xml network_info=[{"id": "01cb5c50-a219-4070-87d9-991256087701", "address": "fa:16:3e:f6:b7:e6", "network": {"id": "9275d605-e314-4c83-a4e8-f4ba085f6358", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1160921638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c3315c8b4c543a38f07ec0c509f03c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01cb5c50-a2", "ovs_interfaceid": "01cb5c50-a219-4070-87d9-991256087701", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-29T17:13:26Z,direct_url=<?>,disk_format='qcow2',id=71879218-5462-43bb-aba6-6319695b24fd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='218d87653c0f4776a3f1900d36945229',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-29T17:13:29Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'boot_index': 0, 'encrypted': False, 'image_id': '71879218-5462-43bb-aba6-6319695b24fd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 29 12:34:21 np0005601226 nova_compute[239456]: 2026-01-29 17:34:21.647 239460 WARNING nova.virt.libvirt.driver [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 29 12:34:21 np0005601226 nova_compute[239456]: 2026-01-29 17:34:21.658 239460 DEBUG nova.virt.libvirt.host [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 29 12:34:21 np0005601226 nova_compute[239456]: 2026-01-29 17:34:21.658 239460 DEBUG nova.virt.libvirt.host [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 29 12:34:21 np0005601226 nova_compute[239456]: 2026-01-29 17:34:21.671 239460 DEBUG nova.virt.libvirt.host [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 29 12:34:21 np0005601226 nova_compute[239456]: 2026-01-29 17:34:21.673 239460 DEBUG nova.virt.libvirt.host [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 29 12:34:21 np0005601226 nova_compute[239456]: 2026-01-29 17:34:21.674 239460 DEBUG nova.virt.libvirt.driver [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 29 12:34:21 np0005601226 nova_compute[239456]: 2026-01-29 17:34:21.675 239460 DEBUG nova.virt.hardware [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-29T17:13:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='e367d44e-e23e-4b8e-90d1-56d09c8403b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-29T17:13:26Z,direct_url=<?>,disk_format='qcow2',id=71879218-5462-43bb-aba6-6319695b24fd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='218d87653c0f4776a3f1900d36945229',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-29T17:13:29Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 29 12:34:21 np0005601226 nova_compute[239456]: 2026-01-29 17:34:21.676 239460 DEBUG nova.virt.hardware [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 29 12:34:21 np0005601226 nova_compute[239456]: 2026-01-29 17:34:21.677 239460 DEBUG nova.virt.hardware [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 29 12:34:21 np0005601226 nova_compute[239456]: 2026-01-29 17:34:21.677 239460 DEBUG nova.virt.hardware [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 29 12:34:21 np0005601226 nova_compute[239456]: 2026-01-29 17:34:21.678 239460 DEBUG nova.virt.hardware [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 29 12:34:21 np0005601226 nova_compute[239456]: 2026-01-29 17:34:21.679 239460 DEBUG nova.virt.hardware [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 29 12:34:21 np0005601226 nova_compute[239456]: 2026-01-29 17:34:21.679 239460 DEBUG nova.virt.hardware [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 29 12:34:21 np0005601226 nova_compute[239456]: 2026-01-29 17:34:21.680 239460 DEBUG nova.virt.hardware [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 29 12:34:21 np0005601226 nova_compute[239456]: 2026-01-29 17:34:21.681 239460 DEBUG nova.virt.hardware [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 29 12:34:21 np0005601226 nova_compute[239456]: 2026-01-29 17:34:21.682 239460 DEBUG nova.virt.hardware [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 29 12:34:21 np0005601226 nova_compute[239456]: 2026-01-29 17:34:21.682 239460 DEBUG nova.virt.hardware [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 29 12:34:21 np0005601226 nova_compute[239456]: 2026-01-29 17:34:21.689 239460 DEBUG oslo_concurrency.processutils [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:34:22 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1781: 305 pgs: 305 active+clean; 111 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 63 KiB/s rd, 1.1 MiB/s wr, 89 op/s
Jan 29 12:34:22 np0005601226 nova_compute[239456]: 2026-01-29 17:34:22.180 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 29 12:34:22 np0005601226 nova_compute[239456]: 2026-01-29 17:34:22.182 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:34:22 np0005601226 nova_compute[239456]: 2026-01-29 17:34:22.182 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5032 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Jan 29 12:34:22 np0005601226 nova_compute[239456]: 2026-01-29 17:34:22.182 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 29 12:34:22 np0005601226 nova_compute[239456]: 2026-01-29 17:34:22.183 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 29 12:34:22 np0005601226 nova_compute[239456]: 2026-01-29 17:34:22.184 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:34:22 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:34:22 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1162940308' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:34:22 np0005601226 nova_compute[239456]: 2026-01-29 17:34:22.300 239460 DEBUG oslo_concurrency.processutils [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:34:22 np0005601226 nova_compute[239456]: 2026-01-29 17:34:22.332 239460 DEBUG nova.storage.rbd_utils [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] rbd image 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:34:22 np0005601226 nova_compute[239456]: 2026-01-29 17:34:22.339 239460 DEBUG oslo_concurrency.processutils [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:34:22 np0005601226 nova_compute[239456]: 2026-01-29 17:34:22.828 239460 DEBUG nova.network.neutron [req-adf843b2-b625-4cb1-abd2-0f0e5e2f6e4d req-2bec759a-2f9f-47b1-9f54-5953b63f9cff 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] Updated VIF entry in instance network info cache for port 01cb5c50-a219-4070-87d9-991256087701. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 29 12:34:22 np0005601226 nova_compute[239456]: 2026-01-29 17:34:22.830 239460 DEBUG nova.network.neutron [req-adf843b2-b625-4cb1-abd2-0f0e5e2f6e4d req-2bec759a-2f9f-47b1-9f54-5953b63f9cff 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] Updating instance_info_cache with network_info: [{"id": "01cb5c50-a219-4070-87d9-991256087701", "address": "fa:16:3e:f6:b7:e6", "network": {"id": "9275d605-e314-4c83-a4e8-f4ba085f6358", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1160921638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c3315c8b4c543a38f07ec0c509f03c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01cb5c50-a2", "ovs_interfaceid": "01cb5c50-a219-4070-87d9-991256087701", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:34:22 np0005601226 nova_compute[239456]: 2026-01-29 17:34:22.849 239460 DEBUG oslo_concurrency.lockutils [req-adf843b2-b625-4cb1-abd2-0f0e5e2f6e4d req-2bec759a-2f9f-47b1-9f54-5953b63f9cff 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Releasing lock "refresh_cache-30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:34:22 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:34:22 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/580513646' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:34:22 np0005601226 nova_compute[239456]: 2026-01-29 17:34:22.884 239460 DEBUG oslo_concurrency.processutils [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:34:22 np0005601226 nova_compute[239456]: 2026-01-29 17:34:22.885 239460 DEBUG nova.virt.libvirt.vif [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-29T17:34:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-987208948',display_name='tempest-TestEncryptedCinderVolumes-server-987208948',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-987208948',id=26,image_ref='71879218-5462-43bb-aba6-6319695b24fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEiWyOmSEmMI4pwLqOKedCJq8UXqZek7QcQm1YGLuVaaKr+u7Y0eccysxWi4eVTnXO2KEU6T10OE9i6oP930f8wEjBWPLBpPePOuA4ghFCWhdIhwCWA42zHpIxVU2Gg7DQ==',key_name='tempest-keypair-186094087',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9c3315c8b4c543a38f07ec0c509f03c1',ramdisk_id='',reservation_id='r-uva102pp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='71879218-5462-43bb-aba6-6319695b24fd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-595928636',owner_user_name='tempest-TestEncryptedCinderVolumes-595928636-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-29T17:34:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='90bbb3ba09534f74aedaab7650ed5ba4',uuid=30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "01cb5c50-a219-4070-87d9-991256087701", "address": "fa:16:3e:f6:b7:e6", "network": {"id": "9275d605-e314-4c83-a4e8-f4ba085f6358", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1160921638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c3315c8b4c543a38f07ec0c509f03c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01cb5c50-a2", "ovs_interfaceid": "01cb5c50-a219-4070-87d9-991256087701", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 29 12:34:22 np0005601226 nova_compute[239456]: 2026-01-29 17:34:22.886 239460 DEBUG nova.network.os_vif_util [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Converting VIF {"id": "01cb5c50-a219-4070-87d9-991256087701", "address": "fa:16:3e:f6:b7:e6", "network": {"id": "9275d605-e314-4c83-a4e8-f4ba085f6358", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1160921638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c3315c8b4c543a38f07ec0c509f03c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01cb5c50-a2", "ovs_interfaceid": "01cb5c50-a219-4070-87d9-991256087701", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 29 12:34:22 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 12:34:22 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 12:34:22 np0005601226 nova_compute[239456]: 2026-01-29 17:34:22.888 239460 DEBUG nova.network.os_vif_util [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f6:b7:e6,bridge_name='br-int',has_traffic_filtering=True,id=01cb5c50-a219-4070-87d9-991256087701,network=Network(9275d605-e314-4c83-a4e8-f4ba085f6358),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01cb5c50-a2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 29 12:34:22 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 12:34:22 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 12:34:22 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 12:34:22 np0005601226 nova_compute[239456]: 2026-01-29 17:34:22.891 239460 DEBUG nova.objects.instance [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:34:22 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:34:22 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 12:34:22 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 12:34:22 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 12:34:22 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 12:34:22 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 12:34:22 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 12:34:22 np0005601226 nova_compute[239456]: 2026-01-29 17:34:22.908 239460 DEBUG nova.virt.libvirt.driver [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] End _get_guest_xml xml=<domain type="kvm">
Jan 29 12:34:22 np0005601226 nova_compute[239456]:  <uuid>30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7</uuid>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:  <name>instance-0000001a</name>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:  <memory>131072</memory>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:  <vcpu>1</vcpu>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:  <metadata>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 29 12:34:22 np0005601226 nova_compute[239456]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:      <nova:name>tempest-TestEncryptedCinderVolumes-server-987208948</nova:name>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:      <nova:creationTime>2026-01-29 17:34:21</nova:creationTime>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:      <nova:flavor name="m1.nano">
Jan 29 12:34:22 np0005601226 nova_compute[239456]:        <nova:memory>128</nova:memory>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:        <nova:disk>1</nova:disk>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:        <nova:swap>0</nova:swap>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:        <nova:ephemeral>0</nova:ephemeral>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:        <nova:vcpus>1</nova:vcpus>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:      </nova:flavor>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:      <nova:owner>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:        <nova:user uuid="90bbb3ba09534f74aedaab7650ed5ba4">tempest-TestEncryptedCinderVolumes-595928636-project-member</nova:user>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:        <nova:project uuid="9c3315c8b4c543a38f07ec0c509f03c1">tempest-TestEncryptedCinderVolumes-595928636</nova:project>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:      </nova:owner>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:      <nova:root type="image" uuid="71879218-5462-43bb-aba6-6319695b24fd"/>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:      <nova:ports>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:        <nova:port uuid="01cb5c50-a219-4070-87d9-991256087701">
Jan 29 12:34:22 np0005601226 nova_compute[239456]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:        </nova:port>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:      </nova:ports>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:    </nova:instance>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:  </metadata>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:  <sysinfo type="smbios">
Jan 29 12:34:22 np0005601226 nova_compute[239456]:    <system>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:      <entry name="manufacturer">RDO</entry>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:      <entry name="product">OpenStack Compute</entry>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:      <entry name="serial">30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7</entry>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:      <entry name="uuid">30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7</entry>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:      <entry name="family">Virtual Machine</entry>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:    </system>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:  </sysinfo>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:  <os>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:    <boot dev="hd"/>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:    <smbios mode="sysinfo"/>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:  </os>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:  <features>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:    <acpi/>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:    <apic/>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:    <vmcoreinfo/>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:  </features>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:  <clock offset="utc">
Jan 29 12:34:22 np0005601226 nova_compute[239456]:    <timer name="pit" tickpolicy="delay"/>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:    <timer name="hpet" present="no"/>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:  </clock>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:  <cpu mode="host-model" match="exact">
Jan 29 12:34:22 np0005601226 nova_compute[239456]:    <topology sockets="1" cores="1" threads="1"/>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:  </cpu>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:  <devices>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:    <disk type="network" device="disk">
Jan 29 12:34:22 np0005601226 nova_compute[239456]:      <driver type="raw" cache="none"/>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:      <source protocol="rbd" name="vms/30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7_disk">
Jan 29 12:34:22 np0005601226 nova_compute[239456]:        <host name="192.168.122.100" port="6789"/>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:      </source>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:      <auth username="openstack">
Jan 29 12:34:22 np0005601226 nova_compute[239456]:        <secret type="ceph" uuid="cc5c72e3-31e0-58b9-8731-456117d38f4a"/>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:      </auth>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:      <target dev="vda" bus="virtio"/>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:    </disk>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:    <disk type="network" device="cdrom">
Jan 29 12:34:22 np0005601226 nova_compute[239456]:      <driver type="raw" cache="none"/>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:      <source protocol="rbd" name="vms/30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7_disk.config">
Jan 29 12:34:22 np0005601226 nova_compute[239456]:        <host name="192.168.122.100" port="6789"/>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:      </source>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:      <auth username="openstack">
Jan 29 12:34:22 np0005601226 nova_compute[239456]:        <secret type="ceph" uuid="cc5c72e3-31e0-58b9-8731-456117d38f4a"/>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:      </auth>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:      <target dev="sda" bus="sata"/>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:    </disk>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:    <interface type="ethernet">
Jan 29 12:34:22 np0005601226 nova_compute[239456]:      <mac address="fa:16:3e:f6:b7:e6"/>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:      <model type="virtio"/>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:      <driver name="vhost" rx_queue_size="512"/>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:      <mtu size="1442"/>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:      <target dev="tap01cb5c50-a2"/>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:    </interface>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:    <serial type="pty">
Jan 29 12:34:22 np0005601226 nova_compute[239456]:      <log file="/var/lib/nova/instances/30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7/console.log" append="off"/>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:    </serial>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:    <video>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:      <model type="virtio"/>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:    </video>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:    <input type="tablet" bus="usb"/>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:    <rng model="virtio">
Jan 29 12:34:22 np0005601226 nova_compute[239456]:      <backend model="random">/dev/urandom</backend>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:    </rng>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root"/>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:    <controller type="usb" index="0"/>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:    <memballoon model="virtio">
Jan 29 12:34:22 np0005601226 nova_compute[239456]:      <stats period="10"/>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:    </memballoon>
Jan 29 12:34:22 np0005601226 nova_compute[239456]:  </devices>
Jan 29 12:34:22 np0005601226 nova_compute[239456]: </domain>
Jan 29 12:34:22 np0005601226 nova_compute[239456]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 29 12:34:22 np0005601226 nova_compute[239456]: 2026-01-29 17:34:22.909 239460 DEBUG nova.compute.manager [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] Preparing to wait for external event network-vif-plugged-01cb5c50-a219-4070-87d9-991256087701 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 29 12:34:22 np0005601226 nova_compute[239456]: 2026-01-29 17:34:22.909 239460 DEBUG oslo_concurrency.lockutils [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Acquiring lock "30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:34:22 np0005601226 nova_compute[239456]: 2026-01-29 17:34:22.909 239460 DEBUG oslo_concurrency.lockutils [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Lock "30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:34:22 np0005601226 nova_compute[239456]: 2026-01-29 17:34:22.910 239460 DEBUG oslo_concurrency.lockutils [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Lock "30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:34:22 np0005601226 nova_compute[239456]: 2026-01-29 17:34:22.910 239460 DEBUG nova.virt.libvirt.vif [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-29T17:34:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-987208948',display_name='tempest-TestEncryptedCinderVolumes-server-987208948',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-987208948',id=26,image_ref='71879218-5462-43bb-aba6-6319695b24fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEiWyOmSEmMI4pwLqOKedCJq8UXqZek7QcQm1YGLuVaaKr+u7Y0eccysxWi4eVTnXO2KEU6T10OE9i6oP930f8wEjBWPLBpPePOuA4ghFCWhdIhwCWA42zHpIxVU2Gg7DQ==',key_name='tempest-keypair-186094087',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9c3315c8b4c543a38f07ec0c509f03c1',ramdisk_id='',reservation_id='r-uva102pp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='71879218-5462-43bb-aba6-6319695b24fd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-595928636',owner_user_name='tempest-TestEncryptedCinderVolumes-595928636-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-29T17:34:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='90bbb3ba09534f74aedaab7650ed5ba4',uuid=30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "01cb5c50-a219-4070-87d9-991256087701", "address": "fa:16:3e:f6:b7:e6", "network": {"id": "9275d605-e314-4c83-a4e8-f4ba085f6358", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1160921638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c3315c8b4c543a38f07ec0c509f03c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01cb5c50-a2", "ovs_interfaceid": "01cb5c50-a219-4070-87d9-991256087701", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 29 12:34:22 np0005601226 nova_compute[239456]: 2026-01-29 17:34:22.911 239460 DEBUG nova.network.os_vif_util [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Converting VIF {"id": "01cb5c50-a219-4070-87d9-991256087701", "address": "fa:16:3e:f6:b7:e6", "network": {"id": "9275d605-e314-4c83-a4e8-f4ba085f6358", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1160921638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c3315c8b4c543a38f07ec0c509f03c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01cb5c50-a2", "ovs_interfaceid": "01cb5c50-a219-4070-87d9-991256087701", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 29 12:34:22 np0005601226 nova_compute[239456]: 2026-01-29 17:34:22.911 239460 DEBUG nova.network.os_vif_util [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f6:b7:e6,bridge_name='br-int',has_traffic_filtering=True,id=01cb5c50-a219-4070-87d9-991256087701,network=Network(9275d605-e314-4c83-a4e8-f4ba085f6358),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01cb5c50-a2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 29 12:34:22 np0005601226 nova_compute[239456]: 2026-01-29 17:34:22.912 239460 DEBUG os_vif [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:b7:e6,bridge_name='br-int',has_traffic_filtering=True,id=01cb5c50-a219-4070-87d9-991256087701,network=Network(9275d605-e314-4c83-a4e8-f4ba085f6358),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01cb5c50-a2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 29 12:34:22 np0005601226 nova_compute[239456]: 2026-01-29 17:34:22.912 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:34:22 np0005601226 nova_compute[239456]: 2026-01-29 17:34:22.913 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:34:22 np0005601226 nova_compute[239456]: 2026-01-29 17:34:22.913 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 29 12:34:22 np0005601226 nova_compute[239456]: 2026-01-29 17:34:22.916 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:34:22 np0005601226 nova_compute[239456]: 2026-01-29 17:34:22.916 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap01cb5c50-a2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:34:22 np0005601226 nova_compute[239456]: 2026-01-29 17:34:22.917 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap01cb5c50-a2, col_values=(('external_ids', {'iface-id': '01cb5c50-a219-4070-87d9-991256087701', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f6:b7:e6', 'vm-uuid': '30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:34:22 np0005601226 nova_compute[239456]: 2026-01-29 17:34:22.918 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:34:22 np0005601226 NetworkManager[49020]: <info>  [1769708062.9193] manager: (tap01cb5c50-a2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/128)
Jan 29 12:34:22 np0005601226 nova_compute[239456]: 2026-01-29 17:34:22.921 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 29 12:34:22 np0005601226 nova_compute[239456]: 2026-01-29 17:34:22.923 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:34:22 np0005601226 nova_compute[239456]: 2026-01-29 17:34:22.924 239460 INFO os_vif [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:b7:e6,bridge_name='br-int',has_traffic_filtering=True,id=01cb5c50-a219-4070-87d9-991256087701,network=Network(9275d605-e314-4c83-a4e8-f4ba085f6358),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01cb5c50-a2')#033[00m
Jan 29 12:34:22 np0005601226 nova_compute[239456]: 2026-01-29 17:34:22.972 239460 DEBUG nova.virt.libvirt.driver [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:34:22 np0005601226 nova_compute[239456]: 2026-01-29 17:34:22.973 239460 DEBUG nova.virt.libvirt.driver [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:34:22 np0005601226 nova_compute[239456]: 2026-01-29 17:34:22.973 239460 DEBUG nova.virt.libvirt.driver [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] No VIF found with MAC fa:16:3e:f6:b7:e6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 29 12:34:22 np0005601226 nova_compute[239456]: 2026-01-29 17:34:22.973 239460 INFO nova.virt.libvirt.driver [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] Using config drive#033[00m
Jan 29 12:34:23 np0005601226 nova_compute[239456]: 2026-01-29 17:34:22.998 239460 DEBUG nova.storage.rbd_utils [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] rbd image 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:34:23 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 12:34:23 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:34:23 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 12:34:23 np0005601226 podman[272388]: 2026-01-29 17:34:23.295444756 +0000 UTC m=+0.063527255 container create c6aa0037a6c3243f6c0dcf6c3f0502db436a80df344c012f3a846bb0fd055b49 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_clarke, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 29 12:34:23 np0005601226 systemd[1]: Started libpod-conmon-c6aa0037a6c3243f6c0dcf6c3f0502db436a80df344c012f3a846bb0fd055b49.scope.
Jan 29 12:34:23 np0005601226 podman[272388]: 2026-01-29 17:34:23.272079553 +0000 UTC m=+0.040162142 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:34:23 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:34:23 np0005601226 podman[272388]: 2026-01-29 17:34:23.39064125 +0000 UTC m=+0.158723849 container init c6aa0037a6c3243f6c0dcf6c3f0502db436a80df344c012f3a846bb0fd055b49 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 12:34:23 np0005601226 podman[272388]: 2026-01-29 17:34:23.399770028 +0000 UTC m=+0.167852537 container start c6aa0037a6c3243f6c0dcf6c3f0502db436a80df344c012f3a846bb0fd055b49 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_clarke, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 29 12:34:23 np0005601226 podman[272388]: 2026-01-29 17:34:23.403025486 +0000 UTC m=+0.171108055 container attach c6aa0037a6c3243f6c0dcf6c3f0502db436a80df344c012f3a846bb0fd055b49 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_clarke, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:34:23 np0005601226 keen_clarke[272404]: 167 167
Jan 29 12:34:23 np0005601226 systemd[1]: libpod-c6aa0037a6c3243f6c0dcf6c3f0502db436a80df344c012f3a846bb0fd055b49.scope: Deactivated successfully.
Jan 29 12:34:23 np0005601226 podman[272388]: 2026-01-29 17:34:23.407338533 +0000 UTC m=+0.175421032 container died c6aa0037a6c3243f6c0dcf6c3f0502db436a80df344c012f3a846bb0fd055b49 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_clarke, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 29 12:34:23 np0005601226 systemd[1]: var-lib-containers-storage-overlay-140015f86a0e1cc933d12541ab5ecb7fd9b7d3cd90743f59fffbcb41e0c0b320-merged.mount: Deactivated successfully.
Jan 29 12:34:23 np0005601226 podman[272388]: 2026-01-29 17:34:23.469353776 +0000 UTC m=+0.237436295 container remove c6aa0037a6c3243f6c0dcf6c3f0502db436a80df344c012f3a846bb0fd055b49 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_clarke, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:34:23 np0005601226 systemd[1]: libpod-conmon-c6aa0037a6c3243f6c0dcf6c3f0502db436a80df344c012f3a846bb0fd055b49.scope: Deactivated successfully.
Jan 29 12:34:23 np0005601226 podman[272429]: 2026-01-29 17:34:23.642619938 +0000 UTC m=+0.058738135 container create 0e4c9bd2bef5b93f7451fa16046ca4922a2f60c949fec1072f1f118bae51a520 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_galois, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 29 12:34:23 np0005601226 systemd[1]: Started libpod-conmon-0e4c9bd2bef5b93f7451fa16046ca4922a2f60c949fec1072f1f118bae51a520.scope.
Jan 29 12:34:23 np0005601226 podman[272429]: 2026-01-29 17:34:23.612478841 +0000 UTC m=+0.028597098 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:34:23 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:34:23 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df23a1af3e691412a9cde4cbb52a4c9db61f886e2dffb76b28d312893d3e809f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:34:23 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df23a1af3e691412a9cde4cbb52a4c9db61f886e2dffb76b28d312893d3e809f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:34:23 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df23a1af3e691412a9cde4cbb52a4c9db61f886e2dffb76b28d312893d3e809f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:34:23 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df23a1af3e691412a9cde4cbb52a4c9db61f886e2dffb76b28d312893d3e809f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:34:23 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df23a1af3e691412a9cde4cbb52a4c9db61f886e2dffb76b28d312893d3e809f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 12:34:23 np0005601226 podman[272429]: 2026-01-29 17:34:23.748916803 +0000 UTC m=+0.165034990 container init 0e4c9bd2bef5b93f7451fa16046ca4922a2f60c949fec1072f1f118bae51a520 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_galois, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 12:34:23 np0005601226 podman[272429]: 2026-01-29 17:34:23.763792487 +0000 UTC m=+0.179910654 container start 0e4c9bd2bef5b93f7451fa16046ca4922a2f60c949fec1072f1f118bae51a520 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:34:23 np0005601226 podman[272429]: 2026-01-29 17:34:23.767144597 +0000 UTC m=+0.183262834 container attach 0e4c9bd2bef5b93f7451fa16046ca4922a2f60c949fec1072f1f118bae51a520 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:34:24 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1782: 305 pgs: 305 active+clean; 122 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 1.8 MiB/s wr, 75 op/s
Jan 29 12:34:24 np0005601226 nova_compute[239456]: 2026-01-29 17:34:24.151 239460 INFO nova.virt.libvirt.driver [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] Creating config drive at /var/lib/nova/instances/30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7/disk.config#033[00m
Jan 29 12:34:24 np0005601226 nova_compute[239456]: 2026-01-29 17:34:24.163 239460 DEBUG oslo_concurrency.processutils [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvual35br execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:34:24 np0005601226 heuristic_galois[272445]: --> passed data devices: 0 physical, 3 LVM
Jan 29 12:34:24 np0005601226 heuristic_galois[272445]: --> All data devices are unavailable
Jan 29 12:34:24 np0005601226 systemd[1]: libpod-0e4c9bd2bef5b93f7451fa16046ca4922a2f60c949fec1072f1f118bae51a520.scope: Deactivated successfully.
Jan 29 12:34:24 np0005601226 podman[272429]: 2026-01-29 17:34:24.264165336 +0000 UTC m=+0.680283483 container died 0e4c9bd2bef5b93f7451fa16046ca4922a2f60c949fec1072f1f118bae51a520 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_galois, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:34:24 np0005601226 systemd[1]: var-lib-containers-storage-overlay-df23a1af3e691412a9cde4cbb52a4c9db61f886e2dffb76b28d312893d3e809f-merged.mount: Deactivated successfully.
Jan 29 12:34:24 np0005601226 nova_compute[239456]: 2026-01-29 17:34:24.304 239460 DEBUG oslo_concurrency.processutils [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvual35br" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:34:24 np0005601226 podman[272429]: 2026-01-29 17:34:24.312165608 +0000 UTC m=+0.728283805 container remove 0e4c9bd2bef5b93f7451fa16046ca4922a2f60c949fec1072f1f118bae51a520 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 29 12:34:24 np0005601226 systemd[1]: libpod-conmon-0e4c9bd2bef5b93f7451fa16046ca4922a2f60c949fec1072f1f118bae51a520.scope: Deactivated successfully.
Jan 29 12:34:24 np0005601226 nova_compute[239456]: 2026-01-29 17:34:24.335 239460 DEBUG nova.storage.rbd_utils [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] rbd image 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:34:24 np0005601226 nova_compute[239456]: 2026-01-29 17:34:24.340 239460 DEBUG oslo_concurrency.processutils [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7/disk.config 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:34:24 np0005601226 nova_compute[239456]: 2026-01-29 17:34:24.530 239460 DEBUG oslo_concurrency.processutils [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7/disk.config 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.190s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:34:24 np0005601226 nova_compute[239456]: 2026-01-29 17:34:24.532 239460 INFO nova.virt.libvirt.driver [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] Deleting local config drive /var/lib/nova/instances/30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7/disk.config because it was imported into RBD.#033[00m
Jan 29 12:34:24 np0005601226 kernel: tap01cb5c50-a2: entered promiscuous mode
Jan 29 12:34:24 np0005601226 NetworkManager[49020]: <info>  [1769708064.5755] manager: (tap01cb5c50-a2): new Tun device (/org/freedesktop/NetworkManager/Devices/129)
Jan 29 12:34:24 np0005601226 nova_compute[239456]: 2026-01-29 17:34:24.577 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:34:24 np0005601226 ovn_controller[145556]: 2026-01-29T17:34:24Z|00240|binding|INFO|Claiming lport 01cb5c50-a219-4070-87d9-991256087701 for this chassis.
Jan 29 12:34:24 np0005601226 ovn_controller[145556]: 2026-01-29T17:34:24Z|00241|binding|INFO|01cb5c50-a219-4070-87d9-991256087701: Claiming fa:16:3e:f6:b7:e6 10.100.0.7
Jan 29 12:34:24 np0005601226 nova_compute[239456]: 2026-01-29 17:34:24.588 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:34:24 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:34:24.593 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:b7:e6 10.100.0.7'], port_security=['fa:16:3e:f6:b7:e6 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9275d605-e314-4c83-a4e8-f4ba085f6358', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9c3315c8b4c543a38f07ec0c509f03c1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0b979b40-7ceb-4e92-9df1-dc3b0e6034d2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d2a7d5cc-cff2-487b-9e34-0c3106da1b90, chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], logical_port=01cb5c50-a219-4070-87d9-991256087701) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:34:24 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:34:24.596 155625 INFO neutron.agent.ovn.metadata.agent [-] Port 01cb5c50-a219-4070-87d9-991256087701 in datapath 9275d605-e314-4c83-a4e8-f4ba085f6358 bound to our chassis#033[00m
Jan 29 12:34:24 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:34:24.598 155625 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9275d605-e314-4c83-a4e8-f4ba085f6358#033[00m
Jan 29 12:34:24 np0005601226 systemd-machined[207561]: New machine qemu-26-instance-0000001a.
Jan 29 12:34:24 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:34:24.608 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[4124a29c-2b3f-4cb5-98e5-b0f4be779f68]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:34:24 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:34:24.609 155625 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9275d605-e1 in ovnmeta-9275d605-e314-4c83-a4e8-f4ba085f6358 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 29 12:34:24 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:34:24.612 246354 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9275d605-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 29 12:34:24 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:34:24.612 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[b14bb20a-5103-439c-b241-4a00ff0b1703]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:34:24 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:34:24.612 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[952f0dd2-3342-4c71-b6ab-ad779134c4e0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:34:24 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:34:24.625 156164 DEBUG oslo.privsep.daemon [-] privsep: reply[ad2c8b98-9275-4bb3-9a75-460957c1444d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:34:24 np0005601226 systemd[1]: Started Virtual Machine qemu-26-instance-0000001a.
Jan 29 12:34:24 np0005601226 ovn_controller[145556]: 2026-01-29T17:34:24Z|00242|binding|INFO|Setting lport 01cb5c50-a219-4070-87d9-991256087701 ovn-installed in OVS
Jan 29 12:34:24 np0005601226 ovn_controller[145556]: 2026-01-29T17:34:24Z|00243|binding|INFO|Setting lport 01cb5c50-a219-4070-87d9-991256087701 up in Southbound
Jan 29 12:34:24 np0005601226 nova_compute[239456]: 2026-01-29 17:34:24.633 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:34:24 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:34:24.642 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[e6b4e54a-d372-4621-a6a7-656c98217bff]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:34:24 np0005601226 systemd-udevd[272585]: Network interface NamePolicy= disabled on kernel command line.
Jan 29 12:34:24 np0005601226 NetworkManager[49020]: <info>  [1769708064.6688] device (tap01cb5c50-a2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 29 12:34:24 np0005601226 NetworkManager[49020]: <info>  [1769708064.6698] device (tap01cb5c50-a2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 29 12:34:24 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:34:24.680 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[c21d41fb-e2f1-4230-8cee-4ccddee8fb31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:34:24 np0005601226 systemd-udevd[272590]: Network interface NamePolicy= disabled on kernel command line.
Jan 29 12:34:24 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:34:24.690 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[53d28789-c35b-4d73-a8aa-62a3538d8f6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:34:24 np0005601226 NetworkManager[49020]: <info>  [1769708064.6917] manager: (tap9275d605-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/130)
Jan 29 12:34:24 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:34:24.723 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[2dc97109-612a-4ada-a650-438d9cbe1bc9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:34:24 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:34:24.726 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[8e9955d6-9105-4a86-a2c8-ab8d099289b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:34:24 np0005601226 NetworkManager[49020]: <info>  [1769708064.7514] device (tap9275d605-e0): carrier: link connected
Jan 29 12:34:24 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:34:24.754 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[df8893e1-e4fb-4fc3-95ff-63968597bdc2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:34:24 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:34:24.767 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[170be68a-4f1e-4cf7-adc8-2867ed82e620]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9275d605-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:68:a6:35'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 82], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 541825, 'reachable_time': 21755, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272615, 'error': None, 'target': 'ovnmeta-9275d605-e314-4c83-a4e8-f4ba085f6358', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:34:24 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:34:24.778 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[25dc9d69-8ac7-41c8-9fd5-20b4b5b51801]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe68:a635'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 541825, 'tstamp': 541825}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 272621, 'error': None, 'target': 'ovnmeta-9275d605-e314-4c83-a4e8-f4ba085f6358', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:34:24 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:34:24.789 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[5cb0ce62-84b5-420e-a3b9-e10906846541]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9275d605-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:68:a6:35'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 82], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 541825, 'reachable_time': 21755, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 272622, 'error': None, 'target': 'ovnmeta-9275d605-e314-4c83-a4e8-f4ba085f6358', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:34:24 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:34:24.810 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[95e19589-06c4-4579-a198-da758f47c49c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:34:24 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:34:24.844 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[8b4a995f-7f60-4696-9da6-79bad94966aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:34:24 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:34:24.845 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9275d605-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:34:24 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:34:24.846 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 29 12:34:24 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:34:24.846 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9275d605-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:34:24 np0005601226 kernel: tap9275d605-e0: entered promiscuous mode
Jan 29 12:34:24 np0005601226 nova_compute[239456]: 2026-01-29 17:34:24.848 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:34:24 np0005601226 NetworkManager[49020]: <info>  [1769708064.8498] manager: (tap9275d605-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/131)
Jan 29 12:34:24 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:34:24.853 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9275d605-e0, col_values=(('external_ids', {'iface-id': 'e64dae33-380b-46eb-9272-7f8c7bc07367'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:34:24 np0005601226 ovn_controller[145556]: 2026-01-29T17:34:24Z|00244|binding|INFO|Releasing lport e64dae33-380b-46eb-9272-7f8c7bc07367 from this chassis (sb_readonly=0)
Jan 29 12:34:24 np0005601226 nova_compute[239456]: 2026-01-29 17:34:24.855 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:34:24 np0005601226 nova_compute[239456]: 2026-01-29 17:34:24.859 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:34:24 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:34:24.861 155625 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9275d605-e314-4c83-a4e8-f4ba085f6358.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9275d605-e314-4c83-a4e8-f4ba085f6358.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 29 12:34:24 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:34:24.862 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[50256371-5c7f-42d6-a067-83c386b356d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:34:24 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:34:24.863 155625 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 29 12:34:24 np0005601226 ovn_metadata_agent[155620]: global
Jan 29 12:34:24 np0005601226 ovn_metadata_agent[155620]:    log         /dev/log local0 debug
Jan 29 12:34:24 np0005601226 ovn_metadata_agent[155620]:    log-tag     haproxy-metadata-proxy-9275d605-e314-4c83-a4e8-f4ba085f6358
Jan 29 12:34:24 np0005601226 ovn_metadata_agent[155620]:    user        root
Jan 29 12:34:24 np0005601226 ovn_metadata_agent[155620]:    group       root
Jan 29 12:34:24 np0005601226 ovn_metadata_agent[155620]:    maxconn     1024
Jan 29 12:34:24 np0005601226 ovn_metadata_agent[155620]:    pidfile     /var/lib/neutron/external/pids/9275d605-e314-4c83-a4e8-f4ba085f6358.pid.haproxy
Jan 29 12:34:24 np0005601226 ovn_metadata_agent[155620]:    daemon
Jan 29 12:34:24 np0005601226 ovn_metadata_agent[155620]: 
Jan 29 12:34:24 np0005601226 ovn_metadata_agent[155620]: defaults
Jan 29 12:34:24 np0005601226 ovn_metadata_agent[155620]:    log global
Jan 29 12:34:24 np0005601226 ovn_metadata_agent[155620]:    mode http
Jan 29 12:34:24 np0005601226 ovn_metadata_agent[155620]:    option httplog
Jan 29 12:34:24 np0005601226 ovn_metadata_agent[155620]:    option dontlognull
Jan 29 12:34:24 np0005601226 ovn_metadata_agent[155620]:    option http-server-close
Jan 29 12:34:24 np0005601226 ovn_metadata_agent[155620]:    option forwardfor
Jan 29 12:34:24 np0005601226 ovn_metadata_agent[155620]:    retries                 3
Jan 29 12:34:24 np0005601226 ovn_metadata_agent[155620]:    timeout http-request    30s
Jan 29 12:34:24 np0005601226 ovn_metadata_agent[155620]:    timeout connect         30s
Jan 29 12:34:24 np0005601226 ovn_metadata_agent[155620]:    timeout client          32s
Jan 29 12:34:24 np0005601226 ovn_metadata_agent[155620]:    timeout server          32s
Jan 29 12:34:24 np0005601226 ovn_metadata_agent[155620]:    timeout http-keep-alive 30s
Jan 29 12:34:24 np0005601226 ovn_metadata_agent[155620]: 
Jan 29 12:34:24 np0005601226 ovn_metadata_agent[155620]: 
Jan 29 12:34:24 np0005601226 ovn_metadata_agent[155620]: listen listener
Jan 29 12:34:24 np0005601226 ovn_metadata_agent[155620]:    bind 169.254.169.254:80
Jan 29 12:34:24 np0005601226 ovn_metadata_agent[155620]:    server metadata /var/lib/neutron/metadata_proxy
Jan 29 12:34:24 np0005601226 ovn_metadata_agent[155620]:    http-request add-header X-OVN-Network-ID 9275d605-e314-4c83-a4e8-f4ba085f6358
Jan 29 12:34:24 np0005601226 ovn_metadata_agent[155620]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 29 12:34:24 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:34:24.864 155625 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9275d605-e314-4c83-a4e8-f4ba085f6358', 'env', 'PROCESS_TAG=haproxy-9275d605-e314-4c83-a4e8-f4ba085f6358', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9275d605-e314-4c83-a4e8-f4ba085f6358.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 29 12:34:24 np0005601226 podman[272629]: 2026-01-29 17:34:24.867031367 +0000 UTC m=+0.058189221 container create 9ab1fb7314beb3d89f68268211d877d452e74ce045ae88dee38b1c023e7993f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_johnson, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:34:24 np0005601226 podman[272629]: 2026-01-29 17:34:24.825917921 +0000 UTC m=+0.017075785 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:34:24 np0005601226 systemd[1]: Started libpod-conmon-9ab1fb7314beb3d89f68268211d877d452e74ce045ae88dee38b1c023e7993f3.scope.
Jan 29 12:34:24 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:34:24 np0005601226 podman[272629]: 2026-01-29 17:34:24.977272838 +0000 UTC m=+0.168430792 container init 9ab1fb7314beb3d89f68268211d877d452e74ce045ae88dee38b1c023e7993f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_johnson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:34:24 np0005601226 podman[272629]: 2026-01-29 17:34:24.987930008 +0000 UTC m=+0.179087912 container start 9ab1fb7314beb3d89f68268211d877d452e74ce045ae88dee38b1c023e7993f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_johnson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:34:24 np0005601226 jovial_johnson[272650]: 167 167
Jan 29 12:34:24 np0005601226 systemd[1]: libpod-9ab1fb7314beb3d89f68268211d877d452e74ce045ae88dee38b1c023e7993f3.scope: Deactivated successfully.
Jan 29 12:34:24 np0005601226 conmon[272650]: conmon 9ab1fb7314beb3d89f68 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9ab1fb7314beb3d89f68268211d877d452e74ce045ae88dee38b1c023e7993f3.scope/container/memory.events
Jan 29 12:34:24 np0005601226 podman[272629]: 2026-01-29 17:34:24.997552489 +0000 UTC m=+0.188710353 container attach 9ab1fb7314beb3d89f68268211d877d452e74ce045ae88dee38b1c023e7993f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_johnson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 29 12:34:24 np0005601226 podman[272629]: 2026-01-29 17:34:24.998859975 +0000 UTC m=+0.190017839 container died 9ab1fb7314beb3d89f68268211d877d452e74ce045ae88dee38b1c023e7993f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_johnson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 29 12:34:25 np0005601226 systemd[1]: var-lib-containers-storage-overlay-7f9bde90630e229d5c1fb5634cf4c80bb286f7641364236155863f993410d60e-merged.mount: Deactivated successfully.
Jan 29 12:34:25 np0005601226 podman[272629]: 2026-01-29 17:34:25.065573355 +0000 UTC m=+0.256731269 container remove 9ab1fb7314beb3d89f68268211d877d452e74ce045ae88dee38b1c023e7993f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_johnson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:34:25 np0005601226 systemd[1]: libpod-conmon-9ab1fb7314beb3d89f68268211d877d452e74ce045ae88dee38b1c023e7993f3.scope: Deactivated successfully.
Jan 29 12:34:25 np0005601226 nova_compute[239456]: 2026-01-29 17:34:25.156 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769708065.1554048, 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:34:25 np0005601226 nova_compute[239456]: 2026-01-29 17:34:25.156 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] VM Started (Lifecycle Event)#033[00m
Jan 29 12:34:25 np0005601226 nova_compute[239456]: 2026-01-29 17:34:25.179 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:34:25 np0005601226 nova_compute[239456]: 2026-01-29 17:34:25.182 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769708065.1601868, 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:34:25 np0005601226 nova_compute[239456]: 2026-01-29 17:34:25.182 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] VM Paused (Lifecycle Event)#033[00m
Jan 29 12:34:25 np0005601226 nova_compute[239456]: 2026-01-29 17:34:25.201 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:34:25 np0005601226 nova_compute[239456]: 2026-01-29 17:34:25.204 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 29 12:34:25 np0005601226 podman[272735]: 2026-01-29 17:34:25.213701505 +0000 UTC m=+0.057257145 container create c054073f5718ddd5c5fa5bc4be7df52c934d12bfa3772876594457686e53993f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9275d605-e314-4c83-a4e8-f4ba085f6358, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 29 12:34:25 np0005601226 nova_compute[239456]: 2026-01-29 17:34:25.228 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 29 12:34:25 np0005601226 podman[272746]: 2026-01-29 17:34:25.242894627 +0000 UTC m=+0.068493600 container create c2aad5b8f8b4ee6316d823e2c09805756ed2243ed7829f409a5a3d2e5421e75a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_saha, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:34:25 np0005601226 systemd[1]: Started libpod-conmon-c054073f5718ddd5c5fa5bc4be7df52c934d12bfa3772876594457686e53993f.scope.
Jan 29 12:34:25 np0005601226 podman[272735]: 2026-01-29 17:34:25.176952918 +0000 UTC m=+0.020508578 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 29 12:34:25 np0005601226 systemd[1]: Started libpod-conmon-c2aad5b8f8b4ee6316d823e2c09805756ed2243ed7829f409a5a3d2e5421e75a.scope.
Jan 29 12:34:25 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:34:25 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/584787cdadd6b1288198af2294717317ba304b7c553dfbe5a65acc75536e999d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 29 12:34:25 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:34:25 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7596a0bbd01ea84d238a69671753dc945cb8014ce21abefd0699adc01e3d75eb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:34:25 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7596a0bbd01ea84d238a69671753dc945cb8014ce21abefd0699adc01e3d75eb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:34:25 np0005601226 podman[272746]: 2026-01-29 17:34:25.204234858 +0000 UTC m=+0.029833841 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:34:25 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7596a0bbd01ea84d238a69671753dc945cb8014ce21abefd0699adc01e3d75eb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:34:25 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7596a0bbd01ea84d238a69671753dc945cb8014ce21abefd0699adc01e3d75eb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:34:25 np0005601226 podman[272735]: 2026-01-29 17:34:25.306513894 +0000 UTC m=+0.150069584 container init c054073f5718ddd5c5fa5bc4be7df52c934d12bfa3772876594457686e53993f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9275d605-e314-4c83-a4e8-f4ba085f6358, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 29 12:34:25 np0005601226 podman[272746]: 2026-01-29 17:34:25.323664509 +0000 UTC m=+0.149263572 container init c2aad5b8f8b4ee6316d823e2c09805756ed2243ed7829f409a5a3d2e5421e75a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_saha, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 29 12:34:25 np0005601226 podman[272735]: 2026-01-29 17:34:25.325786086 +0000 UTC m=+0.169341736 container start c054073f5718ddd5c5fa5bc4be7df52c934d12bfa3772876594457686e53993f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9275d605-e314-4c83-a4e8-f4ba085f6358, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0)
Jan 29 12:34:25 np0005601226 podman[272746]: 2026-01-29 17:34:25.337393002 +0000 UTC m=+0.162992005 container start c2aad5b8f8b4ee6316d823e2c09805756ed2243ed7829f409a5a3d2e5421e75a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_saha, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 29 12:34:25 np0005601226 podman[272746]: 2026-01-29 17:34:25.342559132 +0000 UTC m=+0.168158135 container attach c2aad5b8f8b4ee6316d823e2c09805756ed2243ed7829f409a5a3d2e5421e75a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_saha, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 29 12:34:25 np0005601226 neutron-haproxy-ovnmeta-9275d605-e314-4c83-a4e8-f4ba085f6358[272769]: [NOTICE]   (272779) : New worker (272782) forked
Jan 29 12:34:25 np0005601226 neutron-haproxy-ovnmeta-9275d605-e314-4c83-a4e8-f4ba085f6358[272769]: [NOTICE]   (272779) : Loading success.
Jan 29 12:34:25 np0005601226 nova_compute[239456]: 2026-01-29 17:34:25.422 239460 DEBUG nova.compute.manager [req-45e320ab-3f3b-4d13-9b5e-ce42bc8c8f87 req-13c4fb8a-8a1c-44d0-bc1a-3a2391ab6106 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] Received event network-vif-plugged-01cb5c50-a219-4070-87d9-991256087701 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 29 12:34:25 np0005601226 nova_compute[239456]: 2026-01-29 17:34:25.423 239460 DEBUG oslo_concurrency.lockutils [req-45e320ab-3f3b-4d13-9b5e-ce42bc8c8f87 req-13c4fb8a-8a1c-44d0-bc1a-3a2391ab6106 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 12:34:25 np0005601226 nova_compute[239456]: 2026-01-29 17:34:25.423 239460 DEBUG oslo_concurrency.lockutils [req-45e320ab-3f3b-4d13-9b5e-ce42bc8c8f87 req-13c4fb8a-8a1c-44d0-bc1a-3a2391ab6106 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 12:34:25 np0005601226 nova_compute[239456]: 2026-01-29 17:34:25.424 239460 DEBUG oslo_concurrency.lockutils [req-45e320ab-3f3b-4d13-9b5e-ce42bc8c8f87 req-13c4fb8a-8a1c-44d0-bc1a-3a2391ab6106 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 12:34:25 np0005601226 nova_compute[239456]: 2026-01-29 17:34:25.424 239460 DEBUG nova.compute.manager [req-45e320ab-3f3b-4d13-9b5e-ce42bc8c8f87 req-13c4fb8a-8a1c-44d0-bc1a-3a2391ab6106 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] Processing event network-vif-plugged-01cb5c50-a219-4070-87d9-991256087701 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 29 12:34:25 np0005601226 nova_compute[239456]: 2026-01-29 17:34:25.425 239460 DEBUG nova.compute.manager [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 29 12:34:25 np0005601226 nova_compute[239456]: 2026-01-29 17:34:25.430 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769708065.4301312, 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 29 12:34:25 np0005601226 nova_compute[239456]: 2026-01-29 17:34:25.431 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] VM Resumed (Lifecycle Event)
Jan 29 12:34:25 np0005601226 nova_compute[239456]: 2026-01-29 17:34:25.433 239460 DEBUG nova.virt.libvirt.driver [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 29 12:34:25 np0005601226 nova_compute[239456]: 2026-01-29 17:34:25.439 239460 INFO nova.virt.libvirt.driver [-] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] Instance spawned successfully.
Jan 29 12:34:25 np0005601226 nova_compute[239456]: 2026-01-29 17:34:25.439 239460 DEBUG nova.virt.libvirt.driver [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 29 12:34:25 np0005601226 nova_compute[239456]: 2026-01-29 17:34:25.464 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 29 12:34:25 np0005601226 nova_compute[239456]: 2026-01-29 17:34:25.472 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 29 12:34:25 np0005601226 nova_compute[239456]: 2026-01-29 17:34:25.477 239460 DEBUG nova.virt.libvirt.driver [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 29 12:34:25 np0005601226 nova_compute[239456]: 2026-01-29 17:34:25.478 239460 DEBUG nova.virt.libvirt.driver [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 29 12:34:25 np0005601226 nova_compute[239456]: 2026-01-29 17:34:25.478 239460 DEBUG nova.virt.libvirt.driver [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 29 12:34:25 np0005601226 nova_compute[239456]: 2026-01-29 17:34:25.479 239460 DEBUG nova.virt.libvirt.driver [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 29 12:34:25 np0005601226 nova_compute[239456]: 2026-01-29 17:34:25.480 239460 DEBUG nova.virt.libvirt.driver [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 29 12:34:25 np0005601226 nova_compute[239456]: 2026-01-29 17:34:25.480 239460 DEBUG nova.virt.libvirt.driver [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 29 12:34:25 np0005601226 nova_compute[239456]: 2026-01-29 17:34:25.490 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 29 12:34:25 np0005601226 nova_compute[239456]: 2026-01-29 17:34:25.537 239460 INFO nova.compute.manager [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] Took 6.80 seconds to spawn the instance on the hypervisor.
Jan 29 12:34:25 np0005601226 nova_compute[239456]: 2026-01-29 17:34:25.537 239460 DEBUG nova.compute.manager [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 29 12:34:25 np0005601226 nova_compute[239456]: 2026-01-29 17:34:25.599 239460 INFO nova.compute.manager [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] Took 8.02 seconds to build instance.
Jan 29 12:34:25 np0005601226 nova_compute[239456]: 2026-01-29 17:34:25.619 239460 DEBUG oslo_concurrency.lockutils [None req-c8f9db56-e623-49d0-bf6f-05b3421eed33 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Lock "30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.127s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 12:34:25 np0005601226 focused_saha[272774]: {
Jan 29 12:34:25 np0005601226 focused_saha[272774]:    "0": [
Jan 29 12:34:25 np0005601226 focused_saha[272774]:        {
Jan 29 12:34:25 np0005601226 focused_saha[272774]:            "devices": [
Jan 29 12:34:25 np0005601226 focused_saha[272774]:                "/dev/loop3"
Jan 29 12:34:25 np0005601226 focused_saha[272774]:            ],
Jan 29 12:34:25 np0005601226 focused_saha[272774]:            "lv_name": "ceph_lv0",
Jan 29 12:34:25 np0005601226 focused_saha[272774]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:34:25 np0005601226 focused_saha[272774]:            "lv_size": "21470642176",
Jan 29 12:34:25 np0005601226 focused_saha[272774]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:34:25 np0005601226 focused_saha[272774]:            "lv_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 12:34:25 np0005601226 focused_saha[272774]:            "name": "ceph_lv0",
Jan 29 12:34:25 np0005601226 focused_saha[272774]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:34:25 np0005601226 focused_saha[272774]:            "tags": {
Jan 29 12:34:25 np0005601226 focused_saha[272774]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:34:25 np0005601226 focused_saha[272774]:                "ceph.block_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 12:34:25 np0005601226 focused_saha[272774]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:34:25 np0005601226 focused_saha[272774]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:34:25 np0005601226 focused_saha[272774]:                "ceph.cluster_name": "ceph",
Jan 29 12:34:25 np0005601226 focused_saha[272774]:                "ceph.crush_device_class": "",
Jan 29 12:34:25 np0005601226 focused_saha[272774]:                "ceph.encrypted": "0",
Jan 29 12:34:25 np0005601226 focused_saha[272774]:                "ceph.objectstore": "bluestore",
Jan 29 12:34:25 np0005601226 focused_saha[272774]:                "ceph.osd_fsid": "0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1",
Jan 29 12:34:25 np0005601226 focused_saha[272774]:                "ceph.osd_id": "0",
Jan 29 12:34:25 np0005601226 focused_saha[272774]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:34:25 np0005601226 focused_saha[272774]:                "ceph.type": "block",
Jan 29 12:34:25 np0005601226 focused_saha[272774]:                "ceph.vdo": "0",
Jan 29 12:34:25 np0005601226 focused_saha[272774]:                "ceph.with_tpm": "0"
Jan 29 12:34:25 np0005601226 focused_saha[272774]:            },
Jan 29 12:34:25 np0005601226 focused_saha[272774]:            "type": "block",
Jan 29 12:34:25 np0005601226 focused_saha[272774]:            "vg_name": "ceph_vg0"
Jan 29 12:34:25 np0005601226 focused_saha[272774]:        }
Jan 29 12:34:25 np0005601226 focused_saha[272774]:    ],
Jan 29 12:34:25 np0005601226 focused_saha[272774]:    "1": [
Jan 29 12:34:25 np0005601226 focused_saha[272774]:        {
Jan 29 12:34:25 np0005601226 focused_saha[272774]:            "devices": [
Jan 29 12:34:25 np0005601226 focused_saha[272774]:                "/dev/loop4"
Jan 29 12:34:25 np0005601226 focused_saha[272774]:            ],
Jan 29 12:34:25 np0005601226 focused_saha[272774]:            "lv_name": "ceph_lv1",
Jan 29 12:34:25 np0005601226 focused_saha[272774]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:34:25 np0005601226 focused_saha[272774]:            "lv_size": "21470642176",
Jan 29 12:34:25 np0005601226 focused_saha[272774]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b59b9ee3-7bef-4274-a8bf-0f9cce011ae7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:34:25 np0005601226 focused_saha[272774]:            "lv_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 12:34:25 np0005601226 focused_saha[272774]:            "name": "ceph_lv1",
Jan 29 12:34:25 np0005601226 focused_saha[272774]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:34:25 np0005601226 focused_saha[272774]:            "tags": {
Jan 29 12:34:25 np0005601226 focused_saha[272774]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:34:25 np0005601226 focused_saha[272774]:                "ceph.block_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 12:34:25 np0005601226 focused_saha[272774]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:34:25 np0005601226 focused_saha[272774]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:34:25 np0005601226 focused_saha[272774]:                "ceph.cluster_name": "ceph",
Jan 29 12:34:25 np0005601226 focused_saha[272774]:                "ceph.crush_device_class": "",
Jan 29 12:34:25 np0005601226 focused_saha[272774]:                "ceph.encrypted": "0",
Jan 29 12:34:25 np0005601226 focused_saha[272774]:                "ceph.objectstore": "bluestore",
Jan 29 12:34:25 np0005601226 focused_saha[272774]:                "ceph.osd_fsid": "b59b9ee3-7bef-4274-a8bf-0f9cce011ae7",
Jan 29 12:34:25 np0005601226 focused_saha[272774]:                "ceph.osd_id": "1",
Jan 29 12:34:25 np0005601226 focused_saha[272774]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:34:25 np0005601226 focused_saha[272774]:                "ceph.type": "block",
Jan 29 12:34:25 np0005601226 focused_saha[272774]:                "ceph.vdo": "0",
Jan 29 12:34:25 np0005601226 focused_saha[272774]:                "ceph.with_tpm": "0"
Jan 29 12:34:25 np0005601226 focused_saha[272774]:            },
Jan 29 12:34:25 np0005601226 focused_saha[272774]:            "type": "block",
Jan 29 12:34:25 np0005601226 focused_saha[272774]:            "vg_name": "ceph_vg1"
Jan 29 12:34:25 np0005601226 focused_saha[272774]:        }
Jan 29 12:34:25 np0005601226 focused_saha[272774]:    ],
Jan 29 12:34:25 np0005601226 focused_saha[272774]:    "2": [
Jan 29 12:34:25 np0005601226 focused_saha[272774]:        {
Jan 29 12:34:25 np0005601226 focused_saha[272774]:            "devices": [
Jan 29 12:34:25 np0005601226 focused_saha[272774]:                "/dev/loop5"
Jan 29 12:34:25 np0005601226 focused_saha[272774]:            ],
Jan 29 12:34:25 np0005601226 focused_saha[272774]:            "lv_name": "ceph_lv2",
Jan 29 12:34:25 np0005601226 focused_saha[272774]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:34:25 np0005601226 focused_saha[272774]:            "lv_size": "21470642176",
Jan 29 12:34:25 np0005601226 focused_saha[272774]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=791d3808-828d-4f85-a3de-28df49f6a6ef,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:34:25 np0005601226 focused_saha[272774]:            "lv_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 12:34:25 np0005601226 focused_saha[272774]:            "name": "ceph_lv2",
Jan 29 12:34:25 np0005601226 focused_saha[272774]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:34:25 np0005601226 focused_saha[272774]:            "tags": {
Jan 29 12:34:25 np0005601226 focused_saha[272774]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:34:25 np0005601226 focused_saha[272774]:                "ceph.block_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 12:34:25 np0005601226 focused_saha[272774]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:34:25 np0005601226 focused_saha[272774]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:34:25 np0005601226 focused_saha[272774]:                "ceph.cluster_name": "ceph",
Jan 29 12:34:25 np0005601226 focused_saha[272774]:                "ceph.crush_device_class": "",
Jan 29 12:34:25 np0005601226 focused_saha[272774]:                "ceph.encrypted": "0",
Jan 29 12:34:25 np0005601226 focused_saha[272774]:                "ceph.objectstore": "bluestore",
Jan 29 12:34:25 np0005601226 focused_saha[272774]:                "ceph.osd_fsid": "791d3808-828d-4f85-a3de-28df49f6a6ef",
Jan 29 12:34:25 np0005601226 focused_saha[272774]:                "ceph.osd_id": "2",
Jan 29 12:34:25 np0005601226 focused_saha[272774]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:34:25 np0005601226 focused_saha[272774]:                "ceph.type": "block",
Jan 29 12:34:25 np0005601226 focused_saha[272774]:                "ceph.vdo": "0",
Jan 29 12:34:25 np0005601226 focused_saha[272774]:                "ceph.with_tpm": "0"
Jan 29 12:34:25 np0005601226 focused_saha[272774]:            },
Jan 29 12:34:25 np0005601226 focused_saha[272774]:            "type": "block",
Jan 29 12:34:25 np0005601226 focused_saha[272774]:            "vg_name": "ceph_vg2"
Jan 29 12:34:25 np0005601226 focused_saha[272774]:        }
Jan 29 12:34:25 np0005601226 focused_saha[272774]:    ]
Jan 29 12:34:25 np0005601226 focused_saha[272774]: }
Jan 29 12:34:25 np0005601226 systemd[1]: libpod-c2aad5b8f8b4ee6316d823e2c09805756ed2243ed7829f409a5a3d2e5421e75a.scope: Deactivated successfully.
Jan 29 12:34:25 np0005601226 podman[272795]: 2026-01-29 17:34:25.725645588 +0000 UTC m=+0.042936296 container died c2aad5b8f8b4ee6316d823e2c09805756ed2243ed7829f409a5a3d2e5421e75a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_saha, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:34:25 np0005601226 systemd[1]: var-lib-containers-storage-overlay-7596a0bbd01ea84d238a69671753dc945cb8014ce21abefd0699adc01e3d75eb-merged.mount: Deactivated successfully.
Jan 29 12:34:25 np0005601226 podman[272795]: 2026-01-29 17:34:25.785682558 +0000 UTC m=+0.102973256 container remove c2aad5b8f8b4ee6316d823e2c09805756ed2243ed7829f409a5a3d2e5421e75a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 29 12:34:25 np0005601226 systemd[1]: libpod-conmon-c2aad5b8f8b4ee6316d823e2c09805756ed2243ed7829f409a5a3d2e5421e75a.scope: Deactivated successfully.
Jan 29 12:34:26 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1783: 305 pgs: 305 active+clean; 134 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 2.7 MiB/s wr, 117 op/s
Jan 29 12:34:26 np0005601226 podman[272873]: 2026-01-29 17:34:26.322805094 +0000 UTC m=+0.069392624 container create cb7d473a7ef337fe9bb2fede7696ab296acd5f87af024541d0110c576ea9748c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_liskov, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:34:26 np0005601226 systemd[1]: Started libpod-conmon-cb7d473a7ef337fe9bb2fede7696ab296acd5f87af024541d0110c576ea9748c.scope.
Jan 29 12:34:26 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:34:26 np0005601226 podman[272873]: 2026-01-29 17:34:26.301820214 +0000 UTC m=+0.048407764 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:34:26 np0005601226 podman[272873]: 2026-01-29 17:34:26.406825884 +0000 UTC m=+0.153413484 container init cb7d473a7ef337fe9bb2fede7696ab296acd5f87af024541d0110c576ea9748c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_liskov, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:34:26 np0005601226 podman[272873]: 2026-01-29 17:34:26.411506752 +0000 UTC m=+0.158094282 container start cb7d473a7ef337fe9bb2fede7696ab296acd5f87af024541d0110c576ea9748c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_liskov, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle)
Jan 29 12:34:26 np0005601226 condescending_liskov[272889]: 167 167
Jan 29 12:34:26 np0005601226 systemd[1]: libpod-cb7d473a7ef337fe9bb2fede7696ab296acd5f87af024541d0110c576ea9748c.scope: Deactivated successfully.
Jan 29 12:34:26 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e449 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:34:26 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e449 do_prune osdmap full prune enabled
Jan 29 12:34:26 np0005601226 podman[272873]: 2026-01-29 17:34:26.418000377 +0000 UTC m=+0.164587977 container attach cb7d473a7ef337fe9bb2fede7696ab296acd5f87af024541d0110c576ea9748c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_liskov, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 29 12:34:26 np0005601226 podman[272873]: 2026-01-29 17:34:26.418940263 +0000 UTC m=+0.165527833 container died cb7d473a7ef337fe9bb2fede7696ab296acd5f87af024541d0110c576ea9748c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_liskov, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 29 12:34:26 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e450 e450: 3 total, 3 up, 3 in
Jan 29 12:34:26 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e450: 3 total, 3 up, 3 in
Jan 29 12:34:26 np0005601226 systemd[1]: var-lib-containers-storage-overlay-403ca0932cf36df145965ed32fb69d6c1f6eb93901c682cd02a72cc6646e0642-merged.mount: Deactivated successfully.
Jan 29 12:34:26 np0005601226 podman[272873]: 2026-01-29 17:34:26.483111045 +0000 UTC m=+0.229698615 container remove cb7d473a7ef337fe9bb2fede7696ab296acd5f87af024541d0110c576ea9748c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_liskov, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:34:26 np0005601226 systemd[1]: libpod-conmon-cb7d473a7ef337fe9bb2fede7696ab296acd5f87af024541d0110c576ea9748c.scope: Deactivated successfully.
Jan 29 12:34:26 np0005601226 podman[272914]: 2026-01-29 17:34:26.643369233 +0000 UTC m=+0.050623314 container create 51bf0188d9052b2e22b71b650c269ebabb1abdfb786d620989c198156ed3b2bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_darwin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 29 12:34:26 np0005601226 systemd[1]: Started libpod-conmon-51bf0188d9052b2e22b71b650c269ebabb1abdfb786d620989c198156ed3b2bc.scope.
Jan 29 12:34:26 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:34:26 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84a0cbea5f1c2580a961d3c6be705602151e2551dba6aca76d84fc5f8e0d8b3f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:34:26 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84a0cbea5f1c2580a961d3c6be705602151e2551dba6aca76d84fc5f8e0d8b3f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:34:26 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84a0cbea5f1c2580a961d3c6be705602151e2551dba6aca76d84fc5f8e0d8b3f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:34:26 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84a0cbea5f1c2580a961d3c6be705602151e2551dba6aca76d84fc5f8e0d8b3f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:34:26 np0005601226 podman[272914]: 2026-01-29 17:34:26.618748296 +0000 UTC m=+0.026002417 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:34:26 np0005601226 podman[272914]: 2026-01-29 17:34:26.728836183 +0000 UTC m=+0.136090294 container init 51bf0188d9052b2e22b71b650c269ebabb1abdfb786d620989c198156ed3b2bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_darwin, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:34:26 np0005601226 podman[272914]: 2026-01-29 17:34:26.740132009 +0000 UTC m=+0.147386120 container start 51bf0188d9052b2e22b71b650c269ebabb1abdfb786d620989c198156ed3b2bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_darwin, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 29 12:34:26 np0005601226 podman[272914]: 2026-01-29 17:34:26.747012356 +0000 UTC m=+0.154266467 container attach 51bf0188d9052b2e22b71b650c269ebabb1abdfb786d620989c198156ed3b2bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_darwin, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:34:27 np0005601226 nova_compute[239456]: 2026-01-29 17:34:27.227 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:34:27 np0005601226 lvm[273007]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 12:34:27 np0005601226 lvm[273007]: VG ceph_vg0 finished
Jan 29 12:34:27 np0005601226 lvm[273009]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 12:34:27 np0005601226 lvm[273009]: VG ceph_vg1 finished
Jan 29 12:34:27 np0005601226 lvm[273010]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 12:34:27 np0005601226 lvm[273010]: VG ceph_vg2 finished
Jan 29 12:34:27 np0005601226 nova_compute[239456]: 2026-01-29 17:34:27.519 239460 DEBUG nova.compute.manager [req-3ae724f3-0d33-42ff-b314-abc7b9888004 req-bf92caea-f708-49a1-88ea-a510d4f26f1c 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] Received event network-vif-plugged-01cb5c50-a219-4070-87d9-991256087701 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:34:27 np0005601226 nova_compute[239456]: 2026-01-29 17:34:27.520 239460 DEBUG oslo_concurrency.lockutils [req-3ae724f3-0d33-42ff-b314-abc7b9888004 req-bf92caea-f708-49a1-88ea-a510d4f26f1c 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:34:27 np0005601226 nova_compute[239456]: 2026-01-29 17:34:27.521 239460 DEBUG oslo_concurrency.lockutils [req-3ae724f3-0d33-42ff-b314-abc7b9888004 req-bf92caea-f708-49a1-88ea-a510d4f26f1c 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:34:27 np0005601226 nova_compute[239456]: 2026-01-29 17:34:27.522 239460 DEBUG oslo_concurrency.lockutils [req-3ae724f3-0d33-42ff-b314-abc7b9888004 req-bf92caea-f708-49a1-88ea-a510d4f26f1c 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:34:27 np0005601226 nova_compute[239456]: 2026-01-29 17:34:27.522 239460 DEBUG nova.compute.manager [req-3ae724f3-0d33-42ff-b314-abc7b9888004 req-bf92caea-f708-49a1-88ea-a510d4f26f1c 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] No waiting events found dispatching network-vif-plugged-01cb5c50-a219-4070-87d9-991256087701 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 29 12:34:27 np0005601226 nova_compute[239456]: 2026-01-29 17:34:27.522 239460 WARNING nova.compute.manager [req-3ae724f3-0d33-42ff-b314-abc7b9888004 req-bf92caea-f708-49a1-88ea-a510d4f26f1c 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] Received unexpected event network-vif-plugged-01cb5c50-a219-4070-87d9-991256087701 for instance with vm_state active and task_state None.#033[00m
Jan 29 12:34:27 np0005601226 laughing_darwin[272930]: {}
Jan 29 12:34:27 np0005601226 nova_compute[239456]: 2026-01-29 17:34:27.564 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:34:27 np0005601226 NetworkManager[49020]: <info>  [1769708067.5658] manager: (patch-br-int-to-provnet-87abb107-3ab5-4304-ac4c-e4e3c79e221f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/132)
Jan 29 12:34:27 np0005601226 NetworkManager[49020]: <info>  [1769708067.5675] manager: (patch-provnet-87abb107-3ab5-4304-ac4c-e4e3c79e221f-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/133)
Jan 29 12:34:27 np0005601226 systemd[1]: libpod-51bf0188d9052b2e22b71b650c269ebabb1abdfb786d620989c198156ed3b2bc.scope: Deactivated successfully.
Jan 29 12:34:27 np0005601226 systemd[1]: libpod-51bf0188d9052b2e22b71b650c269ebabb1abdfb786d620989c198156ed3b2bc.scope: Consumed 1.128s CPU time.
Jan 29 12:34:27 np0005601226 conmon[272930]: conmon 51bf0188d9052b2e22b7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-51bf0188d9052b2e22b71b650c269ebabb1abdfb786d620989c198156ed3b2bc.scope/container/memory.events
Jan 29 12:34:27 np0005601226 podman[272914]: 2026-01-29 17:34:27.574026471 +0000 UTC m=+0.981280612 container died 51bf0188d9052b2e22b71b650c269ebabb1abdfb786d620989c198156ed3b2bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_darwin, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 29 12:34:27 np0005601226 nova_compute[239456]: 2026-01-29 17:34:27.600 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:34:27 np0005601226 ovn_controller[145556]: 2026-01-29T17:34:27Z|00245|binding|INFO|Releasing lport e64dae33-380b-46eb-9272-7f8c7bc07367 from this chassis (sb_readonly=0)
Jan 29 12:34:27 np0005601226 systemd[1]: var-lib-containers-storage-overlay-84a0cbea5f1c2580a961d3c6be705602151e2551dba6aca76d84fc5f8e0d8b3f-merged.mount: Deactivated successfully.
Jan 29 12:34:27 np0005601226 nova_compute[239456]: 2026-01-29 17:34:27.614 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:34:27 np0005601226 podman[272914]: 2026-01-29 17:34:27.637259796 +0000 UTC m=+1.044513867 container remove 51bf0188d9052b2e22b71b650c269ebabb1abdfb786d620989c198156ed3b2bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 29 12:34:27 np0005601226 systemd[1]: libpod-conmon-51bf0188d9052b2e22b71b650c269ebabb1abdfb786d620989c198156ed3b2bc.scope: Deactivated successfully.
Jan 29 12:34:27 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 12:34:27 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:34:27 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 12:34:27 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:34:27 np0005601226 nova_compute[239456]: 2026-01-29 17:34:27.819 239460 DEBUG nova.compute.manager [req-47f73ee7-9239-4ec8-874d-173aaaf4d42f req-5e6bb118-b586-49d3-b071-05f356aeb97e 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] Received event network-changed-01cb5c50-a219-4070-87d9-991256087701 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:34:27 np0005601226 nova_compute[239456]: 2026-01-29 17:34:27.819 239460 DEBUG nova.compute.manager [req-47f73ee7-9239-4ec8-874d-173aaaf4d42f req-5e6bb118-b586-49d3-b071-05f356aeb97e 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] Refreshing instance network info cache due to event network-changed-01cb5c50-a219-4070-87d9-991256087701. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 29 12:34:27 np0005601226 nova_compute[239456]: 2026-01-29 17:34:27.819 239460 DEBUG oslo_concurrency.lockutils [req-47f73ee7-9239-4ec8-874d-173aaaf4d42f req-5e6bb118-b586-49d3-b071-05f356aeb97e 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "refresh_cache-30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:34:27 np0005601226 nova_compute[239456]: 2026-01-29 17:34:27.820 239460 DEBUG oslo_concurrency.lockutils [req-47f73ee7-9239-4ec8-874d-173aaaf4d42f req-5e6bb118-b586-49d3-b071-05f356aeb97e 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquired lock "refresh_cache-30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:34:27 np0005601226 nova_compute[239456]: 2026-01-29 17:34:27.820 239460 DEBUG nova.network.neutron [req-47f73ee7-9239-4ec8-874d-173aaaf4d42f req-5e6bb118-b586-49d3-b071-05f356aeb97e 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] Refreshing network info cache for port 01cb5c50-a219-4070-87d9-991256087701 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 29 12:34:27 np0005601226 nova_compute[239456]: 2026-01-29 17:34:27.919 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:34:28 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1785: 305 pgs: 305 active+clean; 134 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 81 KiB/s rd, 2.7 MiB/s wr, 115 op/s
Jan 29 12:34:28 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:34:28 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:34:28 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:34:28 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4164933204' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:34:29 np0005601226 nova_compute[239456]: 2026-01-29 17:34:29.109 239460 DEBUG nova.network.neutron [req-47f73ee7-9239-4ec8-874d-173aaaf4d42f req-5e6bb118-b586-49d3-b071-05f356aeb97e 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] Updated VIF entry in instance network info cache for port 01cb5c50-a219-4070-87d9-991256087701. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 29 12:34:29 np0005601226 nova_compute[239456]: 2026-01-29 17:34:29.110 239460 DEBUG nova.network.neutron [req-47f73ee7-9239-4ec8-874d-173aaaf4d42f req-5e6bb118-b586-49d3-b071-05f356aeb97e 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] Updating instance_info_cache with network_info: [{"id": "01cb5c50-a219-4070-87d9-991256087701", "address": "fa:16:3e:f6:b7:e6", "network": {"id": "9275d605-e314-4c83-a4e8-f4ba085f6358", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1160921638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c3315c8b4c543a38f07ec0c509f03c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01cb5c50-a2", "ovs_interfaceid": "01cb5c50-a219-4070-87d9-991256087701", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:34:29 np0005601226 nova_compute[239456]: 2026-01-29 17:34:29.140 239460 DEBUG oslo_concurrency.lockutils [req-47f73ee7-9239-4ec8-874d-173aaaf4d42f req-5e6bb118-b586-49d3-b071-05f356aeb97e 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Releasing lock "refresh_cache-30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:34:29 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e450 do_prune osdmap full prune enabled
Jan 29 12:34:29 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e451 e451: 3 total, 3 up, 3 in
Jan 29 12:34:29 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e451: 3 total, 3 up, 3 in
Jan 29 12:34:30 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1787: 305 pgs: 305 active+clean; 134 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.7 MiB/s wr, 134 op/s
Jan 29 12:34:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:34:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2754425276' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:34:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:34:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2754425276' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:34:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e451 do_prune osdmap full prune enabled
Jan 29 12:34:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e452 e452: 3 total, 3 up, 3 in
Jan 29 12:34:30 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e452: 3 total, 3 up, 3 in
Jan 29 12:34:31 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e452 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:34:32 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1789: 305 pgs: 305 active+clean; 134 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 29 KiB/s wr, 153 op/s
Jan 29 12:34:32 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:34:32 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4134350009' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:34:32 np0005601226 nova_compute[239456]: 2026-01-29 17:34:32.267 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:34:32 np0005601226 nova_compute[239456]: 2026-01-29 17:34:32.604 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:34:32 np0005601226 nova_compute[239456]: 2026-01-29 17:34:32.605 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 29 12:34:32 np0005601226 nova_compute[239456]: 2026-01-29 17:34:32.921 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:34:32 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e452 do_prune osdmap full prune enabled
Jan 29 12:34:32 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e453 e453: 3 total, 3 up, 3 in
Jan 29 12:34:32 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e453: 3 total, 3 up, 3 in
Jan 29 12:34:33 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e453 do_prune osdmap full prune enabled
Jan 29 12:34:33 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e454 e454: 3 total, 3 up, 3 in
Jan 29 12:34:33 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e454: 3 total, 3 up, 3 in
Jan 29 12:34:34 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1792: 305 pgs: 305 active+clean; 134 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 KiB/s wr, 93 op/s
Jan 29 12:34:34 np0005601226 podman[273051]: 2026-01-29 17:34:34.934732848 +0000 UTC m=+0.100769075 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 29 12:34:34 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e454 do_prune osdmap full prune enabled
Jan 29 12:34:34 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e455 e455: 3 total, 3 up, 3 in
Jan 29 12:34:34 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e455: 3 total, 3 up, 3 in
Jan 29 12:34:34 np0005601226 podman[273052]: 2026-01-29 17:34:34.98560962 +0000 UTC m=+0.151474883 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true)
Jan 29 12:34:35 np0005601226 ceph-osd[86917]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Jan 29 12:34:36 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1794: 305 pgs: 305 active+clean; 135 MiB data, 444 MiB used, 60 GiB / 60 GiB avail; 80 KiB/s rd, 324 KiB/s wr, 78 op/s
Jan 29 12:34:36 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e455 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:34:36 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:34:36 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2324845039' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:34:37 np0005601226 nova_compute[239456]: 2026-01-29 17:34:37.304 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:34:37 np0005601226 nova_compute[239456]: 2026-01-29 17:34:37.603 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:34:37 np0005601226 nova_compute[239456]: 2026-01-29 17:34:37.642 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:34:37 np0005601226 nova_compute[239456]: 2026-01-29 17:34:37.642 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:34:37 np0005601226 nova_compute[239456]: 2026-01-29 17:34:37.643 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:34:37 np0005601226 nova_compute[239456]: 2026-01-29 17:34:37.643 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 29 12:34:37 np0005601226 nova_compute[239456]: 2026-01-29 17:34:37.643 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:34:37 np0005601226 nova_compute[239456]: 2026-01-29 17:34:37.924 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:34:38 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e455 do_prune osdmap full prune enabled
Jan 29 12:34:38 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e456 e456: 3 total, 3 up, 3 in
Jan 29 12:34:38 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e456: 3 total, 3 up, 3 in
Jan 29 12:34:38 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1796: 305 pgs: 305 active+clean; 135 MiB data, 444 MiB used, 60 GiB / 60 GiB avail; 81 KiB/s rd, 327 KiB/s wr, 79 op/s
Jan 29 12:34:38 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:34:38 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/824913772' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:34:38 np0005601226 nova_compute[239456]: 2026-01-29 17:34:38.226 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:34:38 np0005601226 ovn_controller[145556]: 2026-01-29T17:34:38Z|00066|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f6:b7:e6 10.100.0.7
Jan 29 12:34:38 np0005601226 ovn_controller[145556]: 2026-01-29T17:34:38Z|00067|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f6:b7:e6 10.100.0.7
Jan 29 12:34:38 np0005601226 nova_compute[239456]: 2026-01-29 17:34:38.308 239460 DEBUG nova.virt.libvirt.driver [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 29 12:34:38 np0005601226 nova_compute[239456]: 2026-01-29 17:34:38.308 239460 DEBUG nova.virt.libvirt.driver [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 29 12:34:38 np0005601226 nova_compute[239456]: 2026-01-29 17:34:38.482 239460 WARNING nova.virt.libvirt.driver [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 29 12:34:38 np0005601226 nova_compute[239456]: 2026-01-29 17:34:38.483 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4160MB free_disk=59.964481161907315GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 29 12:34:38 np0005601226 nova_compute[239456]: 2026-01-29 17:34:38.484 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:34:38 np0005601226 nova_compute[239456]: 2026-01-29 17:34:38.484 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:34:38 np0005601226 nova_compute[239456]: 2026-01-29 17:34:38.563 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Instance 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 29 12:34:38 np0005601226 nova_compute[239456]: 2026-01-29 17:34:38.564 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 29 12:34:38 np0005601226 nova_compute[239456]: 2026-01-29 17:34:38.564 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 29 12:34:38 np0005601226 nova_compute[239456]: 2026-01-29 17:34:38.606 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:34:39 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e456 do_prune osdmap full prune enabled
Jan 29 12:34:39 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e457 e457: 3 total, 3 up, 3 in
Jan 29 12:34:39 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e457: 3 total, 3 up, 3 in
Jan 29 12:34:39 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:34:39 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4172234179' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:34:39 np0005601226 nova_compute[239456]: 2026-01-29 17:34:39.171 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:34:39 np0005601226 nova_compute[239456]: 2026-01-29 17:34:39.176 239460 DEBUG nova.compute.provider_tree [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:34:39 np0005601226 nova_compute[239456]: 2026-01-29 17:34:39.193 239460 DEBUG nova.scheduler.client.report [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:34:39 np0005601226 nova_compute[239456]: 2026-01-29 17:34:39.274 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 29 12:34:39 np0005601226 nova_compute[239456]: 2026-01-29 17:34:39.275 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.791s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:34:40 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e457 do_prune osdmap full prune enabled
Jan 29 12:34:40 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e458 e458: 3 total, 3 up, 3 in
Jan 29 12:34:40 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e458: 3 total, 3 up, 3 in
Jan 29 12:34:40 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1799: 305 pgs: 305 active+clean; 161 MiB data, 479 MiB used, 60 GiB / 60 GiB avail; 402 KiB/s rd, 3.9 MiB/s wr, 183 op/s
Jan 29 12:34:40 np0005601226 nova_compute[239456]: 2026-01-29 17:34:40.275 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:34:40 np0005601226 nova_compute[239456]: 2026-01-29 17:34:40.276 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:34:40 np0005601226 nova_compute[239456]: 2026-01-29 17:34:40.276 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:34:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:34:40.296 155625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:34:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:34:40.297 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:34:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:34:40.297 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:34:40 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:34:40 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2672464840' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:34:40 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:34:40 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2672464840' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:34:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:34:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:34:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:34:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:34:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:34:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:34:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2026-01-29_17:34:40
Jan 29 12:34:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 12:34:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] do_upmap
Jan 29 12:34:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.data', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.control', 'vms', '.mgr', 'default.rgw.meta', 'backups', 'images', '.rgw.root']
Jan 29 12:34:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 12:34:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 12:34:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:34:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 12:34:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:34:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:34:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:34:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:34:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:34:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:34:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:34:41 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e458 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:34:41 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e458 do_prune osdmap full prune enabled
Jan 29 12:34:41 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e459 e459: 3 total, 3 up, 3 in
Jan 29 12:34:41 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e459: 3 total, 3 up, 3 in
Jan 29 12:34:42 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1801: 305 pgs: 305 active+clean; 167 MiB data, 486 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 5.9 MiB/s wr, 320 op/s
Jan 29 12:34:42 np0005601226 nova_compute[239456]: 2026-01-29 17:34:42.336 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:34:42 np0005601226 nova_compute[239456]: 2026-01-29 17:34:42.605 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:34:42 np0005601226 nova_compute[239456]: 2026-01-29 17:34:42.605 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 29 12:34:42 np0005601226 nova_compute[239456]: 2026-01-29 17:34:42.605 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 29 12:34:42 np0005601226 nova_compute[239456]: 2026-01-29 17:34:42.927 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:34:43 np0005601226 nova_compute[239456]: 2026-01-29 17:34:43.137 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquiring lock "refresh_cache-30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:34:43 np0005601226 nova_compute[239456]: 2026-01-29 17:34:43.137 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquired lock "refresh_cache-30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:34:43 np0005601226 nova_compute[239456]: 2026-01-29 17:34:43.138 239460 DEBUG nova.network.neutron [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 29 12:34:43 np0005601226 nova_compute[239456]: 2026-01-29 17:34:43.138 239460 DEBUG nova.objects.instance [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:34:43 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:34:43 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2578734327' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:34:44 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1802: 305 pgs: 305 active+clean; 167 MiB data, 486 MiB used, 60 GiB / 60 GiB avail; 700 KiB/s rd, 4.0 MiB/s wr, 220 op/s
Jan 29 12:34:44 np0005601226 nova_compute[239456]: 2026-01-29 17:34:44.176 239460 DEBUG nova.network.neutron [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] Updating instance_info_cache with network_info: [{"id": "01cb5c50-a219-4070-87d9-991256087701", "address": "fa:16:3e:f6:b7:e6", "network": {"id": "9275d605-e314-4c83-a4e8-f4ba085f6358", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1160921638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c3315c8b4c543a38f07ec0c509f03c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01cb5c50-a2", "ovs_interfaceid": "01cb5c50-a219-4070-87d9-991256087701", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:34:44 np0005601226 nova_compute[239456]: 2026-01-29 17:34:44.193 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Releasing lock "refresh_cache-30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:34:44 np0005601226 nova_compute[239456]: 2026-01-29 17:34:44.193 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 29 12:34:44 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e459 do_prune osdmap full prune enabled
Jan 29 12:34:44 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e460 e460: 3 total, 3 up, 3 in
Jan 29 12:34:44 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e460: 3 total, 3 up, 3 in
Jan 29 12:34:45 np0005601226 nova_compute[239456]: 2026-01-29 17:34:45.188 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:34:45 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e460 do_prune osdmap full prune enabled
Jan 29 12:34:45 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e461 e461: 3 total, 3 up, 3 in
Jan 29 12:34:45 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e461: 3 total, 3 up, 3 in
Jan 29 12:34:45 np0005601226 nova_compute[239456]: 2026-01-29 17:34:45.603 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:34:46 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1805: 305 pgs: 305 active+clean; 167 MiB data, 486 MiB used, 60 GiB / 60 GiB avail; 426 KiB/s rd, 1.0 MiB/s wr, 134 op/s
Jan 29 12:34:46 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e461 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:34:47 np0005601226 nova_compute[239456]: 2026-01-29 17:34:47.340 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:34:47 np0005601226 nova_compute[239456]: 2026-01-29 17:34:47.929 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:34:48 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1806: 305 pgs: 305 active+clean; 167 MiB data, 486 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 24 KiB/s wr, 49 op/s
Jan 29 12:34:48 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e461 do_prune osdmap full prune enabled
Jan 29 12:34:48 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e462 e462: 3 total, 3 up, 3 in
Jan 29 12:34:48 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e462: 3 total, 3 up, 3 in
Jan 29 12:34:49 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:34:49 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3541635952' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:34:49 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:34:49 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3541635952' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:34:50 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1808: 305 pgs: 305 active+clean; 167 MiB data, 487 MiB used, 60 GiB / 60 GiB avail; 82 KiB/s rd, 6.3 KiB/s wr, 110 op/s
Jan 29 12:34:51 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e462 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:34:51 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e462 do_prune osdmap full prune enabled
Jan 29 12:34:51 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e463 e463: 3 total, 3 up, 3 in
Jan 29 12:34:51 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e463: 3 total, 3 up, 3 in
Jan 29 12:34:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 12:34:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:34:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 12:34:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:34:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007613128289010769 of space, bias 1.0, pg target 0.22839384867032306 quantized to 32 (current 32)
Jan 29 12:34:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:34:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00036747261378233684 of space, bias 1.0, pg target 0.11024178413470105 quantized to 32 (current 32)
Jan 29 12:34:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:34:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 5.028587366645517e-06 of space, bias 1.0, pg target 0.001508576209993655 quantized to 32 (current 32)
Jan 29 12:34:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:34:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006669411911057442 of space, bias 1.0, pg target 0.20008235733172325 quantized to 32 (current 32)
Jan 29 12:34:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:34:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4430712604069372e-06 of space, bias 4.0, pg target 0.0017316855124883247 quantized to 16 (current 16)
Jan 29 12:34:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:34:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:34:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:34:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 29 12:34:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:34:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 29 12:34:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:34:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:34:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:34:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 29 12:34:52 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1810: 305 pgs: 305 active+clean; 167 MiB data, 487 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 3.4 KiB/s wr, 61 op/s
Jan 29 12:34:52 np0005601226 nova_compute[239456]: 2026-01-29 17:34:52.377 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:34:52 np0005601226 nova_compute[239456]: 2026-01-29 17:34:52.931 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:34:53 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:34:53 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2008609155' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:34:53 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e463 do_prune osdmap full prune enabled
Jan 29 12:34:53 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e464 e464: 3 total, 3 up, 3 in
Jan 29 12:34:53 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e464: 3 total, 3 up, 3 in
Jan 29 12:34:54 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1812: 305 pgs: 305 active+clean; 167 MiB data, 487 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 9.3 KiB/s wr, 65 op/s
Jan 29 12:34:54 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e464 do_prune osdmap full prune enabled
Jan 29 12:34:54 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e465 e465: 3 total, 3 up, 3 in
Jan 29 12:34:54 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e465: 3 total, 3 up, 3 in
Jan 29 12:34:55 np0005601226 nova_compute[239456]: 2026-01-29 17:34:55.603 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:34:56 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1814: 305 pgs: 305 active+clean; 167 MiB data, 487 MiB used, 60 GiB / 60 GiB avail; 8.3 KiB/s rd, 10 KiB/s wr, 15 op/s
Jan 29 12:34:56 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e465 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:34:56 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e465 do_prune osdmap full prune enabled
Jan 29 12:34:56 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e466 e466: 3 total, 3 up, 3 in
Jan 29 12:34:56 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e466: 3 total, 3 up, 3 in
Jan 29 12:34:56 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:34:56 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3137487324' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:34:56 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:34:56 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3137487324' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:34:57 np0005601226 nova_compute[239456]: 2026-01-29 17:34:57.380 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:34:57 np0005601226 ovn_controller[145556]: 2026-01-29T17:34:57Z|00246|memory_trim|INFO|Detected inactivity (last active 30021 ms ago): trimming memory
Jan 29 12:34:57 np0005601226 nova_compute[239456]: 2026-01-29 17:34:57.933 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:34:58 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1816: 305 pgs: 305 active+clean; 167 MiB data, 487 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 9.5 KiB/s wr, 12 op/s
Jan 29 12:34:59 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:34:59 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2106490313' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:35:00 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1817: 305 pgs: 305 active+clean; 167 MiB data, 487 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 5.9 KiB/s wr, 64 op/s
Jan 29 12:35:00 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e466 do_prune osdmap full prune enabled
Jan 29 12:35:00 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e467 e467: 3 total, 3 up, 3 in
Jan 29 12:35:00 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e467: 3 total, 3 up, 3 in
Jan 29 12:35:00 np0005601226 nova_compute[239456]: 2026-01-29 17:35:00.745 239460 DEBUG oslo_concurrency.lockutils [None req-3531070e-25e6-4a93-917b-9a99ab5c3f42 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Acquiring lock "30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:35:00 np0005601226 nova_compute[239456]: 2026-01-29 17:35:00.745 239460 DEBUG oslo_concurrency.lockutils [None req-3531070e-25e6-4a93-917b-9a99ab5c3f42 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Lock "30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:35:00 np0005601226 nova_compute[239456]: 2026-01-29 17:35:00.765 239460 DEBUG nova.objects.instance [None req-3531070e-25e6-4a93-917b-9a99ab5c3f42 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Lazy-loading 'flavor' on Instance uuid 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:35:00 np0005601226 nova_compute[239456]: 2026-01-29 17:35:00.801 239460 DEBUG oslo_concurrency.lockutils [None req-3531070e-25e6-4a93-917b-9a99ab5c3f42 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Lock "30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.056s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:35:01 np0005601226 nova_compute[239456]: 2026-01-29 17:35:01.042 239460 DEBUG oslo_concurrency.lockutils [None req-3531070e-25e6-4a93-917b-9a99ab5c3f42 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Acquiring lock "30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:35:01 np0005601226 nova_compute[239456]: 2026-01-29 17:35:01.043 239460 DEBUG oslo_concurrency.lockutils [None req-3531070e-25e6-4a93-917b-9a99ab5c3f42 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Lock "30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:35:01 np0005601226 nova_compute[239456]: 2026-01-29 17:35:01.043 239460 INFO nova.compute.manager [None req-3531070e-25e6-4a93-917b-9a99ab5c3f42 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] Attaching volume 0bab98fc-1329-4dc9-871d-da625227dada to /dev/vdb#033[00m
Jan 29 12:35:01 np0005601226 nova_compute[239456]: 2026-01-29 17:35:01.211 239460 DEBUG os_brick.utils [None req-3531070e-25e6-4a93-917b-9a99ab5c3f42 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 29 12:35:01 np0005601226 nova_compute[239456]: 2026-01-29 17:35:01.212 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:35:01 np0005601226 nova_compute[239456]: 2026-01-29 17:35:01.225 249612 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:35:01 np0005601226 nova_compute[239456]: 2026-01-29 17:35:01.226 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[700520a2-d8c5-439a-b30d-47fc75b42c77]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:35:01 np0005601226 nova_compute[239456]: 2026-01-29 17:35:01.227 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:35:01 np0005601226 nova_compute[239456]: 2026-01-29 17:35:01.235 249612 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:35:01 np0005601226 nova_compute[239456]: 2026-01-29 17:35:01.236 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[0e86de0e-9473-4c19-9cbb-59f9da04251c]: (4, ('InitiatorName=iqn.1994-05.com.redhat:29fa340538f', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:35:01 np0005601226 nova_compute[239456]: 2026-01-29 17:35:01.237 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:35:01 np0005601226 nova_compute[239456]: 2026-01-29 17:35:01.247 249612 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:35:01 np0005601226 nova_compute[239456]: 2026-01-29 17:35:01.248 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[576d26ce-c3a2-4d36-80d9-0310a4d9f7c7]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:35:01 np0005601226 nova_compute[239456]: 2026-01-29 17:35:01.249 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[634847a0-cd79-4555-9921-cd64ecc5c58f]: (4, '3d58286e-1b14-486e-8cad-0bdb2d2969c4') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:35:01 np0005601226 nova_compute[239456]: 2026-01-29 17:35:01.250 239460 DEBUG oslo_concurrency.processutils [None req-3531070e-25e6-4a93-917b-9a99ab5c3f42 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:35:01 np0005601226 nova_compute[239456]: 2026-01-29 17:35:01.274 239460 DEBUG oslo_concurrency.processutils [None req-3531070e-25e6-4a93-917b-9a99ab5c3f42 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] CMD "nvme version" returned: 0 in 0.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:35:01 np0005601226 nova_compute[239456]: 2026-01-29 17:35:01.276 239460 DEBUG os_brick.initiator.connectors.lightos [None req-3531070e-25e6-4a93-917b-9a99ab5c3f42 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 29 12:35:01 np0005601226 nova_compute[239456]: 2026-01-29 17:35:01.276 239460 DEBUG os_brick.initiator.connectors.lightos [None req-3531070e-25e6-4a93-917b-9a99ab5c3f42 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 29 12:35:01 np0005601226 nova_compute[239456]: 2026-01-29 17:35:01.277 239460 DEBUG os_brick.initiator.connectors.lightos [None req-3531070e-25e6-4a93-917b-9a99ab5c3f42 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 29 12:35:01 np0005601226 nova_compute[239456]: 2026-01-29 17:35:01.277 239460 DEBUG os_brick.utils [None req-3531070e-25e6-4a93-917b-9a99ab5c3f42 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] <== get_connector_properties: return (65ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:29fa340538f', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '3d58286e-1b14-486e-8cad-0bdb2d2969c4', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 29 12:35:01 np0005601226 nova_compute[239456]: 2026-01-29 17:35:01.277 239460 DEBUG nova.virt.block_device [None req-3531070e-25e6-4a93-917b-9a99ab5c3f42 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] Updating existing volume attachment record: 3e1580ec-cd57-47fb-a8bd-b213b3384709 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 29 12:35:01 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e467 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:35:01 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e467 do_prune osdmap full prune enabled
Jan 29 12:35:01 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e468 e468: 3 total, 3 up, 3 in
Jan 29 12:35:01 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e468: 3 total, 3 up, 3 in
Jan 29 12:35:01 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:35:01 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3235456888' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:35:01 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:35:01 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3235456888' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:35:02 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:35:02 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3544310133' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:35:02 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1820: 305 pgs: 305 active+clean; 167 MiB data, 487 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 4.3 KiB/s wr, 60 op/s
Jan 29 12:35:02 np0005601226 nova_compute[239456]: 2026-01-29 17:35:02.234 239460 DEBUG os_brick.encryptors [None req-3531070e-25e6-4a93-917b-9a99ab5c3f42 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Using volume encryption metadata '{'encryption_key_id': 'b543b232-d0cd-4dae-ab77-d46e3500d76f', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-0bab98fc-1329-4dc9-871d-da625227dada', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '0bab98fc-1329-4dc9-871d-da625227dada', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7', 'attached_at': '', 'detached_at': '', 'volume_id': '0bab98fc-1329-4dc9-871d-da625227dada', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Jan 29 12:35:02 np0005601226 nova_compute[239456]: 2026-01-29 17:35:02.244 239460 DEBUG barbicanclient.client [None req-3531070e-25e6-4a93-917b-9a99ab5c3f42 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163#033[00m
Jan 29 12:35:02 np0005601226 nova_compute[239456]: 2026-01-29 17:35:02.265 239460 DEBUG barbicanclient.v1.secrets [None req-3531070e-25e6-4a93-917b-9a99ab5c3f42 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/b543b232-d0cd-4dae-ab77-d46e3500d76f get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514#033[00m
Jan 29 12:35:02 np0005601226 nova_compute[239456]: 2026-01-29 17:35:02.266 239460 INFO barbicanclient.base [None req-3531070e-25e6-4a93-917b-9a99ab5c3f42 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Calculated Secrets uuid ref: secrets/b543b232-d0cd-4dae-ab77-d46e3500d76f#033[00m
Jan 29 12:35:02 np0005601226 nova_compute[239456]: 2026-01-29 17:35:02.289 239460 DEBUG barbicanclient.client [None req-3531070e-25e6-4a93-917b-9a99ab5c3f42 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:35:02 np0005601226 nova_compute[239456]: 2026-01-29 17:35:02.290 239460 INFO barbicanclient.base [None req-3531070e-25e6-4a93-917b-9a99ab5c3f42 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Calculated Secrets uuid ref: secrets/b543b232-d0cd-4dae-ab77-d46e3500d76f#033[00m
Jan 29 12:35:02 np0005601226 nova_compute[239456]: 2026-01-29 17:35:02.313 239460 DEBUG barbicanclient.client [None req-3531070e-25e6-4a93-917b-9a99ab5c3f42 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:35:02 np0005601226 nova_compute[239456]: 2026-01-29 17:35:02.313 239460 INFO barbicanclient.base [None req-3531070e-25e6-4a93-917b-9a99ab5c3f42 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Calculated Secrets uuid ref: secrets/b543b232-d0cd-4dae-ab77-d46e3500d76f#033[00m
Jan 29 12:35:02 np0005601226 nova_compute[239456]: 2026-01-29 17:35:02.345 239460 DEBUG barbicanclient.client [None req-3531070e-25e6-4a93-917b-9a99ab5c3f42 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:35:02 np0005601226 nova_compute[239456]: 2026-01-29 17:35:02.346 239460 INFO barbicanclient.base [None req-3531070e-25e6-4a93-917b-9a99ab5c3f42 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Calculated Secrets uuid ref: secrets/b543b232-d0cd-4dae-ab77-d46e3500d76f#033[00m
Jan 29 12:35:02 np0005601226 nova_compute[239456]: 2026-01-29 17:35:02.371 239460 DEBUG barbicanclient.client [None req-3531070e-25e6-4a93-917b-9a99ab5c3f42 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:35:02 np0005601226 nova_compute[239456]: 2026-01-29 17:35:02.372 239460 INFO barbicanclient.base [None req-3531070e-25e6-4a93-917b-9a99ab5c3f42 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Calculated Secrets uuid ref: secrets/b543b232-d0cd-4dae-ab77-d46e3500d76f#033[00m
Jan 29 12:35:02 np0005601226 nova_compute[239456]: 2026-01-29 17:35:02.398 239460 DEBUG barbicanclient.client [None req-3531070e-25e6-4a93-917b-9a99ab5c3f42 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:35:02 np0005601226 nova_compute[239456]: 2026-01-29 17:35:02.399 239460 INFO barbicanclient.base [None req-3531070e-25e6-4a93-917b-9a99ab5c3f42 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Calculated Secrets uuid ref: secrets/b543b232-d0cd-4dae-ab77-d46e3500d76f#033[00m
Jan 29 12:35:02 np0005601226 nova_compute[239456]: 2026-01-29 17:35:02.411 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:35:02 np0005601226 nova_compute[239456]: 2026-01-29 17:35:02.420 239460 DEBUG barbicanclient.client [None req-3531070e-25e6-4a93-917b-9a99ab5c3f42 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:35:02 np0005601226 nova_compute[239456]: 2026-01-29 17:35:02.421 239460 INFO barbicanclient.base [None req-3531070e-25e6-4a93-917b-9a99ab5c3f42 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Calculated Secrets uuid ref: secrets/b543b232-d0cd-4dae-ab77-d46e3500d76f#033[00m
Jan 29 12:35:02 np0005601226 nova_compute[239456]: 2026-01-29 17:35:02.440 239460 DEBUG barbicanclient.client [None req-3531070e-25e6-4a93-917b-9a99ab5c3f42 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:35:02 np0005601226 nova_compute[239456]: 2026-01-29 17:35:02.441 239460 INFO barbicanclient.base [None req-3531070e-25e6-4a93-917b-9a99ab5c3f42 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Calculated Secrets uuid ref: secrets/b543b232-d0cd-4dae-ab77-d46e3500d76f#033[00m
Jan 29 12:35:02 np0005601226 nova_compute[239456]: 2026-01-29 17:35:02.461 239460 DEBUG barbicanclient.client [None req-3531070e-25e6-4a93-917b-9a99ab5c3f42 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:35:02 np0005601226 nova_compute[239456]: 2026-01-29 17:35:02.462 239460 INFO barbicanclient.base [None req-3531070e-25e6-4a93-917b-9a99ab5c3f42 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Calculated Secrets uuid ref: secrets/b543b232-d0cd-4dae-ab77-d46e3500d76f#033[00m
Jan 29 12:35:02 np0005601226 nova_compute[239456]: 2026-01-29 17:35:02.482 239460 DEBUG barbicanclient.client [None req-3531070e-25e6-4a93-917b-9a99ab5c3f42 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:35:02 np0005601226 nova_compute[239456]: 2026-01-29 17:35:02.483 239460 INFO barbicanclient.base [None req-3531070e-25e6-4a93-917b-9a99ab5c3f42 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Calculated Secrets uuid ref: secrets/b543b232-d0cd-4dae-ab77-d46e3500d76f#033[00m
Jan 29 12:35:02 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e468 do_prune osdmap full prune enabled
Jan 29 12:35:02 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e469 e469: 3 total, 3 up, 3 in
Jan 29 12:35:02 np0005601226 nova_compute[239456]: 2026-01-29 17:35:02.506 239460 DEBUG barbicanclient.client [None req-3531070e-25e6-4a93-917b-9a99ab5c3f42 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:35:02 np0005601226 nova_compute[239456]: 2026-01-29 17:35:02.507 239460 INFO barbicanclient.base [None req-3531070e-25e6-4a93-917b-9a99ab5c3f42 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Calculated Secrets uuid ref: secrets/b543b232-d0cd-4dae-ab77-d46e3500d76f#033[00m
Jan 29 12:35:02 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e469: 3 total, 3 up, 3 in
Jan 29 12:35:02 np0005601226 nova_compute[239456]: 2026-01-29 17:35:02.526 239460 DEBUG barbicanclient.client [None req-3531070e-25e6-4a93-917b-9a99ab5c3f42 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:35:02 np0005601226 nova_compute[239456]: 2026-01-29 17:35:02.526 239460 INFO barbicanclient.base [None req-3531070e-25e6-4a93-917b-9a99ab5c3f42 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Calculated Secrets uuid ref: secrets/b543b232-d0cd-4dae-ab77-d46e3500d76f#033[00m
Jan 29 12:35:02 np0005601226 nova_compute[239456]: 2026-01-29 17:35:02.550 239460 DEBUG barbicanclient.client [None req-3531070e-25e6-4a93-917b-9a99ab5c3f42 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:35:02 np0005601226 nova_compute[239456]: 2026-01-29 17:35:02.552 239460 INFO barbicanclient.base [None req-3531070e-25e6-4a93-917b-9a99ab5c3f42 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Calculated Secrets uuid ref: secrets/b543b232-d0cd-4dae-ab77-d46e3500d76f#033[00m
Jan 29 12:35:02 np0005601226 nova_compute[239456]: 2026-01-29 17:35:02.572 239460 DEBUG barbicanclient.client [None req-3531070e-25e6-4a93-917b-9a99ab5c3f42 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:35:02 np0005601226 nova_compute[239456]: 2026-01-29 17:35:02.573 239460 INFO barbicanclient.base [None req-3531070e-25e6-4a93-917b-9a99ab5c3f42 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Calculated Secrets uuid ref: secrets/b543b232-d0cd-4dae-ab77-d46e3500d76f#033[00m
Jan 29 12:35:02 np0005601226 nova_compute[239456]: 2026-01-29 17:35:02.595 239460 DEBUG barbicanclient.client [None req-3531070e-25e6-4a93-917b-9a99ab5c3f42 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:35:02 np0005601226 nova_compute[239456]: 2026-01-29 17:35:02.596 239460 INFO barbicanclient.base [None req-3531070e-25e6-4a93-917b-9a99ab5c3f42 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Calculated Secrets uuid ref: secrets/b543b232-d0cd-4dae-ab77-d46e3500d76f#033[00m
Jan 29 12:35:02 np0005601226 nova_compute[239456]: 2026-01-29 17:35:02.615 239460 DEBUG barbicanclient.client [None req-3531070e-25e6-4a93-917b-9a99ab5c3f42 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:35:02 np0005601226 nova_compute[239456]: 2026-01-29 17:35:02.616 239460 DEBUG nova.virt.libvirt.host [None req-3531070e-25e6-4a93-917b-9a99ab5c3f42 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Secret XML: <secret ephemeral="no" private="no">
Jan 29 12:35:02 np0005601226 nova_compute[239456]:  <usage type="volume">
Jan 29 12:35:02 np0005601226 nova_compute[239456]:    <volume>0bab98fc-1329-4dc9-871d-da625227dada</volume>
Jan 29 12:35:02 np0005601226 nova_compute[239456]:  </usage>
Jan 29 12:35:02 np0005601226 nova_compute[239456]: </secret>
Jan 29 12:35:02 np0005601226 nova_compute[239456]: create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131#033[00m
Jan 29 12:35:02 np0005601226 nova_compute[239456]: 2026-01-29 17:35:02.634 239460 DEBUG nova.objects.instance [None req-3531070e-25e6-4a93-917b-9a99ab5c3f42 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Lazy-loading 'flavor' on Instance uuid 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:35:02 np0005601226 nova_compute[239456]: 2026-01-29 17:35:02.658 239460 DEBUG nova.virt.libvirt.driver [None req-3531070e-25e6-4a93-917b-9a99ab5c3f42 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] Attempting to attach volume 0bab98fc-1329-4dc9-871d-da625227dada with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Jan 29 12:35:02 np0005601226 nova_compute[239456]: 2026-01-29 17:35:02.661 239460 DEBUG nova.virt.libvirt.guest [None req-3531070e-25e6-4a93-917b-9a99ab5c3f42 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] attach device xml: <disk type="network" device="disk">
Jan 29 12:35:02 np0005601226 nova_compute[239456]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 29 12:35:02 np0005601226 nova_compute[239456]:  <source protocol="rbd" name="volumes/volume-0bab98fc-1329-4dc9-871d-da625227dada">
Jan 29 12:35:02 np0005601226 nova_compute[239456]:    <host name="192.168.122.100" port="6789"/>
Jan 29 12:35:02 np0005601226 nova_compute[239456]:  </source>
Jan 29 12:35:02 np0005601226 nova_compute[239456]:  <auth username="openstack">
Jan 29 12:35:02 np0005601226 nova_compute[239456]:    <secret type="ceph" uuid="cc5c72e3-31e0-58b9-8731-456117d38f4a"/>
Jan 29 12:35:02 np0005601226 nova_compute[239456]:  </auth>
Jan 29 12:35:02 np0005601226 nova_compute[239456]:  <target dev="vdb" bus="virtio"/>
Jan 29 12:35:02 np0005601226 nova_compute[239456]:  <serial>0bab98fc-1329-4dc9-871d-da625227dada</serial>
Jan 29 12:35:02 np0005601226 nova_compute[239456]:  <encryption format="luks">
Jan 29 12:35:02 np0005601226 nova_compute[239456]:    <secret type="passphrase" uuid="56bf19e8-b6ed-43bf-99ad-4b1ffd795e2c"/>
Jan 29 12:35:02 np0005601226 nova_compute[239456]:  </encryption>
Jan 29 12:35:02 np0005601226 nova_compute[239456]: </disk>
Jan 29 12:35:02 np0005601226 nova_compute[239456]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Jan 29 12:35:02 np0005601226 nova_compute[239456]: 2026-01-29 17:35:02.935 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:35:04 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1822: 305 pgs: 305 active+clean; 167 MiB data, 487 MiB used, 60 GiB / 60 GiB avail; 87 KiB/s rd, 5.3 KiB/s wr, 117 op/s
Jan 29 12:35:04 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e469 do_prune osdmap full prune enabled
Jan 29 12:35:04 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e470 e470: 3 total, 3 up, 3 in
Jan 29 12:35:04 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e470: 3 total, 3 up, 3 in
Jan 29 12:35:04 np0005601226 nova_compute[239456]: 2026-01-29 17:35:04.970 239460 DEBUG nova.virt.libvirt.driver [None req-3531070e-25e6-4a93-917b-9a99ab5c3f42 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:35:04 np0005601226 nova_compute[239456]: 2026-01-29 17:35:04.971 239460 DEBUG nova.virt.libvirt.driver [None req-3531070e-25e6-4a93-917b-9a99ab5c3f42 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:35:04 np0005601226 nova_compute[239456]: 2026-01-29 17:35:04.971 239460 DEBUG nova.virt.libvirt.driver [None req-3531070e-25e6-4a93-917b-9a99ab5c3f42 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:35:04 np0005601226 nova_compute[239456]: 2026-01-29 17:35:04.971 239460 DEBUG nova.virt.libvirt.driver [None req-3531070e-25e6-4a93-917b-9a99ab5c3f42 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] No VIF found with MAC fa:16:3e:f6:b7:e6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 29 12:35:05 np0005601226 nova_compute[239456]: 2026-01-29 17:35:05.150 239460 DEBUG oslo_concurrency.lockutils [None req-3531070e-25e6-4a93-917b-9a99ab5c3f42 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Lock "30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 4.107s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:35:05 np0005601226 nova_compute[239456]: 2026-01-29 17:35:05.891 239460 DEBUG oslo_concurrency.lockutils [None req-4f0aca96-69ff-448e-98f1-c015f89e5129 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Acquiring lock "30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:35:05 np0005601226 nova_compute[239456]: 2026-01-29 17:35:05.892 239460 DEBUG oslo_concurrency.lockutils [None req-4f0aca96-69ff-448e-98f1-c015f89e5129 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Lock "30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:35:05 np0005601226 nova_compute[239456]: 2026-01-29 17:35:05.910 239460 INFO nova.compute.manager [None req-4f0aca96-69ff-448e-98f1-c015f89e5129 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] Detaching volume 0bab98fc-1329-4dc9-871d-da625227dada#033[00m
Jan 29 12:35:05 np0005601226 podman[273168]: 2026-01-29 17:35:05.92268359 +0000 UTC m=+0.080411802 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 29 12:35:05 np0005601226 podman[273169]: 2026-01-29 17:35:05.987031027 +0000 UTC m=+0.140020841 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Jan 29 12:35:06 np0005601226 nova_compute[239456]: 2026-01-29 17:35:06.027 239460 INFO nova.virt.block_device [None req-4f0aca96-69ff-448e-98f1-c015f89e5129 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] Attempting to driver detach volume 0bab98fc-1329-4dc9-871d-da625227dada from mountpoint /dev/vdb#033[00m
Jan 29 12:35:06 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1824: 305 pgs: 305 active+clean; 167 MiB data, 487 MiB used, 60 GiB / 60 GiB avail; 162 KiB/s rd, 4.8 KiB/s wr, 155 op/s
Jan 29 12:35:06 np0005601226 nova_compute[239456]: 2026-01-29 17:35:06.127 239460 DEBUG os_brick.encryptors [None req-4f0aca96-69ff-448e-98f1-c015f89e5129 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Using volume encryption metadata '{'encryption_key_id': 'b543b232-d0cd-4dae-ab77-d46e3500d76f', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-0bab98fc-1329-4dc9-871d-da625227dada', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '0bab98fc-1329-4dc9-871d-da625227dada', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7', 'attached_at': '', 'detached_at': '', 'volume_id': '0bab98fc-1329-4dc9-871d-da625227dada', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Jan 29 12:35:06 np0005601226 nova_compute[239456]: 2026-01-29 17:35:06.137 239460 DEBUG nova.virt.libvirt.driver [None req-4f0aca96-69ff-448e-98f1-c015f89e5129 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Attempting to detach device vdb from instance 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Jan 29 12:35:06 np0005601226 nova_compute[239456]: 2026-01-29 17:35:06.138 239460 DEBUG nova.virt.libvirt.guest [None req-4f0aca96-69ff-448e-98f1-c015f89e5129 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] detach device xml: <disk type="network" device="disk">
Jan 29 12:35:06 np0005601226 nova_compute[239456]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 29 12:35:06 np0005601226 nova_compute[239456]:  <source protocol="rbd" name="volumes/volume-0bab98fc-1329-4dc9-871d-da625227dada">
Jan 29 12:35:06 np0005601226 nova_compute[239456]:    <host name="192.168.122.100" port="6789"/>
Jan 29 12:35:06 np0005601226 nova_compute[239456]:  </source>
Jan 29 12:35:06 np0005601226 nova_compute[239456]:  <target dev="vdb" bus="virtio"/>
Jan 29 12:35:06 np0005601226 nova_compute[239456]:  <serial>0bab98fc-1329-4dc9-871d-da625227dada</serial>
Jan 29 12:35:06 np0005601226 nova_compute[239456]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 29 12:35:06 np0005601226 nova_compute[239456]:  <encryption format="luks">
Jan 29 12:35:06 np0005601226 nova_compute[239456]:    <secret type="passphrase" uuid="56bf19e8-b6ed-43bf-99ad-4b1ffd795e2c"/>
Jan 29 12:35:06 np0005601226 nova_compute[239456]:  </encryption>
Jan 29 12:35:06 np0005601226 nova_compute[239456]: </disk>
Jan 29 12:35:06 np0005601226 nova_compute[239456]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 29 12:35:06 np0005601226 nova_compute[239456]: 2026-01-29 17:35:06.148 239460 INFO nova.virt.libvirt.driver [None req-4f0aca96-69ff-448e-98f1-c015f89e5129 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Successfully detached device vdb from instance 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7 from the persistent domain config.#033[00m
Jan 29 12:35:06 np0005601226 nova_compute[239456]: 2026-01-29 17:35:06.149 239460 DEBUG nova.virt.libvirt.driver [None req-4f0aca96-69ff-448e-98f1-c015f89e5129 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Jan 29 12:35:06 np0005601226 nova_compute[239456]: 2026-01-29 17:35:06.150 239460 DEBUG nova.virt.libvirt.guest [None req-4f0aca96-69ff-448e-98f1-c015f89e5129 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] detach device xml: <disk type="network" device="disk">
Jan 29 12:35:06 np0005601226 nova_compute[239456]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 29 12:35:06 np0005601226 nova_compute[239456]:  <source protocol="rbd" name="volumes/volume-0bab98fc-1329-4dc9-871d-da625227dada">
Jan 29 12:35:06 np0005601226 nova_compute[239456]:    <host name="192.168.122.100" port="6789"/>
Jan 29 12:35:06 np0005601226 nova_compute[239456]:  </source>
Jan 29 12:35:06 np0005601226 nova_compute[239456]:  <target dev="vdb" bus="virtio"/>
Jan 29 12:35:06 np0005601226 nova_compute[239456]:  <serial>0bab98fc-1329-4dc9-871d-da625227dada</serial>
Jan 29 12:35:06 np0005601226 nova_compute[239456]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 29 12:35:06 np0005601226 nova_compute[239456]:  <encryption format="luks">
Jan 29 12:35:06 np0005601226 nova_compute[239456]:    <secret type="passphrase" uuid="56bf19e8-b6ed-43bf-99ad-4b1ffd795e2c"/>
Jan 29 12:35:06 np0005601226 nova_compute[239456]:  </encryption>
Jan 29 12:35:06 np0005601226 nova_compute[239456]: </disk>
Jan 29 12:35:06 np0005601226 nova_compute[239456]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 29 12:35:06 np0005601226 nova_compute[239456]: 2026-01-29 17:35:06.269 239460 DEBUG nova.virt.libvirt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Received event <DeviceRemovedEvent: 1769708106.2692978, 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Jan 29 12:35:06 np0005601226 nova_compute[239456]: 2026-01-29 17:35:06.272 239460 DEBUG nova.virt.libvirt.driver [None req-4f0aca96-69ff-448e-98f1-c015f89e5129 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Jan 29 12:35:06 np0005601226 nova_compute[239456]: 2026-01-29 17:35:06.275 239460 INFO nova.virt.libvirt.driver [None req-4f0aca96-69ff-448e-98f1-c015f89e5129 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Successfully detached device vdb from instance 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7 from the live domain config.#033[00m
Jan 29 12:35:06 np0005601226 nova_compute[239456]: 2026-01-29 17:35:06.426 239460 DEBUG nova.objects.instance [None req-4f0aca96-69ff-448e-98f1-c015f89e5129 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Lazy-loading 'flavor' on Instance uuid 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:35:06 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e470 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:35:06 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e470 do_prune osdmap full prune enabled
Jan 29 12:35:06 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e471 e471: 3 total, 3 up, 3 in
Jan 29 12:35:06 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e471: 3 total, 3 up, 3 in
Jan 29 12:35:06 np0005601226 nova_compute[239456]: 2026-01-29 17:35:06.520 239460 DEBUG oslo_concurrency.lockutils [None req-4f0aca96-69ff-448e-98f1-c015f89e5129 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Lock "30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:35:07 np0005601226 nova_compute[239456]: 2026-01-29 17:35:07.414 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:35:07 np0005601226 nova_compute[239456]: 2026-01-29 17:35:07.420 239460 DEBUG oslo_concurrency.lockutils [None req-4be16813-9b9f-49b8-b97a-28566d8ec85d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Acquiring lock "30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:35:07 np0005601226 nova_compute[239456]: 2026-01-29 17:35:07.421 239460 DEBUG oslo_concurrency.lockutils [None req-4be16813-9b9f-49b8-b97a-28566d8ec85d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Lock "30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:35:07 np0005601226 nova_compute[239456]: 2026-01-29 17:35:07.421 239460 DEBUG oslo_concurrency.lockutils [None req-4be16813-9b9f-49b8-b97a-28566d8ec85d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Acquiring lock "30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:35:07 np0005601226 nova_compute[239456]: 2026-01-29 17:35:07.422 239460 DEBUG oslo_concurrency.lockutils [None req-4be16813-9b9f-49b8-b97a-28566d8ec85d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Lock "30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:35:07 np0005601226 nova_compute[239456]: 2026-01-29 17:35:07.422 239460 DEBUG oslo_concurrency.lockutils [None req-4be16813-9b9f-49b8-b97a-28566d8ec85d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Lock "30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:35:07 np0005601226 nova_compute[239456]: 2026-01-29 17:35:07.424 239460 INFO nova.compute.manager [None req-4be16813-9b9f-49b8-b97a-28566d8ec85d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] Terminating instance#033[00m
Jan 29 12:35:07 np0005601226 nova_compute[239456]: 2026-01-29 17:35:07.425 239460 DEBUG nova.compute.manager [None req-4be16813-9b9f-49b8-b97a-28566d8ec85d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 29 12:35:07 np0005601226 kernel: tap01cb5c50-a2 (unregistering): left promiscuous mode
Jan 29 12:35:07 np0005601226 NetworkManager[49020]: <info>  [1769708107.4898] device (tap01cb5c50-a2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 29 12:35:07 np0005601226 ovn_controller[145556]: 2026-01-29T17:35:07Z|00247|binding|INFO|Releasing lport 01cb5c50-a219-4070-87d9-991256087701 from this chassis (sb_readonly=0)
Jan 29 12:35:07 np0005601226 ovn_controller[145556]: 2026-01-29T17:35:07Z|00248|binding|INFO|Setting lport 01cb5c50-a219-4070-87d9-991256087701 down in Southbound
Jan 29 12:35:07 np0005601226 nova_compute[239456]: 2026-01-29 17:35:07.496 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:35:07 np0005601226 ovn_controller[145556]: 2026-01-29T17:35:07Z|00249|binding|INFO|Removing iface tap01cb5c50-a2 ovn-installed in OVS
Jan 29 12:35:07 np0005601226 nova_compute[239456]: 2026-01-29 17:35:07.499 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:35:07 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:35:07.506 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:b7:e6 10.100.0.7'], port_security=['fa:16:3e:f6:b7:e6 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9275d605-e314-4c83-a4e8-f4ba085f6358', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9c3315c8b4c543a38f07ec0c509f03c1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0b979b40-7ceb-4e92-9df1-dc3b0e6034d2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.201'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d2a7d5cc-cff2-487b-9e34-0c3106da1b90, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], logical_port=01cb5c50-a219-4070-87d9-991256087701) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:35:07 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:35:07.509 155625 INFO neutron.agent.ovn.metadata.agent [-] Port 01cb5c50-a219-4070-87d9-991256087701 in datapath 9275d605-e314-4c83-a4e8-f4ba085f6358 unbound from our chassis#033[00m
Jan 29 12:35:07 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:35:07.513 155625 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9275d605-e314-4c83-a4e8-f4ba085f6358, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 29 12:35:07 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:35:07.514 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[3a13f36a-905d-4361-a457-56d2b7fdb7ae]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:35:07 np0005601226 nova_compute[239456]: 2026-01-29 17:35:07.515 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:35:07 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:35:07.515 155625 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9275d605-e314-4c83-a4e8-f4ba085f6358 namespace which is not needed anymore#033[00m
Jan 29 12:35:07 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e471 do_prune osdmap full prune enabled
Jan 29 12:35:07 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e472 e472: 3 total, 3 up, 3 in
Jan 29 12:35:07 np0005601226 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d0000001a.scope: Deactivated successfully.
Jan 29 12:35:07 np0005601226 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d0000001a.scope: Consumed 15.516s CPU time.
Jan 29 12:35:07 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e472: 3 total, 3 up, 3 in
Jan 29 12:35:07 np0005601226 systemd-machined[207561]: Machine qemu-26-instance-0000001a terminated.
Jan 29 12:35:07 np0005601226 nova_compute[239456]: 2026-01-29 17:35:07.646 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:35:07 np0005601226 nova_compute[239456]: 2026-01-29 17:35:07.651 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:35:07 np0005601226 nova_compute[239456]: 2026-01-29 17:35:07.662 239460 INFO nova.virt.libvirt.driver [-] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] Instance destroyed successfully.#033[00m
Jan 29 12:35:07 np0005601226 nova_compute[239456]: 2026-01-29 17:35:07.663 239460 DEBUG nova.objects.instance [None req-4be16813-9b9f-49b8-b97a-28566d8ec85d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Lazy-loading 'resources' on Instance uuid 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:35:07 np0005601226 nova_compute[239456]: 2026-01-29 17:35:07.680 239460 DEBUG nova.virt.libvirt.vif [None req-4be16813-9b9f-49b8-b97a-28566d8ec85d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-29T17:34:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-987208948',display_name='tempest-TestEncryptedCinderVolumes-server-987208948',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-987208948',id=26,image_ref='71879218-5462-43bb-aba6-6319695b24fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEiWyOmSEmMI4pwLqOKedCJq8UXqZek7QcQm1YGLuVaaKr+u7Y0eccysxWi4eVTnXO2KEU6T10OE9i6oP930f8wEjBWPLBpPePOuA4ghFCWhdIhwCWA42zHpIxVU2Gg7DQ==',key_name='tempest-keypair-186094087',keypairs=<?>,launch_index=0,launched_at=2026-01-29T17:34:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9c3315c8b4c543a38f07ec0c509f03c1',ramdisk_id='',reservation_id='r-uva102pp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='71879218-5462-43bb-aba6-6319695b24fd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestEncryptedCinderVolumes-595928636',owner_user_name='tempest-TestEncryptedCinderVolumes-595928636-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-29T17:34:25Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='90bbb3ba09534f74aedaab7650ed5ba4',uuid=30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "01cb5c50-a219-4070-87d9-991256087701", "address": "fa:16:3e:f6:b7:e6", "network": {"id": "9275d605-e314-4c83-a4e8-f4ba085f6358", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1160921638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c3315c8b4c543a38f07ec0c509f03c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01cb5c50-a2", "ovs_interfaceid": "01cb5c50-a219-4070-87d9-991256087701", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 29 12:35:07 np0005601226 nova_compute[239456]: 2026-01-29 17:35:07.680 239460 DEBUG nova.network.os_vif_util [None req-4be16813-9b9f-49b8-b97a-28566d8ec85d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Converting VIF {"id": "01cb5c50-a219-4070-87d9-991256087701", "address": "fa:16:3e:f6:b7:e6", "network": {"id": "9275d605-e314-4c83-a4e8-f4ba085f6358", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1160921638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c3315c8b4c543a38f07ec0c509f03c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01cb5c50-a2", "ovs_interfaceid": "01cb5c50-a219-4070-87d9-991256087701", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 29 12:35:07 np0005601226 nova_compute[239456]: 2026-01-29 17:35:07.682 239460 DEBUG nova.network.os_vif_util [None req-4be16813-9b9f-49b8-b97a-28566d8ec85d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f6:b7:e6,bridge_name='br-int',has_traffic_filtering=True,id=01cb5c50-a219-4070-87d9-991256087701,network=Network(9275d605-e314-4c83-a4e8-f4ba085f6358),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01cb5c50-a2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 29 12:35:07 np0005601226 nova_compute[239456]: 2026-01-29 17:35:07.682 239460 DEBUG os_vif [None req-4be16813-9b9f-49b8-b97a-28566d8ec85d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f6:b7:e6,bridge_name='br-int',has_traffic_filtering=True,id=01cb5c50-a219-4070-87d9-991256087701,network=Network(9275d605-e314-4c83-a4e8-f4ba085f6358),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01cb5c50-a2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 29 12:35:07 np0005601226 neutron-haproxy-ovnmeta-9275d605-e314-4c83-a4e8-f4ba085f6358[272769]: [NOTICE]   (272779) : haproxy version is 2.8.14-c23fe91
Jan 29 12:35:07 np0005601226 neutron-haproxy-ovnmeta-9275d605-e314-4c83-a4e8-f4ba085f6358[272769]: [NOTICE]   (272779) : path to executable is /usr/sbin/haproxy
Jan 29 12:35:07 np0005601226 neutron-haproxy-ovnmeta-9275d605-e314-4c83-a4e8-f4ba085f6358[272769]: [WARNING]  (272779) : Exiting Master process...
Jan 29 12:35:07 np0005601226 neutron-haproxy-ovnmeta-9275d605-e314-4c83-a4e8-f4ba085f6358[272769]: [ALERT]    (272779) : Current worker (272782) exited with code 143 (Terminated)
Jan 29 12:35:07 np0005601226 neutron-haproxy-ovnmeta-9275d605-e314-4c83-a4e8-f4ba085f6358[272769]: [WARNING]  (272779) : All workers exited. Exiting... (0)
Jan 29 12:35:07 np0005601226 nova_compute[239456]: 2026-01-29 17:35:07.686 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:35:07 np0005601226 nova_compute[239456]: 2026-01-29 17:35:07.687 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap01cb5c50-a2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:35:07 np0005601226 systemd[1]: libpod-c054073f5718ddd5c5fa5bc4be7df52c934d12bfa3772876594457686e53993f.scope: Deactivated successfully.
Jan 29 12:35:07 np0005601226 nova_compute[239456]: 2026-01-29 17:35:07.690 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:35:07 np0005601226 nova_compute[239456]: 2026-01-29 17:35:07.693 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:35:07 np0005601226 podman[273240]: 2026-01-29 17:35:07.696401996 +0000 UTC m=+0.070554826 container died c054073f5718ddd5c5fa5bc4be7df52c934d12bfa3772876594457686e53993f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9275d605-e314-4c83-a4e8-f4ba085f6358, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 29 12:35:07 np0005601226 nova_compute[239456]: 2026-01-29 17:35:07.698 239460 INFO os_vif [None req-4be16813-9b9f-49b8-b97a-28566d8ec85d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f6:b7:e6,bridge_name='br-int',has_traffic_filtering=True,id=01cb5c50-a219-4070-87d9-991256087701,network=Network(9275d605-e314-4c83-a4e8-f4ba085f6358),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01cb5c50-a2')#033[00m
Jan 29 12:35:07 np0005601226 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c054073f5718ddd5c5fa5bc4be7df52c934d12bfa3772876594457686e53993f-userdata-shm.mount: Deactivated successfully.
Jan 29 12:35:07 np0005601226 systemd[1]: var-lib-containers-storage-overlay-584787cdadd6b1288198af2294717317ba304b7c553dfbe5a65acc75536e999d-merged.mount: Deactivated successfully.
Jan 29 12:35:07 np0005601226 podman[273240]: 2026-01-29 17:35:07.773456567 +0000 UTC m=+0.147609407 container cleanup c054073f5718ddd5c5fa5bc4be7df52c934d12bfa3772876594457686e53993f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9275d605-e314-4c83-a4e8-f4ba085f6358, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 29 12:35:07 np0005601226 systemd[1]: libpod-conmon-c054073f5718ddd5c5fa5bc4be7df52c934d12bfa3772876594457686e53993f.scope: Deactivated successfully.
Jan 29 12:35:07 np0005601226 podman[273296]: 2026-01-29 17:35:07.860251062 +0000 UTC m=+0.063680139 container remove c054073f5718ddd5c5fa5bc4be7df52c934d12bfa3772876594457686e53993f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9275d605-e314-4c83-a4e8-f4ba085f6358, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 29 12:35:07 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:35:07.865 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[3545c7e5-25f2-4783-97e3-870a9ed1882c]: (4, ('Thu Jan 29 05:35:07 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-9275d605-e314-4c83-a4e8-f4ba085f6358 (c054073f5718ddd5c5fa5bc4be7df52c934d12bfa3772876594457686e53993f)\nc054073f5718ddd5c5fa5bc4be7df52c934d12bfa3772876594457686e53993f\nThu Jan 29 05:35:07 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-9275d605-e314-4c83-a4e8-f4ba085f6358 (c054073f5718ddd5c5fa5bc4be7df52c934d12bfa3772876594457686e53993f)\nc054073f5718ddd5c5fa5bc4be7df52c934d12bfa3772876594457686e53993f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:35:07 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:35:07.869 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[5183ab4c-dc10-487b-9b6f-27daf66420c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:35:07 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:35:07.872 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9275d605-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:35:07 np0005601226 nova_compute[239456]: 2026-01-29 17:35:07.874 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:35:07 np0005601226 kernel: tap9275d605-e0: left promiscuous mode
Jan 29 12:35:07 np0005601226 nova_compute[239456]: 2026-01-29 17:35:07.877 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:35:07 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:35:07.884 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[67fa3016-0dd7-42f2-93b1-a6c86fa9cd67]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:35:07 np0005601226 nova_compute[239456]: 2026-01-29 17:35:07.887 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:35:07 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:35:07.903 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[8bd0a715-4701-473e-8225-8422f36cb6cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:35:07 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:35:07.905 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[34b2ef25-263b-4668-9f0f-e25e1d167aec]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:35:07 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:35:07.923 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[f9f1834e-e49d-4942-a720-e21ab5f4f51e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 541817, 'reachable_time': 24484, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273312, 'error': None, 'target': 'ovnmeta-9275d605-e314-4c83-a4e8-f4ba085f6358', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:35:07 np0005601226 systemd[1]: run-netns-ovnmeta\x2d9275d605\x2de314\x2d4c83\x2da4e8\x2df4ba085f6358.mount: Deactivated successfully.
Jan 29 12:35:07 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:35:07.928 156164 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9275d605-e314-4c83-a4e8-f4ba085f6358 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 29 12:35:07 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:35:07.928 156164 DEBUG oslo.privsep.daemon [-] privsep: reply[6a46a497-4e47-4c95-98cd-4acd90c3a4b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:35:08 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1827: 305 pgs: 305 active+clean; 167 MiB data, 487 MiB used, 60 GiB / 60 GiB avail; 160 KiB/s rd, 4.1 KiB/s wr, 150 op/s
Jan 29 12:35:08 np0005601226 nova_compute[239456]: 2026-01-29 17:35:08.343 239460 INFO nova.virt.libvirt.driver [None req-4be16813-9b9f-49b8-b97a-28566d8ec85d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] Deleting instance files /var/lib/nova/instances/30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7_del#033[00m
Jan 29 12:35:08 np0005601226 nova_compute[239456]: 2026-01-29 17:35:08.345 239460 INFO nova.virt.libvirt.driver [None req-4be16813-9b9f-49b8-b97a-28566d8ec85d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] Deletion of /var/lib/nova/instances/30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7_del complete#033[00m
Jan 29 12:35:08 np0005601226 nova_compute[239456]: 2026-01-29 17:35:08.381 239460 DEBUG nova.compute.manager [req-05cd0045-e1cf-4fbd-945b-12b96815a3eb req-14eafa5c-6d0f-41c4-9f3d-5f16a22c558a 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] Received event network-vif-unplugged-01cb5c50-a219-4070-87d9-991256087701 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:35:08 np0005601226 nova_compute[239456]: 2026-01-29 17:35:08.381 239460 DEBUG oslo_concurrency.lockutils [req-05cd0045-e1cf-4fbd-945b-12b96815a3eb req-14eafa5c-6d0f-41c4-9f3d-5f16a22c558a 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:35:08 np0005601226 nova_compute[239456]: 2026-01-29 17:35:08.382 239460 DEBUG oslo_concurrency.lockutils [req-05cd0045-e1cf-4fbd-945b-12b96815a3eb req-14eafa5c-6d0f-41c4-9f3d-5f16a22c558a 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:35:08 np0005601226 nova_compute[239456]: 2026-01-29 17:35:08.382 239460 DEBUG oslo_concurrency.lockutils [req-05cd0045-e1cf-4fbd-945b-12b96815a3eb req-14eafa5c-6d0f-41c4-9f3d-5f16a22c558a 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:35:08 np0005601226 nova_compute[239456]: 2026-01-29 17:35:08.382 239460 DEBUG nova.compute.manager [req-05cd0045-e1cf-4fbd-945b-12b96815a3eb req-14eafa5c-6d0f-41c4-9f3d-5f16a22c558a 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] No waiting events found dispatching network-vif-unplugged-01cb5c50-a219-4070-87d9-991256087701 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 29 12:35:08 np0005601226 nova_compute[239456]: 2026-01-29 17:35:08.383 239460 DEBUG nova.compute.manager [req-05cd0045-e1cf-4fbd-945b-12b96815a3eb req-14eafa5c-6d0f-41c4-9f3d-5f16a22c558a 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] Received event network-vif-unplugged-01cb5c50-a219-4070-87d9-991256087701 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 29 12:35:08 np0005601226 nova_compute[239456]: 2026-01-29 17:35:08.391 239460 INFO nova.compute.manager [None req-4be16813-9b9f-49b8-b97a-28566d8ec85d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] Took 0.97 seconds to destroy the instance on the hypervisor.#033[00m
Jan 29 12:35:08 np0005601226 nova_compute[239456]: 2026-01-29 17:35:08.392 239460 DEBUG oslo.service.loopingcall [None req-4be16813-9b9f-49b8-b97a-28566d8ec85d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 29 12:35:08 np0005601226 nova_compute[239456]: 2026-01-29 17:35:08.394 239460 DEBUG nova.compute.manager [-] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 29 12:35:08 np0005601226 nova_compute[239456]: 2026-01-29 17:35:08.394 239460 DEBUG nova.network.neutron [-] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 29 12:35:08 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:35:08.498 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '7a:bb:ca', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ee:2a:91:86:08:da'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:35:08 np0005601226 nova_compute[239456]: 2026-01-29 17:35:08.544 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:35:08 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:35:08.544 155625 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 29 12:35:08 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e472 do_prune osdmap full prune enabled
Jan 29 12:35:08 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e473 e473: 3 total, 3 up, 3 in
Jan 29 12:35:08 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e473: 3 total, 3 up, 3 in
Jan 29 12:35:09 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:35:09 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3212062139' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:35:09 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:35:09 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3212062139' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:35:09 np0005601226 nova_compute[239456]: 2026-01-29 17:35:09.917 239460 DEBUG nova.network.neutron [-] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:35:09 np0005601226 nova_compute[239456]: 2026-01-29 17:35:09.931 239460 INFO nova.compute.manager [-] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] Took 1.54 seconds to deallocate network for instance.#033[00m
Jan 29 12:35:09 np0005601226 nova_compute[239456]: 2026-01-29 17:35:09.971 239460 DEBUG oslo_concurrency.lockutils [None req-4be16813-9b9f-49b8-b97a-28566d8ec85d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:35:09 np0005601226 nova_compute[239456]: 2026-01-29 17:35:09.972 239460 DEBUG oslo_concurrency.lockutils [None req-4be16813-9b9f-49b8-b97a-28566d8ec85d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:35:10 np0005601226 nova_compute[239456]: 2026-01-29 17:35:10.038 239460 DEBUG oslo_concurrency.processutils [None req-4be16813-9b9f-49b8-b97a-28566d8ec85d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:35:10 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1829: 305 pgs: 305 active+clean; 118 MiB data, 455 MiB used, 60 GiB / 60 GiB avail; 175 KiB/s rd, 9.4 KiB/s wr, 176 op/s
Jan 29 12:35:10 np0005601226 nova_compute[239456]: 2026-01-29 17:35:10.483 239460 DEBUG nova.compute.manager [req-7e6ccd81-4fda-4bad-967d-bd54334fdd33 req-fdd9b718-359b-4b12-8b3d-556571bdab56 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] Received event network-vif-plugged-01cb5c50-a219-4070-87d9-991256087701 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:35:10 np0005601226 nova_compute[239456]: 2026-01-29 17:35:10.484 239460 DEBUG oslo_concurrency.lockutils [req-7e6ccd81-4fda-4bad-967d-bd54334fdd33 req-fdd9b718-359b-4b12-8b3d-556571bdab56 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:35:10 np0005601226 nova_compute[239456]: 2026-01-29 17:35:10.485 239460 DEBUG oslo_concurrency.lockutils [req-7e6ccd81-4fda-4bad-967d-bd54334fdd33 req-fdd9b718-359b-4b12-8b3d-556571bdab56 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:35:10 np0005601226 nova_compute[239456]: 2026-01-29 17:35:10.485 239460 DEBUG oslo_concurrency.lockutils [req-7e6ccd81-4fda-4bad-967d-bd54334fdd33 req-fdd9b718-359b-4b12-8b3d-556571bdab56 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:35:10 np0005601226 nova_compute[239456]: 2026-01-29 17:35:10.485 239460 DEBUG nova.compute.manager [req-7e6ccd81-4fda-4bad-967d-bd54334fdd33 req-fdd9b718-359b-4b12-8b3d-556571bdab56 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] No waiting events found dispatching network-vif-plugged-01cb5c50-a219-4070-87d9-991256087701 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 29 12:35:10 np0005601226 nova_compute[239456]: 2026-01-29 17:35:10.486 239460 WARNING nova.compute.manager [req-7e6ccd81-4fda-4bad-967d-bd54334fdd33 req-fdd9b718-359b-4b12-8b3d-556571bdab56 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] Received unexpected event network-vif-plugged-01cb5c50-a219-4070-87d9-991256087701 for instance with vm_state deleted and task_state None.#033[00m
Jan 29 12:35:10 np0005601226 nova_compute[239456]: 2026-01-29 17:35:10.486 239460 DEBUG nova.compute.manager [req-7e6ccd81-4fda-4bad-967d-bd54334fdd33 req-fdd9b718-359b-4b12-8b3d-556571bdab56 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] Received event network-vif-deleted-01cb5c50-a219-4070-87d9-991256087701 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:35:10 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:35:10 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1846498990' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:35:10 np0005601226 nova_compute[239456]: 2026-01-29 17:35:10.586 239460 DEBUG oslo_concurrency.processutils [None req-4be16813-9b9f-49b8-b97a-28566d8ec85d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:35:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:35:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:35:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:35:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:35:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:35:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:35:10 np0005601226 nova_compute[239456]: 2026-01-29 17:35:10.594 239460 DEBUG nova.compute.provider_tree [None req-4be16813-9b9f-49b8-b97a-28566d8ec85d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:35:10 np0005601226 nova_compute[239456]: 2026-01-29 17:35:10.615 239460 DEBUG nova.scheduler.client.report [None req-4be16813-9b9f-49b8-b97a-28566d8ec85d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:35:10 np0005601226 nova_compute[239456]: 2026-01-29 17:35:10.645 239460 DEBUG oslo_concurrency.lockutils [None req-4be16813-9b9f-49b8-b97a-28566d8ec85d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:35:10 np0005601226 nova_compute[239456]: 2026-01-29 17:35:10.691 239460 INFO nova.scheduler.client.report [None req-4be16813-9b9f-49b8-b97a-28566d8ec85d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Deleted allocations for instance 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7#033[00m
Jan 29 12:35:10 np0005601226 nova_compute[239456]: 2026-01-29 17:35:10.766 239460 DEBUG oslo_concurrency.lockutils [None req-4be16813-9b9f-49b8-b97a-28566d8ec85d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Lock "30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.345s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:35:11 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:35:11 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e473 do_prune osdmap full prune enabled
Jan 29 12:35:11 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e474 e474: 3 total, 3 up, 3 in
Jan 29 12:35:11 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e474: 3 total, 3 up, 3 in
Jan 29 12:35:12 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1831: 305 pgs: 305 active+clean; 88 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 125 KiB/s rd, 9.4 KiB/s wr, 171 op/s
Jan 29 12:35:12 np0005601226 nova_compute[239456]: 2026-01-29 17:35:12.450 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:35:12 np0005601226 nova_compute[239456]: 2026-01-29 17:35:12.691 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:35:12 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:35:12 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1330267603' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:35:12 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:35:12 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1330267603' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:35:14 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1832: 305 pgs: 305 active+clean; 88 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 107 KiB/s rd, 8.1 KiB/s wr, 147 op/s
Jan 29 12:35:14 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:35:14 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1378053732' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:35:14 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:35:14 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1378053732' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:35:15 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e474 do_prune osdmap full prune enabled
Jan 29 12:35:15 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e475 e475: 3 total, 3 up, 3 in
Jan 29 12:35:15 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e475: 3 total, 3 up, 3 in
Jan 29 12:35:16 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1834: 305 pgs: 305 active+clean; 88 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 115 KiB/s rd, 5.1 KiB/s wr, 151 op/s
Jan 29 12:35:16 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e475 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:35:16 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e475 do_prune osdmap full prune enabled
Jan 29 12:35:16 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e476 e476: 3 total, 3 up, 3 in
Jan 29 12:35:16 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e476: 3 total, 3 up, 3 in
Jan 29 12:35:16 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:35:16.547 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ea6bcc65-2563-4fe6-9039-bca7261f4cf7, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:35:17 np0005601226 nova_compute[239456]: 2026-01-29 17:35:17.452 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:35:17 np0005601226 nova_compute[239456]: 2026-01-29 17:35:17.693 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:35:17 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:35:17 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3101665485' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:35:18 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:35:18 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3101665485' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:35:18 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1836: 305 pgs: 305 active+clean; 88 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 1.4 KiB/s wr, 45 op/s
Jan 29 12:35:20 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1837: 305 pgs: 305 active+clean; 88 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 62 KiB/s rd, 2.6 KiB/s wr, 81 op/s
Jan 29 12:35:21 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e476 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:35:21 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e476 do_prune osdmap full prune enabled
Jan 29 12:35:21 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e477 e477: 3 total, 3 up, 3 in
Jan 29 12:35:21 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e477: 3 total, 3 up, 3 in
Jan 29 12:35:22 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1839: 305 pgs: 305 active+clean; 88 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 58 KiB/s rd, 3.3 KiB/s wr, 78 op/s
Jan 29 12:35:22 np0005601226 nova_compute[239456]: 2026-01-29 17:35:22.492 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:35:22 np0005601226 nova_compute[239456]: 2026-01-29 17:35:22.659 239460 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769708107.6584122, 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 29 12:35:22 np0005601226 nova_compute[239456]: 2026-01-29 17:35:22.660 239460 INFO nova.compute.manager [-] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] VM Stopped (Lifecycle Event)
Jan 29 12:35:22 np0005601226 nova_compute[239456]: 2026-01-29 17:35:22.690 239460 DEBUG nova.compute.manager [None req-c2e7021d-630b-4eb4-9ced-3a33ba3dc0e8 - - - - - -] [instance: 30d4bb92-42c1-4f9f-b8b1-cee8f4e362c7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 29 12:35:22 np0005601226 nova_compute[239456]: 2026-01-29 17:35:22.695 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:35:22 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e477 do_prune osdmap full prune enabled
Jan 29 12:35:22 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e478 e478: 3 total, 3 up, 3 in
Jan 29 12:35:22 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e478: 3 total, 3 up, 3 in
Jan 29 12:35:23 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:35:23 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3825305706' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:35:23 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:35:23 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3825305706' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:35:24 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1841: 305 pgs: 305 active+clean; 88 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.2 KiB/s wr, 49 op/s
Jan 29 12:35:26 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1842: 305 pgs: 305 active+clean; 88 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.4 KiB/s wr, 101 op/s
Jan 29 12:35:26 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e478 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:35:26 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e478 do_prune osdmap full prune enabled
Jan 29 12:35:26 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e479 e479: 3 total, 3 up, 3 in
Jan 29 12:35:26 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e479: 3 total, 3 up, 3 in
Jan 29 12:35:26 np0005601226 ceph-mon[75233]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Jan 29 12:35:26 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:35:26.515900) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 29 12:35:26 np0005601226 ceph-mon[75233]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Jan 29 12:35:26 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769708126515944, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 2310, "num_deletes": 268, "total_data_size": 3476783, "memory_usage": 3538832, "flush_reason": "Manual Compaction"}
Jan 29 12:35:26 np0005601226 ceph-mon[75233]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Jan 29 12:35:26 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769708126537362, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 3395561, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34198, "largest_seqno": 36507, "table_properties": {"data_size": 3384464, "index_size": 7271, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2757, "raw_key_size": 22892, "raw_average_key_size": 21, "raw_value_size": 3362387, "raw_average_value_size": 3101, "num_data_blocks": 314, "num_entries": 1084, "num_filter_entries": 1084, "num_deletions": 268, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769707960, "oldest_key_time": 1769707960, "file_creation_time": 1769708126, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "affa2982-d59d-4189-b5dd-817a80fada55", "db_session_id": "3LVBT2JQJ5HZ0LRVKGW6", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Jan 29 12:35:26 np0005601226 ceph-mon[75233]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 21526 microseconds, and 9080 cpu microseconds.
Jan 29 12:35:26 np0005601226 ceph-mon[75233]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 29 12:35:26 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:35:26.537418) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 3395561 bytes OK
Jan 29 12:35:26 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:35:26.537444) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Jan 29 12:35:26 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:35:26.540423) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Jan 29 12:35:26 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:35:26.540447) EVENT_LOG_v1 {"time_micros": 1769708126540440, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 29 12:35:26 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:35:26.540470) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 29 12:35:26 np0005601226 ceph-mon[75233]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 3466801, prev total WAL file size 3466801, number of live WAL files 2.
Jan 29 12:35:26 np0005601226 ceph-mon[75233]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 12:35:26 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:35:26.541571) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303034' seq:72057594037927935, type:22 .. '6C6F676D0031323536' seq:0, type:0; will stop at (end)
Jan 29 12:35:26 np0005601226 ceph-mon[75233]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 29 12:35:26 np0005601226 ceph-mon[75233]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(3315KB)], [71(9287KB)]
Jan 29 12:35:26 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769708126541645, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 12906214, "oldest_snapshot_seqno": -1}
Jan 29 12:35:26 np0005601226 ceph-mon[75233]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 6948 keys, 12762178 bytes, temperature: kUnknown
Jan 29 12:35:26 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769708126638618, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 12762178, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12707792, "index_size": 35906, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17413, "raw_key_size": 174647, "raw_average_key_size": 25, "raw_value_size": 12575258, "raw_average_value_size": 1809, "num_data_blocks": 1445, "num_entries": 6948, "num_filter_entries": 6948, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769705351, "oldest_key_time": 0, "file_creation_time": 1769708126, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "affa2982-d59d-4189-b5dd-817a80fada55", "db_session_id": "3LVBT2JQJ5HZ0LRVKGW6", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Jan 29 12:35:26 np0005601226 ceph-mon[75233]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 29 12:35:26 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:35:26.639035) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 12762178 bytes
Jan 29 12:35:26 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:35:26.640730) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 132.9 rd, 131.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 9.1 +0.0 blob) out(12.2 +0.0 blob), read-write-amplify(7.6) write-amplify(3.8) OK, records in: 7490, records dropped: 542 output_compression: NoCompression
Jan 29 12:35:26 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:35:26.640761) EVENT_LOG_v1 {"time_micros": 1769708126640747, "job": 40, "event": "compaction_finished", "compaction_time_micros": 97091, "compaction_time_cpu_micros": 39914, "output_level": 6, "num_output_files": 1, "total_output_size": 12762178, "num_input_records": 7490, "num_output_records": 6948, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 29 12:35:26 np0005601226 ceph-mon[75233]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 12:35:26 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769708126641757, "job": 40, "event": "table_file_deletion", "file_number": 73}
Jan 29 12:35:26 np0005601226 ceph-mon[75233]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 12:35:26 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769708126644028, "job": 40, "event": "table_file_deletion", "file_number": 71}
Jan 29 12:35:26 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:35:26.541416) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:35:26 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:35:26.644144) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:35:26 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:35:26.644152) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:35:26 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:35:26.644156) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:35:26 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:35:26.644159) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:35:26 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:35:26.644162) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:35:27 np0005601226 nova_compute[239456]: 2026-01-29 17:35:27.494 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:35:27 np0005601226 nova_compute[239456]: 2026-01-29 17:35:27.697 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:35:28 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1844: 305 pgs: 305 active+clean; 88 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 2.9 KiB/s wr, 71 op/s
Jan 29 12:35:28 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 12:35:28 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 12:35:28 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 12:35:28 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 12:35:28 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 12:35:28 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:35:28 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 12:35:28 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 12:35:28 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 12:35:28 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 12:35:28 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 12:35:28 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 12:35:28 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e479 do_prune osdmap full prune enabled
Jan 29 12:35:28 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 12:35:28 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:35:28 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 12:35:28 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e480 e480: 3 total, 3 up, 3 in
Jan 29 12:35:28 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e480: 3 total, 3 up, 3 in
Jan 29 12:35:28 np0005601226 podman[273481]: 2026-01-29 17:35:28.843396659 +0000 UTC m=+0.044770616 container create 13b0097b704f09438396c183970b192d77b2bde9621fe691d8876346a6cf78aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_nash, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:35:28 np0005601226 systemd[1]: Started libpod-conmon-13b0097b704f09438396c183970b192d77b2bde9621fe691d8876346a6cf78aa.scope.
Jan 29 12:35:28 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:35:28 np0005601226 podman[273481]: 2026-01-29 17:35:28.817805445 +0000 UTC m=+0.019179552 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:35:28 np0005601226 podman[273481]: 2026-01-29 17:35:28.923491604 +0000 UTC m=+0.124865541 container init 13b0097b704f09438396c183970b192d77b2bde9621fe691d8876346a6cf78aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_nash, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Jan 29 12:35:28 np0005601226 podman[273481]: 2026-01-29 17:35:28.928362645 +0000 UTC m=+0.129736562 container start 13b0097b704f09438396c183970b192d77b2bde9621fe691d8876346a6cf78aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_nash, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 29 12:35:28 np0005601226 fervent_nash[273497]: 167 167
Jan 29 12:35:28 np0005601226 systemd[1]: libpod-13b0097b704f09438396c183970b192d77b2bde9621fe691d8876346a6cf78aa.scope: Deactivated successfully.
Jan 29 12:35:28 np0005601226 podman[273481]: 2026-01-29 17:35:28.934548733 +0000 UTC m=+0.135922650 container attach 13b0097b704f09438396c183970b192d77b2bde9621fe691d8876346a6cf78aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_nash, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:35:28 np0005601226 podman[273481]: 2026-01-29 17:35:28.934823601 +0000 UTC m=+0.136197518 container died 13b0097b704f09438396c183970b192d77b2bde9621fe691d8876346a6cf78aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_nash, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 29 12:35:28 np0005601226 systemd[1]: var-lib-containers-storage-overlay-56e3ce9d2dc0d5198cfeb626e211ccc05c320acafcd0af95b4aa12b14fe0c891-merged.mount: Deactivated successfully.
Jan 29 12:35:28 np0005601226 podman[273481]: 2026-01-29 17:35:28.98969387 +0000 UTC m=+0.191067817 container remove 13b0097b704f09438396c183970b192d77b2bde9621fe691d8876346a6cf78aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_nash, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Jan 29 12:35:29 np0005601226 systemd[1]: libpod-conmon-13b0097b704f09438396c183970b192d77b2bde9621fe691d8876346a6cf78aa.scope: Deactivated successfully.
Jan 29 12:35:29 np0005601226 podman[273519]: 2026-01-29 17:35:29.150690449 +0000 UTC m=+0.047726506 container create 8d1486f9ce58df976502aad1f913b2565958b5b8268bc369e68d97ac51ff1b56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 29 12:35:29 np0005601226 systemd[1]: Started libpod-conmon-8d1486f9ce58df976502aad1f913b2565958b5b8268bc369e68d97ac51ff1b56.scope.
Jan 29 12:35:29 np0005601226 podman[273519]: 2026-01-29 17:35:29.125933267 +0000 UTC m=+0.022969384 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:35:29 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:35:29 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bb7efd8b5678fd870eb97bd9feb502e2b4a8966f844a92fa8137e34a93a202e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:35:29 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bb7efd8b5678fd870eb97bd9feb502e2b4a8966f844a92fa8137e34a93a202e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:35:29 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bb7efd8b5678fd870eb97bd9feb502e2b4a8966f844a92fa8137e34a93a202e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:35:29 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bb7efd8b5678fd870eb97bd9feb502e2b4a8966f844a92fa8137e34a93a202e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:35:29 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bb7efd8b5678fd870eb97bd9feb502e2b4a8966f844a92fa8137e34a93a202e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 12:35:29 np0005601226 podman[273519]: 2026-01-29 17:35:29.258229148 +0000 UTC m=+0.155265195 container init 8d1486f9ce58df976502aad1f913b2565958b5b8268bc369e68d97ac51ff1b56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_brattain, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 12:35:29 np0005601226 podman[273519]: 2026-01-29 17:35:29.265891605 +0000 UTC m=+0.162927662 container start 8d1486f9ce58df976502aad1f913b2565958b5b8268bc369e68d97ac51ff1b56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_brattain, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 29 12:35:29 np0005601226 podman[273519]: 2026-01-29 17:35:29.270747148 +0000 UTC m=+0.167783185 container attach 8d1486f9ce58df976502aad1f913b2565958b5b8268bc369e68d97ac51ff1b56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 29 12:35:29 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:35:29 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2592677189' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:35:29 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:35:29 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2592677189' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:35:29 np0005601226 wizardly_brattain[273535]: --> passed data devices: 0 physical, 3 LVM
Jan 29 12:35:29 np0005601226 wizardly_brattain[273535]: --> All data devices are unavailable
Jan 29 12:35:29 np0005601226 systemd[1]: libpod-8d1486f9ce58df976502aad1f913b2565958b5b8268bc369e68d97ac51ff1b56.scope: Deactivated successfully.
Jan 29 12:35:29 np0005601226 podman[273519]: 2026-01-29 17:35:29.690401926 +0000 UTC m=+0.587437963 container died 8d1486f9ce58df976502aad1f913b2565958b5b8268bc369e68d97ac51ff1b56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:35:29 np0005601226 systemd[1]: var-lib-containers-storage-overlay-9bb7efd8b5678fd870eb97bd9feb502e2b4a8966f844a92fa8137e34a93a202e-merged.mount: Deactivated successfully.
Jan 29 12:35:29 np0005601226 podman[273519]: 2026-01-29 17:35:29.745877042 +0000 UTC m=+0.642913109 container remove 8d1486f9ce58df976502aad1f913b2565958b5b8268bc369e68d97ac51ff1b56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_brattain, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:35:29 np0005601226 systemd[1]: libpod-conmon-8d1486f9ce58df976502aad1f913b2565958b5b8268bc369e68d97ac51ff1b56.scope: Deactivated successfully.
Jan 29 12:35:30 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1846: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 88 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 40 KiB/s wr, 94 op/s
Jan 29 12:35:30 np0005601226 podman[273632]: 2026-01-29 17:35:30.178692167 +0000 UTC m=+0.051098827 container create ca97571939e4f92dbd8e84b4fb3344573f092cf4f92b1d89a1246f1224bdd723 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_solomon, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:35:30 np0005601226 systemd[1]: Started libpod-conmon-ca97571939e4f92dbd8e84b4fb3344573f092cf4f92b1d89a1246f1224bdd723.scope.
Jan 29 12:35:30 np0005601226 podman[273632]: 2026-01-29 17:35:30.151120329 +0000 UTC m=+0.023527039 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:35:30 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:35:30 np0005601226 podman[273632]: 2026-01-29 17:35:30.27166614 +0000 UTC m=+0.144072840 container init ca97571939e4f92dbd8e84b4fb3344573f092cf4f92b1d89a1246f1224bdd723 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_solomon, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:35:30 np0005601226 podman[273632]: 2026-01-29 17:35:30.279065082 +0000 UTC m=+0.151471742 container start ca97571939e4f92dbd8e84b4fb3344573f092cf4f92b1d89a1246f1224bdd723 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_solomon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 12:35:30 np0005601226 podman[273632]: 2026-01-29 17:35:30.283992175 +0000 UTC m=+0.156398815 container attach ca97571939e4f92dbd8e84b4fb3344573f092cf4f92b1d89a1246f1224bdd723 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_solomon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 29 12:35:30 np0005601226 objective_solomon[273648]: 167 167
Jan 29 12:35:30 np0005601226 podman[273632]: 2026-01-29 17:35:30.285716162 +0000 UTC m=+0.158122822 container died ca97571939e4f92dbd8e84b4fb3344573f092cf4f92b1d89a1246f1224bdd723 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_solomon, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:35:30 np0005601226 systemd[1]: libpod-ca97571939e4f92dbd8e84b4fb3344573f092cf4f92b1d89a1246f1224bdd723.scope: Deactivated successfully.
Jan 29 12:35:30 np0005601226 systemd[1]: var-lib-containers-storage-overlay-85a2c9fd057db430381d105036795cc0ef93d1e4562fddfffdbf3d481590a4c6-merged.mount: Deactivated successfully.
Jan 29 12:35:30 np0005601226 podman[273632]: 2026-01-29 17:35:30.349536044 +0000 UTC m=+0.221942704 container remove ca97571939e4f92dbd8e84b4fb3344573f092cf4f92b1d89a1246f1224bdd723 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_solomon, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 29 12:35:30 np0005601226 systemd[1]: libpod-conmon-ca97571939e4f92dbd8e84b4fb3344573f092cf4f92b1d89a1246f1224bdd723.scope: Deactivated successfully.
Jan 29 12:35:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:35:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2308631078' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:35:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:35:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2308631078' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:35:30 np0005601226 podman[273671]: 2026-01-29 17:35:30.555700599 +0000 UTC m=+0.068457129 container create bfb92a289702f55da68608aec940215233f4b5ad7721613ad55e967cd91a0a2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:35:30 np0005601226 systemd[1]: Started libpod-conmon-bfb92a289702f55da68608aec940215233f4b5ad7721613ad55e967cd91a0a2f.scope.
Jan 29 12:35:30 np0005601226 podman[273671]: 2026-01-29 17:35:30.524694107 +0000 UTC m=+0.037450697 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:35:30 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:35:30 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d89c2252f9db4815b2657adb87718f698dc5c1e0a0c71c5f07060b0bf786ca1b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:35:30 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d89c2252f9db4815b2657adb87718f698dc5c1e0a0c71c5f07060b0bf786ca1b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:35:30 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d89c2252f9db4815b2657adb87718f698dc5c1e0a0c71c5f07060b0bf786ca1b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:35:30 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d89c2252f9db4815b2657adb87718f698dc5c1e0a0c71c5f07060b0bf786ca1b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:35:30 np0005601226 podman[273671]: 2026-01-29 17:35:30.664565303 +0000 UTC m=+0.177321883 container init bfb92a289702f55da68608aec940215233f4b5ad7721613ad55e967cd91a0a2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_brattain, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 29 12:35:30 np0005601226 podman[273671]: 2026-01-29 17:35:30.676426836 +0000 UTC m=+0.189183336 container start bfb92a289702f55da68608aec940215233f4b5ad7721613ad55e967cd91a0a2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_brattain, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 29 12:35:30 np0005601226 podman[273671]: 2026-01-29 17:35:30.679517529 +0000 UTC m=+0.192274029 container attach bfb92a289702f55da68608aec940215233f4b5ad7721613ad55e967cd91a0a2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]: {
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:    "0": [
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:        {
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:            "devices": [
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:                "/dev/loop3"
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:            ],
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:            "lv_name": "ceph_lv0",
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:            "lv_size": "21470642176",
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:            "lv_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:            "name": "ceph_lv0",
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:            "tags": {
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:                "ceph.block_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:                "ceph.cluster_name": "ceph",
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:                "ceph.crush_device_class": "",
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:                "ceph.encrypted": "0",
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:                "ceph.objectstore": "bluestore",
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:                "ceph.osd_fsid": "0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1",
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:                "ceph.osd_id": "0",
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:                "ceph.type": "block",
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:                "ceph.vdo": "0",
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:                "ceph.with_tpm": "0"
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:            },
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:            "type": "block",
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:            "vg_name": "ceph_vg0"
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:        }
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:    ],
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:    "1": [
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:        {
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:            "devices": [
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:                "/dev/loop4"
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:            ],
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:            "lv_name": "ceph_lv1",
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:            "lv_size": "21470642176",
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b59b9ee3-7bef-4274-a8bf-0f9cce011ae7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:            "lv_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:            "name": "ceph_lv1",
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:            "tags": {
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:                "ceph.block_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:                "ceph.cluster_name": "ceph",
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:                "ceph.crush_device_class": "",
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:                "ceph.encrypted": "0",
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:                "ceph.objectstore": "bluestore",
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:                "ceph.osd_fsid": "b59b9ee3-7bef-4274-a8bf-0f9cce011ae7",
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:                "ceph.osd_id": "1",
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:                "ceph.type": "block",
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:                "ceph.vdo": "0",
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:                "ceph.with_tpm": "0"
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:            },
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:            "type": "block",
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:            "vg_name": "ceph_vg1"
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:        }
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:    ],
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:    "2": [
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:        {
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:            "devices": [
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:                "/dev/loop5"
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:            ],
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:            "lv_name": "ceph_lv2",
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:            "lv_size": "21470642176",
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=791d3808-828d-4f85-a3de-28df49f6a6ef,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:            "lv_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:            "name": "ceph_lv2",
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:            "tags": {
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:                "ceph.block_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:                "ceph.cluster_name": "ceph",
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:                "ceph.crush_device_class": "",
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:                "ceph.encrypted": "0",
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:                "ceph.objectstore": "bluestore",
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:                "ceph.osd_fsid": "791d3808-828d-4f85-a3de-28df49f6a6ef",
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:                "ceph.osd_id": "2",
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:                "ceph.type": "block",
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:                "ceph.vdo": "0",
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:                "ceph.with_tpm": "0"
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:            },
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:            "type": "block",
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:            "vg_name": "ceph_vg2"
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:        }
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]:    ]
Jan 29 12:35:30 np0005601226 blissful_brattain[273687]: }
Jan 29 12:35:30 np0005601226 systemd[1]: libpod-bfb92a289702f55da68608aec940215233f4b5ad7721613ad55e967cd91a0a2f.scope: Deactivated successfully.
Jan 29 12:35:30 np0005601226 podman[273671]: 2026-01-29 17:35:30.992497833 +0000 UTC m=+0.505254343 container died bfb92a289702f55da68608aec940215233f4b5ad7721613ad55e967cd91a0a2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_brattain, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 29 12:35:31 np0005601226 systemd[1]: var-lib-containers-storage-overlay-d89c2252f9db4815b2657adb87718f698dc5c1e0a0c71c5f07060b0bf786ca1b-merged.mount: Deactivated successfully.
Jan 29 12:35:31 np0005601226 podman[273671]: 2026-01-29 17:35:31.036759664 +0000 UTC m=+0.549516194 container remove bfb92a289702f55da68608aec940215233f4b5ad7721613ad55e967cd91a0a2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_brattain, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:35:31 np0005601226 systemd[1]: libpod-conmon-bfb92a289702f55da68608aec940215233f4b5ad7721613ad55e967cd91a0a2f.scope: Deactivated successfully.
Jan 29 12:35:31 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e480 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:35:31 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e480 do_prune osdmap full prune enabled
Jan 29 12:35:31 np0005601226 podman[273771]: 2026-01-29 17:35:31.470165706 +0000 UTC m=+0.023189080 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:35:31 np0005601226 podman[273771]: 2026-01-29 17:35:31.572466193 +0000 UTC m=+0.125489577 container create 4d8c6c1b3fcc56fd35b30f4adeb9d2d478f4ec417887904d6292b5ae40b87a51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:35:31 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e481 e481: 3 total, 3 up, 3 in
Jan 29 12:35:31 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e481: 3 total, 3 up, 3 in
Jan 29 12:35:31 np0005601226 ceph-mon[75233]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #75. Immutable memtables: 0.
Jan 29 12:35:31 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:35:31.591420) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 29 12:35:31 np0005601226 ceph-mon[75233]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 75
Jan 29 12:35:31 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769708131591524, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 334, "num_deletes": 252, "total_data_size": 116133, "memory_usage": 122512, "flush_reason": "Manual Compaction"}
Jan 29 12:35:31 np0005601226 ceph-mon[75233]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #76: started
Jan 29 12:35:31 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769708131595273, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 76, "file_size": 114782, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36508, "largest_seqno": 36841, "table_properties": {"data_size": 112670, "index_size": 276, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5769, "raw_average_key_size": 19, "raw_value_size": 108231, "raw_average_value_size": 359, "num_data_blocks": 13, "num_entries": 301, "num_filter_entries": 301, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769708127, "oldest_key_time": 1769708127, "file_creation_time": 1769708131, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "affa2982-d59d-4189-b5dd-817a80fada55", "db_session_id": "3LVBT2JQJ5HZ0LRVKGW6", "orig_file_number": 76, "seqno_to_time_mapping": "N/A"}}
Jan 29 12:35:31 np0005601226 ceph-mon[75233]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 3979 microseconds, and 1028 cpu microseconds.
Jan 29 12:35:31 np0005601226 ceph-mon[75233]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 29 12:35:31 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:35:31.595393) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #76: 114782 bytes OK
Jan 29 12:35:31 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:35:31.595444) [db/memtable_list.cc:519] [default] Level-0 commit table #76 started
Jan 29 12:35:31 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:35:31.599802) [db/memtable_list.cc:722] [default] Level-0 commit table #76: memtable #1 done
Jan 29 12:35:31 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:35:31.599831) EVENT_LOG_v1 {"time_micros": 1769708131599822, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 29 12:35:31 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:35:31.599854) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 29 12:35:31 np0005601226 ceph-mon[75233]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 113783, prev total WAL file size 115304, number of live WAL files 2.
Jan 29 12:35:31 np0005601226 ceph-mon[75233]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000072.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 12:35:31 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:35:31.600406) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Jan 29 12:35:31 np0005601226 ceph-mon[75233]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 29 12:35:31 np0005601226 ceph-mon[75233]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [76(112KB)], [74(12MB)]
Jan 29 12:35:31 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769708131600445, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [76], "files_L6": [74], "score": -1, "input_data_size": 12876960, "oldest_snapshot_seqno": -1}
Jan 29 12:35:31 np0005601226 systemd[1]: Started libpod-conmon-4d8c6c1b3fcc56fd35b30f4adeb9d2d478f4ec417887904d6292b5ae40b87a51.scope.
Jan 29 12:35:31 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:35:31 np0005601226 ceph-mon[75233]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #77: 6732 keys, 11073738 bytes, temperature: kUnknown
Jan 29 12:35:31 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769708131664389, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 77, "file_size": 11073738, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11022873, "index_size": 32961, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16837, "raw_key_size": 170919, "raw_average_key_size": 25, "raw_value_size": 10896057, "raw_average_value_size": 1618, "num_data_blocks": 1310, "num_entries": 6732, "num_filter_entries": 6732, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769705351, "oldest_key_time": 0, "file_creation_time": 1769708131, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "affa2982-d59d-4189-b5dd-817a80fada55", "db_session_id": "3LVBT2JQJ5HZ0LRVKGW6", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Jan 29 12:35:31 np0005601226 ceph-mon[75233]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 29 12:35:31 np0005601226 podman[273771]: 2026-01-29 17:35:31.670529794 +0000 UTC m=+0.223553148 container init 4d8c6c1b3fcc56fd35b30f4adeb9d2d478f4ec417887904d6292b5ae40b87a51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:35:31 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:35:31.665103) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 11073738 bytes
Jan 29 12:35:31 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:35:31.677396) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 200.3 rd, 172.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 12.2 +0.0 blob) out(10.6 +0.0 blob), read-write-amplify(208.7) write-amplify(96.5) OK, records in: 7249, records dropped: 517 output_compression: NoCompression
Jan 29 12:35:31 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:35:31.677430) EVENT_LOG_v1 {"time_micros": 1769708131677416, "job": 42, "event": "compaction_finished", "compaction_time_micros": 64278, "compaction_time_cpu_micros": 23860, "output_level": 6, "num_output_files": 1, "total_output_size": 11073738, "num_input_records": 7249, "num_output_records": 6732, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 29 12:35:31 np0005601226 ceph-mon[75233]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000076.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 12:35:31 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769708131677628, "job": 42, "event": "table_file_deletion", "file_number": 76}
Jan 29 12:35:31 np0005601226 podman[273771]: 2026-01-29 17:35:31.677817942 +0000 UTC m=+0.230841296 container start 4d8c6c1b3fcc56fd35b30f4adeb9d2d478f4ec417887904d6292b5ae40b87a51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 29 12:35:31 np0005601226 ceph-mon[75233]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 12:35:31 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769708131678922, "job": 42, "event": "table_file_deletion", "file_number": 74}
Jan 29 12:35:31 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:35:31.600313) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:35:31 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:35:31.679020) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:35:31 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:35:31.679028) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:35:31 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:35:31.679031) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:35:31 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:35:31.679034) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:35:31 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:35:31.679037) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:35:31 np0005601226 sharp_liskov[273788]: 167 167
Jan 29 12:35:31 np0005601226 systemd[1]: libpod-4d8c6c1b3fcc56fd35b30f4adeb9d2d478f4ec417887904d6292b5ae40b87a51.scope: Deactivated successfully.
Jan 29 12:35:31 np0005601226 podman[273771]: 2026-01-29 17:35:31.691248486 +0000 UTC m=+0.244271860 container attach 4d8c6c1b3fcc56fd35b30f4adeb9d2d478f4ec417887904d6292b5ae40b87a51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_liskov, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 12:35:31 np0005601226 podman[273771]: 2026-01-29 17:35:31.691580935 +0000 UTC m=+0.244604289 container died 4d8c6c1b3fcc56fd35b30f4adeb9d2d478f4ec417887904d6292b5ae40b87a51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 29 12:35:31 np0005601226 systemd[1]: var-lib-containers-storage-overlay-f403638a00b87fbf6ea8234948b85ecbd653bdc3f4e46c4d7c5c112e3f193f2b-merged.mount: Deactivated successfully.
Jan 29 12:35:31 np0005601226 podman[273771]: 2026-01-29 17:35:31.758341426 +0000 UTC m=+0.311364780 container remove 4d8c6c1b3fcc56fd35b30f4adeb9d2d478f4ec417887904d6292b5ae40b87a51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_liskov, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 29 12:35:31 np0005601226 systemd[1]: libpod-conmon-4d8c6c1b3fcc56fd35b30f4adeb9d2d478f4ec417887904d6292b5ae40b87a51.scope: Deactivated successfully.
Jan 29 12:35:31 np0005601226 podman[273813]: 2026-01-29 17:35:31.910793074 +0000 UTC m=+0.039897503 container create 5fc08bc8e628fc417718b2e7c3c92e1b9cf82b64d070858f38ca50d1370082f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:35:31 np0005601226 systemd[1]: Started libpod-conmon-5fc08bc8e628fc417718b2e7c3c92e1b9cf82b64d070858f38ca50d1370082f3.scope.
Jan 29 12:35:31 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:35:31 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3a163b64fe9e81fc40991173dada56964445e1ac803a46d4e22defe744a272b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:35:31 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3a163b64fe9e81fc40991173dada56964445e1ac803a46d4e22defe744a272b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:35:31 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3a163b64fe9e81fc40991173dada56964445e1ac803a46d4e22defe744a272b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:35:31 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3a163b64fe9e81fc40991173dada56964445e1ac803a46d4e22defe744a272b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:35:31 np0005601226 podman[273813]: 2026-01-29 17:35:31.980831854 +0000 UTC m=+0.109936333 container init 5fc08bc8e628fc417718b2e7c3c92e1b9cf82b64d070858f38ca50d1370082f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_jepsen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:35:31 np0005601226 podman[273813]: 2026-01-29 17:35:31.894026769 +0000 UTC m=+0.023131218 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:35:31 np0005601226 podman[273813]: 2026-01-29 17:35:31.994023913 +0000 UTC m=+0.123128372 container start 5fc08bc8e628fc417718b2e7c3c92e1b9cf82b64d070858f38ca50d1370082f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_jepsen, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:35:31 np0005601226 podman[273813]: 2026-01-29 17:35:31.997797495 +0000 UTC m=+0.126901954 container attach 5fc08bc8e628fc417718b2e7c3c92e1b9cf82b64d070858f38ca50d1370082f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:35:32 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1848: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 88 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 47 KiB/s wr, 58 op/s
Jan 29 12:35:32 np0005601226 nova_compute[239456]: 2026-01-29 17:35:32.545 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:35:32 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e481 do_prune osdmap full prune enabled
Jan 29 12:35:32 np0005601226 nova_compute[239456]: 2026-01-29 17:35:32.603 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:35:32 np0005601226 nova_compute[239456]: 2026-01-29 17:35:32.603 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 29 12:35:32 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e482 e482: 3 total, 3 up, 3 in
Jan 29 12:35:32 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e482: 3 total, 3 up, 3 in
Jan 29 12:35:32 np0005601226 lvm[273907]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 12:35:32 np0005601226 lvm[273907]: VG ceph_vg0 finished
Jan 29 12:35:32 np0005601226 lvm[273908]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 12:35:32 np0005601226 lvm[273908]: VG ceph_vg1 finished
Jan 29 12:35:32 np0005601226 lvm[273910]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 12:35:32 np0005601226 lvm[273910]: VG ceph_vg2 finished
Jan 29 12:35:32 np0005601226 nova_compute[239456]: 2026-01-29 17:35:32.697 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:35:32 np0005601226 amazing_jepsen[273829]: {}
Jan 29 12:35:32 np0005601226 systemd[1]: libpod-5fc08bc8e628fc417718b2e7c3c92e1b9cf82b64d070858f38ca50d1370082f3.scope: Deactivated successfully.
Jan 29 12:35:32 np0005601226 systemd[1]: libpod-5fc08bc8e628fc417718b2e7c3c92e1b9cf82b64d070858f38ca50d1370082f3.scope: Consumed 1.098s CPU time.
Jan 29 12:35:32 np0005601226 podman[273813]: 2026-01-29 17:35:32.763732191 +0000 UTC m=+0.892836650 container died 5fc08bc8e628fc417718b2e7c3c92e1b9cf82b64d070858f38ca50d1370082f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_jepsen, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:35:32 np0005601226 systemd[1]: var-lib-containers-storage-overlay-f3a163b64fe9e81fc40991173dada56964445e1ac803a46d4e22defe744a272b-merged.mount: Deactivated successfully.
Jan 29 12:35:32 np0005601226 podman[273813]: 2026-01-29 17:35:32.851929005 +0000 UTC m=+0.981033454 container remove 5fc08bc8e628fc417718b2e7c3c92e1b9cf82b64d070858f38ca50d1370082f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 29 12:35:32 np0005601226 systemd[1]: libpod-conmon-5fc08bc8e628fc417718b2e7c3c92e1b9cf82b64d070858f38ca50d1370082f3.scope: Deactivated successfully.
Jan 29 12:35:32 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 12:35:32 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:35:32 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 12:35:32 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:35:33 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:35:33 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:35:34 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1850: 305 pgs: 305 active+clean; 88 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 47 KiB/s wr, 58 op/s
Jan 29 12:35:34 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e482 do_prune osdmap full prune enabled
Jan 29 12:35:34 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e483 e483: 3 total, 3 up, 3 in
Jan 29 12:35:34 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e483: 3 total, 3 up, 3 in
Jan 29 12:35:35 np0005601226 nova_compute[239456]: 2026-01-29 17:35:35.600 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:35:36 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:35:36 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2694018780' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:35:36 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:35:36 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2694018780' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:35:36 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1852: 305 pgs: 305 active+clean; 88 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 3.2 KiB/s wr, 60 op/s
Jan 29 12:35:36 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e483 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:35:36 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e483 do_prune osdmap full prune enabled
Jan 29 12:35:36 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e484 e484: 3 total, 3 up, 3 in
Jan 29 12:35:36 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e484: 3 total, 3 up, 3 in
Jan 29 12:35:36 np0005601226 podman[273953]: 2026-01-29 17:35:36.935657031 +0000 UTC m=+0.081687508 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent)
Jan 29 12:35:36 np0005601226 podman[273954]: 2026-01-29 17:35:36.981874755 +0000 UTC m=+0.127991294 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true)
Jan 29 12:35:37 np0005601226 nova_compute[239456]: 2026-01-29 17:35:37.547 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:35:37 np0005601226 nova_compute[239456]: 2026-01-29 17:35:37.604 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:35:37 np0005601226 nova_compute[239456]: 2026-01-29 17:35:37.699 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:35:37 np0005601226 nova_compute[239456]: 2026-01-29 17:35:37.757 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:35:37 np0005601226 nova_compute[239456]: 2026-01-29 17:35:37.758 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:35:37 np0005601226 nova_compute[239456]: 2026-01-29 17:35:37.758 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:35:37 np0005601226 nova_compute[239456]: 2026-01-29 17:35:37.758 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 29 12:35:37 np0005601226 nova_compute[239456]: 2026-01-29 17:35:37.759 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:35:38 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1854: 305 pgs: 305 active+clean; 88 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 1.7 KiB/s wr, 42 op/s
Jan 29 12:35:38 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:35:38 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/940145356' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:35:38 np0005601226 nova_compute[239456]: 2026-01-29 17:35:38.296 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:35:38 np0005601226 nova_compute[239456]: 2026-01-29 17:35:38.529 239460 WARNING nova.virt.libvirt.driver [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 29 12:35:38 np0005601226 nova_compute[239456]: 2026-01-29 17:35:38.531 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4304MB free_disk=59.988158259540796GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 29 12:35:38 np0005601226 nova_compute[239456]: 2026-01-29 17:35:38.531 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:35:38 np0005601226 nova_compute[239456]: 2026-01-29 17:35:38.532 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:35:38 np0005601226 nova_compute[239456]: 2026-01-29 17:35:38.605 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 29 12:35:38 np0005601226 nova_compute[239456]: 2026-01-29 17:35:38.605 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 29 12:35:38 np0005601226 nova_compute[239456]: 2026-01-29 17:35:38.621 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:35:38 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e484 do_prune osdmap full prune enabled
Jan 29 12:35:38 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e485 e485: 3 total, 3 up, 3 in
Jan 29 12:35:38 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e485: 3 total, 3 up, 3 in
Jan 29 12:35:39 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:35:39 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4111409306' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:35:39 np0005601226 nova_compute[239456]: 2026-01-29 17:35:39.173 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:35:39 np0005601226 nova_compute[239456]: 2026-01-29 17:35:39.181 239460 DEBUG nova.compute.provider_tree [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:35:39 np0005601226 nova_compute[239456]: 2026-01-29 17:35:39.198 239460 DEBUG nova.scheduler.client.report [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:35:39 np0005601226 nova_compute[239456]: 2026-01-29 17:35:39.225 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 29 12:35:39 np0005601226 nova_compute[239456]: 2026-01-29 17:35:39.226 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:35:39 np0005601226 ovn_controller[145556]: 2026-01-29T17:35:39Z|00250|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Jan 29 12:35:39 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e485 do_prune osdmap full prune enabled
Jan 29 12:35:39 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e486 e486: 3 total, 3 up, 3 in
Jan 29 12:35:39 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e486: 3 total, 3 up, 3 in
Jan 29 12:35:40 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1857: 305 pgs: 305 active+clean; 88 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 1.5 KiB/s wr, 40 op/s
Jan 29 12:35:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:35:40.297 155625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:35:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:35:40.298 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:35:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:35:40.298 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:35:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:35:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:35:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:35:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:35:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:35:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:35:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2026-01-29_17:35:40
Jan 29 12:35:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 12:35:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] do_upmap
Jan 29 12:35:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.control', '.rgw.root', 'backups', 'images', 'volumes', '.mgr', 'default.rgw.log', 'cephfs.cephfs.data']
Jan 29 12:35:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 12:35:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 12:35:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:35:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 12:35:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:35:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:35:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:35:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:35:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:35:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:35:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:35:41 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:35:41 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/299429031' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:35:41 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:35:41 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/299429031' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:35:41 np0005601226 nova_compute[239456]: 2026-01-29 17:35:41.227 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:35:41 np0005601226 nova_compute[239456]: 2026-01-29 17:35:41.228 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:35:41 np0005601226 nova_compute[239456]: 2026-01-29 17:35:41.604 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:35:41 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e486 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:35:41 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e486 do_prune osdmap full prune enabled
Jan 29 12:35:41 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e487 e487: 3 total, 3 up, 3 in
Jan 29 12:35:41 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e487: 3 total, 3 up, 3 in
Jan 29 12:35:42 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1859: 305 pgs: 305 active+clean; 88 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 1.5 KiB/s wr, 41 op/s
Jan 29 12:35:42 np0005601226 nova_compute[239456]: 2026-01-29 17:35:42.599 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:35:42 np0005601226 nova_compute[239456]: 2026-01-29 17:35:42.700 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:35:44 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1860: 305 pgs: 305 active+clean; 136 MiB data, 467 MiB used, 60 GiB / 60 GiB avail; 99 KiB/s rd, 7.7 MiB/s wr, 142 op/s
Jan 29 12:35:44 np0005601226 nova_compute[239456]: 2026-01-29 17:35:44.599 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:35:44 np0005601226 nova_compute[239456]: 2026-01-29 17:35:44.603 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:35:44 np0005601226 nova_compute[239456]: 2026-01-29 17:35:44.603 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 29 12:35:44 np0005601226 nova_compute[239456]: 2026-01-29 17:35:44.603 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 29 12:35:44 np0005601226 nova_compute[239456]: 2026-01-29 17:35:44.619 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 29 12:35:46 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1861: 305 pgs: 305 active+clean; 180 MiB data, 511 MiB used, 59 GiB / 60 GiB avail; 94 KiB/s rd, 13 MiB/s wr, 135 op/s
Jan 29 12:35:46 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e487 do_prune osdmap full prune enabled
Jan 29 12:35:46 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e488 e488: 3 total, 3 up, 3 in
Jan 29 12:35:46 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e488: 3 total, 3 up, 3 in
Jan 29 12:35:46 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e488 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:35:46 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e488 do_prune osdmap full prune enabled
Jan 29 12:35:46 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e489 e489: 3 total, 3 up, 3 in
Jan 29 12:35:46 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e489: 3 total, 3 up, 3 in
Jan 29 12:35:47 np0005601226 nova_compute[239456]: 2026-01-29 17:35:47.601 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:35:47 np0005601226 nova_compute[239456]: 2026-01-29 17:35:47.602 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:35:47 np0005601226 nova_compute[239456]: 2026-01-29 17:35:47.701 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:35:48 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1864: 305 pgs: 305 active+clean; 202 MiB data, 533 MiB used, 59 GiB / 60 GiB avail; 94 KiB/s rd, 18 MiB/s wr, 137 op/s
Jan 29 12:35:48 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e489 do_prune osdmap full prune enabled
Jan 29 12:35:48 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e490 e490: 3 total, 3 up, 3 in
Jan 29 12:35:48 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e490: 3 total, 3 up, 3 in
Jan 29 12:35:49 np0005601226 nova_compute[239456]: 2026-01-29 17:35:49.511 239460 DEBUG oslo_concurrency.lockutils [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Acquiring lock "1c950583-8182-4826-a70d-227f3e018779" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:35:49 np0005601226 nova_compute[239456]: 2026-01-29 17:35:49.512 239460 DEBUG oslo_concurrency.lockutils [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Lock "1c950583-8182-4826-a70d-227f3e018779" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:35:49 np0005601226 nova_compute[239456]: 2026-01-29 17:35:49.554 239460 DEBUG nova.compute.manager [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 1c950583-8182-4826-a70d-227f3e018779] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 29 12:35:49 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:35:49 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/917431613' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:35:49 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:35:49 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/917431613' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:35:49 np0005601226 nova_compute[239456]: 2026-01-29 17:35:49.657 239460 DEBUG oslo_concurrency.lockutils [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:35:49 np0005601226 nova_compute[239456]: 2026-01-29 17:35:49.657 239460 DEBUG oslo_concurrency.lockutils [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:35:49 np0005601226 nova_compute[239456]: 2026-01-29 17:35:49.675 239460 DEBUG nova.virt.hardware [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 29 12:35:49 np0005601226 nova_compute[239456]: 2026-01-29 17:35:49.675 239460 INFO nova.compute.claims [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 1c950583-8182-4826-a70d-227f3e018779] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 29 12:35:49 np0005601226 nova_compute[239456]: 2026-01-29 17:35:49.804 239460 DEBUG oslo_concurrency.processutils [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:35:50 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1866: 305 pgs: 305 active+clean; 202 MiB data, 553 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 11 MiB/s wr, 54 op/s
Jan 29 12:35:50 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:35:50 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3655928241' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:35:50 np0005601226 nova_compute[239456]: 2026-01-29 17:35:50.364 239460 DEBUG oslo_concurrency.processutils [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:35:50 np0005601226 nova_compute[239456]: 2026-01-29 17:35:50.370 239460 DEBUG nova.compute.provider_tree [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:35:50 np0005601226 nova_compute[239456]: 2026-01-29 17:35:50.390 239460 DEBUG nova.scheduler.client.report [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:35:50 np0005601226 nova_compute[239456]: 2026-01-29 17:35:50.415 239460 DEBUG oslo_concurrency.lockutils [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.757s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:35:50 np0005601226 nova_compute[239456]: 2026-01-29 17:35:50.415 239460 DEBUG nova.compute.manager [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 1c950583-8182-4826-a70d-227f3e018779] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 29 12:35:50 np0005601226 nova_compute[239456]: 2026-01-29 17:35:50.472 239460 DEBUG nova.compute.manager [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 1c950583-8182-4826-a70d-227f3e018779] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 29 12:35:50 np0005601226 nova_compute[239456]: 2026-01-29 17:35:50.473 239460 DEBUG nova.network.neutron [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 1c950583-8182-4826-a70d-227f3e018779] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 29 12:35:50 np0005601226 nova_compute[239456]: 2026-01-29 17:35:50.495 239460 INFO nova.virt.libvirt.driver [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 1c950583-8182-4826-a70d-227f3e018779] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 29 12:35:50 np0005601226 nova_compute[239456]: 2026-01-29 17:35:50.515 239460 DEBUG nova.compute.manager [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 1c950583-8182-4826-a70d-227f3e018779] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 29 12:35:50 np0005601226 nova_compute[239456]: 2026-01-29 17:35:50.561 239460 INFO nova.virt.block_device [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 1c950583-8182-4826-a70d-227f3e018779] Booting with volume 1cb2bcce-3ecf-415b-a1c5-b24a29f4378f at /dev/vda#033[00m
Jan 29 12:35:50 np0005601226 nova_compute[239456]: 2026-01-29 17:35:50.676 239460 DEBUG nova.policy [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '90bbb3ba09534f74aedaab7650ed5ba4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9c3315c8b4c543a38f07ec0c509f03c1', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 29 12:35:50 np0005601226 nova_compute[239456]: 2026-01-29 17:35:50.732 239460 DEBUG os_brick.utils [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 29 12:35:50 np0005601226 nova_compute[239456]: 2026-01-29 17:35:50.734 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:35:50 np0005601226 nova_compute[239456]: 2026-01-29 17:35:50.746 249612 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:35:50 np0005601226 nova_compute[239456]: 2026-01-29 17:35:50.747 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[1ac7452e-4acf-4dad-9342-22bf615ed780]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:35:50 np0005601226 nova_compute[239456]: 2026-01-29 17:35:50.747 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:35:50 np0005601226 nova_compute[239456]: 2026-01-29 17:35:50.754 249612 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:35:50 np0005601226 nova_compute[239456]: 2026-01-29 17:35:50.754 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[e740cc33-cd6b-46bb-a60d-78883bf44f29]: (4, ('InitiatorName=iqn.1994-05.com.redhat:29fa340538f', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:35:50 np0005601226 nova_compute[239456]: 2026-01-29 17:35:50.756 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:35:50 np0005601226 nova_compute[239456]: 2026-01-29 17:35:50.764 249612 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:35:50 np0005601226 nova_compute[239456]: 2026-01-29 17:35:50.765 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[8094f091-1891-49d4-804e-1c1d4d0a3b12]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:35:50 np0005601226 nova_compute[239456]: 2026-01-29 17:35:50.766 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[49d918f1-5e94-4468-86f4-dfb5591d72bc]: (4, '3d58286e-1b14-486e-8cad-0bdb2d2969c4') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:35:50 np0005601226 nova_compute[239456]: 2026-01-29 17:35:50.766 239460 DEBUG oslo_concurrency.processutils [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:35:50 np0005601226 nova_compute[239456]: 2026-01-29 17:35:50.788 239460 DEBUG oslo_concurrency.processutils [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] CMD "nvme version" returned: 0 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:35:50 np0005601226 nova_compute[239456]: 2026-01-29 17:35:50.789 239460 DEBUG os_brick.initiator.connectors.lightos [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 29 12:35:50 np0005601226 nova_compute[239456]: 2026-01-29 17:35:50.790 239460 DEBUG os_brick.initiator.connectors.lightos [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 29 12:35:50 np0005601226 nova_compute[239456]: 2026-01-29 17:35:50.790 239460 DEBUG os_brick.initiator.connectors.lightos [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 29 12:35:50 np0005601226 nova_compute[239456]: 2026-01-29 17:35:50.790 239460 DEBUG os_brick.utils [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] <== get_connector_properties: return (57ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:29fa340538f', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '3d58286e-1b14-486e-8cad-0bdb2d2969c4', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 29 12:35:50 np0005601226 nova_compute[239456]: 2026-01-29 17:35:50.790 239460 DEBUG nova.virt.block_device [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 1c950583-8182-4826-a70d-227f3e018779] Updating existing volume attachment record: 7ab430e2-5b45-406e-9630-ed493050b9c9 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 29 12:35:51 np0005601226 nova_compute[239456]: 2026-01-29 17:35:51.352 239460 DEBUG nova.network.neutron [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 1c950583-8182-4826-a70d-227f3e018779] Successfully created port: 6ed79400-df99-4990-bcf9-2b653ae874ce _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 29 12:35:51 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:35:51 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1999255158' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:35:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 12:35:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:35:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 12:35:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:35:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 2.0515921060450727e-06 of space, bias 1.0, pg target 0.0006154776318135218 quantized to 32 (current 32)
Jan 29 12:35:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:35:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021966875177765834 of space, bias 1.0, pg target 0.6590062553329751 quantized to 32 (current 32)
Jan 29 12:35:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:35:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 4.695155327616783e-06 of space, bias 1.0, pg target 0.001408546598285035 quantized to 32 (current 32)
Jan 29 12:35:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:35:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006669461591297926 of space, bias 1.0, pg target 0.20008384773893778 quantized to 32 (current 32)
Jan 29 12:35:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:35:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4512995502372315e-06 of space, bias 4.0, pg target 0.0017415594602846777 quantized to 16 (current 16)
Jan 29 12:35:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:35:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:35:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:35:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 29 12:35:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:35:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 29 12:35:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:35:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:35:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:35:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 29 12:35:51 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:35:51 np0005601226 nova_compute[239456]: 2026-01-29 17:35:51.816 239460 DEBUG nova.compute.manager [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 1c950583-8182-4826-a70d-227f3e018779] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 29 12:35:51 np0005601226 nova_compute[239456]: 2026-01-29 17:35:51.818 239460 DEBUG nova.virt.libvirt.driver [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 1c950583-8182-4826-a70d-227f3e018779] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 29 12:35:51 np0005601226 nova_compute[239456]: 2026-01-29 17:35:51.819 239460 INFO nova.virt.libvirt.driver [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 1c950583-8182-4826-a70d-227f3e018779] Creating image(s)#033[00m
Jan 29 12:35:51 np0005601226 nova_compute[239456]: 2026-01-29 17:35:51.820 239460 DEBUG nova.virt.libvirt.driver [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 1c950583-8182-4826-a70d-227f3e018779] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Jan 29 12:35:51 np0005601226 nova_compute[239456]: 2026-01-29 17:35:51.820 239460 DEBUG nova.virt.libvirt.driver [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 1c950583-8182-4826-a70d-227f3e018779] Ensure instance console log exists: /var/lib/nova/instances/1c950583-8182-4826-a70d-227f3e018779/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 29 12:35:51 np0005601226 nova_compute[239456]: 2026-01-29 17:35:51.821 239460 DEBUG oslo_concurrency.lockutils [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:35:51 np0005601226 nova_compute[239456]: 2026-01-29 17:35:51.821 239460 DEBUG oslo_concurrency.lockutils [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:35:51 np0005601226 nova_compute[239456]: 2026-01-29 17:35:51.822 239460 DEBUG oslo_concurrency.lockutils [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:35:52 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1867: 305 pgs: 305 active+clean; 202 MiB data, 553 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 3.7 MiB/s wr, 26 op/s
Jan 29 12:35:52 np0005601226 nova_compute[239456]: 2026-01-29 17:35:52.174 239460 DEBUG nova.network.neutron [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 1c950583-8182-4826-a70d-227f3e018779] Successfully updated port: 6ed79400-df99-4990-bcf9-2b653ae874ce _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 29 12:35:52 np0005601226 nova_compute[239456]: 2026-01-29 17:35:52.191 239460 DEBUG oslo_concurrency.lockutils [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Acquiring lock "refresh_cache-1c950583-8182-4826-a70d-227f3e018779" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:35:52 np0005601226 nova_compute[239456]: 2026-01-29 17:35:52.191 239460 DEBUG oslo_concurrency.lockutils [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Acquired lock "refresh_cache-1c950583-8182-4826-a70d-227f3e018779" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:35:52 np0005601226 nova_compute[239456]: 2026-01-29 17:35:52.192 239460 DEBUG nova.network.neutron [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 1c950583-8182-4826-a70d-227f3e018779] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 29 12:35:52 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e490 do_prune osdmap full prune enabled
Jan 29 12:35:52 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e491 e491: 3 total, 3 up, 3 in
Jan 29 12:35:52 np0005601226 nova_compute[239456]: 2026-01-29 17:35:52.262 239460 DEBUG nova.compute.manager [req-c0a1e5e8-895a-4a7c-9e7d-b70b66c1b33c req-0c0c7f86-8965-4540-a086-52b87a3fc3a8 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 1c950583-8182-4826-a70d-227f3e018779] Received event network-changed-6ed79400-df99-4990-bcf9-2b653ae874ce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:35:52 np0005601226 nova_compute[239456]: 2026-01-29 17:35:52.263 239460 DEBUG nova.compute.manager [req-c0a1e5e8-895a-4a7c-9e7d-b70b66c1b33c req-0c0c7f86-8965-4540-a086-52b87a3fc3a8 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 1c950583-8182-4826-a70d-227f3e018779] Refreshing instance network info cache due to event network-changed-6ed79400-df99-4990-bcf9-2b653ae874ce. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 29 12:35:52 np0005601226 nova_compute[239456]: 2026-01-29 17:35:52.263 239460 DEBUG oslo_concurrency.lockutils [req-c0a1e5e8-895a-4a7c-9e7d-b70b66c1b33c req-0c0c7f86-8965-4540-a086-52b87a3fc3a8 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "refresh_cache-1c950583-8182-4826-a70d-227f3e018779" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:35:52 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e491: 3 total, 3 up, 3 in
Jan 29 12:35:52 np0005601226 nova_compute[239456]: 2026-01-29 17:35:52.350 239460 DEBUG nova.network.neutron [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 1c950583-8182-4826-a70d-227f3e018779] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 29 12:35:52 np0005601226 nova_compute[239456]: 2026-01-29 17:35:52.647 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:35:52 np0005601226 nova_compute[239456]: 2026-01-29 17:35:52.703 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:35:53 np0005601226 nova_compute[239456]: 2026-01-29 17:35:53.053 239460 DEBUG nova.network.neutron [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 1c950583-8182-4826-a70d-227f3e018779] Updating instance_info_cache with network_info: [{"id": "6ed79400-df99-4990-bcf9-2b653ae874ce", "address": "fa:16:3e:84:82:76", "network": {"id": "9275d605-e314-4c83-a4e8-f4ba085f6358", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1160921638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c3315c8b4c543a38f07ec0c509f03c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ed79400-df", "ovs_interfaceid": "6ed79400-df99-4990-bcf9-2b653ae874ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:35:53 np0005601226 nova_compute[239456]: 2026-01-29 17:35:53.073 239460 DEBUG oslo_concurrency.lockutils [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Releasing lock "refresh_cache-1c950583-8182-4826-a70d-227f3e018779" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:35:53 np0005601226 nova_compute[239456]: 2026-01-29 17:35:53.073 239460 DEBUG nova.compute.manager [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 1c950583-8182-4826-a70d-227f3e018779] Instance network_info: |[{"id": "6ed79400-df99-4990-bcf9-2b653ae874ce", "address": "fa:16:3e:84:82:76", "network": {"id": "9275d605-e314-4c83-a4e8-f4ba085f6358", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1160921638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c3315c8b4c543a38f07ec0c509f03c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ed79400-df", "ovs_interfaceid": "6ed79400-df99-4990-bcf9-2b653ae874ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 29 12:35:53 np0005601226 nova_compute[239456]: 2026-01-29 17:35:53.074 239460 DEBUG oslo_concurrency.lockutils [req-c0a1e5e8-895a-4a7c-9e7d-b70b66c1b33c req-0c0c7f86-8965-4540-a086-52b87a3fc3a8 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquired lock "refresh_cache-1c950583-8182-4826-a70d-227f3e018779" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:35:53 np0005601226 nova_compute[239456]: 2026-01-29 17:35:53.074 239460 DEBUG nova.network.neutron [req-c0a1e5e8-895a-4a7c-9e7d-b70b66c1b33c req-0c0c7f86-8965-4540-a086-52b87a3fc3a8 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 1c950583-8182-4826-a70d-227f3e018779] Refreshing network info cache for port 6ed79400-df99-4990-bcf9-2b653ae874ce _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 29 12:35:53 np0005601226 nova_compute[239456]: 2026-01-29 17:35:53.079 239460 DEBUG nova.virt.libvirt.driver [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 1c950583-8182-4826-a70d-227f3e018779] Start _get_guest_xml network_info=[{"id": "6ed79400-df99-4990-bcf9-2b653ae874ce", "address": "fa:16:3e:84:82:76", "network": {"id": "9275d605-e314-4c83-a4e8-f4ba085f6358", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1160921638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c3315c8b4c543a38f07ec0c509f03c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ed79400-df", "ovs_interfaceid": "6ed79400-df99-4990-bcf9-2b653ae874ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'guest_format': None, 'mount_device': '/dev/vda', 'device_type': 'disk', 'disk_bus': 'virtio', 'attachment_id': '7ab430e2-5b45-406e-9630-ed493050b9c9', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-1cb2bcce-3ecf-415b-a1c5-b24a29f4378f', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '1cb2bcce-3ecf-415b-a1c5-b24a29f4378f', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '1c950583-8182-4826-a70d-227f3e018779', 'attached_at': '', 'detached_at': '', 'volume_id': '1cb2bcce-3ecf-415b-a1c5-b24a29f4378f', 'serial': '1cb2bcce-3ecf-415b-a1c5-b24a29f4378f'}, 'delete_on_termination': False, 'boot_index': 0, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 29 12:35:53 np0005601226 nova_compute[239456]: 2026-01-29 17:35:53.086 239460 WARNING nova.virt.libvirt.driver [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 29 12:35:53 np0005601226 nova_compute[239456]: 2026-01-29 17:35:53.093 239460 DEBUG nova.virt.libvirt.host [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 29 12:35:53 np0005601226 nova_compute[239456]: 2026-01-29 17:35:53.093 239460 DEBUG nova.virt.libvirt.host [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 29 12:35:53 np0005601226 nova_compute[239456]: 2026-01-29 17:35:53.096 239460 DEBUG nova.virt.libvirt.host [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 29 12:35:53 np0005601226 nova_compute[239456]: 2026-01-29 17:35:53.097 239460 DEBUG nova.virt.libvirt.host [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 29 12:35:53 np0005601226 nova_compute[239456]: 2026-01-29 17:35:53.098 239460 DEBUG nova.virt.libvirt.driver [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 29 12:35:53 np0005601226 nova_compute[239456]: 2026-01-29 17:35:53.098 239460 DEBUG nova.virt.hardware [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-29T17:13:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='e367d44e-e23e-4b8e-90d1-56d09c8403b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 29 12:35:53 np0005601226 nova_compute[239456]: 2026-01-29 17:35:53.098 239460 DEBUG nova.virt.hardware [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 29 12:35:53 np0005601226 nova_compute[239456]: 2026-01-29 17:35:53.099 239460 DEBUG nova.virt.hardware [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 29 12:35:53 np0005601226 nova_compute[239456]: 2026-01-29 17:35:53.099 239460 DEBUG nova.virt.hardware [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 29 12:35:53 np0005601226 nova_compute[239456]: 2026-01-29 17:35:53.099 239460 DEBUG nova.virt.hardware [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 29 12:35:53 np0005601226 nova_compute[239456]: 2026-01-29 17:35:53.099 239460 DEBUG nova.virt.hardware [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 29 12:35:53 np0005601226 nova_compute[239456]: 2026-01-29 17:35:53.099 239460 DEBUG nova.virt.hardware [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 29 12:35:53 np0005601226 nova_compute[239456]: 2026-01-29 17:35:53.099 239460 DEBUG nova.virt.hardware [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 29 12:35:53 np0005601226 nova_compute[239456]: 2026-01-29 17:35:53.100 239460 DEBUG nova.virt.hardware [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 29 12:35:53 np0005601226 nova_compute[239456]: 2026-01-29 17:35:53.100 239460 DEBUG nova.virt.hardware [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 29 12:35:53 np0005601226 nova_compute[239456]: 2026-01-29 17:35:53.100 239460 DEBUG nova.virt.hardware [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 29 12:35:53 np0005601226 nova_compute[239456]: 2026-01-29 17:35:53.119 239460 DEBUG nova.storage.rbd_utils [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] rbd image 1c950583-8182-4826-a70d-227f3e018779_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:35:53 np0005601226 nova_compute[239456]: 2026-01-29 17:35:53.123 239460 DEBUG oslo_concurrency.processutils [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:35:53 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:35:53 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/814385395' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:35:53 np0005601226 nova_compute[239456]: 2026-01-29 17:35:53.659 239460 DEBUG oslo_concurrency.processutils [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:35:53 np0005601226 nova_compute[239456]: 2026-01-29 17:35:53.773 239460 DEBUG os_brick.encryptors [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Using volume encryption metadata '{'encryption_key_id': '9ecf0f91-ece5-4fd4-b575-c253cbf1c9f5', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-1cb2bcce-3ecf-415b-a1c5-b24a29f4378f', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '1cb2bcce-3ecf-415b-a1c5-b24a29f4378f', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '1c950583-8182-4826-a70d-227f3e018779', 'attached_at': '', 'detached_at': '', 'volume_id': '1cb2bcce-3ecf-415b-a1c5-b24a29f4378f', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Jan 29 12:35:53 np0005601226 nova_compute[239456]: 2026-01-29 17:35:53.776 239460 DEBUG barbicanclient.client [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163#033[00m
Jan 29 12:35:53 np0005601226 nova_compute[239456]: 2026-01-29 17:35:53.796 239460 DEBUG barbicanclient.v1.secrets [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/9ecf0f91-ece5-4fd4-b575-c253cbf1c9f5 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514#033[00m
Jan 29 12:35:53 np0005601226 nova_compute[239456]: 2026-01-29 17:35:53.796 239460 INFO barbicanclient.base [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Calculated Secrets uuid ref: secrets/9ecf0f91-ece5-4fd4-b575-c253cbf1c9f5#033[00m
Jan 29 12:35:53 np0005601226 nova_compute[239456]: 2026-01-29 17:35:53.822 239460 DEBUG barbicanclient.client [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:35:53 np0005601226 nova_compute[239456]: 2026-01-29 17:35:53.823 239460 INFO barbicanclient.base [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Calculated Secrets uuid ref: secrets/9ecf0f91-ece5-4fd4-b575-c253cbf1c9f5#033[00m
Jan 29 12:35:53 np0005601226 nova_compute[239456]: 2026-01-29 17:35:53.855 239460 DEBUG barbicanclient.client [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:35:53 np0005601226 nova_compute[239456]: 2026-01-29 17:35:53.856 239460 INFO barbicanclient.base [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Calculated Secrets uuid ref: secrets/9ecf0f91-ece5-4fd4-b575-c253cbf1c9f5#033[00m
Jan 29 12:35:53 np0005601226 nova_compute[239456]: 2026-01-29 17:35:53.882 239460 DEBUG barbicanclient.client [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:35:53 np0005601226 nova_compute[239456]: 2026-01-29 17:35:53.883 239460 INFO barbicanclient.base [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Calculated Secrets uuid ref: secrets/9ecf0f91-ece5-4fd4-b575-c253cbf1c9f5#033[00m
Jan 29 12:35:53 np0005601226 nova_compute[239456]: 2026-01-29 17:35:53.920 239460 DEBUG barbicanclient.client [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:35:53 np0005601226 nova_compute[239456]: 2026-01-29 17:35:53.921 239460 INFO barbicanclient.base [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Calculated Secrets uuid ref: secrets/9ecf0f91-ece5-4fd4-b575-c253cbf1c9f5#033[00m
Jan 29 12:35:54 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1869: 305 pgs: 305 active+clean; 202 MiB data, 553 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 3.0 MiB/s wr, 75 op/s
Jan 29 12:35:54 np0005601226 nova_compute[239456]: 2026-01-29 17:35:54.155 239460 DEBUG barbicanclient.client [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:35:54 np0005601226 nova_compute[239456]: 2026-01-29 17:35:54.156 239460 INFO barbicanclient.base [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Calculated Secrets uuid ref: secrets/9ecf0f91-ece5-4fd4-b575-c253cbf1c9f5#033[00m
Jan 29 12:35:54 np0005601226 nova_compute[239456]: 2026-01-29 17:35:54.184 239460 DEBUG barbicanclient.client [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:35:54 np0005601226 nova_compute[239456]: 2026-01-29 17:35:54.186 239460 INFO barbicanclient.base [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Calculated Secrets uuid ref: secrets/9ecf0f91-ece5-4fd4-b575-c253cbf1c9f5#033[00m
Jan 29 12:35:54 np0005601226 nova_compute[239456]: 2026-01-29 17:35:54.213 239460 DEBUG barbicanclient.client [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:35:54 np0005601226 nova_compute[239456]: 2026-01-29 17:35:54.213 239460 INFO barbicanclient.base [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Calculated Secrets uuid ref: secrets/9ecf0f91-ece5-4fd4-b575-c253cbf1c9f5#033[00m
Jan 29 12:35:54 np0005601226 nova_compute[239456]: 2026-01-29 17:35:54.234 239460 DEBUG barbicanclient.client [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:35:54 np0005601226 nova_compute[239456]: 2026-01-29 17:35:54.234 239460 INFO barbicanclient.base [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Calculated Secrets uuid ref: secrets/9ecf0f91-ece5-4fd4-b575-c253cbf1c9f5#033[00m
Jan 29 12:35:54 np0005601226 nova_compute[239456]: 2026-01-29 17:35:54.259 239460 DEBUG barbicanclient.client [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:35:54 np0005601226 nova_compute[239456]: 2026-01-29 17:35:54.259 239460 INFO barbicanclient.base [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Calculated Secrets uuid ref: secrets/9ecf0f91-ece5-4fd4-b575-c253cbf1c9f5#033[00m
Jan 29 12:35:54 np0005601226 nova_compute[239456]: 2026-01-29 17:35:54.278 239460 DEBUG barbicanclient.client [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:35:54 np0005601226 nova_compute[239456]: 2026-01-29 17:35:54.278 239460 INFO barbicanclient.base [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Calculated Secrets uuid ref: secrets/9ecf0f91-ece5-4fd4-b575-c253cbf1c9f5#033[00m
Jan 29 12:35:54 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e491 do_prune osdmap full prune enabled
Jan 29 12:35:54 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e492 e492: 3 total, 3 up, 3 in
Jan 29 12:35:54 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e492: 3 total, 3 up, 3 in
Jan 29 12:35:54 np0005601226 nova_compute[239456]: 2026-01-29 17:35:54.311 239460 DEBUG barbicanclient.client [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:35:54 np0005601226 nova_compute[239456]: 2026-01-29 17:35:54.312 239460 INFO barbicanclient.base [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Calculated Secrets uuid ref: secrets/9ecf0f91-ece5-4fd4-b575-c253cbf1c9f5#033[00m
Jan 29 12:35:54 np0005601226 nova_compute[239456]: 2026-01-29 17:35:54.339 239460 DEBUG barbicanclient.client [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:35:54 np0005601226 nova_compute[239456]: 2026-01-29 17:35:54.340 239460 INFO barbicanclient.base [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Calculated Secrets uuid ref: secrets/9ecf0f91-ece5-4fd4-b575-c253cbf1c9f5#033[00m
Jan 29 12:35:54 np0005601226 nova_compute[239456]: 2026-01-29 17:35:54.368 239460 DEBUG barbicanclient.client [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:35:54 np0005601226 nova_compute[239456]: 2026-01-29 17:35:54.369 239460 INFO barbicanclient.base [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Calculated Secrets uuid ref: secrets/9ecf0f91-ece5-4fd4-b575-c253cbf1c9f5#033[00m
Jan 29 12:35:54 np0005601226 nova_compute[239456]: 2026-01-29 17:35:54.403 239460 DEBUG barbicanclient.client [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:35:54 np0005601226 nova_compute[239456]: 2026-01-29 17:35:54.404 239460 INFO barbicanclient.base [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Calculated Secrets uuid ref: secrets/9ecf0f91-ece5-4fd4-b575-c253cbf1c9f5#033[00m
Jan 29 12:35:54 np0005601226 nova_compute[239456]: 2026-01-29 17:35:54.421 239460 DEBUG barbicanclient.client [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:35:54 np0005601226 nova_compute[239456]: 2026-01-29 17:35:54.422 239460 DEBUG nova.virt.libvirt.host [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Secret XML: <secret ephemeral="no" private="no">
Jan 29 12:35:54 np0005601226 nova_compute[239456]:  <usage type="volume">
Jan 29 12:35:54 np0005601226 nova_compute[239456]:    <volume>1cb2bcce-3ecf-415b-a1c5-b24a29f4378f</volume>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:  </usage>
Jan 29 12:35:54 np0005601226 nova_compute[239456]: </secret>
Jan 29 12:35:54 np0005601226 nova_compute[239456]: create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131#033[00m
Jan 29 12:35:54 np0005601226 nova_compute[239456]: 2026-01-29 17:35:54.461 239460 DEBUG nova.virt.libvirt.vif [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-29T17:35:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-1434218168',display_name='tempest-TestEncryptedCinderVolumes-server-1434218168',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-1434218168',id=27,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOWwFB+v1u4PWXh8S+wauEigc6l/mXsWkHrP4yUMWKETbc8s3+sIpwLb84UDBVfP1J1Q2qVa0piFJnATY3aZmmPNeYGKVTqN4zZ540CODMnFZL0G2v6B5/DzZoCBhdagGw==',key_name='tempest-TestEncryptedCinderVolumes-1911257081',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9c3315c8b4c543a38f07ec0c509f03c1',ramdisk_id='',reservation_id='r-bonbxfmg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-595928636',owner_user_name='tempest-TestEncryptedCinderVolumes-595928636-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-29T17:35:50Z,user_data=None,user_id='90bbb3ba09534f74aedaab7650ed5ba4',uuid=1c950583-8182-4826-a70d-227f3e018779,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6ed79400-df99-4990-bcf9-2b653ae874ce", "address": "fa:16:3e:84:82:76", "network": {"id": "9275d605-e314-4c83-a4e8-f4ba085f6358", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1160921638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "9c3315c8b4c543a38f07ec0c509f03c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ed79400-df", "ovs_interfaceid": "6ed79400-df99-4990-bcf9-2b653ae874ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 29 12:35:54 np0005601226 nova_compute[239456]: 2026-01-29 17:35:54.462 239460 DEBUG nova.network.os_vif_util [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Converting VIF {"id": "6ed79400-df99-4990-bcf9-2b653ae874ce", "address": "fa:16:3e:84:82:76", "network": {"id": "9275d605-e314-4c83-a4e8-f4ba085f6358", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1160921638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c3315c8b4c543a38f07ec0c509f03c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ed79400-df", "ovs_interfaceid": "6ed79400-df99-4990-bcf9-2b653ae874ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 29 12:35:54 np0005601226 nova_compute[239456]: 2026-01-29 17:35:54.463 239460 DEBUG nova.network.os_vif_util [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:84:82:76,bridge_name='br-int',has_traffic_filtering=True,id=6ed79400-df99-4990-bcf9-2b653ae874ce,network=Network(9275d605-e314-4c83-a4e8-f4ba085f6358),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6ed79400-df') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 29 12:35:54 np0005601226 nova_compute[239456]: 2026-01-29 17:35:54.466 239460 DEBUG nova.objects.instance [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1c950583-8182-4826-a70d-227f3e018779 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:35:54 np0005601226 nova_compute[239456]: 2026-01-29 17:35:54.481 239460 DEBUG nova.virt.libvirt.driver [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 1c950583-8182-4826-a70d-227f3e018779] End _get_guest_xml xml=<domain type="kvm">
Jan 29 12:35:54 np0005601226 nova_compute[239456]:  <uuid>1c950583-8182-4826-a70d-227f3e018779</uuid>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:  <name>instance-0000001b</name>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:  <memory>131072</memory>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:  <vcpu>1</vcpu>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:  <metadata>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 29 12:35:54 np0005601226 nova_compute[239456]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:      <nova:name>tempest-TestEncryptedCinderVolumes-server-1434218168</nova:name>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:      <nova:creationTime>2026-01-29 17:35:53</nova:creationTime>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:      <nova:flavor name="m1.nano">
Jan 29 12:35:54 np0005601226 nova_compute[239456]:        <nova:memory>128</nova:memory>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:        <nova:disk>1</nova:disk>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:        <nova:swap>0</nova:swap>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:        <nova:ephemeral>0</nova:ephemeral>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:        <nova:vcpus>1</nova:vcpus>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:      </nova:flavor>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:      <nova:owner>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:        <nova:user uuid="90bbb3ba09534f74aedaab7650ed5ba4">tempest-TestEncryptedCinderVolumes-595928636-project-member</nova:user>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:        <nova:project uuid="9c3315c8b4c543a38f07ec0c509f03c1">tempest-TestEncryptedCinderVolumes-595928636</nova:project>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:      </nova:owner>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:      <nova:ports>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:        <nova:port uuid="6ed79400-df99-4990-bcf9-2b653ae874ce">
Jan 29 12:35:54 np0005601226 nova_compute[239456]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:        </nova:port>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:      </nova:ports>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:    </nova:instance>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:  </metadata>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:  <sysinfo type="smbios">
Jan 29 12:35:54 np0005601226 nova_compute[239456]:    <system>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:      <entry name="manufacturer">RDO</entry>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:      <entry name="product">OpenStack Compute</entry>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:      <entry name="serial">1c950583-8182-4826-a70d-227f3e018779</entry>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:      <entry name="uuid">1c950583-8182-4826-a70d-227f3e018779</entry>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:      <entry name="family">Virtual Machine</entry>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:    </system>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:  </sysinfo>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:  <os>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:    <boot dev="hd"/>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:    <smbios mode="sysinfo"/>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:  </os>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:  <features>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:    <acpi/>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:    <apic/>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:    <vmcoreinfo/>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:  </features>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:  <clock offset="utc">
Jan 29 12:35:54 np0005601226 nova_compute[239456]:    <timer name="pit" tickpolicy="delay"/>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:    <timer name="hpet" present="no"/>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:  </clock>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:  <cpu mode="host-model" match="exact">
Jan 29 12:35:54 np0005601226 nova_compute[239456]:    <topology sockets="1" cores="1" threads="1"/>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:  </cpu>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:  <devices>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:    <disk type="network" device="cdrom">
Jan 29 12:35:54 np0005601226 nova_compute[239456]:      <driver type="raw" cache="none"/>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:      <source protocol="rbd" name="vms/1c950583-8182-4826-a70d-227f3e018779_disk.config">
Jan 29 12:35:54 np0005601226 nova_compute[239456]:        <host name="192.168.122.100" port="6789"/>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:      </source>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:      <auth username="openstack">
Jan 29 12:35:54 np0005601226 nova_compute[239456]:        <secret type="ceph" uuid="cc5c72e3-31e0-58b9-8731-456117d38f4a"/>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:      </auth>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:      <target dev="sda" bus="sata"/>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:    </disk>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:    <disk type="network" device="disk">
Jan 29 12:35:54 np0005601226 nova_compute[239456]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:      <source protocol="rbd" name="volumes/volume-1cb2bcce-3ecf-415b-a1c5-b24a29f4378f">
Jan 29 12:35:54 np0005601226 nova_compute[239456]:        <host name="192.168.122.100" port="6789"/>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:      </source>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:      <auth username="openstack">
Jan 29 12:35:54 np0005601226 nova_compute[239456]:        <secret type="ceph" uuid="cc5c72e3-31e0-58b9-8731-456117d38f4a"/>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:      </auth>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:      <target dev="vda" bus="virtio"/>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:      <serial>1cb2bcce-3ecf-415b-a1c5-b24a29f4378f</serial>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:      <encryption format="luks">
Jan 29 12:35:54 np0005601226 nova_compute[239456]:        <secret type="passphrase" uuid="167076d5-2504-4306-b518-7545267e05e0"/>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:      </encryption>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:    </disk>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:    <interface type="ethernet">
Jan 29 12:35:54 np0005601226 nova_compute[239456]:      <mac address="fa:16:3e:84:82:76"/>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:      <model type="virtio"/>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:      <driver name="vhost" rx_queue_size="512"/>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:      <mtu size="1442"/>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:      <target dev="tap6ed79400-df"/>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:    </interface>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:    <serial type="pty">
Jan 29 12:35:54 np0005601226 nova_compute[239456]:      <log file="/var/lib/nova/instances/1c950583-8182-4826-a70d-227f3e018779/console.log" append="off"/>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:    </serial>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:    <video>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:      <model type="virtio"/>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:    </video>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:    <input type="tablet" bus="usb"/>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:    <rng model="virtio">
Jan 29 12:35:54 np0005601226 nova_compute[239456]:      <backend model="random">/dev/urandom</backend>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:    </rng>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root"/>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:    <controller type="usb" index="0"/>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:    <memballoon model="virtio">
Jan 29 12:35:54 np0005601226 nova_compute[239456]:      <stats period="10"/>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:    </memballoon>
Jan 29 12:35:54 np0005601226 nova_compute[239456]:  </devices>
Jan 29 12:35:54 np0005601226 nova_compute[239456]: </domain>
Jan 29 12:35:54 np0005601226 nova_compute[239456]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 29 12:35:54 np0005601226 nova_compute[239456]: 2026-01-29 17:35:54.483 239460 DEBUG nova.compute.manager [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 1c950583-8182-4826-a70d-227f3e018779] Preparing to wait for external event network-vif-plugged-6ed79400-df99-4990-bcf9-2b653ae874ce prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 29 12:35:54 np0005601226 nova_compute[239456]: 2026-01-29 17:35:54.484 239460 DEBUG oslo_concurrency.lockutils [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Acquiring lock "1c950583-8182-4826-a70d-227f3e018779-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:35:54 np0005601226 nova_compute[239456]: 2026-01-29 17:35:54.484 239460 DEBUG oslo_concurrency.lockutils [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Lock "1c950583-8182-4826-a70d-227f3e018779-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:35:54 np0005601226 nova_compute[239456]: 2026-01-29 17:35:54.484 239460 DEBUG oslo_concurrency.lockutils [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Lock "1c950583-8182-4826-a70d-227f3e018779-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:35:54 np0005601226 nova_compute[239456]: 2026-01-29 17:35:54.485 239460 DEBUG nova.virt.libvirt.vif [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-29T17:35:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-1434218168',display_name='tempest-TestEncryptedCinderVolumes-server-1434218168',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-1434218168',id=27,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOWwFB+v1u4PWXh8S+wauEigc6l/mXsWkHrP4yUMWKETbc8s3+sIpwLb84UDBVfP1J1Q2qVa0piFJnATY3aZmmPNeYGKVTqN4zZ540CODMnFZL0G2v6B5/DzZoCBhdagGw==',key_name='tempest-TestEncryptedCinderVolumes-1911257081',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9c3315c8b4c543a38f07ec0c509f03c1',ramdisk_id='',reservation_id='r-bonbxfmg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-595928636',owner_user_name='tempest-TestEncryptedCinderVolumes-595928636-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-29T17:35:50Z,user_data=None,user_id='90bbb3ba09534f74aedaab7650ed5ba4',uuid=1c950583-8182-4826-a70d-227f3e018779,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6ed79400-df99-4990-bcf9-2b653ae874ce", "address": "fa:16:3e:84:82:76", "network": {"id": "9275d605-e314-4c83-a4e8-f4ba085f6358", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1160921638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "9c3315c8b4c543a38f07ec0c509f03c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ed79400-df", "ovs_interfaceid": "6ed79400-df99-4990-bcf9-2b653ae874ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 29 12:35:54 np0005601226 nova_compute[239456]: 2026-01-29 17:35:54.486 239460 DEBUG nova.network.os_vif_util [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Converting VIF {"id": "6ed79400-df99-4990-bcf9-2b653ae874ce", "address": "fa:16:3e:84:82:76", "network": {"id": "9275d605-e314-4c83-a4e8-f4ba085f6358", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1160921638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c3315c8b4c543a38f07ec0c509f03c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ed79400-df", "ovs_interfaceid": "6ed79400-df99-4990-bcf9-2b653ae874ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 29 12:35:54 np0005601226 nova_compute[239456]: 2026-01-29 17:35:54.487 239460 DEBUG nova.network.os_vif_util [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:84:82:76,bridge_name='br-int',has_traffic_filtering=True,id=6ed79400-df99-4990-bcf9-2b653ae874ce,network=Network(9275d605-e314-4c83-a4e8-f4ba085f6358),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6ed79400-df') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 29 12:35:54 np0005601226 nova_compute[239456]: 2026-01-29 17:35:54.487 239460 DEBUG os_vif [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:84:82:76,bridge_name='br-int',has_traffic_filtering=True,id=6ed79400-df99-4990-bcf9-2b653ae874ce,network=Network(9275d605-e314-4c83-a4e8-f4ba085f6358),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6ed79400-df') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 29 12:35:54 np0005601226 nova_compute[239456]: 2026-01-29 17:35:54.488 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:35:54 np0005601226 nova_compute[239456]: 2026-01-29 17:35:54.489 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:35:54 np0005601226 nova_compute[239456]: 2026-01-29 17:35:54.489 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 29 12:35:54 np0005601226 nova_compute[239456]: 2026-01-29 17:35:54.493 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:35:54 np0005601226 nova_compute[239456]: 2026-01-29 17:35:54.493 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6ed79400-df, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:35:54 np0005601226 nova_compute[239456]: 2026-01-29 17:35:54.494 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6ed79400-df, col_values=(('external_ids', {'iface-id': '6ed79400-df99-4990-bcf9-2b653ae874ce', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:84:82:76', 'vm-uuid': '1c950583-8182-4826-a70d-227f3e018779'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:35:54 np0005601226 nova_compute[239456]: 2026-01-29 17:35:54.496 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:35:54 np0005601226 NetworkManager[49020]: <info>  [1769708154.4975] manager: (tap6ed79400-df): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/134)
Jan 29 12:35:54 np0005601226 nova_compute[239456]: 2026-01-29 17:35:54.499 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 29 12:35:54 np0005601226 nova_compute[239456]: 2026-01-29 17:35:54.505 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:35:54 np0005601226 nova_compute[239456]: 2026-01-29 17:35:54.506 239460 INFO os_vif [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:84:82:76,bridge_name='br-int',has_traffic_filtering=True,id=6ed79400-df99-4990-bcf9-2b653ae874ce,network=Network(9275d605-e314-4c83-a4e8-f4ba085f6358),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6ed79400-df')#033[00m
Jan 29 12:35:54 np0005601226 nova_compute[239456]: 2026-01-29 17:35:54.569 239460 DEBUG nova.virt.libvirt.driver [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:35:54 np0005601226 nova_compute[239456]: 2026-01-29 17:35:54.570 239460 DEBUG nova.virt.libvirt.driver [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:35:54 np0005601226 nova_compute[239456]: 2026-01-29 17:35:54.570 239460 DEBUG nova.virt.libvirt.driver [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] No VIF found with MAC fa:16:3e:84:82:76, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 29 12:35:54 np0005601226 nova_compute[239456]: 2026-01-29 17:35:54.571 239460 INFO nova.virt.libvirt.driver [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 1c950583-8182-4826-a70d-227f3e018779] Using config drive#033[00m
Jan 29 12:35:54 np0005601226 nova_compute[239456]: 2026-01-29 17:35:54.602 239460 DEBUG nova.storage.rbd_utils [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] rbd image 1c950583-8182-4826-a70d-227f3e018779_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:35:55 np0005601226 nova_compute[239456]: 2026-01-29 17:35:55.380 239460 INFO nova.virt.libvirt.driver [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 1c950583-8182-4826-a70d-227f3e018779] Creating config drive at /var/lib/nova/instances/1c950583-8182-4826-a70d-227f3e018779/disk.config#033[00m
Jan 29 12:35:55 np0005601226 nova_compute[239456]: 2026-01-29 17:35:55.384 239460 DEBUG oslo_concurrency.processutils [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1c950583-8182-4826-a70d-227f3e018779/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptj0b1xeo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:35:55 np0005601226 nova_compute[239456]: 2026-01-29 17:35:55.454 239460 DEBUG nova.network.neutron [req-c0a1e5e8-895a-4a7c-9e7d-b70b66c1b33c req-0c0c7f86-8965-4540-a086-52b87a3fc3a8 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 1c950583-8182-4826-a70d-227f3e018779] Updated VIF entry in instance network info cache for port 6ed79400-df99-4990-bcf9-2b653ae874ce. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 29 12:35:55 np0005601226 nova_compute[239456]: 2026-01-29 17:35:55.455 239460 DEBUG nova.network.neutron [req-c0a1e5e8-895a-4a7c-9e7d-b70b66c1b33c req-0c0c7f86-8965-4540-a086-52b87a3fc3a8 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 1c950583-8182-4826-a70d-227f3e018779] Updating instance_info_cache with network_info: [{"id": "6ed79400-df99-4990-bcf9-2b653ae874ce", "address": "fa:16:3e:84:82:76", "network": {"id": "9275d605-e314-4c83-a4e8-f4ba085f6358", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1160921638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c3315c8b4c543a38f07ec0c509f03c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ed79400-df", "ovs_interfaceid": "6ed79400-df99-4990-bcf9-2b653ae874ce", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:35:55 np0005601226 nova_compute[239456]: 2026-01-29 17:35:55.483 239460 DEBUG oslo_concurrency.lockutils [req-c0a1e5e8-895a-4a7c-9e7d-b70b66c1b33c req-0c0c7f86-8965-4540-a086-52b87a3fc3a8 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Releasing lock "refresh_cache-1c950583-8182-4826-a70d-227f3e018779" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:35:55 np0005601226 nova_compute[239456]: 2026-01-29 17:35:55.512 239460 DEBUG oslo_concurrency.processutils [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1c950583-8182-4826-a70d-227f3e018779/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptj0b1xeo" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:35:55 np0005601226 nova_compute[239456]: 2026-01-29 17:35:55.534 239460 DEBUG nova.storage.rbd_utils [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] rbd image 1c950583-8182-4826-a70d-227f3e018779_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:35:55 np0005601226 nova_compute[239456]: 2026-01-29 17:35:55.538 239460 DEBUG oslo_concurrency.processutils [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1c950583-8182-4826-a70d-227f3e018779/disk.config 1c950583-8182-4826-a70d-227f3e018779_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:35:55 np0005601226 nova_compute[239456]: 2026-01-29 17:35:55.603 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:35:55 np0005601226 nova_compute[239456]: 2026-01-29 17:35:55.689 239460 DEBUG oslo_concurrency.processutils [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1c950583-8182-4826-a70d-227f3e018779/disk.config 1c950583-8182-4826-a70d-227f3e018779_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:35:55 np0005601226 nova_compute[239456]: 2026-01-29 17:35:55.690 239460 INFO nova.virt.libvirt.driver [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 1c950583-8182-4826-a70d-227f3e018779] Deleting local config drive /var/lib/nova/instances/1c950583-8182-4826-a70d-227f3e018779/disk.config because it was imported into RBD.#033[00m
Jan 29 12:35:55 np0005601226 kernel: tap6ed79400-df: entered promiscuous mode
Jan 29 12:35:55 np0005601226 NetworkManager[49020]: <info>  [1769708155.7386] manager: (tap6ed79400-df): new Tun device (/org/freedesktop/NetworkManager/Devices/135)
Jan 29 12:35:55 np0005601226 ovn_controller[145556]: 2026-01-29T17:35:55Z|00251|binding|INFO|Claiming lport 6ed79400-df99-4990-bcf9-2b653ae874ce for this chassis.
Jan 29 12:35:55 np0005601226 ovn_controller[145556]: 2026-01-29T17:35:55Z|00252|binding|INFO|6ed79400-df99-4990-bcf9-2b653ae874ce: Claiming fa:16:3e:84:82:76 10.100.0.3
Jan 29 12:35:55 np0005601226 nova_compute[239456]: 2026-01-29 17:35:55.739 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:35:55 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:35:55.746 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:84:82:76 10.100.0.3'], port_security=['fa:16:3e:84:82:76 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '1c950583-8182-4826-a70d-227f3e018779', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9275d605-e314-4c83-a4e8-f4ba085f6358', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9c3315c8b4c543a38f07ec0c509f03c1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '27af6b88-dd81-456a-89a5-a6e9b903fd48', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d2a7d5cc-cff2-487b-9e34-0c3106da1b90, chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], logical_port=6ed79400-df99-4990-bcf9-2b653ae874ce) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:35:55 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:35:55.749 155625 INFO neutron.agent.ovn.metadata.agent [-] Port 6ed79400-df99-4990-bcf9-2b653ae874ce in datapath 9275d605-e314-4c83-a4e8-f4ba085f6358 bound to our chassis#033[00m
Jan 29 12:35:55 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:35:55.751 155625 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9275d605-e314-4c83-a4e8-f4ba085f6358#033[00m
Jan 29 12:35:55 np0005601226 ovn_controller[145556]: 2026-01-29T17:35:55Z|00253|binding|INFO|Setting lport 6ed79400-df99-4990-bcf9-2b653ae874ce ovn-installed in OVS
Jan 29 12:35:55 np0005601226 ovn_controller[145556]: 2026-01-29T17:35:55Z|00254|binding|INFO|Setting lport 6ed79400-df99-4990-bcf9-2b653ae874ce up in Southbound
Jan 29 12:35:55 np0005601226 nova_compute[239456]: 2026-01-29 17:35:55.756 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:35:55 np0005601226 nova_compute[239456]: 2026-01-29 17:35:55.759 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:35:55 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:35:55.763 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[aeffdf19-8ab6-405a-9360-9c0def0329ff]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:35:55 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:35:55.764 155625 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9275d605-e1 in ovnmeta-9275d605-e314-4c83-a4e8-f4ba085f6358 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 29 12:35:55 np0005601226 systemd-machined[207561]: New machine qemu-27-instance-0000001b.
Jan 29 12:35:55 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:35:55.766 246354 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9275d605-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 29 12:35:55 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:35:55.766 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[82ee477d-9350-4bd3-9ae9-badcc74c5cc6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:35:55 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:35:55.767 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[54a26753-37b1-464c-8bcb-e58d81c4af58]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:35:55 np0005601226 systemd[1]: Started Virtual Machine qemu-27-instance-0000001b.
Jan 29 12:35:55 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:35:55.777 156164 DEBUG oslo.privsep.daemon [-] privsep: reply[b8237a0a-0467-4690-a1ae-cc4b2918bcea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:35:55 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:35:55.789 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[a02f97ee-39b6-41b3-8d4a-f1e4da7438d6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:35:55 np0005601226 systemd-udevd[274197]: Network interface NamePolicy= disabled on kernel command line.
Jan 29 12:35:55 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:35:55.816 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[a1346ee7-c4db-4576-927e-3132a16d2bd9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:35:55 np0005601226 NetworkManager[49020]: <info>  [1769708155.8212] device (tap6ed79400-df): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 29 12:35:55 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:35:55.822 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[e0ebbb29-b792-4aee-ad22-e625f7ea98ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:35:55 np0005601226 NetworkManager[49020]: <info>  [1769708155.8247] manager: (tap9275d605-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/136)
Jan 29 12:35:55 np0005601226 systemd-udevd[274201]: Network interface NamePolicy= disabled on kernel command line.
Jan 29 12:35:55 np0005601226 NetworkManager[49020]: <info>  [1769708155.8258] device (tap6ed79400-df): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 29 12:35:55 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:35:55.850 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[56895e51-d4fb-4c71-a709-b74efa2bed49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:35:55 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:35:55.854 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[28161ccc-12a6-4d0f-ab01-60811fd8d4c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:35:55 np0005601226 NetworkManager[49020]: <info>  [1769708155.8715] device (tap9275d605-e0): carrier: link connected
Jan 29 12:35:55 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:35:55.874 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[7787b52e-9d6b-49ad-a388-8ca384a40eb1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:35:55 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:35:55.889 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[b55738bc-7ea4-4774-81ae-6956485b98d0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9275d605-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:68:a6:35'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 85], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 550937, 'reachable_time': 44242, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274220, 'error': None, 'target': 'ovnmeta-9275d605-e314-4c83-a4e8-f4ba085f6358', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:35:55 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:35:55.904 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[ead78ea2-ab6a-4a41-90da-31b0e4b47e0f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe68:a635'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 550937, 'tstamp': 550937}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 274221, 'error': None, 'target': 'ovnmeta-9275d605-e314-4c83-a4e8-f4ba085f6358', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:35:55 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:35:55.925 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[cb32484e-43ef-433c-a542-bb221d63a10a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9275d605-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:68:a6:35'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 85], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 550937, 'reachable_time': 44242, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 274222, 'error': None, 'target': 'ovnmeta-9275d605-e314-4c83-a4e8-f4ba085f6358', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:35:55 np0005601226 nova_compute[239456]: 2026-01-29 17:35:55.932 239460 DEBUG nova.compute.manager [req-2326eb97-d179-4d67-a715-4abf9eb082ae req-23f7aca1-37f5-48ce-928c-b2f7062c29e2 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 1c950583-8182-4826-a70d-227f3e018779] Received event network-vif-plugged-6ed79400-df99-4990-bcf9-2b653ae874ce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:35:55 np0005601226 nova_compute[239456]: 2026-01-29 17:35:55.932 239460 DEBUG oslo_concurrency.lockutils [req-2326eb97-d179-4d67-a715-4abf9eb082ae req-23f7aca1-37f5-48ce-928c-b2f7062c29e2 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "1c950583-8182-4826-a70d-227f3e018779-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:35:55 np0005601226 nova_compute[239456]: 2026-01-29 17:35:55.932 239460 DEBUG oslo_concurrency.lockutils [req-2326eb97-d179-4d67-a715-4abf9eb082ae req-23f7aca1-37f5-48ce-928c-b2f7062c29e2 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "1c950583-8182-4826-a70d-227f3e018779-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:35:55 np0005601226 nova_compute[239456]: 2026-01-29 17:35:55.932 239460 DEBUG oslo_concurrency.lockutils [req-2326eb97-d179-4d67-a715-4abf9eb082ae req-23f7aca1-37f5-48ce-928c-b2f7062c29e2 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "1c950583-8182-4826-a70d-227f3e018779-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:35:55 np0005601226 nova_compute[239456]: 2026-01-29 17:35:55.933 239460 DEBUG nova.compute.manager [req-2326eb97-d179-4d67-a715-4abf9eb082ae req-23f7aca1-37f5-48ce-928c-b2f7062c29e2 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 1c950583-8182-4826-a70d-227f3e018779] Processing event network-vif-plugged-6ed79400-df99-4990-bcf9-2b653ae874ce _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 29 12:35:55 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:35:55.954 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[1bc0f609-91b3-49a0-bcce-b82a55aefffe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:35:56 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:35:56.000 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[217be023-b390-4a46-98f4-4dc7b73a1308]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:35:56 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:35:56.001 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9275d605-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:35:56 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:35:56.001 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 29 12:35:56 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:35:56.001 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9275d605-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:35:56 np0005601226 kernel: tap9275d605-e0: entered promiscuous mode
Jan 29 12:35:56 np0005601226 nova_compute[239456]: 2026-01-29 17:35:56.003 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:35:56 np0005601226 NetworkManager[49020]: <info>  [1769708156.0053] manager: (tap9275d605-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/137)
Jan 29 12:35:56 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:35:56.006 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9275d605-e0, col_values=(('external_ids', {'iface-id': 'e64dae33-380b-46eb-9272-7f8c7bc07367'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:35:56 np0005601226 ovn_controller[145556]: 2026-01-29T17:35:56Z|00255|binding|INFO|Releasing lport e64dae33-380b-46eb-9272-7f8c7bc07367 from this chassis (sb_readonly=0)
Jan 29 12:35:56 np0005601226 nova_compute[239456]: 2026-01-29 17:35:56.017 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:35:56 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:35:56.018 155625 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9275d605-e314-4c83-a4e8-f4ba085f6358.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9275d605-e314-4c83-a4e8-f4ba085f6358.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 29 12:35:56 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:35:56.019 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[269d2097-0d53-4c00-b42d-55780af8fd44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:35:56 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:35:56.022 155625 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 29 12:35:56 np0005601226 ovn_metadata_agent[155620]: global
Jan 29 12:35:56 np0005601226 ovn_metadata_agent[155620]:    log         /dev/log local0 debug
Jan 29 12:35:56 np0005601226 ovn_metadata_agent[155620]:    log-tag     haproxy-metadata-proxy-9275d605-e314-4c83-a4e8-f4ba085f6358
Jan 29 12:35:56 np0005601226 ovn_metadata_agent[155620]:    user        root
Jan 29 12:35:56 np0005601226 ovn_metadata_agent[155620]:    group       root
Jan 29 12:35:56 np0005601226 ovn_metadata_agent[155620]:    maxconn     1024
Jan 29 12:35:56 np0005601226 ovn_metadata_agent[155620]:    pidfile     /var/lib/neutron/external/pids/9275d605-e314-4c83-a4e8-f4ba085f6358.pid.haproxy
Jan 29 12:35:56 np0005601226 ovn_metadata_agent[155620]:    daemon
Jan 29 12:35:56 np0005601226 ovn_metadata_agent[155620]: 
Jan 29 12:35:56 np0005601226 ovn_metadata_agent[155620]: defaults
Jan 29 12:35:56 np0005601226 ovn_metadata_agent[155620]:    log global
Jan 29 12:35:56 np0005601226 ovn_metadata_agent[155620]:    mode http
Jan 29 12:35:56 np0005601226 ovn_metadata_agent[155620]:    option httplog
Jan 29 12:35:56 np0005601226 ovn_metadata_agent[155620]:    option dontlognull
Jan 29 12:35:56 np0005601226 ovn_metadata_agent[155620]:    option http-server-close
Jan 29 12:35:56 np0005601226 ovn_metadata_agent[155620]:    option forwardfor
Jan 29 12:35:56 np0005601226 ovn_metadata_agent[155620]:    retries                 3
Jan 29 12:35:56 np0005601226 ovn_metadata_agent[155620]:    timeout http-request    30s
Jan 29 12:35:56 np0005601226 ovn_metadata_agent[155620]:    timeout connect         30s
Jan 29 12:35:56 np0005601226 ovn_metadata_agent[155620]:    timeout client          32s
Jan 29 12:35:56 np0005601226 ovn_metadata_agent[155620]:    timeout server          32s
Jan 29 12:35:56 np0005601226 ovn_metadata_agent[155620]:    timeout http-keep-alive 30s
Jan 29 12:35:56 np0005601226 ovn_metadata_agent[155620]: 
Jan 29 12:35:56 np0005601226 ovn_metadata_agent[155620]: 
Jan 29 12:35:56 np0005601226 ovn_metadata_agent[155620]: listen listener
Jan 29 12:35:56 np0005601226 ovn_metadata_agent[155620]:    bind 169.254.169.254:80
Jan 29 12:35:56 np0005601226 ovn_metadata_agent[155620]:    server metadata /var/lib/neutron/metadata_proxy
Jan 29 12:35:56 np0005601226 ovn_metadata_agent[155620]:    http-request add-header X-OVN-Network-ID 9275d605-e314-4c83-a4e8-f4ba085f6358
Jan 29 12:35:56 np0005601226 ovn_metadata_agent[155620]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 29 12:35:56 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:35:56.023 155625 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9275d605-e314-4c83-a4e8-f4ba085f6358', 'env', 'PROCESS_TAG=haproxy-9275d605-e314-4c83-a4e8-f4ba085f6358', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9275d605-e314-4c83-a4e8-f4ba085f6358.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 29 12:35:56 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:35:56 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4164434560' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:35:56 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:35:56 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4164434560' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:35:56 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1871: 305 pgs: 305 active+clean; 202 MiB data, 553 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 3.4 KiB/s wr, 64 op/s
Jan 29 12:35:56 np0005601226 podman[274290]: 2026-01-29 17:35:56.324500204 +0000 UTC m=+0.049022632 container create 598c5e79c6e8d5b5562682210e46d03c8765f4a32e1f535425fea099cd780d2e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9275d605-e314-4c83-a4e8-f4ba085f6358, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 29 12:35:56 np0005601226 systemd[1]: Started libpod-conmon-598c5e79c6e8d5b5562682210e46d03c8765f4a32e1f535425fea099cd780d2e.scope.
Jan 29 12:35:56 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:35:56 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a5bf70fe68292d58851771cb60ad4635fb2b5bbd283f12598726db028ee8022/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 29 12:35:56 np0005601226 podman[274290]: 2026-01-29 17:35:56.291904608 +0000 UTC m=+0.016427066 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 29 12:35:56 np0005601226 podman[274290]: 2026-01-29 17:35:56.406616281 +0000 UTC m=+0.131138739 container init 598c5e79c6e8d5b5562682210e46d03c8765f4a32e1f535425fea099cd780d2e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9275d605-e314-4c83-a4e8-f4ba085f6358, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:35:56 np0005601226 podman[274290]: 2026-01-29 17:35:56.418429003 +0000 UTC m=+0.142951471 container start 598c5e79c6e8d5b5562682210e46d03c8765f4a32e1f535425fea099cd780d2e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9275d605-e314-4c83-a4e8-f4ba085f6358, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 29 12:35:56 np0005601226 neutron-haproxy-ovnmeta-9275d605-e314-4c83-a4e8-f4ba085f6358[274305]: [NOTICE]   (274309) : New worker (274311) forked
Jan 29 12:35:56 np0005601226 neutron-haproxy-ovnmeta-9275d605-e314-4c83-a4e8-f4ba085f6358[274305]: [NOTICE]   (274309) : Loading success.
Jan 29 12:35:56 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e492 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:35:56 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e492 do_prune osdmap full prune enabled
Jan 29 12:35:56 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e493 e493: 3 total, 3 up, 3 in
Jan 29 12:35:56 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e493: 3 total, 3 up, 3 in
Jan 29 12:35:57 np0005601226 nova_compute[239456]: 2026-01-29 17:35:57.649 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:35:58 np0005601226 nova_compute[239456]: 2026-01-29 17:35:58.030 239460 DEBUG nova.compute.manager [req-440d0483-f54d-420d-82f5-10be6e8ec6df req-9670a3e8-40b5-403e-834e-86fa3d35b051 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 1c950583-8182-4826-a70d-227f3e018779] Received event network-vif-plugged-6ed79400-df99-4990-bcf9-2b653ae874ce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:35:58 np0005601226 nova_compute[239456]: 2026-01-29 17:35:58.031 239460 DEBUG oslo_concurrency.lockutils [req-440d0483-f54d-420d-82f5-10be6e8ec6df req-9670a3e8-40b5-403e-834e-86fa3d35b051 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "1c950583-8182-4826-a70d-227f3e018779-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:35:58 np0005601226 nova_compute[239456]: 2026-01-29 17:35:58.031 239460 DEBUG oslo_concurrency.lockutils [req-440d0483-f54d-420d-82f5-10be6e8ec6df req-9670a3e8-40b5-403e-834e-86fa3d35b051 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "1c950583-8182-4826-a70d-227f3e018779-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:35:58 np0005601226 nova_compute[239456]: 2026-01-29 17:35:58.031 239460 DEBUG oslo_concurrency.lockutils [req-440d0483-f54d-420d-82f5-10be6e8ec6df req-9670a3e8-40b5-403e-834e-86fa3d35b051 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "1c950583-8182-4826-a70d-227f3e018779-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:35:58 np0005601226 nova_compute[239456]: 2026-01-29 17:35:58.032 239460 DEBUG nova.compute.manager [req-440d0483-f54d-420d-82f5-10be6e8ec6df req-9670a3e8-40b5-403e-834e-86fa3d35b051 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 1c950583-8182-4826-a70d-227f3e018779] No waiting events found dispatching network-vif-plugged-6ed79400-df99-4990-bcf9-2b653ae874ce pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 29 12:35:58 np0005601226 nova_compute[239456]: 2026-01-29 17:35:58.032 239460 WARNING nova.compute.manager [req-440d0483-f54d-420d-82f5-10be6e8ec6df req-9670a3e8-40b5-403e-834e-86fa3d35b051 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 1c950583-8182-4826-a70d-227f3e018779] Received unexpected event network-vif-plugged-6ed79400-df99-4990-bcf9-2b653ae874ce for instance with vm_state building and task_state spawning.#033[00m
Jan 29 12:35:58 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1873: 305 pgs: 305 active+clean; 202 MiB data, 553 MiB used, 59 GiB / 60 GiB avail; 58 KiB/s rd, 28 KiB/s wr, 79 op/s
Jan 29 12:35:58 np0005601226 nova_compute[239456]: 2026-01-29 17:35:58.375 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769708158.3749063, 1c950583-8182-4826-a70d-227f3e018779 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:35:58 np0005601226 nova_compute[239456]: 2026-01-29 17:35:58.375 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 1c950583-8182-4826-a70d-227f3e018779] VM Started (Lifecycle Event)#033[00m
Jan 29 12:35:58 np0005601226 nova_compute[239456]: 2026-01-29 17:35:58.378 239460 DEBUG nova.compute.manager [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 1c950583-8182-4826-a70d-227f3e018779] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 29 12:35:58 np0005601226 nova_compute[239456]: 2026-01-29 17:35:58.381 239460 DEBUG nova.virt.libvirt.driver [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 1c950583-8182-4826-a70d-227f3e018779] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 29 12:35:58 np0005601226 nova_compute[239456]: 2026-01-29 17:35:58.385 239460 INFO nova.virt.libvirt.driver [-] [instance: 1c950583-8182-4826-a70d-227f3e018779] Instance spawned successfully.#033[00m
Jan 29 12:35:58 np0005601226 nova_compute[239456]: 2026-01-29 17:35:58.385 239460 DEBUG nova.virt.libvirt.driver [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 1c950583-8182-4826-a70d-227f3e018779] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 29 12:35:58 np0005601226 nova_compute[239456]: 2026-01-29 17:35:58.402 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 1c950583-8182-4826-a70d-227f3e018779] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:35:58 np0005601226 nova_compute[239456]: 2026-01-29 17:35:58.407 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 1c950583-8182-4826-a70d-227f3e018779] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 29 12:35:58 np0005601226 nova_compute[239456]: 2026-01-29 17:35:58.410 239460 DEBUG nova.virt.libvirt.driver [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 1c950583-8182-4826-a70d-227f3e018779] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:35:58 np0005601226 nova_compute[239456]: 2026-01-29 17:35:58.411 239460 DEBUG nova.virt.libvirt.driver [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 1c950583-8182-4826-a70d-227f3e018779] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:35:58 np0005601226 nova_compute[239456]: 2026-01-29 17:35:58.412 239460 DEBUG nova.virt.libvirt.driver [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 1c950583-8182-4826-a70d-227f3e018779] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:35:58 np0005601226 nova_compute[239456]: 2026-01-29 17:35:58.413 239460 DEBUG nova.virt.libvirt.driver [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 1c950583-8182-4826-a70d-227f3e018779] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:35:58 np0005601226 nova_compute[239456]: 2026-01-29 17:35:58.414 239460 DEBUG nova.virt.libvirt.driver [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 1c950583-8182-4826-a70d-227f3e018779] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:35:58 np0005601226 nova_compute[239456]: 2026-01-29 17:35:58.415 239460 DEBUG nova.virt.libvirt.driver [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 1c950583-8182-4826-a70d-227f3e018779] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:35:58 np0005601226 nova_compute[239456]: 2026-01-29 17:35:58.435 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 1c950583-8182-4826-a70d-227f3e018779] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 29 12:35:58 np0005601226 nova_compute[239456]: 2026-01-29 17:35:58.436 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769708158.3756886, 1c950583-8182-4826-a70d-227f3e018779 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:35:58 np0005601226 nova_compute[239456]: 2026-01-29 17:35:58.436 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 1c950583-8182-4826-a70d-227f3e018779] VM Paused (Lifecycle Event)#033[00m
Jan 29 12:35:58 np0005601226 nova_compute[239456]: 2026-01-29 17:35:58.470 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 1c950583-8182-4826-a70d-227f3e018779] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:35:58 np0005601226 nova_compute[239456]: 2026-01-29 17:35:58.475 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769708158.3810995, 1c950583-8182-4826-a70d-227f3e018779 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:35:58 np0005601226 nova_compute[239456]: 2026-01-29 17:35:58.475 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 1c950583-8182-4826-a70d-227f3e018779] VM Resumed (Lifecycle Event)#033[00m
Jan 29 12:35:58 np0005601226 nova_compute[239456]: 2026-01-29 17:35:58.488 239460 INFO nova.compute.manager [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 1c950583-8182-4826-a70d-227f3e018779] Took 6.67 seconds to spawn the instance on the hypervisor.#033[00m
Jan 29 12:35:58 np0005601226 nova_compute[239456]: 2026-01-29 17:35:58.489 239460 DEBUG nova.compute.manager [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 1c950583-8182-4826-a70d-227f3e018779] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:35:58 np0005601226 nova_compute[239456]: 2026-01-29 17:35:58.500 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 1c950583-8182-4826-a70d-227f3e018779] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:35:58 np0005601226 nova_compute[239456]: 2026-01-29 17:35:58.505 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 1c950583-8182-4826-a70d-227f3e018779] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 29 12:35:58 np0005601226 nova_compute[239456]: 2026-01-29 17:35:58.538 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 1c950583-8182-4826-a70d-227f3e018779] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 29 12:35:58 np0005601226 nova_compute[239456]: 2026-01-29 17:35:58.567 239460 INFO nova.compute.manager [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 1c950583-8182-4826-a70d-227f3e018779] Took 8.95 seconds to build instance.#033[00m
Jan 29 12:35:58 np0005601226 nova_compute[239456]: 2026-01-29 17:35:58.588 239460 DEBUG oslo_concurrency.lockutils [None req-af49befa-fc04-4e9d-b99a-a72860110f5c 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Lock "1c950583-8182-4826-a70d-227f3e018779" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.076s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:35:58 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e493 do_prune osdmap full prune enabled
Jan 29 12:35:58 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e494 e494: 3 total, 3 up, 3 in
Jan 29 12:35:58 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e494: 3 total, 3 up, 3 in
Jan 29 12:35:59 np0005601226 nova_compute[239456]: 2026-01-29 17:35:59.496 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:36:00 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1875: 305 pgs: 305 active+clean; 202 MiB data, 553 MiB used, 59 GiB / 60 GiB avail; 873 KiB/s rd, 27 KiB/s wr, 98 op/s
Jan 29 12:36:00 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e494 do_prune osdmap full prune enabled
Jan 29 12:36:00 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e495 e495: 3 total, 3 up, 3 in
Jan 29 12:36:00 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e495: 3 total, 3 up, 3 in
Jan 29 12:36:01 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e495 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:36:01 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e495 do_prune osdmap full prune enabled
Jan 29 12:36:01 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e496 e496: 3 total, 3 up, 3 in
Jan 29 12:36:01 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e496: 3 total, 3 up, 3 in
Jan 29 12:36:02 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1878: 305 pgs: 305 active+clean; 202 MiB data, 553 MiB used, 59 GiB / 60 GiB avail; 980 KiB/s rd, 30 KiB/s wr, 104 op/s
Jan 29 12:36:02 np0005601226 nova_compute[239456]: 2026-01-29 17:36:02.652 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:36:02 np0005601226 nova_compute[239456]: 2026-01-29 17:36:02.767 239460 DEBUG nova.compute.manager [req-6f2cb48a-3b61-44b6-9121-e0378356b5e3 req-26788620-134a-438e-ad5a-8b251689e7b7 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 1c950583-8182-4826-a70d-227f3e018779] Received event network-changed-6ed79400-df99-4990-bcf9-2b653ae874ce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:36:02 np0005601226 nova_compute[239456]: 2026-01-29 17:36:02.768 239460 DEBUG nova.compute.manager [req-6f2cb48a-3b61-44b6-9121-e0378356b5e3 req-26788620-134a-438e-ad5a-8b251689e7b7 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 1c950583-8182-4826-a70d-227f3e018779] Refreshing instance network info cache due to event network-changed-6ed79400-df99-4990-bcf9-2b653ae874ce. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 29 12:36:02 np0005601226 nova_compute[239456]: 2026-01-29 17:36:02.768 239460 DEBUG oslo_concurrency.lockutils [req-6f2cb48a-3b61-44b6-9121-e0378356b5e3 req-26788620-134a-438e-ad5a-8b251689e7b7 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "refresh_cache-1c950583-8182-4826-a70d-227f3e018779" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:36:02 np0005601226 nova_compute[239456]: 2026-01-29 17:36:02.769 239460 DEBUG oslo_concurrency.lockutils [req-6f2cb48a-3b61-44b6-9121-e0378356b5e3 req-26788620-134a-438e-ad5a-8b251689e7b7 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquired lock "refresh_cache-1c950583-8182-4826-a70d-227f3e018779" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:36:02 np0005601226 nova_compute[239456]: 2026-01-29 17:36:02.769 239460 DEBUG nova.network.neutron [req-6f2cb48a-3b61-44b6-9121-e0378356b5e3 req-26788620-134a-438e-ad5a-8b251689e7b7 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 1c950583-8182-4826-a70d-227f3e018779] Refreshing network info cache for port 6ed79400-df99-4990-bcf9-2b653ae874ce _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 29 12:36:02 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:36:02 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3972176011' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:36:02 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:36:02 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3972176011' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:36:04 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1879: 305 pgs: 305 active+clean; 202 MiB data, 554 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 3.0 KiB/s wr, 162 op/s
Jan 29 12:36:04 np0005601226 nova_compute[239456]: 2026-01-29 17:36:04.528 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:36:05 np0005601226 nova_compute[239456]: 2026-01-29 17:36:05.197 239460 DEBUG nova.network.neutron [req-6f2cb48a-3b61-44b6-9121-e0378356b5e3 req-26788620-134a-438e-ad5a-8b251689e7b7 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 1c950583-8182-4826-a70d-227f3e018779] Updated VIF entry in instance network info cache for port 6ed79400-df99-4990-bcf9-2b653ae874ce. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 29 12:36:05 np0005601226 nova_compute[239456]: 2026-01-29 17:36:05.198 239460 DEBUG nova.network.neutron [req-6f2cb48a-3b61-44b6-9121-e0378356b5e3 req-26788620-134a-438e-ad5a-8b251689e7b7 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 1c950583-8182-4826-a70d-227f3e018779] Updating instance_info_cache with network_info: [{"id": "6ed79400-df99-4990-bcf9-2b653ae874ce", "address": "fa:16:3e:84:82:76", "network": {"id": "9275d605-e314-4c83-a4e8-f4ba085f6358", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1160921638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c3315c8b4c543a38f07ec0c509f03c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ed79400-df", "ovs_interfaceid": "6ed79400-df99-4990-bcf9-2b653ae874ce", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:36:05 np0005601226 nova_compute[239456]: 2026-01-29 17:36:05.229 239460 DEBUG oslo_concurrency.lockutils [req-6f2cb48a-3b61-44b6-9121-e0378356b5e3 req-26788620-134a-438e-ad5a-8b251689e7b7 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Releasing lock "refresh_cache-1c950583-8182-4826-a70d-227f3e018779" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:36:05 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e496 do_prune osdmap full prune enabled
Jan 29 12:36:05 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e497 e497: 3 total, 3 up, 3 in
Jan 29 12:36:05 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e497: 3 total, 3 up, 3 in
Jan 29 12:36:06 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1881: 305 pgs: 305 active+clean; 202 MiB data, 554 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 1.7 KiB/s wr, 146 op/s
Jan 29 12:36:06 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e497 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:36:07 np0005601226 nova_compute[239456]: 2026-01-29 17:36:07.654 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:36:07 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e497 do_prune osdmap full prune enabled
Jan 29 12:36:07 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e498 e498: 3 total, 3 up, 3 in
Jan 29 12:36:07 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e498: 3 total, 3 up, 3 in
Jan 29 12:36:07 np0005601226 podman[274326]: 2026-01-29 17:36:07.900364114 +0000 UTC m=+0.062193788 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 29 12:36:07 np0005601226 podman[274327]: 2026-01-29 17:36:07.963080336 +0000 UTC m=+0.122359371 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, 
config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 29 12:36:08 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1883: 305 pgs: 305 active+clean; 202 MiB data, 554 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 3.0 KiB/s wr, 162 op/s
Jan 29 12:36:09 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:36:09 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1037491249' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:36:09 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:36:09 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1037491249' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:36:09 np0005601226 nova_compute[239456]: 2026-01-29 17:36:09.531 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:36:10 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1884: 305 pgs: 305 active+clean; 202 MiB data, 555 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.1 MiB/s wr, 140 op/s
Jan 29 12:36:10 np0005601226 ovn_controller[145556]: 2026-01-29T17:36:10Z|00068|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:84:82:76 10.100.0.3
Jan 29 12:36:10 np0005601226 ovn_controller[145556]: 2026-01-29T17:36:10Z|00069|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:84:82:76 10.100.0.3
Jan 29 12:36:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:36:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:36:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:36:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:36:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:36:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:36:11 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:36:11.745 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '7a:bb:ca', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ee:2a:91:86:08:da'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:36:11 np0005601226 nova_compute[239456]: 2026-01-29 17:36:11.745 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:36:11 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:36:11.747 155625 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 29 12:36:11 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e498 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:36:11 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e498 do_prune osdmap full prune enabled
Jan 29 12:36:11 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e499 e499: 3 total, 3 up, 3 in
Jan 29 12:36:11 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e499: 3 total, 3 up, 3 in
Jan 29 12:36:12 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1886: 305 pgs: 305 active+clean; 202 MiB data, 555 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 1.4 MiB/s wr, 39 op/s
Jan 29 12:36:12 np0005601226 nova_compute[239456]: 2026-01-29 17:36:12.656 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:36:14 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1887: 305 pgs: 305 active+clean; 222 MiB data, 572 MiB used, 59 GiB / 60 GiB avail; 652 KiB/s rd, 5.2 MiB/s wr, 138 op/s
Jan 29 12:36:14 np0005601226 nova_compute[239456]: 2026-01-29 17:36:14.541 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:36:16 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1888: 305 pgs: 305 active+clean; 271 MiB data, 599 MiB used, 59 GiB / 60 GiB avail; 820 KiB/s rd, 8.4 MiB/s wr, 137 op/s
Jan 29 12:36:16 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e499 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:36:16 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e499 do_prune osdmap full prune enabled
Jan 29 12:36:16 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e500 e500: 3 total, 3 up, 3 in
Jan 29 12:36:16 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e500: 3 total, 3 up, 3 in
Jan 29 12:36:17 np0005601226 nova_compute[239456]: 2026-01-29 17:36:17.657 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:36:18 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1890: 305 pgs: 305 active+clean; 271 MiB data, 599 MiB used, 59 GiB / 60 GiB avail; 838 KiB/s rd, 7.6 MiB/s wr, 131 op/s
Jan 29 12:36:18 np0005601226 nova_compute[239456]: 2026-01-29 17:36:18.612 239460 DEBUG oslo_concurrency.lockutils [None req-9a45968e-2590-49b7-b8b0-39a7a8d548de 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Acquiring lock "1c950583-8182-4826-a70d-227f3e018779" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:36:18 np0005601226 nova_compute[239456]: 2026-01-29 17:36:18.613 239460 DEBUG oslo_concurrency.lockutils [None req-9a45968e-2590-49b7-b8b0-39a7a8d548de 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Lock "1c950583-8182-4826-a70d-227f3e018779" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:36:18 np0005601226 nova_compute[239456]: 2026-01-29 17:36:18.613 239460 DEBUG oslo_concurrency.lockutils [None req-9a45968e-2590-49b7-b8b0-39a7a8d548de 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Acquiring lock "1c950583-8182-4826-a70d-227f3e018779-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:36:18 np0005601226 nova_compute[239456]: 2026-01-29 17:36:18.613 239460 DEBUG oslo_concurrency.lockutils [None req-9a45968e-2590-49b7-b8b0-39a7a8d548de 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Lock "1c950583-8182-4826-a70d-227f3e018779-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:36:18 np0005601226 nova_compute[239456]: 2026-01-29 17:36:18.613 239460 DEBUG oslo_concurrency.lockutils [None req-9a45968e-2590-49b7-b8b0-39a7a8d548de 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Lock "1c950583-8182-4826-a70d-227f3e018779-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:36:18 np0005601226 nova_compute[239456]: 2026-01-29 17:36:18.615 239460 INFO nova.compute.manager [None req-9a45968e-2590-49b7-b8b0-39a7a8d548de 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 1c950583-8182-4826-a70d-227f3e018779] Terminating instance#033[00m
Jan 29 12:36:18 np0005601226 nova_compute[239456]: 2026-01-29 17:36:18.616 239460 DEBUG nova.compute.manager [None req-9a45968e-2590-49b7-b8b0-39a7a8d548de 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 1c950583-8182-4826-a70d-227f3e018779] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 29 12:36:18 np0005601226 kernel: tap6ed79400-df (unregistering): left promiscuous mode
Jan 29 12:36:18 np0005601226 NetworkManager[49020]: <info>  [1769708178.6614] device (tap6ed79400-df): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 29 12:36:18 np0005601226 nova_compute[239456]: 2026-01-29 17:36:18.661 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:36:18 np0005601226 ovn_controller[145556]: 2026-01-29T17:36:18Z|00256|binding|INFO|Releasing lport 6ed79400-df99-4990-bcf9-2b653ae874ce from this chassis (sb_readonly=0)
Jan 29 12:36:18 np0005601226 ovn_controller[145556]: 2026-01-29T17:36:18Z|00257|binding|INFO|Setting lport 6ed79400-df99-4990-bcf9-2b653ae874ce down in Southbound
Jan 29 12:36:18 np0005601226 ovn_controller[145556]: 2026-01-29T17:36:18Z|00258|binding|INFO|Removing iface tap6ed79400-df ovn-installed in OVS
Jan 29 12:36:18 np0005601226 nova_compute[239456]: 2026-01-29 17:36:18.667 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:36:18 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:36:18.675 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:84:82:76 10.100.0.3'], port_security=['fa:16:3e:84:82:76 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '1c950583-8182-4826-a70d-227f3e018779', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9275d605-e314-4c83-a4e8-f4ba085f6358', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9c3315c8b4c543a38f07ec0c509f03c1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '27af6b88-dd81-456a-89a5-a6e9b903fd48', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.230'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d2a7d5cc-cff2-487b-9e34-0c3106da1b90, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], logical_port=6ed79400-df99-4990-bcf9-2b653ae874ce) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:36:18 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:36:18.676 155625 INFO neutron.agent.ovn.metadata.agent [-] Port 6ed79400-df99-4990-bcf9-2b653ae874ce in datapath 9275d605-e314-4c83-a4e8-f4ba085f6358 unbound from our chassis#033[00m
Jan 29 12:36:18 np0005601226 nova_compute[239456]: 2026-01-29 17:36:18.677 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:36:18 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:36:18.679 155625 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9275d605-e314-4c83-a4e8-f4ba085f6358, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 29 12:36:18 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:36:18.680 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[2f4e090a-662c-45e1-a2cb-3a3b6cb62500]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:36:18 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:36:18.680 155625 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9275d605-e314-4c83-a4e8-f4ba085f6358 namespace which is not needed anymore#033[00m
Jan 29 12:36:18 np0005601226 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d0000001b.scope: Deactivated successfully.
Jan 29 12:36:18 np0005601226 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d0000001b.scope: Consumed 15.048s CPU time.
Jan 29 12:36:18 np0005601226 systemd-machined[207561]: Machine qemu-27-instance-0000001b terminated.
Jan 29 12:36:18 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e500 do_prune osdmap full prune enabled
Jan 29 12:36:18 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e501 e501: 3 total, 3 up, 3 in
Jan 29 12:36:18 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e501: 3 total, 3 up, 3 in
Jan 29 12:36:18 np0005601226 neutron-haproxy-ovnmeta-9275d605-e314-4c83-a4e8-f4ba085f6358[274305]: [NOTICE]   (274309) : haproxy version is 2.8.14-c23fe91
Jan 29 12:36:18 np0005601226 neutron-haproxy-ovnmeta-9275d605-e314-4c83-a4e8-f4ba085f6358[274305]: [NOTICE]   (274309) : path to executable is /usr/sbin/haproxy
Jan 29 12:36:18 np0005601226 neutron-haproxy-ovnmeta-9275d605-e314-4c83-a4e8-f4ba085f6358[274305]: [WARNING]  (274309) : Exiting Master process...
Jan 29 12:36:18 np0005601226 neutron-haproxy-ovnmeta-9275d605-e314-4c83-a4e8-f4ba085f6358[274305]: [ALERT]    (274309) : Current worker (274311) exited with code 143 (Terminated)
Jan 29 12:36:18 np0005601226 neutron-haproxy-ovnmeta-9275d605-e314-4c83-a4e8-f4ba085f6358[274305]: [WARNING]  (274309) : All workers exited. Exiting... (0)
Jan 29 12:36:18 np0005601226 systemd[1]: libpod-598c5e79c6e8d5b5562682210e46d03c8765f4a32e1f535425fea099cd780d2e.scope: Deactivated successfully.
Jan 29 12:36:18 np0005601226 podman[274395]: 2026-01-29 17:36:18.824630004 +0000 UTC m=+0.047833570 container died 598c5e79c6e8d5b5562682210e46d03c8765f4a32e1f535425fea099cd780d2e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9275d605-e314-4c83-a4e8-f4ba085f6358, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 29 12:36:18 np0005601226 nova_compute[239456]: 2026-01-29 17:36:18.832 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:36:18 np0005601226 nova_compute[239456]: 2026-01-29 17:36:18.836 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:36:18 np0005601226 nova_compute[239456]: 2026-01-29 17:36:18.846 239460 INFO nova.virt.libvirt.driver [-] [instance: 1c950583-8182-4826-a70d-227f3e018779] Instance destroyed successfully.#033[00m
Jan 29 12:36:18 np0005601226 nova_compute[239456]: 2026-01-29 17:36:18.847 239460 DEBUG nova.objects.instance [None req-9a45968e-2590-49b7-b8b0-39a7a8d548de 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Lazy-loading 'resources' on Instance uuid 1c950583-8182-4826-a70d-227f3e018779 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:36:18 np0005601226 nova_compute[239456]: 2026-01-29 17:36:18.865 239460 DEBUG nova.virt.libvirt.vif [None req-9a45968e-2590-49b7-b8b0-39a7a8d548de 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-29T17:35:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-1434218168',display_name='tempest-TestEncryptedCinderVolumes-server-1434218168',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-1434218168',id=27,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOWwFB+v1u4PWXh8S+wauEigc6l/mXsWkHrP4yUMWKETbc8s3+sIpwLb84UDBVfP1J1Q2qVa0piFJnATY3aZmmPNeYGKVTqN4zZ540CODMnFZL0G2v6B5/DzZoCBhdagGw==',key_name='tempest-TestEncryptedCinderVolumes-1911257081',keypairs=<?>,launch_index=0,launched_at=2026-01-29T17:35:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9c3315c8b4c543a38f07ec0c509f03c1',ramdisk_id='',reservation_id='r-bonbxfmg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestEncryptedCinderVolumes-595928636',owner_user_name='tempest-TestEncryptedCinderVolumes-595928636-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-29T17:35:58Z,user_data=None,user_id='90bbb3ba09534f74aedaab7650ed5ba4',uuid=1c950583-8182-4826-a70d-227f3e018779,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6ed79400-df99-4990-bcf9-2b653ae874ce", "address": "fa:16:3e:84:82:76", "network": {"id": "9275d605-e314-4c83-a4e8-f4ba085f6358", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1160921638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c3315c8b4c543a38f07ec0c509f03c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ed79400-df", "ovs_interfaceid": "6ed79400-df99-4990-bcf9-2b653ae874ce", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 29 12:36:18 np0005601226 nova_compute[239456]: 2026-01-29 17:36:18.866 239460 DEBUG nova.network.os_vif_util [None req-9a45968e-2590-49b7-b8b0-39a7a8d548de 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Converting VIF {"id": "6ed79400-df99-4990-bcf9-2b653ae874ce", "address": "fa:16:3e:84:82:76", "network": {"id": "9275d605-e314-4c83-a4e8-f4ba085f6358", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1160921638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c3315c8b4c543a38f07ec0c509f03c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ed79400-df", "ovs_interfaceid": "6ed79400-df99-4990-bcf9-2b653ae874ce", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 29 12:36:18 np0005601226 nova_compute[239456]: 2026-01-29 17:36:18.867 239460 DEBUG nova.network.os_vif_util [None req-9a45968e-2590-49b7-b8b0-39a7a8d548de 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:84:82:76,bridge_name='br-int',has_traffic_filtering=True,id=6ed79400-df99-4990-bcf9-2b653ae874ce,network=Network(9275d605-e314-4c83-a4e8-f4ba085f6358),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6ed79400-df') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 29 12:36:18 np0005601226 nova_compute[239456]: 2026-01-29 17:36:18.868 239460 DEBUG os_vif [None req-9a45968e-2590-49b7-b8b0-39a7a8d548de 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:84:82:76,bridge_name='br-int',has_traffic_filtering=True,id=6ed79400-df99-4990-bcf9-2b653ae874ce,network=Network(9275d605-e314-4c83-a4e8-f4ba085f6358),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6ed79400-df') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 29 12:36:18 np0005601226 nova_compute[239456]: 2026-01-29 17:36:18.869 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:36:18 np0005601226 nova_compute[239456]: 2026-01-29 17:36:18.870 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6ed79400-df, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:36:18 np0005601226 nova_compute[239456]: 2026-01-29 17:36:18.872 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:36:18 np0005601226 nova_compute[239456]: 2026-01-29 17:36:18.877 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 29 12:36:18 np0005601226 nova_compute[239456]: 2026-01-29 17:36:18.881 239460 INFO os_vif [None req-9a45968e-2590-49b7-b8b0-39a7a8d548de 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:84:82:76,bridge_name='br-int',has_traffic_filtering=True,id=6ed79400-df99-4990-bcf9-2b653ae874ce,network=Network(9275d605-e314-4c83-a4e8-f4ba085f6358),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6ed79400-df')#033[00m
Jan 29 12:36:18 np0005601226 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-598c5e79c6e8d5b5562682210e46d03c8765f4a32e1f535425fea099cd780d2e-userdata-shm.mount: Deactivated successfully.
Jan 29 12:36:18 np0005601226 systemd[1]: var-lib-containers-storage-overlay-0a5bf70fe68292d58851771cb60ad4635fb2b5bbd283f12598726db028ee8022-merged.mount: Deactivated successfully.
Jan 29 12:36:18 np0005601226 podman[274395]: 2026-01-29 17:36:18.896182766 +0000 UTC m=+0.119386372 container cleanup 598c5e79c6e8d5b5562682210e46d03c8765f4a32e1f535425fea099cd780d2e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9275d605-e314-4c83-a4e8-f4ba085f6358, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 29 12:36:18 np0005601226 systemd[1]: libpod-conmon-598c5e79c6e8d5b5562682210e46d03c8765f4a32e1f535425fea099cd780d2e.scope: Deactivated successfully.
Jan 29 12:36:18 np0005601226 podman[274451]: 2026-01-29 17:36:18.966103063 +0000 UTC m=+0.045633139 container remove 598c5e79c6e8d5b5562682210e46d03c8765f4a32e1f535425fea099cd780d2e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9275d605-e314-4c83-a4e8-f4ba085f6358, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:36:18 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:36:18.970 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[19538be3-5fb0-4a0d-b282-36095d88e25f]: (4, ('Thu Jan 29 05:36:18 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-9275d605-e314-4c83-a4e8-f4ba085f6358 (598c5e79c6e8d5b5562682210e46d03c8765f4a32e1f535425fea099cd780d2e)\n598c5e79c6e8d5b5562682210e46d03c8765f4a32e1f535425fea099cd780d2e\nThu Jan 29 05:36:18 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-9275d605-e314-4c83-a4e8-f4ba085f6358 (598c5e79c6e8d5b5562682210e46d03c8765f4a32e1f535425fea099cd780d2e)\n598c5e79c6e8d5b5562682210e46d03c8765f4a32e1f535425fea099cd780d2e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:36:18 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:36:18.972 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[57caf9c8-4628-4bc7-af20-932ae1c03b74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:36:18 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:36:18.972 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9275d605-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:36:18 np0005601226 kernel: tap9275d605-e0: left promiscuous mode
Jan 29 12:36:18 np0005601226 nova_compute[239456]: 2026-01-29 17:36:18.974 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:36:18 np0005601226 nova_compute[239456]: 2026-01-29 17:36:18.980 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:36:18 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:36:18.982 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[d9173e2e-0ec1-4748-9e4e-a9dc6253ad5f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:36:18 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:36:18.997 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[ca9df15c-fb50-46db-99e2-cc7a6cf42708]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:36:18 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:36:18.998 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[3fe7b3dd-7b52-4515-a79a-9240a7ff6ec9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:36:19 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:36:19.012 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[e515bd49-7e09-4a4a-a0ec-f23695c8fee3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 550931, 'reachable_time': 22717, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274469, 'error': None, 'target': 'ovnmeta-9275d605-e314-4c83-a4e8-f4ba085f6358', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:36:19 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:36:19.016 156164 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9275d605-e314-4c83-a4e8-f4ba085f6358 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 29 12:36:19 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:36:19.016 156164 DEBUG oslo.privsep.daemon [-] privsep: reply[3c4279f4-97d0-449c-9a19-97129d97f4cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:36:19 np0005601226 systemd[1]: run-netns-ovnmeta\x2d9275d605\x2de314\x2d4c83\x2da4e8\x2df4ba085f6358.mount: Deactivated successfully.
Jan 29 12:36:19 np0005601226 nova_compute[239456]: 2026-01-29 17:36:19.066 239460 DEBUG nova.compute.manager [req-7ac8aa66-c305-4e4b-bbd4-5aaa6f0a87e2 req-af7d1854-79b8-4631-8174-8cf4dcf72f8c 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 1c950583-8182-4826-a70d-227f3e018779] Received event network-vif-unplugged-6ed79400-df99-4990-bcf9-2b653ae874ce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:36:19 np0005601226 nova_compute[239456]: 2026-01-29 17:36:19.066 239460 DEBUG oslo_concurrency.lockutils [req-7ac8aa66-c305-4e4b-bbd4-5aaa6f0a87e2 req-af7d1854-79b8-4631-8174-8cf4dcf72f8c 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "1c950583-8182-4826-a70d-227f3e018779-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:36:19 np0005601226 nova_compute[239456]: 2026-01-29 17:36:19.067 239460 DEBUG oslo_concurrency.lockutils [req-7ac8aa66-c305-4e4b-bbd4-5aaa6f0a87e2 req-af7d1854-79b8-4631-8174-8cf4dcf72f8c 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "1c950583-8182-4826-a70d-227f3e018779-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:36:19 np0005601226 nova_compute[239456]: 2026-01-29 17:36:19.067 239460 DEBUG oslo_concurrency.lockutils [req-7ac8aa66-c305-4e4b-bbd4-5aaa6f0a87e2 req-af7d1854-79b8-4631-8174-8cf4dcf72f8c 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "1c950583-8182-4826-a70d-227f3e018779-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:36:19 np0005601226 nova_compute[239456]: 2026-01-29 17:36:19.068 239460 DEBUG nova.compute.manager [req-7ac8aa66-c305-4e4b-bbd4-5aaa6f0a87e2 req-af7d1854-79b8-4631-8174-8cf4dcf72f8c 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 1c950583-8182-4826-a70d-227f3e018779] No waiting events found dispatching network-vif-unplugged-6ed79400-df99-4990-bcf9-2b653ae874ce pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 29 12:36:19 np0005601226 nova_compute[239456]: 2026-01-29 17:36:19.068 239460 DEBUG nova.compute.manager [req-7ac8aa66-c305-4e4b-bbd4-5aaa6f0a87e2 req-af7d1854-79b8-4631-8174-8cf4dcf72f8c 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 1c950583-8182-4826-a70d-227f3e018779] Received event network-vif-unplugged-6ed79400-df99-4990-bcf9-2b653ae874ce for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 29 12:36:19 np0005601226 nova_compute[239456]: 2026-01-29 17:36:19.191 239460 INFO nova.virt.libvirt.driver [None req-9a45968e-2590-49b7-b8b0-39a7a8d548de 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 1c950583-8182-4826-a70d-227f3e018779] Deleting instance files /var/lib/nova/instances/1c950583-8182-4826-a70d-227f3e018779_del#033[00m
Jan 29 12:36:19 np0005601226 nova_compute[239456]: 2026-01-29 17:36:19.192 239460 INFO nova.virt.libvirt.driver [None req-9a45968e-2590-49b7-b8b0-39a7a8d548de 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 1c950583-8182-4826-a70d-227f3e018779] Deletion of /var/lib/nova/instances/1c950583-8182-4826-a70d-227f3e018779_del complete#033[00m
Jan 29 12:36:19 np0005601226 nova_compute[239456]: 2026-01-29 17:36:19.258 239460 INFO nova.compute.manager [None req-9a45968e-2590-49b7-b8b0-39a7a8d548de 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 1c950583-8182-4826-a70d-227f3e018779] Took 0.64 seconds to destroy the instance on the hypervisor.#033[00m
Jan 29 12:36:19 np0005601226 nova_compute[239456]: 2026-01-29 17:36:19.259 239460 DEBUG oslo.service.loopingcall [None req-9a45968e-2590-49b7-b8b0-39a7a8d548de 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 29 12:36:19 np0005601226 nova_compute[239456]: 2026-01-29 17:36:19.259 239460 DEBUG nova.compute.manager [-] [instance: 1c950583-8182-4826-a70d-227f3e018779] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 29 12:36:19 np0005601226 nova_compute[239456]: 2026-01-29 17:36:19.259 239460 DEBUG nova.network.neutron [-] [instance: 1c950583-8182-4826-a70d-227f3e018779] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 29 12:36:19 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:36:19 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1782658438' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:36:19 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:36:19 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1782658438' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:36:19 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:36:19.750 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ea6bcc65-2563-4fe6-9039-bca7261f4cf7, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:36:20 np0005601226 nova_compute[239456]: 2026-01-29 17:36:20.092 239460 DEBUG nova.network.neutron [-] [instance: 1c950583-8182-4826-a70d-227f3e018779] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:36:20 np0005601226 nova_compute[239456]: 2026-01-29 17:36:20.112 239460 INFO nova.compute.manager [-] [instance: 1c950583-8182-4826-a70d-227f3e018779] Took 0.85 seconds to deallocate network for instance.#033[00m
Jan 29 12:36:20 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1892: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 271 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 869 KiB/s rd, 7.6 MiB/s wr, 171 op/s
Jan 29 12:36:20 np0005601226 nova_compute[239456]: 2026-01-29 17:36:20.220 239460 DEBUG nova.compute.manager [req-25e971de-c128-463c-993c-8716309bf6cd req-bf7e5490-1e25-4a65-82dc-40a46aab7ab3 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 1c950583-8182-4826-a70d-227f3e018779] Received event network-vif-deleted-6ed79400-df99-4990-bcf9-2b653ae874ce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:36:20 np0005601226 nova_compute[239456]: 2026-01-29 17:36:20.360 239460 INFO nova.compute.manager [None req-9a45968e-2590-49b7-b8b0-39a7a8d548de 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 1c950583-8182-4826-a70d-227f3e018779] Took 0.25 seconds to detach 1 volumes for instance.#033[00m
Jan 29 12:36:20 np0005601226 nova_compute[239456]: 2026-01-29 17:36:20.427 239460 DEBUG oslo_concurrency.lockutils [None req-9a45968e-2590-49b7-b8b0-39a7a8d548de 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:36:20 np0005601226 nova_compute[239456]: 2026-01-29 17:36:20.428 239460 DEBUG oslo_concurrency.lockutils [None req-9a45968e-2590-49b7-b8b0-39a7a8d548de 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:36:20 np0005601226 nova_compute[239456]: 2026-01-29 17:36:20.500 239460 DEBUG oslo_concurrency.processutils [None req-9a45968e-2590-49b7-b8b0-39a7a8d548de 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:36:21 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:36:21 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1762794279' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:36:21 np0005601226 nova_compute[239456]: 2026-01-29 17:36:21.155 239460 DEBUG nova.compute.manager [req-5b55b6a5-82f7-472c-93aa-e4817961647b req-7c9ae41f-9e52-4df1-8fe9-e87e8444dbc9 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 1c950583-8182-4826-a70d-227f3e018779] Received event network-vif-plugged-6ed79400-df99-4990-bcf9-2b653ae874ce external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:36:21 np0005601226 nova_compute[239456]: 2026-01-29 17:36:21.156 239460 DEBUG oslo_concurrency.lockutils [req-5b55b6a5-82f7-472c-93aa-e4817961647b req-7c9ae41f-9e52-4df1-8fe9-e87e8444dbc9 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "1c950583-8182-4826-a70d-227f3e018779-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:36:21 np0005601226 nova_compute[239456]: 2026-01-29 17:36:21.156 239460 DEBUG oslo_concurrency.lockutils [req-5b55b6a5-82f7-472c-93aa-e4817961647b req-7c9ae41f-9e52-4df1-8fe9-e87e8444dbc9 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "1c950583-8182-4826-a70d-227f3e018779-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:36:21 np0005601226 nova_compute[239456]: 2026-01-29 17:36:21.156 239460 DEBUG oslo_concurrency.lockutils [req-5b55b6a5-82f7-472c-93aa-e4817961647b req-7c9ae41f-9e52-4df1-8fe9-e87e8444dbc9 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "1c950583-8182-4826-a70d-227f3e018779-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:36:21 np0005601226 nova_compute[239456]: 2026-01-29 17:36:21.157 239460 DEBUG nova.compute.manager [req-5b55b6a5-82f7-472c-93aa-e4817961647b req-7c9ae41f-9e52-4df1-8fe9-e87e8444dbc9 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 1c950583-8182-4826-a70d-227f3e018779] No waiting events found dispatching network-vif-plugged-6ed79400-df99-4990-bcf9-2b653ae874ce pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 29 12:36:21 np0005601226 nova_compute[239456]: 2026-01-29 17:36:21.157 239460 WARNING nova.compute.manager [req-5b55b6a5-82f7-472c-93aa-e4817961647b req-7c9ae41f-9e52-4df1-8fe9-e87e8444dbc9 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 1c950583-8182-4826-a70d-227f3e018779] Received unexpected event network-vif-plugged-6ed79400-df99-4990-bcf9-2b653ae874ce for instance with vm_state deleted and task_state None.#033[00m
Jan 29 12:36:21 np0005601226 nova_compute[239456]: 2026-01-29 17:36:21.173 239460 DEBUG oslo_concurrency.processutils [None req-9a45968e-2590-49b7-b8b0-39a7a8d548de 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.673s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:36:21 np0005601226 nova_compute[239456]: 2026-01-29 17:36:21.178 239460 DEBUG nova.compute.provider_tree [None req-9a45968e-2590-49b7-b8b0-39a7a8d548de 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:36:21 np0005601226 nova_compute[239456]: 2026-01-29 17:36:21.193 239460 DEBUG nova.scheduler.client.report [None req-9a45968e-2590-49b7-b8b0-39a7a8d548de 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:36:21 np0005601226 nova_compute[239456]: 2026-01-29 17:36:21.224 239460 DEBUG oslo_concurrency.lockutils [None req-9a45968e-2590-49b7-b8b0-39a7a8d548de 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.795s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:36:21 np0005601226 nova_compute[239456]: 2026-01-29 17:36:21.260 239460 INFO nova.scheduler.client.report [None req-9a45968e-2590-49b7-b8b0-39a7a8d548de 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Deleted allocations for instance 1c950583-8182-4826-a70d-227f3e018779#033[00m
Jan 29 12:36:21 np0005601226 nova_compute[239456]: 2026-01-29 17:36:21.345 239460 DEBUG oslo_concurrency.lockutils [None req-9a45968e-2590-49b7-b8b0-39a7a8d548de 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Lock "1c950583-8182-4826-a70d-227f3e018779" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:36:21 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e501 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:36:22 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1893: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 271 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 239 KiB/s rd, 3.5 MiB/s wr, 64 op/s
Jan 29 12:36:22 np0005601226 nova_compute[239456]: 2026-01-29 17:36:22.696 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:36:22 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e501 do_prune osdmap full prune enabled
Jan 29 12:36:22 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e502 e502: 3 total, 3 up, 3 in
Jan 29 12:36:22 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e502: 3 total, 3 up, 3 in
Jan 29 12:36:23 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:36:23 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1367389999' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:36:23 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e502 do_prune osdmap full prune enabled
Jan 29 12:36:23 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e503 e503: 3 total, 3 up, 3 in
Jan 29 12:36:23 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e503: 3 total, 3 up, 3 in
Jan 29 12:36:23 np0005601226 nova_compute[239456]: 2026-01-29 17:36:23.873 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:36:24 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1896: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 271 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 78 KiB/s rd, 34 KiB/s wr, 101 op/s
Jan 29 12:36:25 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:36:25 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3642420957' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:36:25 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:36:25 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3642420957' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:36:26 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1897: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 271 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 96 KiB/s rd, 7.9 KiB/s wr, 125 op/s
Jan 29 12:36:26 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:36:27 np0005601226 nova_compute[239456]: 2026-01-29 17:36:27.699 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:36:27 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e503 do_prune osdmap full prune enabled
Jan 29 12:36:27 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e504 e504: 3 total, 3 up, 3 in
Jan 29 12:36:27 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e504: 3 total, 3 up, 3 in
Jan 29 12:36:28 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1899: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 271 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 79 KiB/s rd, 6.0 KiB/s wr, 103 op/s
Jan 29 12:36:28 np0005601226 nova_compute[239456]: 2026-01-29 17:36:28.390 239460 DEBUG oslo_concurrency.lockutils [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Acquiring lock "59a43be4-0b8c-4ad3-abba-c60a6c6e9aae" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:36:28 np0005601226 nova_compute[239456]: 2026-01-29 17:36:28.390 239460 DEBUG oslo_concurrency.lockutils [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Lock "59a43be4-0b8c-4ad3-abba-c60a6c6e9aae" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:36:28 np0005601226 nova_compute[239456]: 2026-01-29 17:36:28.410 239460 DEBUG nova.compute.manager [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 29 12:36:28 np0005601226 nova_compute[239456]: 2026-01-29 17:36:28.506 239460 DEBUG oslo_concurrency.lockutils [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:36:28 np0005601226 nova_compute[239456]: 2026-01-29 17:36:28.506 239460 DEBUG oslo_concurrency.lockutils [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:36:28 np0005601226 nova_compute[239456]: 2026-01-29 17:36:28.513 239460 DEBUG nova.virt.hardware [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 29 12:36:28 np0005601226 nova_compute[239456]: 2026-01-29 17:36:28.513 239460 INFO nova.compute.claims [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 29 12:36:28 np0005601226 nova_compute[239456]: 2026-01-29 17:36:28.620 239460 DEBUG oslo_concurrency.processutils [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:36:28 np0005601226 nova_compute[239456]: 2026-01-29 17:36:28.928 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:36:28 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e504 do_prune osdmap full prune enabled
Jan 29 12:36:28 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e505 e505: 3 total, 3 up, 3 in
Jan 29 12:36:29 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e505: 3 total, 3 up, 3 in
Jan 29 12:36:29 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:36:29 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/567488695' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:36:29 np0005601226 nova_compute[239456]: 2026-01-29 17:36:29.116 239460 DEBUG oslo_concurrency.processutils [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:36:29 np0005601226 nova_compute[239456]: 2026-01-29 17:36:29.121 239460 DEBUG nova.compute.provider_tree [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:36:29 np0005601226 nova_compute[239456]: 2026-01-29 17:36:29.144 239460 DEBUG nova.scheduler.client.report [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:36:29 np0005601226 nova_compute[239456]: 2026-01-29 17:36:29.171 239460 DEBUG oslo_concurrency.lockutils [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:36:29 np0005601226 nova_compute[239456]: 2026-01-29 17:36:29.172 239460 DEBUG nova.compute.manager [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 29 12:36:29 np0005601226 nova_compute[239456]: 2026-01-29 17:36:29.225 239460 DEBUG nova.compute.manager [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 29 12:36:29 np0005601226 nova_compute[239456]: 2026-01-29 17:36:29.226 239460 DEBUG nova.network.neutron [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 29 12:36:29 np0005601226 nova_compute[239456]: 2026-01-29 17:36:29.245 239460 INFO nova.virt.libvirt.driver [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 29 12:36:29 np0005601226 nova_compute[239456]: 2026-01-29 17:36:29.267 239460 DEBUG nova.compute.manager [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 29 12:36:29 np0005601226 nova_compute[239456]: 2026-01-29 17:36:29.316 239460 INFO nova.virt.block_device [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] Booting with volume 10e2c6f1-e7fc-4be6-aed9-6868df98398e at /dev/vda#033[00m
Jan 29 12:36:29 np0005601226 nova_compute[239456]: 2026-01-29 17:36:29.441 239460 DEBUG os_brick.utils [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 29 12:36:29 np0005601226 nova_compute[239456]: 2026-01-29 17:36:29.442 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:36:29 np0005601226 nova_compute[239456]: 2026-01-29 17:36:29.453 249612 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:36:29 np0005601226 nova_compute[239456]: 2026-01-29 17:36:29.453 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[1bf94b1e-7b34-4cd8-a30f-be009d31c2c2]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:36:29 np0005601226 nova_compute[239456]: 2026-01-29 17:36:29.454 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:36:29 np0005601226 nova_compute[239456]: 2026-01-29 17:36:29.460 249612 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:36:29 np0005601226 nova_compute[239456]: 2026-01-29 17:36:29.460 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[b4a62209-230e-4d3f-8817-5622afe4150a]: (4, ('InitiatorName=iqn.1994-05.com.redhat:29fa340538f', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:36:29 np0005601226 nova_compute[239456]: 2026-01-29 17:36:29.461 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:36:29 np0005601226 nova_compute[239456]: 2026-01-29 17:36:29.469 249612 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:36:29 np0005601226 nova_compute[239456]: 2026-01-29 17:36:29.469 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[a6f6eff2-9c03-41ce-a83c-5ba7877288e6]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:36:29 np0005601226 nova_compute[239456]: 2026-01-29 17:36:29.470 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[c8631c0c-39ce-4747-8025-205af9cb613e]: (4, '3d58286e-1b14-486e-8cad-0bdb2d2969c4') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:36:29 np0005601226 nova_compute[239456]: 2026-01-29 17:36:29.470 239460 DEBUG oslo_concurrency.processutils [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:36:29 np0005601226 nova_compute[239456]: 2026-01-29 17:36:29.486 239460 DEBUG oslo_concurrency.processutils [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] CMD "nvme version" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:36:29 np0005601226 nova_compute[239456]: 2026-01-29 17:36:29.488 239460 DEBUG os_brick.initiator.connectors.lightos [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 29 12:36:29 np0005601226 nova_compute[239456]: 2026-01-29 17:36:29.488 239460 DEBUG os_brick.initiator.connectors.lightos [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 29 12:36:29 np0005601226 nova_compute[239456]: 2026-01-29 17:36:29.489 239460 DEBUG os_brick.initiator.connectors.lightos [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 29 12:36:29 np0005601226 nova_compute[239456]: 2026-01-29 17:36:29.489 239460 DEBUG os_brick.utils [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] <== get_connector_properties: return (46ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:29fa340538f', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '3d58286e-1b14-486e-8cad-0bdb2d2969c4', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 29 12:36:29 np0005601226 nova_compute[239456]: 2026-01-29 17:36:29.489 239460 DEBUG nova.virt.block_device [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] Updating existing volume attachment record: 7c6e64fb-110d-499d-bc7a-55a9bd2b567a _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 29 12:36:29 np0005601226 nova_compute[239456]: 2026-01-29 17:36:29.905 239460 DEBUG nova.policy [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '90bbb3ba09534f74aedaab7650ed5ba4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9c3315c8b4c543a38f07ec0c509f03c1', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 29 12:36:30 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1901: 305 pgs: 305 active+clean; 271 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 79 KiB/s rd, 4.6 KiB/s wr, 107 op/s
Jan 29 12:36:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:36:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/84597611' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:36:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:36:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/905030524' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:36:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:36:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/905030524' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:36:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:36:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3189288017' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:36:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:36:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3189288017' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:36:30 np0005601226 nova_compute[239456]: 2026-01-29 17:36:30.631 239460 DEBUG nova.compute.manager [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 29 12:36:30 np0005601226 nova_compute[239456]: 2026-01-29 17:36:30.632 239460 DEBUG nova.virt.libvirt.driver [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 29 12:36:30 np0005601226 nova_compute[239456]: 2026-01-29 17:36:30.633 239460 INFO nova.virt.libvirt.driver [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] Creating image(s)#033[00m
Jan 29 12:36:30 np0005601226 nova_compute[239456]: 2026-01-29 17:36:30.633 239460 DEBUG nova.virt.libvirt.driver [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Jan 29 12:36:30 np0005601226 nova_compute[239456]: 2026-01-29 17:36:30.633 239460 DEBUG nova.virt.libvirt.driver [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] Ensure instance console log exists: /var/lib/nova/instances/59a43be4-0b8c-4ad3-abba-c60a6c6e9aae/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 29 12:36:30 np0005601226 nova_compute[239456]: 2026-01-29 17:36:30.633 239460 DEBUG oslo_concurrency.lockutils [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:36:30 np0005601226 nova_compute[239456]: 2026-01-29 17:36:30.634 239460 DEBUG oslo_concurrency.lockutils [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:36:30 np0005601226 nova_compute[239456]: 2026-01-29 17:36:30.634 239460 DEBUG oslo_concurrency.lockutils [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:36:30 np0005601226 nova_compute[239456]: 2026-01-29 17:36:30.902 239460 DEBUG nova.network.neutron [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] Successfully created port: da0c0c44-fc4a-4778-ad1a-08a01c5d459c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 29 12:36:31 np0005601226 nova_compute[239456]: 2026-01-29 17:36:31.730 239460 DEBUG nova.network.neutron [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] Successfully updated port: da0c0c44-fc4a-4778-ad1a-08a01c5d459c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 29 12:36:31 np0005601226 nova_compute[239456]: 2026-01-29 17:36:31.746 239460 DEBUG oslo_concurrency.lockutils [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Acquiring lock "refresh_cache-59a43be4-0b8c-4ad3-abba-c60a6c6e9aae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:36:31 np0005601226 nova_compute[239456]: 2026-01-29 17:36:31.747 239460 DEBUG oslo_concurrency.lockutils [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Acquired lock "refresh_cache-59a43be4-0b8c-4ad3-abba-c60a6c6e9aae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:36:31 np0005601226 nova_compute[239456]: 2026-01-29 17:36:31.747 239460 DEBUG nova.network.neutron [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 29 12:36:31 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e505 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:36:31 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e505 do_prune osdmap full prune enabled
Jan 29 12:36:31 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e506 e506: 3 total, 3 up, 3 in
Jan 29 12:36:31 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e506: 3 total, 3 up, 3 in
Jan 29 12:36:31 np0005601226 nova_compute[239456]: 2026-01-29 17:36:31.816 239460 DEBUG nova.compute.manager [req-62ac137d-6910-4ab1-b29f-c8a30ee78a95 req-1b884e64-9b29-4b94-82e2-fedb4a604908 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] Received event network-changed-da0c0c44-fc4a-4778-ad1a-08a01c5d459c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:36:31 np0005601226 nova_compute[239456]: 2026-01-29 17:36:31.817 239460 DEBUG nova.compute.manager [req-62ac137d-6910-4ab1-b29f-c8a30ee78a95 req-1b884e64-9b29-4b94-82e2-fedb4a604908 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] Refreshing instance network info cache due to event network-changed-da0c0c44-fc4a-4778-ad1a-08a01c5d459c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 29 12:36:31 np0005601226 nova_compute[239456]: 2026-01-29 17:36:31.817 239460 DEBUG oslo_concurrency.lockutils [req-62ac137d-6910-4ab1-b29f-c8a30ee78a95 req-1b884e64-9b29-4b94-82e2-fedb4a604908 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "refresh_cache-59a43be4-0b8c-4ad3-abba-c60a6c6e9aae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:36:31 np0005601226 nova_compute[239456]: 2026-01-29 17:36:31.875 239460 DEBUG nova.network.neutron [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 29 12:36:32 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1903: 305 pgs: 305 active+clean; 271 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 2.0 KiB/s wr, 55 op/s
Jan 29 12:36:32 np0005601226 nova_compute[239456]: 2026-01-29 17:36:32.546 239460 DEBUG nova.network.neutron [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] Updating instance_info_cache with network_info: [{"id": "da0c0c44-fc4a-4778-ad1a-08a01c5d459c", "address": "fa:16:3e:f2:40:a3", "network": {"id": "9275d605-e314-4c83-a4e8-f4ba085f6358", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1160921638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c3315c8b4c543a38f07ec0c509f03c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda0c0c44-fc", "ovs_interfaceid": "da0c0c44-fc4a-4778-ad1a-08a01c5d459c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:36:32 np0005601226 nova_compute[239456]: 2026-01-29 17:36:32.572 239460 DEBUG oslo_concurrency.lockutils [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Releasing lock "refresh_cache-59a43be4-0b8c-4ad3-abba-c60a6c6e9aae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:36:32 np0005601226 nova_compute[239456]: 2026-01-29 17:36:32.573 239460 DEBUG nova.compute.manager [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] Instance network_info: |[{"id": "da0c0c44-fc4a-4778-ad1a-08a01c5d459c", "address": "fa:16:3e:f2:40:a3", "network": {"id": "9275d605-e314-4c83-a4e8-f4ba085f6358", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1160921638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c3315c8b4c543a38f07ec0c509f03c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda0c0c44-fc", "ovs_interfaceid": "da0c0c44-fc4a-4778-ad1a-08a01c5d459c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 29 12:36:32 np0005601226 nova_compute[239456]: 2026-01-29 17:36:32.574 239460 DEBUG oslo_concurrency.lockutils [req-62ac137d-6910-4ab1-b29f-c8a30ee78a95 req-1b884e64-9b29-4b94-82e2-fedb4a604908 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquired lock "refresh_cache-59a43be4-0b8c-4ad3-abba-c60a6c6e9aae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:36:32 np0005601226 nova_compute[239456]: 2026-01-29 17:36:32.574 239460 DEBUG nova.network.neutron [req-62ac137d-6910-4ab1-b29f-c8a30ee78a95 req-1b884e64-9b29-4b94-82e2-fedb4a604908 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] Refreshing network info cache for port da0c0c44-fc4a-4778-ad1a-08a01c5d459c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 29 12:36:32 np0005601226 nova_compute[239456]: 2026-01-29 17:36:32.580 239460 DEBUG nova.virt.libvirt.driver [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] Start _get_guest_xml network_info=[{"id": "da0c0c44-fc4a-4778-ad1a-08a01c5d459c", "address": "fa:16:3e:f2:40:a3", "network": {"id": "9275d605-e314-4c83-a4e8-f4ba085f6358", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1160921638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c3315c8b4c543a38f07ec0c509f03c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda0c0c44-fc", "ovs_interfaceid": "da0c0c44-fc4a-4778-ad1a-08a01c5d459c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None 
block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'guest_format': None, 'mount_device': '/dev/vda', 'device_type': 'disk', 'disk_bus': 'virtio', 'attachment_id': '7c6e64fb-110d-499d-bc7a-55a9bd2b567a', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-10e2c6f1-e7fc-4be6-aed9-6868df98398e', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '10e2c6f1-e7fc-4be6-aed9-6868df98398e', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '59a43be4-0b8c-4ad3-abba-c60a6c6e9aae', 'attached_at': '', 'detached_at': '', 'volume_id': '10e2c6f1-e7fc-4be6-aed9-6868df98398e', 'serial': '10e2c6f1-e7fc-4be6-aed9-6868df98398e'}, 'delete_on_termination': False, 'boot_index': 0, 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 29 12:36:32 np0005601226 nova_compute[239456]: 2026-01-29 17:36:32.585 239460 WARNING nova.virt.libvirt.driver [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 29 12:36:32 np0005601226 nova_compute[239456]: 2026-01-29 17:36:32.592 239460 DEBUG nova.virt.libvirt.host [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 29 12:36:32 np0005601226 nova_compute[239456]: 2026-01-29 17:36:32.594 239460 DEBUG nova.virt.libvirt.host [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 29 12:36:32 np0005601226 nova_compute[239456]: 2026-01-29 17:36:32.599 239460 DEBUG nova.virt.libvirt.host [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 29 12:36:32 np0005601226 nova_compute[239456]: 2026-01-29 17:36:32.600 239460 DEBUG nova.virt.libvirt.host [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 29 12:36:32 np0005601226 nova_compute[239456]: 2026-01-29 17:36:32.601 239460 DEBUG nova.virt.libvirt.driver [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 29 12:36:32 np0005601226 nova_compute[239456]: 2026-01-29 17:36:32.601 239460 DEBUG nova.virt.hardware [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-29T17:13:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='e367d44e-e23e-4b8e-90d1-56d09c8403b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 29 12:36:32 np0005601226 nova_compute[239456]: 2026-01-29 17:36:32.602 239460 DEBUG nova.virt.hardware [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 29 12:36:32 np0005601226 nova_compute[239456]: 2026-01-29 17:36:32.603 239460 DEBUG nova.virt.hardware [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 29 12:36:32 np0005601226 nova_compute[239456]: 2026-01-29 17:36:32.603 239460 DEBUG nova.virt.hardware [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 29 12:36:32 np0005601226 nova_compute[239456]: 2026-01-29 17:36:32.604 239460 DEBUG nova.virt.hardware [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 29 12:36:32 np0005601226 nova_compute[239456]: 2026-01-29 17:36:32.604 239460 DEBUG nova.virt.hardware [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 29 12:36:32 np0005601226 nova_compute[239456]: 2026-01-29 17:36:32.605 239460 DEBUG nova.virt.hardware [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 29 12:36:32 np0005601226 nova_compute[239456]: 2026-01-29 17:36:32.605 239460 DEBUG nova.virt.hardware [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 29 12:36:32 np0005601226 nova_compute[239456]: 2026-01-29 17:36:32.606 239460 DEBUG nova.virt.hardware [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 29 12:36:32 np0005601226 nova_compute[239456]: 2026-01-29 17:36:32.606 239460 DEBUG nova.virt.hardware [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 29 12:36:32 np0005601226 nova_compute[239456]: 2026-01-29 17:36:32.606 239460 DEBUG nova.virt.hardware [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 29 12:36:32 np0005601226 nova_compute[239456]: 2026-01-29 17:36:32.647 239460 DEBUG nova.storage.rbd_utils [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] rbd image 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:36:32 np0005601226 nova_compute[239456]: 2026-01-29 17:36:32.652 239460 DEBUG oslo_concurrency.processutils [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:36:32 np0005601226 nova_compute[239456]: 2026-01-29 17:36:32.734 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:36:32 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e506 do_prune osdmap full prune enabled
Jan 29 12:36:32 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e507 e507: 3 total, 3 up, 3 in
Jan 29 12:36:32 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e507: 3 total, 3 up, 3 in
Jan 29 12:36:33 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:36:33 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1726338588' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:36:33 np0005601226 nova_compute[239456]: 2026-01-29 17:36:33.207 239460 DEBUG oslo_concurrency.processutils [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:36:33 np0005601226 nova_compute[239456]: 2026-01-29 17:36:33.351 239460 DEBUG os_brick.encryptors [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Using volume encryption metadata '{'encryption_key_id': 'dbd5d945-3595-498d-b6b9-81e87eb7d541', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-10e2c6f1-e7fc-4be6-aed9-6868df98398e', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '10e2c6f1-e7fc-4be6-aed9-6868df98398e', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '59a43be4-0b8c-4ad3-abba-c60a6c6e9aae', 'attached_at': '', 'detached_at': '', 'volume_id': '10e2c6f1-e7fc-4be6-aed9-6868df98398e', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135#033[00m
Jan 29 12:36:33 np0005601226 nova_compute[239456]: 2026-01-29 17:36:33.353 239460 DEBUG barbicanclient.client [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163#033[00m
Jan 29 12:36:33 np0005601226 nova_compute[239456]: 2026-01-29 17:36:33.368 239460 DEBUG barbicanclient.v1.secrets [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/dbd5d945-3595-498d-b6b9-81e87eb7d541 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514#033[00m
Jan 29 12:36:33 np0005601226 nova_compute[239456]: 2026-01-29 17:36:33.368 239460 INFO barbicanclient.base [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Calculated Secrets uuid ref: secrets/dbd5d945-3595-498d-b6b9-81e87eb7d541#033[00m
Jan 29 12:36:33 np0005601226 nova_compute[239456]: 2026-01-29 17:36:33.389 239460 DEBUG barbicanclient.client [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:36:33 np0005601226 nova_compute[239456]: 2026-01-29 17:36:33.389 239460 INFO barbicanclient.base [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Calculated Secrets uuid ref: secrets/dbd5d945-3595-498d-b6b9-81e87eb7d541#033[00m
Jan 29 12:36:33 np0005601226 nova_compute[239456]: 2026-01-29 17:36:33.415 239460 DEBUG barbicanclient.client [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:36:33 np0005601226 nova_compute[239456]: 2026-01-29 17:36:33.416 239460 INFO barbicanclient.base [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Calculated Secrets uuid ref: secrets/dbd5d945-3595-498d-b6b9-81e87eb7d541#033[00m
Jan 29 12:36:33 np0005601226 nova_compute[239456]: 2026-01-29 17:36:33.445 239460 DEBUG barbicanclient.client [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:36:33 np0005601226 nova_compute[239456]: 2026-01-29 17:36:33.446 239460 INFO barbicanclient.base [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Calculated Secrets uuid ref: secrets/dbd5d945-3595-498d-b6b9-81e87eb7d541#033[00m
Jan 29 12:36:33 np0005601226 nova_compute[239456]: 2026-01-29 17:36:33.464 239460 DEBUG barbicanclient.client [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:36:33 np0005601226 nova_compute[239456]: 2026-01-29 17:36:33.465 239460 INFO barbicanclient.base [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Calculated Secrets uuid ref: secrets/dbd5d945-3595-498d-b6b9-81e87eb7d541#033[00m
Jan 29 12:36:33 np0005601226 nova_compute[239456]: 2026-01-29 17:36:33.591 239460 DEBUG barbicanclient.client [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:36:33 np0005601226 nova_compute[239456]: 2026-01-29 17:36:33.592 239460 INFO barbicanclient.base [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Calculated Secrets uuid ref: secrets/dbd5d945-3595-498d-b6b9-81e87eb7d541#033[00m
Jan 29 12:36:33 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 12:36:33 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 12:36:33 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 12:36:33 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 12:36:33 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 12:36:33 np0005601226 nova_compute[239456]: 2026-01-29 17:36:33.603 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:36:33 np0005601226 nova_compute[239456]: 2026-01-29 17:36:33.603 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 29 12:36:33 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:36:33 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 12:36:33 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 12:36:33 np0005601226 nova_compute[239456]: 2026-01-29 17:36:33.617 239460 DEBUG barbicanclient.client [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:36:33 np0005601226 nova_compute[239456]: 2026-01-29 17:36:33.618 239460 INFO barbicanclient.base [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Calculated Secrets uuid ref: secrets/dbd5d945-3595-498d-b6b9-81e87eb7d541#033[00m
Jan 29 12:36:33 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 12:36:33 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 12:36:33 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 12:36:33 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 12:36:33 np0005601226 nova_compute[239456]: 2026-01-29 17:36:33.637 239460 DEBUG barbicanclient.client [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:36:33 np0005601226 nova_compute[239456]: 2026-01-29 17:36:33.638 239460 INFO barbicanclient.base [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Calculated Secrets uuid ref: secrets/dbd5d945-3595-498d-b6b9-81e87eb7d541#033[00m
Jan 29 12:36:33 np0005601226 nova_compute[239456]: 2026-01-29 17:36:33.665 239460 DEBUG barbicanclient.client [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:36:33 np0005601226 nova_compute[239456]: 2026-01-29 17:36:33.667 239460 INFO barbicanclient.base [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Calculated Secrets uuid ref: secrets/dbd5d945-3595-498d-b6b9-81e87eb7d541#033[00m
Jan 29 12:36:33 np0005601226 nova_compute[239456]: 2026-01-29 17:36:33.692 239460 DEBUG barbicanclient.client [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:36:33 np0005601226 nova_compute[239456]: 2026-01-29 17:36:33.693 239460 INFO barbicanclient.base [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Calculated Secrets uuid ref: secrets/dbd5d945-3595-498d-b6b9-81e87eb7d541#033[00m
Jan 29 12:36:33 np0005601226 nova_compute[239456]: 2026-01-29 17:36:33.719 239460 DEBUG barbicanclient.client [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:36:33 np0005601226 nova_compute[239456]: 2026-01-29 17:36:33.720 239460 INFO barbicanclient.base [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Calculated Secrets uuid ref: secrets/dbd5d945-3595-498d-b6b9-81e87eb7d541#033[00m
Jan 29 12:36:33 np0005601226 nova_compute[239456]: 2026-01-29 17:36:33.747 239460 DEBUG barbicanclient.client [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:36:33 np0005601226 nova_compute[239456]: 2026-01-29 17:36:33.748 239460 INFO barbicanclient.base [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Calculated Secrets uuid ref: secrets/dbd5d945-3595-498d-b6b9-81e87eb7d541#033[00m
Jan 29 12:36:33 np0005601226 nova_compute[239456]: 2026-01-29 17:36:33.777 239460 DEBUG barbicanclient.client [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:36:33 np0005601226 nova_compute[239456]: 2026-01-29 17:36:33.778 239460 INFO barbicanclient.base [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Calculated Secrets uuid ref: secrets/dbd5d945-3595-498d-b6b9-81e87eb7d541#033[00m
Jan 29 12:36:33 np0005601226 nova_compute[239456]: 2026-01-29 17:36:33.802 239460 DEBUG barbicanclient.client [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:36:33 np0005601226 nova_compute[239456]: 2026-01-29 17:36:33.802 239460 INFO barbicanclient.base [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Calculated Secrets uuid ref: secrets/dbd5d945-3595-498d-b6b9-81e87eb7d541#033[00m
Jan 29 12:36:33 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 12:36:33 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:36:33 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 12:36:33 np0005601226 nova_compute[239456]: 2026-01-29 17:36:33.821 239460 DEBUG barbicanclient.client [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:36:33 np0005601226 nova_compute[239456]: 2026-01-29 17:36:33.822 239460 INFO barbicanclient.base [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Calculated Secrets uuid ref: secrets/dbd5d945-3595-498d-b6b9-81e87eb7d541#033[00m
Jan 29 12:36:33 np0005601226 nova_compute[239456]: 2026-01-29 17:36:33.843 239460 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769708178.842651, 1c950583-8182-4826-a70d-227f3e018779 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:36:33 np0005601226 nova_compute[239456]: 2026-01-29 17:36:33.844 239460 INFO nova.compute.manager [-] [instance: 1c950583-8182-4826-a70d-227f3e018779] VM Stopped (Lifecycle Event)#033[00m
Jan 29 12:36:33 np0005601226 nova_compute[239456]: 2026-01-29 17:36:33.850 239460 DEBUG barbicanclient.client [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87#033[00m
Jan 29 12:36:33 np0005601226 nova_compute[239456]: 2026-01-29 17:36:33.850 239460 DEBUG nova.virt.libvirt.host [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Secret XML: <secret ephemeral="no" private="no">
Jan 29 12:36:33 np0005601226 nova_compute[239456]:  <usage type="volume">
Jan 29 12:36:33 np0005601226 nova_compute[239456]:    <volume>10e2c6f1-e7fc-4be6-aed9-6868df98398e</volume>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:  </usage>
Jan 29 12:36:33 np0005601226 nova_compute[239456]: </secret>
Jan 29 12:36:33 np0005601226 nova_compute[239456]: create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131#033[00m
Jan 29 12:36:33 np0005601226 nova_compute[239456]: 2026-01-29 17:36:33.879 239460 DEBUG nova.compute.manager [None req-436aba73-3a1b-4642-98bb-061a0ee2d63f - - - - - -] [instance: 1c950583-8182-4826-a70d-227f3e018779] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:36:33 np0005601226 nova_compute[239456]: 2026-01-29 17:36:33.883 239460 DEBUG nova.virt.libvirt.vif [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-29T17:36:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-958531094',display_name='tempest-TestEncryptedCinderVolumes-server-958531094',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-958531094',id=28,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOWwFB+v1u4PWXh8S+wauEigc6l/mXsWkHrP4yUMWKETbc8s3+sIpwLb84UDBVfP1J1Q2qVa0piFJnATY3aZmmPNeYGKVTqN4zZ540CODMnFZL0G2v6B5/DzZoCBhdagGw==',key_name='tempest-TestEncryptedCinderVolumes-1911257081',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9c3315c8b4c543a38f07ec0c509f03c1',ramdisk_id='',reservation_id='r-dl3n7k58',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-595928636',owner_user_name='tempest-TestEncryptedCinderVolumes-595928636-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-29T17:36:29Z,user_data=None,user_id='90bbb3ba09534f74aedaab7650ed5ba4',uuid=59a43be4-0b8c-4ad3-abba-c60a6c6e9aae,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "da0c0c44-fc4a-4778-ad1a-08a01c5d459c", "address": "fa:16:3e:f2:40:a3", "network": {"id": "9275d605-e314-4c83-a4e8-f4ba085f6358", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1160921638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "9c3315c8b4c543a38f07ec0c509f03c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda0c0c44-fc", "ovs_interfaceid": "da0c0c44-fc4a-4778-ad1a-08a01c5d459c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 29 12:36:33 np0005601226 nova_compute[239456]: 2026-01-29 17:36:33.883 239460 DEBUG nova.network.os_vif_util [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Converting VIF {"id": "da0c0c44-fc4a-4778-ad1a-08a01c5d459c", "address": "fa:16:3e:f2:40:a3", "network": {"id": "9275d605-e314-4c83-a4e8-f4ba085f6358", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1160921638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c3315c8b4c543a38f07ec0c509f03c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda0c0c44-fc", "ovs_interfaceid": "da0c0c44-fc4a-4778-ad1a-08a01c5d459c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 29 12:36:33 np0005601226 nova_compute[239456]: 2026-01-29 17:36:33.884 239460 DEBUG nova.network.os_vif_util [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f2:40:a3,bridge_name='br-int',has_traffic_filtering=True,id=da0c0c44-fc4a-4778-ad1a-08a01c5d459c,network=Network(9275d605-e314-4c83-a4e8-f4ba085f6358),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda0c0c44-fc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 29 12:36:33 np0005601226 nova_compute[239456]: 2026-01-29 17:36:33.885 239460 DEBUG nova.objects.instance [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 29 12:36:33 np0005601226 nova_compute[239456]: 2026-01-29 17:36:33.898 239460 DEBUG nova.virt.libvirt.driver [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] End _get_guest_xml xml=<domain type="kvm">
Jan 29 12:36:33 np0005601226 nova_compute[239456]:  <uuid>59a43be4-0b8c-4ad3-abba-c60a6c6e9aae</uuid>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:  <name>instance-0000001c</name>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:  <memory>131072</memory>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:  <vcpu>1</vcpu>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:  <metadata>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 29 12:36:33 np0005601226 nova_compute[239456]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:      <nova:name>tempest-TestEncryptedCinderVolumes-server-958531094</nova:name>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:      <nova:creationTime>2026-01-29 17:36:32</nova:creationTime>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:      <nova:flavor name="m1.nano">
Jan 29 12:36:33 np0005601226 nova_compute[239456]:        <nova:memory>128</nova:memory>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:        <nova:disk>1</nova:disk>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:        <nova:swap>0</nova:swap>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:        <nova:ephemeral>0</nova:ephemeral>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:        <nova:vcpus>1</nova:vcpus>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:      </nova:flavor>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:      <nova:owner>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:        <nova:user uuid="90bbb3ba09534f74aedaab7650ed5ba4">tempest-TestEncryptedCinderVolumes-595928636-project-member</nova:user>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:        <nova:project uuid="9c3315c8b4c543a38f07ec0c509f03c1">tempest-TestEncryptedCinderVolumes-595928636</nova:project>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:      </nova:owner>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:      <nova:ports>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:        <nova:port uuid="da0c0c44-fc4a-4778-ad1a-08a01c5d459c">
Jan 29 12:36:33 np0005601226 nova_compute[239456]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:        </nova:port>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:      </nova:ports>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:    </nova:instance>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:  </metadata>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:  <sysinfo type="smbios">
Jan 29 12:36:33 np0005601226 nova_compute[239456]:    <system>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:      <entry name="manufacturer">RDO</entry>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:      <entry name="product">OpenStack Compute</entry>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:      <entry name="serial">59a43be4-0b8c-4ad3-abba-c60a6c6e9aae</entry>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:      <entry name="uuid">59a43be4-0b8c-4ad3-abba-c60a6c6e9aae</entry>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:      <entry name="family">Virtual Machine</entry>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:    </system>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:  </sysinfo>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:  <os>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:    <boot dev="hd"/>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:    <smbios mode="sysinfo"/>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:  </os>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:  <features>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:    <acpi/>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:    <apic/>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:    <vmcoreinfo/>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:  </features>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:  <clock offset="utc">
Jan 29 12:36:33 np0005601226 nova_compute[239456]:    <timer name="pit" tickpolicy="delay"/>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:    <timer name="hpet" present="no"/>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:  </clock>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:  <cpu mode="host-model" match="exact">
Jan 29 12:36:33 np0005601226 nova_compute[239456]:    <topology sockets="1" cores="1" threads="1"/>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:  </cpu>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:  <devices>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:    <disk type="network" device="cdrom">
Jan 29 12:36:33 np0005601226 nova_compute[239456]:      <driver type="raw" cache="none"/>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:      <source protocol="rbd" name="vms/59a43be4-0b8c-4ad3-abba-c60a6c6e9aae_disk.config">
Jan 29 12:36:33 np0005601226 nova_compute[239456]:        <host name="192.168.122.100" port="6789"/>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:      </source>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:      <auth username="openstack">
Jan 29 12:36:33 np0005601226 nova_compute[239456]:        <secret type="ceph" uuid="cc5c72e3-31e0-58b9-8731-456117d38f4a"/>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:      </auth>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:      <target dev="sda" bus="sata"/>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:    </disk>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:    <disk type="network" device="disk">
Jan 29 12:36:33 np0005601226 nova_compute[239456]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:      <source protocol="rbd" name="volumes/volume-10e2c6f1-e7fc-4be6-aed9-6868df98398e">
Jan 29 12:36:33 np0005601226 nova_compute[239456]:        <host name="192.168.122.100" port="6789"/>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:      </source>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:      <auth username="openstack">
Jan 29 12:36:33 np0005601226 nova_compute[239456]:        <secret type="ceph" uuid="cc5c72e3-31e0-58b9-8731-456117d38f4a"/>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:      </auth>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:      <target dev="vda" bus="virtio"/>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:      <serial>10e2c6f1-e7fc-4be6-aed9-6868df98398e</serial>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:      <encryption format="luks">
Jan 29 12:36:33 np0005601226 nova_compute[239456]:        <secret type="passphrase" uuid="81aebca3-527e-4020-9661-0af71bfce4f4"/>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:      </encryption>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:    </disk>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:    <interface type="ethernet">
Jan 29 12:36:33 np0005601226 nova_compute[239456]:      <mac address="fa:16:3e:f2:40:a3"/>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:      <model type="virtio"/>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:      <driver name="vhost" rx_queue_size="512"/>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:      <mtu size="1442"/>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:      <target dev="tapda0c0c44-fc"/>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:    </interface>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:    <serial type="pty">
Jan 29 12:36:33 np0005601226 nova_compute[239456]:      <log file="/var/lib/nova/instances/59a43be4-0b8c-4ad3-abba-c60a6c6e9aae/console.log" append="off"/>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:    </serial>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:    <video>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:      <model type="virtio"/>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:    </video>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:    <input type="tablet" bus="usb"/>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:    <rng model="virtio">
Jan 29 12:36:33 np0005601226 nova_compute[239456]:      <backend model="random">/dev/urandom</backend>
Jan 29 12:36:33 np0005601226 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 29 12:36:33 np0005601226 nova_compute[239456]:    </rng>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root"/>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:    <controller type="usb" index="0"/>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:    <memballoon model="virtio">
Jan 29 12:36:33 np0005601226 nova_compute[239456]:      <stats period="10"/>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:    </memballoon>
Jan 29 12:36:33 np0005601226 nova_compute[239456]:  </devices>
Jan 29 12:36:33 np0005601226 nova_compute[239456]: </domain>
Jan 29 12:36:33 np0005601226 nova_compute[239456]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 29 12:36:33 np0005601226 nova_compute[239456]: 2026-01-29 17:36:33.899 239460 DEBUG nova.compute.manager [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] Preparing to wait for external event network-vif-plugged-da0c0c44-fc4a-4778-ad1a-08a01c5d459c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 29 12:36:33 np0005601226 nova_compute[239456]: 2026-01-29 17:36:33.899 239460 DEBUG oslo_concurrency.lockutils [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Acquiring lock "59a43be4-0b8c-4ad3-abba-c60a6c6e9aae-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 12:36:33 np0005601226 nova_compute[239456]: 2026-01-29 17:36:33.900 239460 DEBUG oslo_concurrency.lockutils [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Lock "59a43be4-0b8c-4ad3-abba-c60a6c6e9aae-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 12:36:33 np0005601226 nova_compute[239456]: 2026-01-29 17:36:33.900 239460 DEBUG oslo_concurrency.lockutils [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Lock "59a43be4-0b8c-4ad3-abba-c60a6c6e9aae-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 12:36:33 np0005601226 nova_compute[239456]: 2026-01-29 17:36:33.900 239460 DEBUG nova.virt.libvirt.vif [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-29T17:36:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-958531094',display_name='tempest-TestEncryptedCinderVolumes-server-958531094',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-958531094',id=28,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOWwFB+v1u4PWXh8S+wauEigc6l/mXsWkHrP4yUMWKETbc8s3+sIpwLb84UDBVfP1J1Q2qVa0piFJnATY3aZmmPNeYGKVTqN4zZ540CODMnFZL0G2v6B5/DzZoCBhdagGw==',key_name='tempest-TestEncryptedCinderVolumes-1911257081',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9c3315c8b4c543a38f07ec0c509f03c1',ramdisk_id='',reservation_id='r-dl3n7k58',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-595928636',owner_user_name='tempest-TestEncryptedCinderVolumes-595928636-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-29T17:36:29Z,user_data=None,user_id='90bbb3ba09534f74aedaab7650ed5ba4',uuid=59a43be4-0b8c-4ad3-abba-c60a6c6e9aae,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "da0c0c44-fc4a-4778-ad1a-08a01c5d459c", "address": "fa:16:3e:f2:40:a3", "network": {"id": "9275d605-e314-4c83-a4e8-f4ba085f6358", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1160921638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "9c3315c8b4c543a38f07ec0c509f03c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda0c0c44-fc", "ovs_interfaceid": "da0c0c44-fc4a-4778-ad1a-08a01c5d459c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 29 12:36:33 np0005601226 nova_compute[239456]: 2026-01-29 17:36:33.901 239460 DEBUG nova.network.os_vif_util [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Converting VIF {"id": "da0c0c44-fc4a-4778-ad1a-08a01c5d459c", "address": "fa:16:3e:f2:40:a3", "network": {"id": "9275d605-e314-4c83-a4e8-f4ba085f6358", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1160921638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c3315c8b4c543a38f07ec0c509f03c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda0c0c44-fc", "ovs_interfaceid": "da0c0c44-fc4a-4778-ad1a-08a01c5d459c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 29 12:36:33 np0005601226 nova_compute[239456]: 2026-01-29 17:36:33.901 239460 DEBUG nova.network.os_vif_util [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f2:40:a3,bridge_name='br-int',has_traffic_filtering=True,id=da0c0c44-fc4a-4778-ad1a-08a01c5d459c,network=Network(9275d605-e314-4c83-a4e8-f4ba085f6358),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda0c0c44-fc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 29 12:36:33 np0005601226 nova_compute[239456]: 2026-01-29 17:36:33.901 239460 DEBUG os_vif [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f2:40:a3,bridge_name='br-int',has_traffic_filtering=True,id=da0c0c44-fc4a-4778-ad1a-08a01c5d459c,network=Network(9275d605-e314-4c83-a4e8-f4ba085f6358),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda0c0c44-fc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 29 12:36:33 np0005601226 nova_compute[239456]: 2026-01-29 17:36:33.902 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:36:33 np0005601226 nova_compute[239456]: 2026-01-29 17:36:33.902 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 29 12:36:33 np0005601226 nova_compute[239456]: 2026-01-29 17:36:33.903 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 29 12:36:33 np0005601226 nova_compute[239456]: 2026-01-29 17:36:33.907 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:36:33 np0005601226 nova_compute[239456]: 2026-01-29 17:36:33.909 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapda0c0c44-fc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 29 12:36:33 np0005601226 nova_compute[239456]: 2026-01-29 17:36:33.910 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapda0c0c44-fc, col_values=(('external_ids', {'iface-id': 'da0c0c44-fc4a-4778-ad1a-08a01c5d459c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f2:40:a3', 'vm-uuid': '59a43be4-0b8c-4ad3-abba-c60a6c6e9aae'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 29 12:36:33 np0005601226 nova_compute[239456]: 2026-01-29 17:36:33.954 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:36:33 np0005601226 nova_compute[239456]: 2026-01-29 17:36:33.957 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 29 12:36:33 np0005601226 NetworkManager[49020]: <info>  [1769708193.9589] manager: (tapda0c0c44-fc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/138)
Jan 29 12:36:33 np0005601226 nova_compute[239456]: 2026-01-29 17:36:33.962 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:36:33 np0005601226 nova_compute[239456]: 2026-01-29 17:36:33.963 239460 INFO os_vif [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f2:40:a3,bridge_name='br-int',has_traffic_filtering=True,id=da0c0c44-fc4a-4778-ad1a-08a01c5d459c,network=Network(9275d605-e314-4c83-a4e8-f4ba085f6358),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda0c0c44-fc')
Jan 29 12:36:34 np0005601226 nova_compute[239456]: 2026-01-29 17:36:34.017 239460 DEBUG nova.virt.libvirt.driver [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 29 12:36:34 np0005601226 nova_compute[239456]: 2026-01-29 17:36:34.019 239460 DEBUG nova.virt.libvirt.driver [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 29 12:36:34 np0005601226 nova_compute[239456]: 2026-01-29 17:36:34.020 239460 DEBUG nova.virt.libvirt.driver [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] No VIF found with MAC fa:16:3e:f2:40:a3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 29 12:36:34 np0005601226 nova_compute[239456]: 2026-01-29 17:36:34.021 239460 INFO nova.virt.libvirt.driver [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] Using config drive
Jan 29 12:36:34 np0005601226 podman[274710]: 2026-01-29 17:36:34.039715006 +0000 UTC m=+0.046854872 container create 4b3f155879c3e39801645a15efa6ead397d843c1378ea85011c272f4d3273654 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_bartik, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 29 12:36:34 np0005601226 nova_compute[239456]: 2026-01-29 17:36:34.044 239460 DEBUG nova.storage.rbd_utils [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] rbd image 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:36:34 np0005601226 nova_compute[239456]: 2026-01-29 17:36:34.066 239460 DEBUG nova.network.neutron [req-62ac137d-6910-4ab1-b29f-c8a30ee78a95 req-1b884e64-9b29-4b94-82e2-fedb4a604908 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] Updated VIF entry in instance network info cache for port da0c0c44-fc4a-4778-ad1a-08a01c5d459c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 29 12:36:34 np0005601226 nova_compute[239456]: 2026-01-29 17:36:34.067 239460 DEBUG nova.network.neutron [req-62ac137d-6910-4ab1-b29f-c8a30ee78a95 req-1b884e64-9b29-4b94-82e2-fedb4a604908 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] Updating instance_info_cache with network_info: [{"id": "da0c0c44-fc4a-4778-ad1a-08a01c5d459c", "address": "fa:16:3e:f2:40:a3", "network": {"id": "9275d605-e314-4c83-a4e8-f4ba085f6358", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1160921638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c3315c8b4c543a38f07ec0c509f03c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda0c0c44-fc", "ovs_interfaceid": "da0c0c44-fc4a-4778-ad1a-08a01c5d459c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:36:34 np0005601226 systemd[1]: Started libpod-conmon-4b3f155879c3e39801645a15efa6ead397d843c1378ea85011c272f4d3273654.scope.
Jan 29 12:36:34 np0005601226 nova_compute[239456]: 2026-01-29 17:36:34.085 239460 DEBUG oslo_concurrency.lockutils [req-62ac137d-6910-4ab1-b29f-c8a30ee78a95 req-1b884e64-9b29-4b94-82e2-fedb4a604908 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Releasing lock "refresh_cache-59a43be4-0b8c-4ad3-abba-c60a6c6e9aae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:36:34 np0005601226 podman[274710]: 2026-01-29 17:36:34.015381316 +0000 UTC m=+0.022521262 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:36:34 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:36:34 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1905: 305 pgs: 305 active+clean; 271 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 76 KiB/s rd, 3.3 KiB/s wr, 101 op/s
Jan 29 12:36:34 np0005601226 podman[274710]: 2026-01-29 17:36:34.129363129 +0000 UTC m=+0.136503105 container init 4b3f155879c3e39801645a15efa6ead397d843c1378ea85011c272f4d3273654 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_bartik, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:36:34 np0005601226 podman[274710]: 2026-01-29 17:36:34.139121234 +0000 UTC m=+0.146261100 container start 4b3f155879c3e39801645a15efa6ead397d843c1378ea85011c272f4d3273654 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_bartik, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:36:34 np0005601226 objective_bartik[274742]: 167 167
Jan 29 12:36:34 np0005601226 systemd[1]: libpod-4b3f155879c3e39801645a15efa6ead397d843c1378ea85011c272f4d3273654.scope: Deactivated successfully.
Jan 29 12:36:34 np0005601226 podman[274710]: 2026-01-29 17:36:34.145664052 +0000 UTC m=+0.152803918 container attach 4b3f155879c3e39801645a15efa6ead397d843c1378ea85011c272f4d3273654 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_bartik, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 29 12:36:34 np0005601226 podman[274710]: 2026-01-29 17:36:34.146434143 +0000 UTC m=+0.153574049 container died 4b3f155879c3e39801645a15efa6ead397d843c1378ea85011c272f4d3273654 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:36:34 np0005601226 systemd[1]: var-lib-containers-storage-overlay-c2e39c5e9dcc8251398e957b77080430ec7b704320e9485d77a4b3a68f29c9dd-merged.mount: Deactivated successfully.
Jan 29 12:36:34 np0005601226 podman[274710]: 2026-01-29 17:36:34.204311353 +0000 UTC m=+0.211451259 container remove 4b3f155879c3e39801645a15efa6ead397d843c1378ea85011c272f4d3273654 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_bartik, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 29 12:36:34 np0005601226 systemd[1]: libpod-conmon-4b3f155879c3e39801645a15efa6ead397d843c1378ea85011c272f4d3273654.scope: Deactivated successfully.
Jan 29 12:36:34 np0005601226 podman[274768]: 2026-01-29 17:36:34.350596183 +0000 UTC m=+0.045097965 container create 699770dd8bef864d3f479a1cde97e140fd92ff56270876e0640f19c1e86bee05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_banzai, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 29 12:36:34 np0005601226 nova_compute[239456]: 2026-01-29 17:36:34.373 239460 INFO nova.virt.libvirt.driver [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] Creating config drive at /var/lib/nova/instances/59a43be4-0b8c-4ad3-abba-c60a6c6e9aae/disk.config#033[00m
Jan 29 12:36:34 np0005601226 nova_compute[239456]: 2026-01-29 17:36:34.380 239460 DEBUG oslo_concurrency.processutils [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/59a43be4-0b8c-4ad3-abba-c60a6c6e9aae/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc3rplrke execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:36:34 np0005601226 systemd[1]: Started libpod-conmon-699770dd8bef864d3f479a1cde97e140fd92ff56270876e0640f19c1e86bee05.scope.
Jan 29 12:36:34 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:36:34 np0005601226 podman[274768]: 2026-01-29 17:36:34.327714712 +0000 UTC m=+0.022216574 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:36:34 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6de801e1a09ebe75577db7e29801ac22ca3731f6b069ce286beffd1ce0f83db2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:36:34 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6de801e1a09ebe75577db7e29801ac22ca3731f6b069ce286beffd1ce0f83db2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:36:34 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6de801e1a09ebe75577db7e29801ac22ca3731f6b069ce286beffd1ce0f83db2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:36:34 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6de801e1a09ebe75577db7e29801ac22ca3731f6b069ce286beffd1ce0f83db2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:36:34 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6de801e1a09ebe75577db7e29801ac22ca3731f6b069ce286beffd1ce0f83db2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 12:36:34 np0005601226 podman[274768]: 2026-01-29 17:36:34.440596006 +0000 UTC m=+0.135097818 container init 699770dd8bef864d3f479a1cde97e140fd92ff56270876e0640f19c1e86bee05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_banzai, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 12:36:34 np0005601226 podman[274768]: 2026-01-29 17:36:34.454663898 +0000 UTC m=+0.149165720 container start 699770dd8bef864d3f479a1cde97e140fd92ff56270876e0640f19c1e86bee05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_banzai, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 29 12:36:34 np0005601226 podman[274768]: 2026-01-29 17:36:34.457738911 +0000 UTC m=+0.152240723 container attach 699770dd8bef864d3f479a1cde97e140fd92ff56270876e0640f19c1e86bee05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_banzai, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 29 12:36:34 np0005601226 nova_compute[239456]: 2026-01-29 17:36:34.511 239460 DEBUG oslo_concurrency.processutils [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/59a43be4-0b8c-4ad3-abba-c60a6c6e9aae/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc3rplrke" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:36:34 np0005601226 nova_compute[239456]: 2026-01-29 17:36:34.549 239460 DEBUG nova.storage.rbd_utils [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] rbd image 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:36:34 np0005601226 nova_compute[239456]: 2026-01-29 17:36:34.554 239460 DEBUG oslo_concurrency.processutils [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/59a43be4-0b8c-4ad3-abba-c60a6c6e9aae/disk.config 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:36:34 np0005601226 nova_compute[239456]: 2026-01-29 17:36:34.719 239460 DEBUG oslo_concurrency.processutils [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/59a43be4-0b8c-4ad3-abba-c60a6c6e9aae/disk.config 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:36:34 np0005601226 nova_compute[239456]: 2026-01-29 17:36:34.721 239460 INFO nova.virt.libvirt.driver [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] Deleting local config drive /var/lib/nova/instances/59a43be4-0b8c-4ad3-abba-c60a6c6e9aae/disk.config because it was imported into RBD.#033[00m
Jan 29 12:36:34 np0005601226 NetworkManager[49020]: <info>  [1769708194.7666] manager: (tapda0c0c44-fc): new Tun device (/org/freedesktop/NetworkManager/Devices/139)
Jan 29 12:36:34 np0005601226 kernel: tapda0c0c44-fc: entered promiscuous mode
Jan 29 12:36:34 np0005601226 ovn_controller[145556]: 2026-01-29T17:36:34Z|00259|binding|INFO|Claiming lport da0c0c44-fc4a-4778-ad1a-08a01c5d459c for this chassis.
Jan 29 12:36:34 np0005601226 ovn_controller[145556]: 2026-01-29T17:36:34Z|00260|binding|INFO|da0c0c44-fc4a-4778-ad1a-08a01c5d459c: Claiming fa:16:3e:f2:40:a3 10.100.0.13
Jan 29 12:36:34 np0005601226 nova_compute[239456]: 2026-01-29 17:36:34.769 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:36:34 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:36:34.778 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f2:40:a3 10.100.0.13'], port_security=['fa:16:3e:f2:40:a3 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '59a43be4-0b8c-4ad3-abba-c60a6c6e9aae', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9275d605-e314-4c83-a4e8-f4ba085f6358', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9c3315c8b4c543a38f07ec0c509f03c1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '27af6b88-dd81-456a-89a5-a6e9b903fd48', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d2a7d5cc-cff2-487b-9e34-0c3106da1b90, chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], logical_port=da0c0c44-fc4a-4778-ad1a-08a01c5d459c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:36:34 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:36:34.780 155625 INFO neutron.agent.ovn.metadata.agent [-] Port da0c0c44-fc4a-4778-ad1a-08a01c5d459c in datapath 9275d605-e314-4c83-a4e8-f4ba085f6358 bound to our chassis#033[00m
Jan 29 12:36:34 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:36:34.781 155625 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9275d605-e314-4c83-a4e8-f4ba085f6358#033[00m
Jan 29 12:36:34 np0005601226 ovn_controller[145556]: 2026-01-29T17:36:34Z|00261|binding|INFO|Setting lport da0c0c44-fc4a-4778-ad1a-08a01c5d459c ovn-installed in OVS
Jan 29 12:36:34 np0005601226 ovn_controller[145556]: 2026-01-29T17:36:34Z|00262|binding|INFO|Setting lport da0c0c44-fc4a-4778-ad1a-08a01c5d459c up in Southbound
Jan 29 12:36:34 np0005601226 nova_compute[239456]: 2026-01-29 17:36:34.786 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:36:34 np0005601226 nova_compute[239456]: 2026-01-29 17:36:34.789 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:36:34 np0005601226 systemd-machined[207561]: New machine qemu-28-instance-0000001c.
Jan 29 12:36:34 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:36:34.794 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[3cfd1d3f-55a3-4750-a3e4-34daa6757fe8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:36:34 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:36:34.796 155625 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9275d605-e1 in ovnmeta-9275d605-e314-4c83-a4e8-f4ba085f6358 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 29 12:36:34 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:36:34.798 246354 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9275d605-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 29 12:36:34 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:36:34.798 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[931193f9-a29e-4ee2-9ab6-edb50a559cbc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:36:34 np0005601226 systemd[1]: Started Virtual Machine qemu-28-instance-0000001c.
Jan 29 12:36:34 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:36:34.800 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[1946eb92-3418-4fc3-85a4-5711e4224e2b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:36:34 np0005601226 systemd-udevd[274850]: Network interface NamePolicy= disabled on kernel command line.
Jan 29 12:36:34 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e507 do_prune osdmap full prune enabled
Jan 29 12:36:34 np0005601226 NetworkManager[49020]: <info>  [1769708194.8143] device (tapda0c0c44-fc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 29 12:36:34 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:36:34.813 156164 DEBUG oslo.privsep.daemon [-] privsep: reply[519db7ee-9da8-4ab6-8ad6-063083417ead]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:36:34 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e508 e508: 3 total, 3 up, 3 in
Jan 29 12:36:34 np0005601226 NetworkManager[49020]: <info>  [1769708194.8228] device (tapda0c0c44-fc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 29 12:36:34 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e508: 3 total, 3 up, 3 in
Jan 29 12:36:34 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:36:34.829 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[3d0e33ba-af32-497d-8b45-20836bd16b10]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:36:34 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:36:34.853 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[4bed2e61-763c-4cd8-8093-013d069f6f47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:36:34 np0005601226 systemd-udevd[274855]: Network interface NamePolicy= disabled on kernel command line.
Jan 29 12:36:34 np0005601226 NetworkManager[49020]: <info>  [1769708194.8597] manager: (tap9275d605-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/140)
Jan 29 12:36:34 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:36:34.859 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[11ee77b9-2616-46bb-bd04-b27ec7b160ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:36:34 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:36:34.884 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[c205ee23-2f19-47a2-a2d2-5e7817f388d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:36:34 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:36:34.888 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[c2ee43ed-77f2-4f42-9615-c1adc3f70096]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:36:34 np0005601226 compassionate_banzai[274785]: --> passed data devices: 0 physical, 3 LVM
Jan 29 12:36:34 np0005601226 compassionate_banzai[274785]: --> All data devices are unavailable
Jan 29 12:36:34 np0005601226 NetworkManager[49020]: <info>  [1769708194.9045] device (tap9275d605-e0): carrier: link connected
Jan 29 12:36:34 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:36:34.909 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[a8423f77-c328-434b-be27-66a2d3d48954]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:36:34 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:36:34.924 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[d50ebc6c-b423-425a-8366-d621c898279d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9275d605-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:68:a6:35'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 88], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 554840, 'reachable_time': 39896, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274889, 'error': None, 'target': 'ovnmeta-9275d605-e314-4c83-a4e8-f4ba085f6358', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:36:34 np0005601226 systemd[1]: libpod-699770dd8bef864d3f479a1cde97e140fd92ff56270876e0640f19c1e86bee05.scope: Deactivated successfully.
Jan 29 12:36:34 np0005601226 conmon[274785]: conmon 699770dd8bef864d3f47 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-699770dd8bef864d3f479a1cde97e140fd92ff56270876e0640f19c1e86bee05.scope/container/memory.events
Jan 29 12:36:34 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:36:34.940 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[a5a99f33-7d32-4cee-b019-be375cb369ae]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe68:a635'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 554840, 'tstamp': 554840}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 274890, 'error': None, 'target': 'ovnmeta-9275d605-e314-4c83-a4e8-f4ba085f6358', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:36:34 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:36:34.955 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[b7490c3c-47ec-4046-9f00-96592188fb93]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9275d605-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:68:a6:35'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 88], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 554840, 'reachable_time': 39896, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 274892, 'error': None, 'target': 'ovnmeta-9275d605-e314-4c83-a4e8-f4ba085f6358', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:36:34 np0005601226 podman[274891]: 2026-01-29 17:36:34.977566398 +0000 UTC m=+0.027563868 container died 699770dd8bef864d3f479a1cde97e140fd92ff56270876e0640f19c1e86bee05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_banzai, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 29 12:36:34 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:36:34.989 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[32de09ca-a7c0-4d08-976b-9c3f6dcc90e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:36:35 np0005601226 systemd[1]: var-lib-containers-storage-overlay-6de801e1a09ebe75577db7e29801ac22ca3731f6b069ce286beffd1ce0f83db2-merged.mount: Deactivated successfully.
Jan 29 12:36:35 np0005601226 podman[274891]: 2026-01-29 17:36:35.032049687 +0000 UTC m=+0.082047197 container remove 699770dd8bef864d3f479a1cde97e140fd92ff56270876e0640f19c1e86bee05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:36:35 np0005601226 systemd[1]: libpod-conmon-699770dd8bef864d3f479a1cde97e140fd92ff56270876e0640f19c1e86bee05.scope: Deactivated successfully.
Jan 29 12:36:35 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:36:35.050 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[866f3027-81ee-4176-8a99-08bbfa0271db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:36:35 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:36:35.052 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9275d605-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:36:35 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:36:35.053 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 29 12:36:35 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:36:35.053 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9275d605-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:36:35 np0005601226 kernel: tap9275d605-e0: entered promiscuous mode
Jan 29 12:36:35 np0005601226 NetworkManager[49020]: <info>  [1769708195.1025] manager: (tap9275d605-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/141)
Jan 29 12:36:35 np0005601226 nova_compute[239456]: 2026-01-29 17:36:35.102 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:36:35 np0005601226 nova_compute[239456]: 2026-01-29 17:36:35.105 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:36:35 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:36:35.106 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9275d605-e0, col_values=(('external_ids', {'iface-id': 'e64dae33-380b-46eb-9272-7f8c7bc07367'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:36:35 np0005601226 nova_compute[239456]: 2026-01-29 17:36:35.107 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:36:35 np0005601226 ovn_controller[145556]: 2026-01-29T17:36:35Z|00263|binding|INFO|Releasing lport e64dae33-380b-46eb-9272-7f8c7bc07367 from this chassis (sb_readonly=0)
Jan 29 12:36:35 np0005601226 nova_compute[239456]: 2026-01-29 17:36:35.107 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:36:35 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:36:35.108 155625 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9275d605-e314-4c83-a4e8-f4ba085f6358.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9275d605-e314-4c83-a4e8-f4ba085f6358.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 29 12:36:35 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:36:35.110 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[7f936872-11bf-42c8-97b1-df1b0b4ecc90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:36:35 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:36:35.111 155625 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 29 12:36:35 np0005601226 ovn_metadata_agent[155620]: global
Jan 29 12:36:35 np0005601226 ovn_metadata_agent[155620]:    log         /dev/log local0 debug
Jan 29 12:36:35 np0005601226 ovn_metadata_agent[155620]:    log-tag     haproxy-metadata-proxy-9275d605-e314-4c83-a4e8-f4ba085f6358
Jan 29 12:36:35 np0005601226 ovn_metadata_agent[155620]:    user        root
Jan 29 12:36:35 np0005601226 ovn_metadata_agent[155620]:    group       root
Jan 29 12:36:35 np0005601226 ovn_metadata_agent[155620]:    maxconn     1024
Jan 29 12:36:35 np0005601226 ovn_metadata_agent[155620]:    pidfile     /var/lib/neutron/external/pids/9275d605-e314-4c83-a4e8-f4ba085f6358.pid.haproxy
Jan 29 12:36:35 np0005601226 ovn_metadata_agent[155620]:    daemon
Jan 29 12:36:35 np0005601226 ovn_metadata_agent[155620]: 
Jan 29 12:36:35 np0005601226 ovn_metadata_agent[155620]: defaults
Jan 29 12:36:35 np0005601226 ovn_metadata_agent[155620]:    log global
Jan 29 12:36:35 np0005601226 ovn_metadata_agent[155620]:    mode http
Jan 29 12:36:35 np0005601226 ovn_metadata_agent[155620]:    option httplog
Jan 29 12:36:35 np0005601226 ovn_metadata_agent[155620]:    option dontlognull
Jan 29 12:36:35 np0005601226 ovn_metadata_agent[155620]:    option http-server-close
Jan 29 12:36:35 np0005601226 ovn_metadata_agent[155620]:    option forwardfor
Jan 29 12:36:35 np0005601226 ovn_metadata_agent[155620]:    retries                 3
Jan 29 12:36:35 np0005601226 ovn_metadata_agent[155620]:    timeout http-request    30s
Jan 29 12:36:35 np0005601226 ovn_metadata_agent[155620]:    timeout connect         30s
Jan 29 12:36:35 np0005601226 ovn_metadata_agent[155620]:    timeout client          32s
Jan 29 12:36:35 np0005601226 ovn_metadata_agent[155620]:    timeout server          32s
Jan 29 12:36:35 np0005601226 ovn_metadata_agent[155620]:    timeout http-keep-alive 30s
Jan 29 12:36:35 np0005601226 ovn_metadata_agent[155620]: 
Jan 29 12:36:35 np0005601226 ovn_metadata_agent[155620]: 
Jan 29 12:36:35 np0005601226 ovn_metadata_agent[155620]: listen listener
Jan 29 12:36:35 np0005601226 ovn_metadata_agent[155620]:    bind 169.254.169.254:80
Jan 29 12:36:35 np0005601226 ovn_metadata_agent[155620]:    server metadata /var/lib/neutron/metadata_proxy
Jan 29 12:36:35 np0005601226 ovn_metadata_agent[155620]:    http-request add-header X-OVN-Network-ID 9275d605-e314-4c83-a4e8-f4ba085f6358
Jan 29 12:36:35 np0005601226 ovn_metadata_agent[155620]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 29 12:36:35 np0005601226 nova_compute[239456]: 2026-01-29 17:36:35.113 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:36:35 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:36:35.113 155625 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9275d605-e314-4c83-a4e8-f4ba085f6358', 'env', 'PROCESS_TAG=haproxy-9275d605-e314-4c83-a4e8-f4ba085f6358', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9275d605-e314-4c83-a4e8-f4ba085f6358.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 29 12:36:35 np0005601226 nova_compute[239456]: 2026-01-29 17:36:35.119 239460 DEBUG nova.compute.manager [req-53fd3cb5-a5a4-4bdb-ae55-b2fbd835a99a req-22816846-f141-4ff4-8018-798344458cbc 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] Received event network-vif-plugged-da0c0c44-fc4a-4778-ad1a-08a01c5d459c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:36:35 np0005601226 nova_compute[239456]: 2026-01-29 17:36:35.120 239460 DEBUG oslo_concurrency.lockutils [req-53fd3cb5-a5a4-4bdb-ae55-b2fbd835a99a req-22816846-f141-4ff4-8018-798344458cbc 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "59a43be4-0b8c-4ad3-abba-c60a6c6e9aae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:36:35 np0005601226 nova_compute[239456]: 2026-01-29 17:36:35.120 239460 DEBUG oslo_concurrency.lockutils [req-53fd3cb5-a5a4-4bdb-ae55-b2fbd835a99a req-22816846-f141-4ff4-8018-798344458cbc 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "59a43be4-0b8c-4ad3-abba-c60a6c6e9aae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:36:35 np0005601226 nova_compute[239456]: 2026-01-29 17:36:35.121 239460 DEBUG oslo_concurrency.lockutils [req-53fd3cb5-a5a4-4bdb-ae55-b2fbd835a99a req-22816846-f141-4ff4-8018-798344458cbc 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "59a43be4-0b8c-4ad3-abba-c60a6c6e9aae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:36:35 np0005601226 nova_compute[239456]: 2026-01-29 17:36:35.121 239460 DEBUG nova.compute.manager [req-53fd3cb5-a5a4-4bdb-ae55-b2fbd835a99a req-22816846-f141-4ff4-8018-798344458cbc 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] Processing event network-vif-plugged-da0c0c44-fc4a-4778-ad1a-08a01c5d459c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 29 12:36:35 np0005601226 podman[275034]: 2026-01-29 17:36:35.460675149 +0000 UTC m=+0.037488659 container create 19cc8e7f56a3096d11c2684980091e8b002f3d403ac11bbdf40ec0e997a90424 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_brahmagupta, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:36:35 np0005601226 podman[275033]: 2026-01-29 17:36:35.48025255 +0000 UTC m=+0.059526416 container create e0f6d1326e95b16ec19cbef92ac45572821eb8c1ff5a8d41acbed2943251023f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9275d605-e314-4c83-a4e8-f4ba085f6358, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 29 12:36:35 np0005601226 systemd[1]: Started libpod-conmon-19cc8e7f56a3096d11c2684980091e8b002f3d403ac11bbdf40ec0e997a90424.scope.
Jan 29 12:36:35 np0005601226 systemd[1]: Started libpod-conmon-e0f6d1326e95b16ec19cbef92ac45572821eb8c1ff5a8d41acbed2943251023f.scope.
Jan 29 12:36:35 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:36:35 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:36:35 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f430a103e0d8489ca70427f6b885038724aa4c70bbd41d10e967b3aaac637d5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 29 12:36:35 np0005601226 podman[275034]: 2026-01-29 17:36:35.535403447 +0000 UTC m=+0.112216937 container init 19cc8e7f56a3096d11c2684980091e8b002f3d403ac11bbdf40ec0e997a90424 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_brahmagupta, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 12:36:35 np0005601226 podman[275034]: 2026-01-29 17:36:35.444333046 +0000 UTC m=+0.021146576 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:36:35 np0005601226 podman[275034]: 2026-01-29 17:36:35.544085343 +0000 UTC m=+0.120898823 container start 19cc8e7f56a3096d11c2684980091e8b002f3d403ac11bbdf40ec0e997a90424 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:36:35 np0005601226 vigorous_brahmagupta[275068]: 167 167
Jan 29 12:36:35 np0005601226 systemd[1]: libpod-19cc8e7f56a3096d11c2684980091e8b002f3d403ac11bbdf40ec0e997a90424.scope: Deactivated successfully.
Jan 29 12:36:35 np0005601226 podman[275033]: 2026-01-29 17:36:35.453588666 +0000 UTC m=+0.032862552 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 29 12:36:35 np0005601226 podman[275034]: 2026-01-29 17:36:35.549711075 +0000 UTC m=+0.126524595 container attach 19cc8e7f56a3096d11c2684980091e8b002f3d403ac11bbdf40ec0e997a90424 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 29 12:36:35 np0005601226 podman[275034]: 2026-01-29 17:36:35.550942419 +0000 UTC m=+0.127755939 container died 19cc8e7f56a3096d11c2684980091e8b002f3d403ac11bbdf40ec0e997a90424 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:36:35 np0005601226 podman[275033]: 2026-01-29 17:36:35.588747805 +0000 UTC m=+0.168021671 container init e0f6d1326e95b16ec19cbef92ac45572821eb8c1ff5a8d41acbed2943251023f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9275d605-e314-4c83-a4e8-f4ba085f6358, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 29 12:36:35 np0005601226 podman[275033]: 2026-01-29 17:36:35.592703802 +0000 UTC m=+0.171977668 container start e0f6d1326e95b16ec19cbef92ac45572821eb8c1ff5a8d41acbed2943251023f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9275d605-e314-4c83-a4e8-f4ba085f6358, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 29 12:36:35 np0005601226 systemd[1]: var-lib-containers-storage-overlay-31931cc7a07285edd1fe422565f86e8745898a0131c50c00a95bf3fc2a7af992-merged.mount: Deactivated successfully.
Jan 29 12:36:35 np0005601226 neutron-haproxy-ovnmeta-9275d605-e314-4c83-a4e8-f4ba085f6358[275070]: [NOTICE]   (275088) : New worker (275090) forked
Jan 29 12:36:35 np0005601226 neutron-haproxy-ovnmeta-9275d605-e314-4c83-a4e8-f4ba085f6358[275070]: [NOTICE]   (275088) : Loading success.
Jan 29 12:36:35 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:36:35 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/686265398' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:36:35 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:36:35 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/686265398' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:36:35 np0005601226 podman[275034]: 2026-01-29 17:36:35.623733644 +0000 UTC m=+0.200547134 container remove 19cc8e7f56a3096d11c2684980091e8b002f3d403ac11bbdf40ec0e997a90424 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 29 12:36:35 np0005601226 systemd[1]: libpod-conmon-19cc8e7f56a3096d11c2684980091e8b002f3d403ac11bbdf40ec0e997a90424.scope: Deactivated successfully.
Jan 29 12:36:35 np0005601226 podman[275106]: 2026-01-29 17:36:35.813308909 +0000 UTC m=+0.052370172 container create 38e01de4168930ffea1474415d1160e64ca5e82ad053e5f33944c5b69a11e796 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:36:35 np0005601226 systemd[1]: Started libpod-conmon-38e01de4168930ffea1474415d1160e64ca5e82ad053e5f33944c5b69a11e796.scope.
Jan 29 12:36:35 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:36:35 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a532ebd8d54dfe4def441e60d092c664ee03b1c0dcfb93e43385c506aa720a4d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:36:35 np0005601226 podman[275106]: 2026-01-29 17:36:35.793575153 +0000 UTC m=+0.032636406 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:36:35 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a532ebd8d54dfe4def441e60d092c664ee03b1c0dcfb93e43385c506aa720a4d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:36:35 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a532ebd8d54dfe4def441e60d092c664ee03b1c0dcfb93e43385c506aa720a4d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:36:35 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a532ebd8d54dfe4def441e60d092c664ee03b1c0dcfb93e43385c506aa720a4d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:36:36 np0005601226 podman[275106]: 2026-01-29 17:36:36.023950846 +0000 UTC m=+0.263012109 container init 38e01de4168930ffea1474415d1160e64ca5e82ad053e5f33944c5b69a11e796 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_clarke, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:36:36 np0005601226 podman[275106]: 2026-01-29 17:36:36.03516037 +0000 UTC m=+0.274221593 container start 38e01de4168930ffea1474415d1160e64ca5e82ad053e5f33944c5b69a11e796 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_clarke, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Jan 29 12:36:36 np0005601226 podman[275106]: 2026-01-29 17:36:36.0388715 +0000 UTC m=+0.277932813 container attach 38e01de4168930ffea1474415d1160e64ca5e82ad053e5f33944c5b69a11e796 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_clarke, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:36:36 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1907: 305 pgs: 305 active+clean; 271 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 3.7 KiB/s wr, 73 op/s
Jan 29 12:36:36 np0005601226 silly_clarke[275122]: {
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:    "0": [
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:        {
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:            "devices": [
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:                "/dev/loop3"
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:            ],
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:            "lv_name": "ceph_lv0",
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:            "lv_size": "21470642176",
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:            "lv_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:            "name": "ceph_lv0",
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:            "tags": {
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:                "ceph.block_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:                "ceph.cluster_name": "ceph",
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:                "ceph.crush_device_class": "",
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:                "ceph.encrypted": "0",
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:                "ceph.objectstore": "bluestore",
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:                "ceph.osd_fsid": "0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1",
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:                "ceph.osd_id": "0",
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:                "ceph.type": "block",
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:                "ceph.vdo": "0",
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:                "ceph.with_tpm": "0"
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:            },
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:            "type": "block",
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:            "vg_name": "ceph_vg0"
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:        }
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:    ],
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:    "1": [
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:        {
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:            "devices": [
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:                "/dev/loop4"
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:            ],
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:            "lv_name": "ceph_lv1",
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:            "lv_size": "21470642176",
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b59b9ee3-7bef-4274-a8bf-0f9cce011ae7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:            "lv_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:            "name": "ceph_lv1",
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:            "tags": {
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:                "ceph.block_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:                "ceph.cluster_name": "ceph",
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:                "ceph.crush_device_class": "",
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:                "ceph.encrypted": "0",
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:                "ceph.objectstore": "bluestore",
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:                "ceph.osd_fsid": "b59b9ee3-7bef-4274-a8bf-0f9cce011ae7",
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:                "ceph.osd_id": "1",
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:                "ceph.type": "block",
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:                "ceph.vdo": "0",
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:                "ceph.with_tpm": "0"
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:            },
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:            "type": "block",
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:            "vg_name": "ceph_vg1"
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:        }
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:    ],
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:    "2": [
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:        {
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:            "devices": [
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:                "/dev/loop5"
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:            ],
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:            "lv_name": "ceph_lv2",
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:            "lv_size": "21470642176",
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=791d3808-828d-4f85-a3de-28df49f6a6ef,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:            "lv_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:            "name": "ceph_lv2",
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:            "tags": {
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:                "ceph.block_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:                "ceph.cluster_name": "ceph",
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:                "ceph.crush_device_class": "",
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:                "ceph.encrypted": "0",
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:                "ceph.objectstore": "bluestore",
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:                "ceph.osd_fsid": "791d3808-828d-4f85-a3de-28df49f6a6ef",
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:                "ceph.osd_id": "2",
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:                "ceph.type": "block",
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:                "ceph.vdo": "0",
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:                "ceph.with_tpm": "0"
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:            },
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:            "type": "block",
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:            "vg_name": "ceph_vg2"
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:        }
Jan 29 12:36:36 np0005601226 silly_clarke[275122]:    ]
Jan 29 12:36:36 np0005601226 silly_clarke[275122]: }
Jan 29 12:36:36 np0005601226 systemd[1]: libpod-38e01de4168930ffea1474415d1160e64ca5e82ad053e5f33944c5b69a11e796.scope: Deactivated successfully.
Jan 29 12:36:36 np0005601226 podman[275106]: 2026-01-29 17:36:36.38044582 +0000 UTC m=+0.619507073 container died 38e01de4168930ffea1474415d1160e64ca5e82ad053e5f33944c5b69a11e796 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_clarke, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:36:36 np0005601226 podman[275106]: 2026-01-29 17:36:36.441756474 +0000 UTC m=+0.680817727 container remove 38e01de4168930ffea1474415d1160e64ca5e82ad053e5f33944c5b69a11e796 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_clarke, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 29 12:36:36 np0005601226 systemd[1]: libpod-conmon-38e01de4168930ffea1474415d1160e64ca5e82ad053e5f33944c5b69a11e796.scope: Deactivated successfully.
Jan 29 12:36:36 np0005601226 systemd[1]: var-lib-containers-storage-overlay-a532ebd8d54dfe4def441e60d092c664ee03b1c0dcfb93e43385c506aa720a4d-merged.mount: Deactivated successfully.
Jan 29 12:36:36 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e508 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:36:36 np0005601226 podman[275203]: 2026-01-29 17:36:36.943178542 +0000 UTC m=+0.067653877 container create 0746f31ee6e85d9847108c14fb2c3e49a62835bc6f13da056313efedf60326e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:36:36 np0005601226 systemd[1]: Started libpod-conmon-0746f31ee6e85d9847108c14fb2c3e49a62835bc6f13da056313efedf60326e9.scope.
Jan 29 12:36:36 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:36:37 np0005601226 podman[275203]: 2026-01-29 17:36:37.00650627 +0000 UTC m=+0.130981625 container init 0746f31ee6e85d9847108c14fb2c3e49a62835bc6f13da056313efedf60326e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 29 12:36:37 np0005601226 podman[275203]: 2026-01-29 17:36:37.011321801 +0000 UTC m=+0.135797136 container start 0746f31ee6e85d9847108c14fb2c3e49a62835bc6f13da056313efedf60326e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_robinson, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:36:37 np0005601226 bold_robinson[275217]: 167 167
Jan 29 12:36:37 np0005601226 systemd[1]: libpod-0746f31ee6e85d9847108c14fb2c3e49a62835bc6f13da056313efedf60326e9.scope: Deactivated successfully.
Jan 29 12:36:37 np0005601226 podman[275203]: 2026-01-29 17:36:36.92357849 +0000 UTC m=+0.048053825 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:36:37 np0005601226 podman[275203]: 2026-01-29 17:36:37.019804161 +0000 UTC m=+0.144279516 container attach 0746f31ee6e85d9847108c14fb2c3e49a62835bc6f13da056313efedf60326e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_robinson, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Jan 29 12:36:37 np0005601226 podman[275203]: 2026-01-29 17:36:37.020087559 +0000 UTC m=+0.144562894 container died 0746f31ee6e85d9847108c14fb2c3e49a62835bc6f13da056313efedf60326e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:36:37 np0005601226 systemd[1]: var-lib-containers-storage-overlay-e5e859df09547a777b33775450f36efd4354e08bd187b14674dd7ac6bfcce480-merged.mount: Deactivated successfully.
Jan 29 12:36:37 np0005601226 podman[275203]: 2026-01-29 17:36:37.062566501 +0000 UTC m=+0.187041816 container remove 0746f31ee6e85d9847108c14fb2c3e49a62835bc6f13da056313efedf60326e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_robinson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 29 12:36:37 np0005601226 systemd[1]: libpod-conmon-0746f31ee6e85d9847108c14fb2c3e49a62835bc6f13da056313efedf60326e9.scope: Deactivated successfully.
Jan 29 12:36:37 np0005601226 podman[275243]: 2026-01-29 17:36:37.193613268 +0000 UTC m=+0.041274631 container create 124730c1581388ab57715a5bad4e12164bc015bc996df650b1d59b3a12e5878b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:36:37 np0005601226 nova_compute[239456]: 2026-01-29 17:36:37.213 239460 DEBUG nova.compute.manager [req-cd8cca12-b15f-45c1-9158-0f8f06b38105 req-4d9cd2b9-040d-462f-8579-bb9f3821fc14 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] Received event network-vif-plugged-da0c0c44-fc4a-4778-ad1a-08a01c5d459c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:36:37 np0005601226 nova_compute[239456]: 2026-01-29 17:36:37.216 239460 DEBUG oslo_concurrency.lockutils [req-cd8cca12-b15f-45c1-9158-0f8f06b38105 req-4d9cd2b9-040d-462f-8579-bb9f3821fc14 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "59a43be4-0b8c-4ad3-abba-c60a6c6e9aae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:36:37 np0005601226 nova_compute[239456]: 2026-01-29 17:36:37.216 239460 DEBUG oslo_concurrency.lockutils [req-cd8cca12-b15f-45c1-9158-0f8f06b38105 req-4d9cd2b9-040d-462f-8579-bb9f3821fc14 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "59a43be4-0b8c-4ad3-abba-c60a6c6e9aae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:36:37 np0005601226 nova_compute[239456]: 2026-01-29 17:36:37.217 239460 DEBUG oslo_concurrency.lockutils [req-cd8cca12-b15f-45c1-9158-0f8f06b38105 req-4d9cd2b9-040d-462f-8579-bb9f3821fc14 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "59a43be4-0b8c-4ad3-abba-c60a6c6e9aae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:36:37 np0005601226 nova_compute[239456]: 2026-01-29 17:36:37.218 239460 DEBUG nova.compute.manager [req-cd8cca12-b15f-45c1-9158-0f8f06b38105 req-4d9cd2b9-040d-462f-8579-bb9f3821fc14 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] No waiting events found dispatching network-vif-plugged-da0c0c44-fc4a-4778-ad1a-08a01c5d459c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 29 12:36:37 np0005601226 nova_compute[239456]: 2026-01-29 17:36:37.218 239460 WARNING nova.compute.manager [req-cd8cca12-b15f-45c1-9158-0f8f06b38105 req-4d9cd2b9-040d-462f-8579-bb9f3821fc14 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] Received unexpected event network-vif-plugged-da0c0c44-fc4a-4778-ad1a-08a01c5d459c for instance with vm_state building and task_state spawning.#033[00m
Jan 29 12:36:37 np0005601226 systemd[1]: Started libpod-conmon-124730c1581388ab57715a5bad4e12164bc015bc996df650b1d59b3a12e5878b.scope.
Jan 29 12:36:37 np0005601226 podman[275243]: 2026-01-29 17:36:37.169951226 +0000 UTC m=+0.017612579 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:36:37 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:36:37 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38d084bac78058b6e84057d1eb8c8915f5b9652c526d2a5011bf915d8c4cae2b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:36:37 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38d084bac78058b6e84057d1eb8c8915f5b9652c526d2a5011bf915d8c4cae2b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:36:37 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38d084bac78058b6e84057d1eb8c8915f5b9652c526d2a5011bf915d8c4cae2b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:36:37 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38d084bac78058b6e84057d1eb8c8915f5b9652c526d2a5011bf915d8c4cae2b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:36:37 np0005601226 podman[275243]: 2026-01-29 17:36:37.29426212 +0000 UTC m=+0.141923543 container init 124730c1581388ab57715a5bad4e12164bc015bc996df650b1d59b3a12e5878b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_fermat, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:36:37 np0005601226 podman[275243]: 2026-01-29 17:36:37.302840513 +0000 UTC m=+0.150501886 container start 124730c1581388ab57715a5bad4e12164bc015bc996df650b1d59b3a12e5878b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_fermat, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 29 12:36:37 np0005601226 podman[275243]: 2026-01-29 17:36:37.311541309 +0000 UTC m=+0.159202742 container attach 124730c1581388ab57715a5bad4e12164bc015bc996df650b1d59b3a12e5878b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 29 12:36:37 np0005601226 nova_compute[239456]: 2026-01-29 17:36:37.795 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:36:37 np0005601226 nova_compute[239456]: 2026-01-29 17:36:37.847 239460 DEBUG nova.compute.manager [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 29 12:36:37 np0005601226 nova_compute[239456]: 2026-01-29 17:36:37.848 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769708197.8478928, 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:36:37 np0005601226 nova_compute[239456]: 2026-01-29 17:36:37.848 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] VM Started (Lifecycle Event)#033[00m
Jan 29 12:36:37 np0005601226 nova_compute[239456]: 2026-01-29 17:36:37.851 239460 DEBUG nova.virt.libvirt.driver [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 29 12:36:37 np0005601226 nova_compute[239456]: 2026-01-29 17:36:37.854 239460 INFO nova.virt.libvirt.driver [-] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] Instance spawned successfully.#033[00m
Jan 29 12:36:37 np0005601226 nova_compute[239456]: 2026-01-29 17:36:37.854 239460 DEBUG nova.virt.libvirt.driver [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 29 12:36:37 np0005601226 nova_compute[239456]: 2026-01-29 17:36:37.888 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:36:37 np0005601226 nova_compute[239456]: 2026-01-29 17:36:37.892 239460 DEBUG nova.virt.libvirt.driver [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:36:37 np0005601226 nova_compute[239456]: 2026-01-29 17:36:37.893 239460 DEBUG nova.virt.libvirt.driver [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:36:37 np0005601226 nova_compute[239456]: 2026-01-29 17:36:37.893 239460 DEBUG nova.virt.libvirt.driver [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:36:37 np0005601226 nova_compute[239456]: 2026-01-29 17:36:37.894 239460 DEBUG nova.virt.libvirt.driver [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:36:37 np0005601226 nova_compute[239456]: 2026-01-29 17:36:37.895 239460 DEBUG nova.virt.libvirt.driver [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:36:37 np0005601226 nova_compute[239456]: 2026-01-29 17:36:37.896 239460 DEBUG nova.virt.libvirt.driver [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:36:37 np0005601226 nova_compute[239456]: 2026-01-29 17:36:37.900 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 29 12:36:37 np0005601226 lvm[275343]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 12:36:37 np0005601226 lvm[275343]: VG ceph_vg0 finished
Jan 29 12:36:37 np0005601226 lvm[275345]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 12:36:37 np0005601226 lvm[275345]: VG ceph_vg1 finished
Jan 29 12:36:37 np0005601226 lvm[275347]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 12:36:37 np0005601226 lvm[275347]: VG ceph_vg2 finished
Jan 29 12:36:37 np0005601226 nova_compute[239456]: 2026-01-29 17:36:37.954 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 29 12:36:37 np0005601226 nova_compute[239456]: 2026-01-29 17:36:37.954 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769708197.8480256, 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:36:37 np0005601226 nova_compute[239456]: 2026-01-29 17:36:37.954 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] VM Paused (Lifecycle Event)#033[00m
Jan 29 12:36:37 np0005601226 nova_compute[239456]: 2026-01-29 17:36:37.991 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:36:37 np0005601226 nova_compute[239456]: 2026-01-29 17:36:37.996 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769708197.8502514, 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:36:37 np0005601226 nova_compute[239456]: 2026-01-29 17:36:37.996 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] VM Resumed (Lifecycle Event)#033[00m
Jan 29 12:36:38 np0005601226 nostalgic_fermat[275259]: {}
Jan 29 12:36:38 np0005601226 systemd[1]: libpod-124730c1581388ab57715a5bad4e12164bc015bc996df650b1d59b3a12e5878b.scope: Deactivated successfully.
Jan 29 12:36:38 np0005601226 podman[275243]: 2026-01-29 17:36:38.033593854 +0000 UTC m=+0.881255197 container died 124730c1581388ab57715a5bad4e12164bc015bc996df650b1d59b3a12e5878b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_fermat, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:36:38 np0005601226 nova_compute[239456]: 2026-01-29 17:36:38.067 239460 INFO nova.compute.manager [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] Took 7.44 seconds to spawn the instance on the hypervisor.#033[00m
Jan 29 12:36:38 np0005601226 nova_compute[239456]: 2026-01-29 17:36:38.068 239460 DEBUG nova.compute.manager [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:36:38 np0005601226 nova_compute[239456]: 2026-01-29 17:36:38.077 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:36:38 np0005601226 nova_compute[239456]: 2026-01-29 17:36:38.080 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 29 12:36:38 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1908: 305 pgs: 305 active+clean; 271 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 61 KiB/s rd, 3.9 KiB/s wr, 86 op/s
Jan 29 12:36:38 np0005601226 podman[275348]: 2026-01-29 17:36:38.17383616 +0000 UTC m=+0.206649430 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Jan 29 12:36:38 np0005601226 nova_compute[239456]: 2026-01-29 17:36:38.180 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 29 12:36:38 np0005601226 systemd[1]: var-lib-containers-storage-overlay-38d084bac78058b6e84057d1eb8c8915f5b9652c526d2a5011bf915d8c4cae2b-merged.mount: Deactivated successfully.
Jan 29 12:36:38 np0005601226 podman[275243]: 2026-01-29 17:36:38.200799811 +0000 UTC m=+1.048461154 container remove 124730c1581388ab57715a5bad4e12164bc015bc996df650b1d59b3a12e5878b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:36:38 np0005601226 systemd[1]: libpod-conmon-124730c1581388ab57715a5bad4e12164bc015bc996df650b1d59b3a12e5878b.scope: Deactivated successfully.
Jan 29 12:36:38 np0005601226 podman[275363]: 2026-01-29 17:36:38.236148101 +0000 UTC m=+0.222027006 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 29 12:36:38 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 12:36:38 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:36:38 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 12:36:38 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:36:38 np0005601226 nova_compute[239456]: 2026-01-29 17:36:38.345 239460 INFO nova.compute.manager [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] Took 9.87 seconds to build instance.#033[00m
Jan 29 12:36:38 np0005601226 nova_compute[239456]: 2026-01-29 17:36:38.363 239460 DEBUG oslo_concurrency.lockutils [None req-4b94d43a-4573-40ca-9e80-5875110e905d 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Lock "59a43be4-0b8c-4ad3-abba-c60a6c6e9aae" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.973s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:36:38 np0005601226 nova_compute[239456]: 2026-01-29 17:36:38.603 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:36:38 np0005601226 nova_compute[239456]: 2026-01-29 17:36:38.635 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:36:38 np0005601226 nova_compute[239456]: 2026-01-29 17:36:38.636 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:36:38 np0005601226 nova_compute[239456]: 2026-01-29 17:36:38.636 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:36:38 np0005601226 nova_compute[239456]: 2026-01-29 17:36:38.637 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 29 12:36:38 np0005601226 nova_compute[239456]: 2026-01-29 17:36:38.637 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:36:38 np0005601226 nova_compute[239456]: 2026-01-29 17:36:38.955 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:36:39 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:36:39 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1581834873' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:36:39 np0005601226 nova_compute[239456]: 2026-01-29 17:36:39.202 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:36:39 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:36:39 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:36:39 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e508 do_prune osdmap full prune enabled
Jan 29 12:36:39 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e509 e509: 3 total, 3 up, 3 in
Jan 29 12:36:39 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e509: 3 total, 3 up, 3 in
Jan 29 12:36:39 np0005601226 nova_compute[239456]: 2026-01-29 17:36:39.284 239460 DEBUG nova.virt.libvirt.driver [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] skipping disk for instance-0000001c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 29 12:36:39 np0005601226 nova_compute[239456]: 2026-01-29 17:36:39.285 239460 DEBUG nova.virt.libvirt.driver [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] skipping disk for instance-0000001c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 29 12:36:39 np0005601226 nova_compute[239456]: 2026-01-29 17:36:39.444 239460 WARNING nova.virt.libvirt.driver [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 29 12:36:39 np0005601226 nova_compute[239456]: 2026-01-29 17:36:39.445 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4196MB free_disk=59.988149819895625GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 29 12:36:39 np0005601226 nova_compute[239456]: 2026-01-29 17:36:39.446 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:36:39 np0005601226 nova_compute[239456]: 2026-01-29 17:36:39.446 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:36:39 np0005601226 nova_compute[239456]: 2026-01-29 17:36:39.547 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Instance 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 29 12:36:39 np0005601226 nova_compute[239456]: 2026-01-29 17:36:39.548 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 29 12:36:39 np0005601226 nova_compute[239456]: 2026-01-29 17:36:39.548 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 29 12:36:39 np0005601226 nova_compute[239456]: 2026-01-29 17:36:39.569 239460 DEBUG nova.scheduler.client.report [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Refreshing inventories for resource provider 79259295-532c-4a51-8f50-027529735b0c _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 29 12:36:39 np0005601226 nova_compute[239456]: 2026-01-29 17:36:39.590 239460 DEBUG nova.scheduler.client.report [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Updating ProviderTree inventory for provider 79259295-532c-4a51-8f50-027529735b0c from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 29 12:36:39 np0005601226 nova_compute[239456]: 2026-01-29 17:36:39.591 239460 DEBUG nova.compute.provider_tree [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Updating inventory in ProviderTree for provider 79259295-532c-4a51-8f50-027529735b0c with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 29 12:36:39 np0005601226 nova_compute[239456]: 2026-01-29 17:36:39.607 239460 DEBUG nova.scheduler.client.report [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Refreshing aggregate associations for resource provider 79259295-532c-4a51-8f50-027529735b0c, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 29 12:36:39 np0005601226 nova_compute[239456]: 2026-01-29 17:36:39.638 239460 DEBUG nova.scheduler.client.report [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Refreshing trait associations for resource provider 79259295-532c-4a51-8f50-027529735b0c, traits: HW_CPU_X86_SSE4A,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NODE,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AMD_SVM,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_ABM,HW_CPU_X86_MMX,HW_CPU_X86_AVX,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SVM,COMPUTE_RESCUE_BFV,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_BMI2,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AVX2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE41,HW_CPU_X86_F16C,COMPUTE_DEVICE_TAGGING,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_ISO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 29 12:36:39 np0005601226 nova_compute[239456]: 2026-01-29 17:36:39.675 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:36:40 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1910: 305 pgs: 305 active+clean; 271 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 25 KiB/s wr, 173 op/s
Jan 29 12:36:40 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:36:40 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/169008651' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:36:40 np0005601226 nova_compute[239456]: 2026-01-29 17:36:40.174 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:36:40 np0005601226 nova_compute[239456]: 2026-01-29 17:36:40.181 239460 DEBUG nova.compute.provider_tree [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:36:40 np0005601226 nova_compute[239456]: 2026-01-29 17:36:40.206 239460 DEBUG nova.scheduler.client.report [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:36:40 np0005601226 nova_compute[239456]: 2026-01-29 17:36:40.258 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 29 12:36:40 np0005601226 nova_compute[239456]: 2026-01-29 17:36:40.259 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.813s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:36:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:36:40.299 155625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:36:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:36:40.300 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:36:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:36:40.300 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:36:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:36:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:36:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:36:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:36:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:36:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:36:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2026-01-29_17:36:40
Jan 29 12:36:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 12:36:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] do_upmap
Jan 29 12:36:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] pools ['default.rgw.log', '.rgw.root', 'cephfs.cephfs.meta', 'vms', 'cephfs.cephfs.data', 'default.rgw.meta', 'volumes', 'images', '.mgr', 'backups', 'default.rgw.control']
Jan 29 12:36:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 12:36:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 12:36:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:36:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 12:36:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:36:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:36:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:36:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:36:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:36:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:36:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:36:41 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e509 do_prune osdmap full prune enabled
Jan 29 12:36:41 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e510 e510: 3 total, 3 up, 3 in
Jan 29 12:36:41 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e510: 3 total, 3 up, 3 in
Jan 29 12:36:41 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e510 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:36:41 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e510 do_prune osdmap full prune enabled
Jan 29 12:36:41 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e511 e511: 3 total, 3 up, 3 in
Jan 29 12:36:41 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e511: 3 total, 3 up, 3 in
Jan 29 12:36:42 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1913: 305 pgs: 305 active+clean; 271 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 26 KiB/s wr, 138 op/s
Jan 29 12:36:42 np0005601226 nova_compute[239456]: 2026-01-29 17:36:42.260 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:36:42 np0005601226 nova_compute[239456]: 2026-01-29 17:36:42.261 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:36:42 np0005601226 nova_compute[239456]: 2026-01-29 17:36:42.829 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:36:42 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:36:42 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3167800317' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:36:42 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:36:42 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3167800317' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:36:43 np0005601226 nova_compute[239456]: 2026-01-29 17:36:43.321 239460 DEBUG nova.compute.manager [req-ebf926c1-fd69-4ea8-91fd-3d301ddc865e req-369d6a6d-d2ee-4429-a671-ec5fd5fbf3c7 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] Received event network-changed-da0c0c44-fc4a-4778-ad1a-08a01c5d459c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:36:43 np0005601226 nova_compute[239456]: 2026-01-29 17:36:43.321 239460 DEBUG nova.compute.manager [req-ebf926c1-fd69-4ea8-91fd-3d301ddc865e req-369d6a6d-d2ee-4429-a671-ec5fd5fbf3c7 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] Refreshing instance network info cache due to event network-changed-da0c0c44-fc4a-4778-ad1a-08a01c5d459c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 29 12:36:43 np0005601226 nova_compute[239456]: 2026-01-29 17:36:43.321 239460 DEBUG oslo_concurrency.lockutils [req-ebf926c1-fd69-4ea8-91fd-3d301ddc865e req-369d6a6d-d2ee-4429-a671-ec5fd5fbf3c7 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "refresh_cache-59a43be4-0b8c-4ad3-abba-c60a6c6e9aae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:36:43 np0005601226 nova_compute[239456]: 2026-01-29 17:36:43.321 239460 DEBUG oslo_concurrency.lockutils [req-ebf926c1-fd69-4ea8-91fd-3d301ddc865e req-369d6a6d-d2ee-4429-a671-ec5fd5fbf3c7 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquired lock "refresh_cache-59a43be4-0b8c-4ad3-abba-c60a6c6e9aae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:36:43 np0005601226 nova_compute[239456]: 2026-01-29 17:36:43.322 239460 DEBUG nova.network.neutron [req-ebf926c1-fd69-4ea8-91fd-3d301ddc865e req-369d6a6d-d2ee-4429-a671-ec5fd5fbf3c7 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] Refreshing network info cache for port da0c0c44-fc4a-4778-ad1a-08a01c5d459c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 29 12:36:43 np0005601226 nova_compute[239456]: 2026-01-29 17:36:43.603 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:36:44 np0005601226 nova_compute[239456]: 2026-01-29 17:36:44.004 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:36:44 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1914: 305 pgs: 305 active+clean; 271 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 27 KiB/s wr, 235 op/s
Jan 29 12:36:44 np0005601226 nova_compute[239456]: 2026-01-29 17:36:44.599 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:36:45 np0005601226 nova_compute[239456]: 2026-01-29 17:36:45.217 239460 DEBUG nova.network.neutron [req-ebf926c1-fd69-4ea8-91fd-3d301ddc865e req-369d6a6d-d2ee-4429-a671-ec5fd5fbf3c7 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] Updated VIF entry in instance network info cache for port da0c0c44-fc4a-4778-ad1a-08a01c5d459c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 29 12:36:45 np0005601226 nova_compute[239456]: 2026-01-29 17:36:45.218 239460 DEBUG nova.network.neutron [req-ebf926c1-fd69-4ea8-91fd-3d301ddc865e req-369d6a6d-d2ee-4429-a671-ec5fd5fbf3c7 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] Updating instance_info_cache with network_info: [{"id": "da0c0c44-fc4a-4778-ad1a-08a01c5d459c", "address": "fa:16:3e:f2:40:a3", "network": {"id": "9275d605-e314-4c83-a4e8-f4ba085f6358", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1160921638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c3315c8b4c543a38f07ec0c509f03c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda0c0c44-fc", "ovs_interfaceid": "da0c0c44-fc4a-4778-ad1a-08a01c5d459c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:36:45 np0005601226 nova_compute[239456]: 2026-01-29 17:36:45.237 239460 DEBUG oslo_concurrency.lockutils [req-ebf926c1-fd69-4ea8-91fd-3d301ddc865e req-369d6a6d-d2ee-4429-a671-ec5fd5fbf3c7 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Releasing lock "refresh_cache-59a43be4-0b8c-4ad3-abba-c60a6c6e9aae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:36:46 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1915: 305 pgs: 305 active+clean; 271 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.9 KiB/s wr, 122 op/s
Jan 29 12:36:46 np0005601226 nova_compute[239456]: 2026-01-29 17:36:46.604 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:36:46 np0005601226 nova_compute[239456]: 2026-01-29 17:36:46.604 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 29 12:36:46 np0005601226 nova_compute[239456]: 2026-01-29 17:36:46.605 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 29 12:36:46 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e511 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:36:46 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e511 do_prune osdmap full prune enabled
Jan 29 12:36:47 np0005601226 nova_compute[239456]: 2026-01-29 17:36:47.056 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquiring lock "refresh_cache-59a43be4-0b8c-4ad3-abba-c60a6c6e9aae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:36:47 np0005601226 nova_compute[239456]: 2026-01-29 17:36:47.057 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquired lock "refresh_cache-59a43be4-0b8c-4ad3-abba-c60a6c6e9aae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:36:47 np0005601226 nova_compute[239456]: 2026-01-29 17:36:47.057 239460 DEBUG nova.network.neutron [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 29 12:36:47 np0005601226 nova_compute[239456]: 2026-01-29 17:36:47.057 239460 DEBUG nova.objects.instance [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:36:47 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e512 e512: 3 total, 3 up, 3 in
Jan 29 12:36:47 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e512: 3 total, 3 up, 3 in
Jan 29 12:36:47 np0005601226 nova_compute[239456]: 2026-01-29 17:36:47.862 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:36:47 np0005601226 nova_compute[239456]: 2026-01-29 17:36:47.997 239460 DEBUG nova.network.neutron [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] Updating instance_info_cache with network_info: [{"id": "da0c0c44-fc4a-4778-ad1a-08a01c5d459c", "address": "fa:16:3e:f2:40:a3", "network": {"id": "9275d605-e314-4c83-a4e8-f4ba085f6358", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1160921638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c3315c8b4c543a38f07ec0c509f03c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda0c0c44-fc", "ovs_interfaceid": "da0c0c44-fc4a-4778-ad1a-08a01c5d459c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:36:48 np0005601226 nova_compute[239456]: 2026-01-29 17:36:48.013 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Releasing lock "refresh_cache-59a43be4-0b8c-4ad3-abba-c60a6c6e9aae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:36:48 np0005601226 nova_compute[239456]: 2026-01-29 17:36:48.014 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 29 12:36:48 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1917: 305 pgs: 305 active+clean; 271 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.9 KiB/s wr, 123 op/s
Jan 29 12:36:48 np0005601226 nova_compute[239456]: 2026-01-29 17:36:48.603 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:36:49 np0005601226 nova_compute[239456]: 2026-01-29 17:36:49.041 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:36:50 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1918: 305 pgs: 305 active+clean; 271 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.2 KiB/s wr, 106 op/s
Jan 29 12:36:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 12:36:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:36:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 12:36:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:36:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 4.54417397176846e-06 of space, bias 1.0, pg target 0.0013632521915305379 quantized to 32 (current 32)
Jan 29 12:36:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:36:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0029129858880782957 of space, bias 1.0, pg target 0.8738957664234887 quantized to 32 (current 32)
Jan 29 12:36:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:36:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 4.753203583608236e-06 of space, bias 1.0, pg target 0.001425961075082471 quantized to 32 (current 32)
Jan 29 12:36:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:36:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006669408340290156 of space, bias 1.0, pg target 0.20008225020870468 quantized to 32 (current 32)
Jan 29 12:36:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:36:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4534730607584414e-06 of space, bias 4.0, pg target 0.0017441676729101298 quantized to 16 (current 16)
Jan 29 12:36:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:36:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:36:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:36:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 29 12:36:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:36:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 29 12:36:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:36:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:36:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:36:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 29 12:36:52 np0005601226 ovn_controller[145556]: 2026-01-29T17:36:52Z|00070|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.3 does not match offer 10.100.0.13
Jan 29 12:36:52 np0005601226 ovn_controller[145556]: 2026-01-29T17:36:52Z|00071|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:f2:40:a3 10.100.0.13
Jan 29 12:36:52 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1919: 305 pgs: 305 active+clean; 271 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.8 KiB/s wr, 88 op/s
Jan 29 12:36:52 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e512 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:36:52 np0005601226 nova_compute[239456]: 2026-01-29 17:36:52.892 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:36:54 np0005601226 nova_compute[239456]: 2026-01-29 17:36:54.087 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:36:54 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1920: 305 pgs: 305 active+clean; 279 MiB data, 606 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 820 KiB/s wr, 62 op/s
Jan 29 12:36:55 np0005601226 nova_compute[239456]: 2026-01-29 17:36:55.605 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:36:56 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1921: 305 pgs: 305 active+clean; 283 MiB data, 610 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 57 op/s
Jan 29 12:36:56 np0005601226 ovn_controller[145556]: 2026-01-29T17:36:56Z|00072|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.3 does not match offer 10.100.0.13
Jan 29 12:36:56 np0005601226 ovn_controller[145556]: 2026-01-29T17:36:56Z|00073|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:f2:40:a3 10.100.0.13
Jan 29 12:36:57 np0005601226 ovn_controller[145556]: 2026-01-29T17:36:57Z|00074|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f2:40:a3 10.100.0.13
Jan 29 12:36:57 np0005601226 ovn_controller[145556]: 2026-01-29T17:36:57Z|00075|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f2:40:a3 10.100.0.13
Jan 29 12:36:57 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e512 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:36:57 np0005601226 nova_compute[239456]: 2026-01-29 17:36:57.898 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:36:58 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1922: 305 pgs: 305 active+clean; 283 MiB data, 610 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.1 MiB/s wr, 52 op/s
Jan 29 12:36:59 np0005601226 nova_compute[239456]: 2026-01-29 17:36:59.090 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:37:00 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1923: 305 pgs: 305 active+clean; 287 MiB data, 614 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 50 op/s
Jan 29 12:37:00 np0005601226 nova_compute[239456]: 2026-01-29 17:37:00.994 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:37:02 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1924: 305 pgs: 305 active+clean; 287 MiB data, 614 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.3 MiB/s wr, 46 op/s
Jan 29 12:37:02 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e512 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:37:02 np0005601226 nova_compute[239456]: 2026-01-29 17:37:02.902 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:37:04 np0005601226 nova_compute[239456]: 2026-01-29 17:37:04.093 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:37:04 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1925: 305 pgs: 305 active+clean; 287 MiB data, 614 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.3 MiB/s wr, 46 op/s
Jan 29 12:37:06 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1926: 305 pgs: 305 active+clean; 287 MiB data, 614 MiB used, 59 GiB / 60 GiB avail; 455 KiB/s rd, 698 KiB/s wr, 10 op/s
Jan 29 12:37:07 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e512 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:37:07 np0005601226 nova_compute[239456]: 2026-01-29 17:37:07.631 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:37:07 np0005601226 nova_compute[239456]: 2026-01-29 17:37:07.937 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:37:08 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1927: 305 pgs: 305 active+clean; 287 MiB data, 614 MiB used, 59 GiB / 60 GiB avail; 342 KiB/s rd, 356 KiB/s wr, 2 op/s
Jan 29 12:37:08 np0005601226 podman[275478]: 2026-01-29 17:37:08.934589199 +0000 UTC m=+0.100335064 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 29 12:37:08 np0005601226 podman[275479]: 2026-01-29 17:37:08.939146352 +0000 UTC m=+0.104692222 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 29 12:37:09 np0005601226 nova_compute[239456]: 2026-01-29 17:37:09.127 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:37:10 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1928: 305 pgs: 305 active+clean; 291 MiB data, 614 MiB used, 59 GiB / 60 GiB avail; 430 KiB/s rd, 701 KiB/s wr, 4 op/s
Jan 29 12:37:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:37:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:37:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:37:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:37:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:37:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:37:10 np0005601226 nova_compute[239456]: 2026-01-29 17:37:10.742 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:37:12 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1929: 305 pgs: 305 active+clean; 291 MiB data, 614 MiB used, 59 GiB / 60 GiB avail; 88 KiB/s rd, 348 KiB/s wr, 2 op/s
Jan 29 12:37:12 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e512 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:37:12 np0005601226 nova_compute[239456]: 2026-01-29 17:37:12.964 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:37:14 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1930: 305 pgs: 305 active+clean; 295 MiB data, 615 MiB used, 59 GiB / 60 GiB avail; 430 KiB/s rd, 442 KiB/s wr, 5 op/s
Jan 29 12:37:14 np0005601226 nova_compute[239456]: 2026-01-29 17:37:14.170 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:37:14 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:37:14.542 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=25, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '7a:bb:ca', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ee:2a:91:86:08:da'}, ipsec=False) old=SB_Global(nb_cfg=24) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:37:14 np0005601226 nova_compute[239456]: 2026-01-29 17:37:14.542 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:37:14 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:37:14.544 155625 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 29 12:37:15 np0005601226 nova_compute[239456]: 2026-01-29 17:37:15.347 239460 DEBUG oslo_concurrency.lockutils [None req-3fe0b3f8-76b8-46b5-8969-ba10cec41db3 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Acquiring lock "59a43be4-0b8c-4ad3-abba-c60a6c6e9aae" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:37:15 np0005601226 nova_compute[239456]: 2026-01-29 17:37:15.347 239460 DEBUG oslo_concurrency.lockutils [None req-3fe0b3f8-76b8-46b5-8969-ba10cec41db3 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Lock "59a43be4-0b8c-4ad3-abba-c60a6c6e9aae" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:37:15 np0005601226 nova_compute[239456]: 2026-01-29 17:37:15.348 239460 DEBUG oslo_concurrency.lockutils [None req-3fe0b3f8-76b8-46b5-8969-ba10cec41db3 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Acquiring lock "59a43be4-0b8c-4ad3-abba-c60a6c6e9aae-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:37:15 np0005601226 nova_compute[239456]: 2026-01-29 17:37:15.348 239460 DEBUG oslo_concurrency.lockutils [None req-3fe0b3f8-76b8-46b5-8969-ba10cec41db3 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Lock "59a43be4-0b8c-4ad3-abba-c60a6c6e9aae-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:37:15 np0005601226 nova_compute[239456]: 2026-01-29 17:37:15.348 239460 DEBUG oslo_concurrency.lockutils [None req-3fe0b3f8-76b8-46b5-8969-ba10cec41db3 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Lock "59a43be4-0b8c-4ad3-abba-c60a6c6e9aae-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:37:15 np0005601226 nova_compute[239456]: 2026-01-29 17:37:15.349 239460 INFO nova.compute.manager [None req-3fe0b3f8-76b8-46b5-8969-ba10cec41db3 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] Terminating instance#033[00m
Jan 29 12:37:15 np0005601226 nova_compute[239456]: 2026-01-29 17:37:15.351 239460 DEBUG nova.compute.manager [None req-3fe0b3f8-76b8-46b5-8969-ba10cec41db3 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 29 12:37:15 np0005601226 kernel: tapda0c0c44-fc (unregistering): left promiscuous mode
Jan 29 12:37:15 np0005601226 NetworkManager[49020]: <info>  [1769708235.4127] device (tapda0c0c44-fc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 29 12:37:15 np0005601226 nova_compute[239456]: 2026-01-29 17:37:15.418 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:37:15 np0005601226 ovn_controller[145556]: 2026-01-29T17:37:15Z|00264|binding|INFO|Releasing lport da0c0c44-fc4a-4778-ad1a-08a01c5d459c from this chassis (sb_readonly=0)
Jan 29 12:37:15 np0005601226 ovn_controller[145556]: 2026-01-29T17:37:15Z|00265|binding|INFO|Setting lport da0c0c44-fc4a-4778-ad1a-08a01c5d459c down in Southbound
Jan 29 12:37:15 np0005601226 ovn_controller[145556]: 2026-01-29T17:37:15Z|00266|binding|INFO|Removing iface tapda0c0c44-fc ovn-installed in OVS
Jan 29 12:37:15 np0005601226 nova_compute[239456]: 2026-01-29 17:37:15.421 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:37:15 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:37:15.427 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f2:40:a3 10.100.0.13'], port_security=['fa:16:3e:f2:40:a3 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '59a43be4-0b8c-4ad3-abba-c60a6c6e9aae', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9275d605-e314-4c83-a4e8-f4ba085f6358', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9c3315c8b4c543a38f07ec0c509f03c1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '27af6b88-dd81-456a-89a5-a6e9b903fd48', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.238'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d2a7d5cc-cff2-487b-9e34-0c3106da1b90, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], logical_port=da0c0c44-fc4a-4778-ad1a-08a01c5d459c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:37:15 np0005601226 nova_compute[239456]: 2026-01-29 17:37:15.429 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:37:15 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:37:15.430 155625 INFO neutron.agent.ovn.metadata.agent [-] Port da0c0c44-fc4a-4778-ad1a-08a01c5d459c in datapath 9275d605-e314-4c83-a4e8-f4ba085f6358 unbound from our chassis#033[00m
Jan 29 12:37:15 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:37:15.432 155625 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9275d605-e314-4c83-a4e8-f4ba085f6358, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 29 12:37:15 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:37:15.433 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[415984e0-2f68-4e4b-aa8d-e442fba81058]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:37:15 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:37:15.434 155625 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9275d605-e314-4c83-a4e8-f4ba085f6358 namespace which is not needed anymore#033[00m
Jan 29 12:37:15 np0005601226 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d0000001c.scope: Deactivated successfully.
Jan 29 12:37:15 np0005601226 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d0000001c.scope: Consumed 15.832s CPU time.
Jan 29 12:37:15 np0005601226 systemd-machined[207561]: Machine qemu-28-instance-0000001c terminated.
Jan 29 12:37:15 np0005601226 neutron-haproxy-ovnmeta-9275d605-e314-4c83-a4e8-f4ba085f6358[275070]: [NOTICE]   (275088) : haproxy version is 2.8.14-c23fe91
Jan 29 12:37:15 np0005601226 neutron-haproxy-ovnmeta-9275d605-e314-4c83-a4e8-f4ba085f6358[275070]: [NOTICE]   (275088) : path to executable is /usr/sbin/haproxy
Jan 29 12:37:15 np0005601226 neutron-haproxy-ovnmeta-9275d605-e314-4c83-a4e8-f4ba085f6358[275070]: [WARNING]  (275088) : Exiting Master process...
Jan 29 12:37:15 np0005601226 neutron-haproxy-ovnmeta-9275d605-e314-4c83-a4e8-f4ba085f6358[275070]: [WARNING]  (275088) : Exiting Master process...
Jan 29 12:37:15 np0005601226 neutron-haproxy-ovnmeta-9275d605-e314-4c83-a4e8-f4ba085f6358[275070]: [ALERT]    (275088) : Current worker (275090) exited with code 143 (Terminated)
Jan 29 12:37:15 np0005601226 neutron-haproxy-ovnmeta-9275d605-e314-4c83-a4e8-f4ba085f6358[275070]: [WARNING]  (275088) : All workers exited. Exiting... (0)
Jan 29 12:37:15 np0005601226 systemd[1]: libpod-e0f6d1326e95b16ec19cbef92ac45572821eb8c1ff5a8d41acbed2943251023f.scope: Deactivated successfully.
Jan 29 12:37:15 np0005601226 podman[275547]: 2026-01-29 17:37:15.562459698 +0000 UTC m=+0.049669168 container died e0f6d1326e95b16ec19cbef92ac45572821eb8c1ff5a8d41acbed2943251023f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9275d605-e314-4c83-a4e8-f4ba085f6358, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:37:15 np0005601226 nova_compute[239456]: 2026-01-29 17:37:15.581 239460 INFO nova.virt.libvirt.driver [-] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] Instance destroyed successfully.#033[00m
Jan 29 12:37:15 np0005601226 nova_compute[239456]: 2026-01-29 17:37:15.582 239460 DEBUG nova.objects.instance [None req-3fe0b3f8-76b8-46b5-8969-ba10cec41db3 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Lazy-loading 'resources' on Instance uuid 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:37:15 np0005601226 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e0f6d1326e95b16ec19cbef92ac45572821eb8c1ff5a8d41acbed2943251023f-userdata-shm.mount: Deactivated successfully.
Jan 29 12:37:15 np0005601226 systemd[1]: var-lib-containers-storage-overlay-0f430a103e0d8489ca70427f6b885038724aa4c70bbd41d10e967b3aaac637d5-merged.mount: Deactivated successfully.
Jan 29 12:37:15 np0005601226 nova_compute[239456]: 2026-01-29 17:37:15.601 239460 DEBUG nova.virt.libvirt.vif [None req-3fe0b3f8-76b8-46b5-8969-ba10cec41db3 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-29T17:36:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-958531094',display_name='tempest-TestEncryptedCinderVolumes-server-958531094',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-958531094',id=28,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOWwFB+v1u4PWXh8S+wauEigc6l/mXsWkHrP4yUMWKETbc8s3+sIpwLb84UDBVfP1J1Q2qVa0piFJnATY3aZmmPNeYGKVTqN4zZ540CODMnFZL0G2v6B5/DzZoCBhdagGw==',key_name='tempest-TestEncryptedCinderVolumes-1911257081',keypairs=<?>,launch_index=0,launched_at=2026-01-29T17:36:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9c3315c8b4c543a38f07ec0c509f03c1',ramdisk_id='',reservation_id='r-dl3n7k58',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestEncryptedCinderVolumes-595928636',owner_user_name='tempest-TestEncryptedCinderVolumes-595928636-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-29T17:36:38Z,user_data=None,user_id='90bbb3ba09534f74aedaab7650ed5ba4',uuid=59a43be4-0b8c-4ad3-abba-c60a6c6e9aae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "da0c0c44-fc4a-4778-ad1a-08a01c5d459c", "address": "fa:16:3e:f2:40:a3", "network": {"id": "9275d605-e314-4c83-a4e8-f4ba085f6358", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1160921638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c3315c8b4c543a38f07ec0c509f03c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda0c0c44-fc", "ovs_interfaceid": "da0c0c44-fc4a-4778-ad1a-08a01c5d459c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 29 12:37:15 np0005601226 nova_compute[239456]: 2026-01-29 17:37:15.602 239460 DEBUG nova.network.os_vif_util [None req-3fe0b3f8-76b8-46b5-8969-ba10cec41db3 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Converting VIF {"id": "da0c0c44-fc4a-4778-ad1a-08a01c5d459c", "address": "fa:16:3e:f2:40:a3", "network": {"id": "9275d605-e314-4c83-a4e8-f4ba085f6358", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1160921638-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c3315c8b4c543a38f07ec0c509f03c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda0c0c44-fc", "ovs_interfaceid": "da0c0c44-fc4a-4778-ad1a-08a01c5d459c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 29 12:37:15 np0005601226 nova_compute[239456]: 2026-01-29 17:37:15.604 239460 DEBUG nova.network.os_vif_util [None req-3fe0b3f8-76b8-46b5-8969-ba10cec41db3 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f2:40:a3,bridge_name='br-int',has_traffic_filtering=True,id=da0c0c44-fc4a-4778-ad1a-08a01c5d459c,network=Network(9275d605-e314-4c83-a4e8-f4ba085f6358),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda0c0c44-fc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 29 12:37:15 np0005601226 nova_compute[239456]: 2026-01-29 17:37:15.604 239460 DEBUG os_vif [None req-3fe0b3f8-76b8-46b5-8969-ba10cec41db3 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f2:40:a3,bridge_name='br-int',has_traffic_filtering=True,id=da0c0c44-fc4a-4778-ad1a-08a01c5d459c,network=Network(9275d605-e314-4c83-a4e8-f4ba085f6358),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda0c0c44-fc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 29 12:37:15 np0005601226 nova_compute[239456]: 2026-01-29 17:37:15.606 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:37:15 np0005601226 nova_compute[239456]: 2026-01-29 17:37:15.606 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapda0c0c44-fc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:37:15 np0005601226 nova_compute[239456]: 2026-01-29 17:37:15.608 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:37:15 np0005601226 nova_compute[239456]: 2026-01-29 17:37:15.610 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 29 12:37:15 np0005601226 nova_compute[239456]: 2026-01-29 17:37:15.612 239460 INFO os_vif [None req-3fe0b3f8-76b8-46b5-8969-ba10cec41db3 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f2:40:a3,bridge_name='br-int',has_traffic_filtering=True,id=da0c0c44-fc4a-4778-ad1a-08a01c5d459c,network=Network(9275d605-e314-4c83-a4e8-f4ba085f6358),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapda0c0c44-fc')#033[00m
Jan 29 12:37:15 np0005601226 podman[275547]: 2026-01-29 17:37:15.615546959 +0000 UTC m=+0.102756399 container cleanup e0f6d1326e95b16ec19cbef92ac45572821eb8c1ff5a8d41acbed2943251023f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9275d605-e314-4c83-a4e8-f4ba085f6358, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 29 12:37:15 np0005601226 systemd[1]: libpod-conmon-e0f6d1326e95b16ec19cbef92ac45572821eb8c1ff5a8d41acbed2943251023f.scope: Deactivated successfully.
Jan 29 12:37:15 np0005601226 nova_compute[239456]: 2026-01-29 17:37:15.636 239460 DEBUG nova.compute.manager [req-66b25725-9b27-4b17-9df2-92dc0696440e req-3d40b611-de90-4ca4-8c9e-1cb8cfe9b16a 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] Received event network-vif-unplugged-da0c0c44-fc4a-4778-ad1a-08a01c5d459c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:37:15 np0005601226 nova_compute[239456]: 2026-01-29 17:37:15.637 239460 DEBUG oslo_concurrency.lockutils [req-66b25725-9b27-4b17-9df2-92dc0696440e req-3d40b611-de90-4ca4-8c9e-1cb8cfe9b16a 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "59a43be4-0b8c-4ad3-abba-c60a6c6e9aae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:37:15 np0005601226 nova_compute[239456]: 2026-01-29 17:37:15.637 239460 DEBUG oslo_concurrency.lockutils [req-66b25725-9b27-4b17-9df2-92dc0696440e req-3d40b611-de90-4ca4-8c9e-1cb8cfe9b16a 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "59a43be4-0b8c-4ad3-abba-c60a6c6e9aae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:37:15 np0005601226 nova_compute[239456]: 2026-01-29 17:37:15.638 239460 DEBUG oslo_concurrency.lockutils [req-66b25725-9b27-4b17-9df2-92dc0696440e req-3d40b611-de90-4ca4-8c9e-1cb8cfe9b16a 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "59a43be4-0b8c-4ad3-abba-c60a6c6e9aae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:37:15 np0005601226 nova_compute[239456]: 2026-01-29 17:37:15.638 239460 DEBUG nova.compute.manager [req-66b25725-9b27-4b17-9df2-92dc0696440e req-3d40b611-de90-4ca4-8c9e-1cb8cfe9b16a 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] No waiting events found dispatching network-vif-unplugged-da0c0c44-fc4a-4778-ad1a-08a01c5d459c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 29 12:37:15 np0005601226 nova_compute[239456]: 2026-01-29 17:37:15.638 239460 DEBUG nova.compute.manager [req-66b25725-9b27-4b17-9df2-92dc0696440e req-3d40b611-de90-4ca4-8c9e-1cb8cfe9b16a 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] Received event network-vif-unplugged-da0c0c44-fc4a-4778-ad1a-08a01c5d459c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 29 12:37:15 np0005601226 podman[275589]: 2026-01-29 17:37:15.690587246 +0000 UTC m=+0.056313599 container remove e0f6d1326e95b16ec19cbef92ac45572821eb8c1ff5a8d41acbed2943251023f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9275d605-e314-4c83-a4e8-f4ba085f6358, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 29 12:37:15 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:37:15.695 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[953ccfc5-6c88-4a20-a61d-ebe68c4249d9]: (4, ('Thu Jan 29 05:37:15 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-9275d605-e314-4c83-a4e8-f4ba085f6358 (e0f6d1326e95b16ec19cbef92ac45572821eb8c1ff5a8d41acbed2943251023f)\ne0f6d1326e95b16ec19cbef92ac45572821eb8c1ff5a8d41acbed2943251023f\nThu Jan 29 05:37:15 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-9275d605-e314-4c83-a4e8-f4ba085f6358 (e0f6d1326e95b16ec19cbef92ac45572821eb8c1ff5a8d41acbed2943251023f)\ne0f6d1326e95b16ec19cbef92ac45572821eb8c1ff5a8d41acbed2943251023f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:37:15 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:37:15.698 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[fde94ed7-424c-4ea5-b5dd-b7924b654a3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:37:15 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:37:15.699 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9275d605-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:37:15 np0005601226 nova_compute[239456]: 2026-01-29 17:37:15.701 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:37:15 np0005601226 kernel: tap9275d605-e0: left promiscuous mode
Jan 29 12:37:15 np0005601226 nova_compute[239456]: 2026-01-29 17:37:15.708 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:37:15 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:37:15.711 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[467cdd0d-9476-48da-8d2e-76cacc0df722]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:37:15 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:37:15.722 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[75954d96-22ab-4642-aefb-a3199d539877]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:37:15 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:37:15.724 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[96a57ccb-32e3-4e99-aa43-ce5da6408cd1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:37:15 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:37:15.740 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[da760649-b9b1-48dc-8286-4a7a03108006]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 554835, 'reachable_time': 19802, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 275622, 'error': None, 'target': 'ovnmeta-9275d605-e314-4c83-a4e8-f4ba085f6358', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:37:15 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:37:15.743 156164 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9275d605-e314-4c83-a4e8-f4ba085f6358 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 29 12:37:15 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:37:15.743 156164 DEBUG oslo.privsep.daemon [-] privsep: reply[0928a9a4-c47c-4229-b777-fd2efb84c80a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:37:15 np0005601226 systemd[1]: run-netns-ovnmeta\x2d9275d605\x2de314\x2d4c83\x2da4e8\x2df4ba085f6358.mount: Deactivated successfully.
Jan 29 12:37:15 np0005601226 nova_compute[239456]: 2026-01-29 17:37:15.782 239460 INFO nova.virt.libvirt.driver [None req-3fe0b3f8-76b8-46b5-8969-ba10cec41db3 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] Deleting instance files /var/lib/nova/instances/59a43be4-0b8c-4ad3-abba-c60a6c6e9aae_del#033[00m
Jan 29 12:37:15 np0005601226 nova_compute[239456]: 2026-01-29 17:37:15.783 239460 INFO nova.virt.libvirt.driver [None req-3fe0b3f8-76b8-46b5-8969-ba10cec41db3 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] Deletion of /var/lib/nova/instances/59a43be4-0b8c-4ad3-abba-c60a6c6e9aae_del complete#033[00m
Jan 29 12:37:15 np0005601226 nova_compute[239456]: 2026-01-29 17:37:15.851 239460 INFO nova.compute.manager [None req-3fe0b3f8-76b8-46b5-8969-ba10cec41db3 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] Took 0.50 seconds to destroy the instance on the hypervisor.#033[00m
Jan 29 12:37:15 np0005601226 nova_compute[239456]: 2026-01-29 17:37:15.852 239460 DEBUG oslo.service.loopingcall [None req-3fe0b3f8-76b8-46b5-8969-ba10cec41db3 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 29 12:37:15 np0005601226 nova_compute[239456]: 2026-01-29 17:37:15.853 239460 DEBUG nova.compute.manager [-] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 29 12:37:15 np0005601226 nova_compute[239456]: 2026-01-29 17:37:15.853 239460 DEBUG nova.network.neutron [-] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 29 12:37:16 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1931: 305 pgs: 305 active+clean; 295 MiB data, 619 MiB used, 59 GiB / 60 GiB avail; 430 KiB/s rd, 442 KiB/s wr, 5 op/s
Jan 29 12:37:16 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:37:16.547 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ea6bcc65-2563-4fe6-9039-bca7261f4cf7, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '25'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:37:16 np0005601226 nova_compute[239456]: 2026-01-29 17:37:16.837 239460 DEBUG nova.network.neutron [-] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:37:16 np0005601226 nova_compute[239456]: 2026-01-29 17:37:16.854 239460 INFO nova.compute.manager [-] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] Took 1.00 seconds to deallocate network for instance.#033[00m
Jan 29 12:37:17 np0005601226 nova_compute[239456]: 2026-01-29 17:37:17.138 239460 INFO nova.compute.manager [None req-3fe0b3f8-76b8-46b5-8969-ba10cec41db3 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] Took 0.28 seconds to detach 1 volumes for instance.#033[00m
Jan 29 12:37:17 np0005601226 nova_compute[239456]: 2026-01-29 17:37:17.194 239460 DEBUG oslo_concurrency.lockutils [None req-3fe0b3f8-76b8-46b5-8969-ba10cec41db3 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:37:17 np0005601226 nova_compute[239456]: 2026-01-29 17:37:17.195 239460 DEBUG oslo_concurrency.lockutils [None req-3fe0b3f8-76b8-46b5-8969-ba10cec41db3 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:37:17 np0005601226 nova_compute[239456]: 2026-01-29 17:37:17.257 239460 DEBUG oslo_concurrency.processutils [None req-3fe0b3f8-76b8-46b5-8969-ba10cec41db3 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:37:17 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e512 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:37:17 np0005601226 nova_compute[239456]: 2026-01-29 17:37:17.695 239460 DEBUG nova.compute.manager [req-740c1497-02f0-4796-a3e9-18ecde9a3c44 req-406e38fb-59bc-498f-b68b-61e668d977d8 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] Received event network-vif-plugged-da0c0c44-fc4a-4778-ad1a-08a01c5d459c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:37:17 np0005601226 nova_compute[239456]: 2026-01-29 17:37:17.696 239460 DEBUG oslo_concurrency.lockutils [req-740c1497-02f0-4796-a3e9-18ecde9a3c44 req-406e38fb-59bc-498f-b68b-61e668d977d8 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "59a43be4-0b8c-4ad3-abba-c60a6c6e9aae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:37:17 np0005601226 nova_compute[239456]: 2026-01-29 17:37:17.696 239460 DEBUG oslo_concurrency.lockutils [req-740c1497-02f0-4796-a3e9-18ecde9a3c44 req-406e38fb-59bc-498f-b68b-61e668d977d8 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "59a43be4-0b8c-4ad3-abba-c60a6c6e9aae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:37:17 np0005601226 nova_compute[239456]: 2026-01-29 17:37:17.696 239460 DEBUG oslo_concurrency.lockutils [req-740c1497-02f0-4796-a3e9-18ecde9a3c44 req-406e38fb-59bc-498f-b68b-61e668d977d8 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "59a43be4-0b8c-4ad3-abba-c60a6c6e9aae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:37:17 np0005601226 nova_compute[239456]: 2026-01-29 17:37:17.696 239460 DEBUG nova.compute.manager [req-740c1497-02f0-4796-a3e9-18ecde9a3c44 req-406e38fb-59bc-498f-b68b-61e668d977d8 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] No waiting events found dispatching network-vif-plugged-da0c0c44-fc4a-4778-ad1a-08a01c5d459c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 29 12:37:17 np0005601226 nova_compute[239456]: 2026-01-29 17:37:17.697 239460 WARNING nova.compute.manager [req-740c1497-02f0-4796-a3e9-18ecde9a3c44 req-406e38fb-59bc-498f-b68b-61e668d977d8 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] Received unexpected event network-vif-plugged-da0c0c44-fc4a-4778-ad1a-08a01c5d459c for instance with vm_state deleted and task_state None.#033[00m
Jan 29 12:37:17 np0005601226 nova_compute[239456]: 2026-01-29 17:37:17.697 239460 DEBUG nova.compute.manager [req-740c1497-02f0-4796-a3e9-18ecde9a3c44 req-406e38fb-59bc-498f-b68b-61e668d977d8 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] Received event network-vif-deleted-da0c0c44-fc4a-4778-ad1a-08a01c5d459c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:37:17 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:37:17 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3097932964' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:37:17 np0005601226 nova_compute[239456]: 2026-01-29 17:37:17.796 239460 DEBUG oslo_concurrency.processutils [None req-3fe0b3f8-76b8-46b5-8969-ba10cec41db3 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:37:17 np0005601226 nova_compute[239456]: 2026-01-29 17:37:17.801 239460 DEBUG nova.compute.provider_tree [None req-3fe0b3f8-76b8-46b5-8969-ba10cec41db3 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:37:17 np0005601226 nova_compute[239456]: 2026-01-29 17:37:17.817 239460 DEBUG nova.scheduler.client.report [None req-3fe0b3f8-76b8-46b5-8969-ba10cec41db3 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:37:17 np0005601226 nova_compute[239456]: 2026-01-29 17:37:17.835 239460 DEBUG oslo_concurrency.lockutils [None req-3fe0b3f8-76b8-46b5-8969-ba10cec41db3 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.639s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:37:17 np0005601226 nova_compute[239456]: 2026-01-29 17:37:17.853 239460 INFO nova.scheduler.client.report [None req-3fe0b3f8-76b8-46b5-8969-ba10cec41db3 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Deleted allocations for instance 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae#033[00m
Jan 29 12:37:17 np0005601226 nova_compute[239456]: 2026-01-29 17:37:17.912 239460 DEBUG oslo_concurrency.lockutils [None req-3fe0b3f8-76b8-46b5-8969-ba10cec41db3 90bbb3ba09534f74aedaab7650ed5ba4 9c3315c8b4c543a38f07ec0c509f03c1 - - default default] Lock "59a43be4-0b8c-4ad3-abba-c60a6c6e9aae" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.565s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:37:17 np0005601226 nova_compute[239456]: 2026-01-29 17:37:17.966 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:37:18 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1932: 305 pgs: 305 active+clean; 295 MiB data, 619 MiB used, 59 GiB / 60 GiB avail; 446 KiB/s rd, 439 KiB/s wr, 9 op/s
Jan 29 12:37:19 np0005601226 nova_compute[239456]: 2026-01-29 17:37:19.575 239460 DEBUG oslo_concurrency.lockutils [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Acquiring lock "56cf922f-31d1-4f48-8716-abdd2671978f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:37:19 np0005601226 nova_compute[239456]: 2026-01-29 17:37:19.576 239460 DEBUG oslo_concurrency.lockutils [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Lock "56cf922f-31d1-4f48-8716-abdd2671978f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:37:19 np0005601226 nova_compute[239456]: 2026-01-29 17:37:19.593 239460 DEBUG nova.compute.manager [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 29 12:37:19 np0005601226 nova_compute[239456]: 2026-01-29 17:37:19.656 239460 DEBUG oslo_concurrency.lockutils [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:37:19 np0005601226 nova_compute[239456]: 2026-01-29 17:37:19.657 239460 DEBUG oslo_concurrency.lockutils [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:37:19 np0005601226 nova_compute[239456]: 2026-01-29 17:37:19.665 239460 DEBUG nova.virt.hardware [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 29 12:37:19 np0005601226 nova_compute[239456]: 2026-01-29 17:37:19.666 239460 INFO nova.compute.claims [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 29 12:37:19 np0005601226 nova_compute[239456]: 2026-01-29 17:37:19.773 239460 DEBUG oslo_concurrency.processutils [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:37:19 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:37:19 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1081627810' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:37:19 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:37:19 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1081627810' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:37:20 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1933: 305 pgs: 305 active+clean; 295 MiB data, 619 MiB used, 59 GiB / 60 GiB avail; 649 KiB/s rd, 440 KiB/s wr, 23 op/s
Jan 29 12:37:20 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:37:20 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2701711562' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:37:20 np0005601226 nova_compute[239456]: 2026-01-29 17:37:20.346 239460 DEBUG oslo_concurrency.processutils [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:37:20 np0005601226 nova_compute[239456]: 2026-01-29 17:37:20.353 239460 DEBUG nova.compute.provider_tree [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:37:20 np0005601226 nova_compute[239456]: 2026-01-29 17:37:20.374 239460 DEBUG nova.scheduler.client.report [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:37:20 np0005601226 nova_compute[239456]: 2026-01-29 17:37:20.397 239460 DEBUG oslo_concurrency.lockutils [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.740s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:37:20 np0005601226 nova_compute[239456]: 2026-01-29 17:37:20.398 239460 DEBUG nova.compute.manager [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 29 12:37:20 np0005601226 nova_compute[239456]: 2026-01-29 17:37:20.446 239460 DEBUG nova.compute.manager [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 29 12:37:20 np0005601226 nova_compute[239456]: 2026-01-29 17:37:20.447 239460 DEBUG nova.network.neutron [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 29 12:37:20 np0005601226 nova_compute[239456]: 2026-01-29 17:37:20.465 239460 INFO nova.virt.libvirt.driver [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 29 12:37:20 np0005601226 nova_compute[239456]: 2026-01-29 17:37:20.484 239460 DEBUG nova.compute.manager [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 29 12:37:20 np0005601226 nova_compute[239456]: 2026-01-29 17:37:20.590 239460 DEBUG nova.compute.manager [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 29 12:37:20 np0005601226 nova_compute[239456]: 2026-01-29 17:37:20.592 239460 DEBUG nova.virt.libvirt.driver [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 29 12:37:20 np0005601226 nova_compute[239456]: 2026-01-29 17:37:20.592 239460 INFO nova.virt.libvirt.driver [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Creating image(s)#033[00m
Jan 29 12:37:20 np0005601226 nova_compute[239456]: 2026-01-29 17:37:20.624 239460 DEBUG nova.storage.rbd_utils [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] rbd image 56cf922f-31d1-4f48-8716-abdd2671978f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:37:20 np0005601226 nova_compute[239456]: 2026-01-29 17:37:20.660 239460 DEBUG nova.storage.rbd_utils [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] rbd image 56cf922f-31d1-4f48-8716-abdd2671978f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:37:20 np0005601226 nova_compute[239456]: 2026-01-29 17:37:20.694 239460 DEBUG nova.storage.rbd_utils [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] rbd image 56cf922f-31d1-4f48-8716-abdd2671978f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:37:20 np0005601226 nova_compute[239456]: 2026-01-29 17:37:20.698 239460 DEBUG oslo_concurrency.processutils [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/619359f3f53a439a222a6a2a89408201f4394e5d --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:37:20 np0005601226 nova_compute[239456]: 2026-01-29 17:37:20.716 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:37:20 np0005601226 nova_compute[239456]: 2026-01-29 17:37:20.759 239460 DEBUG oslo_concurrency.processutils [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/619359f3f53a439a222a6a2a89408201f4394e5d --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:37:20 np0005601226 nova_compute[239456]: 2026-01-29 17:37:20.760 239460 DEBUG oslo_concurrency.lockutils [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Acquiring lock "619359f3f53a439a222a6a2a89408201f4394e5d" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:37:20 np0005601226 nova_compute[239456]: 2026-01-29 17:37:20.760 239460 DEBUG oslo_concurrency.lockutils [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Lock "619359f3f53a439a222a6a2a89408201f4394e5d" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:37:20 np0005601226 nova_compute[239456]: 2026-01-29 17:37:20.761 239460 DEBUG oslo_concurrency.lockutils [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Lock "619359f3f53a439a222a6a2a89408201f4394e5d" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:37:20 np0005601226 nova_compute[239456]: 2026-01-29 17:37:20.785 239460 DEBUG nova.storage.rbd_utils [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] rbd image 56cf922f-31d1-4f48-8716-abdd2671978f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:37:20 np0005601226 nova_compute[239456]: 2026-01-29 17:37:20.789 239460 DEBUG oslo_concurrency.processutils [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/619359f3f53a439a222a6a2a89408201f4394e5d 56cf922f-31d1-4f48-8716-abdd2671978f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:37:21 np0005601226 nova_compute[239456]: 2026-01-29 17:37:21.129 239460 DEBUG oslo_concurrency.processutils [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/619359f3f53a439a222a6a2a89408201f4394e5d 56cf922f-31d1-4f48-8716-abdd2671978f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.339s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:37:21 np0005601226 nova_compute[239456]: 2026-01-29 17:37:21.203 239460 DEBUG nova.storage.rbd_utils [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] resizing rbd image 56cf922f-31d1-4f48-8716-abdd2671978f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 29 12:37:21 np0005601226 nova_compute[239456]: 2026-01-29 17:37:21.245 239460 DEBUG nova.policy [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a90a68eb18ea403bba234ab459af3366', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '33d35fb946054d9db9235dbdd0d016df', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 29 12:37:21 np0005601226 nova_compute[239456]: 2026-01-29 17:37:21.311 239460 DEBUG nova.objects.instance [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Lazy-loading 'migration_context' on Instance uuid 56cf922f-31d1-4f48-8716-abdd2671978f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:37:21 np0005601226 nova_compute[239456]: 2026-01-29 17:37:21.330 239460 DEBUG nova.virt.libvirt.driver [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 29 12:37:21 np0005601226 nova_compute[239456]: 2026-01-29 17:37:21.331 239460 DEBUG nova.virt.libvirt.driver [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Ensure instance console log exists: /var/lib/nova/instances/56cf922f-31d1-4f48-8716-abdd2671978f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 29 12:37:21 np0005601226 nova_compute[239456]: 2026-01-29 17:37:21.332 239460 DEBUG oslo_concurrency.lockutils [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:37:21 np0005601226 nova_compute[239456]: 2026-01-29 17:37:21.332 239460 DEBUG oslo_concurrency.lockutils [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:37:21 np0005601226 nova_compute[239456]: 2026-01-29 17:37:21.332 239460 DEBUG oslo_concurrency.lockutils [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:37:22 np0005601226 nova_compute[239456]: 2026-01-29 17:37:22.076 239460 DEBUG nova.network.neutron [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Successfully created port: 4e6145b0-826c-49b0-8b2a-28d655d14899 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 29 12:37:22 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1934: 305 pgs: 305 active+clean; 295 MiB data, 619 MiB used, 59 GiB / 60 GiB avail; 561 KiB/s rd, 95 KiB/s wr, 21 op/s
Jan 29 12:37:22 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e512 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:37:22 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:37:22 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1444501629' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:37:22 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:37:22 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1444501629' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:37:22 np0005601226 nova_compute[239456]: 2026-01-29 17:37:22.925 239460 DEBUG nova.network.neutron [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Successfully updated port: 4e6145b0-826c-49b0-8b2a-28d655d14899 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 29 12:37:22 np0005601226 nova_compute[239456]: 2026-01-29 17:37:22.940 239460 DEBUG oslo_concurrency.lockutils [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Acquiring lock "refresh_cache-56cf922f-31d1-4f48-8716-abdd2671978f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:37:22 np0005601226 nova_compute[239456]: 2026-01-29 17:37:22.940 239460 DEBUG oslo_concurrency.lockutils [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Acquired lock "refresh_cache-56cf922f-31d1-4f48-8716-abdd2671978f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:37:22 np0005601226 nova_compute[239456]: 2026-01-29 17:37:22.941 239460 DEBUG nova.network.neutron [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 29 12:37:22 np0005601226 nova_compute[239456]: 2026-01-29 17:37:22.968 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:37:23 np0005601226 nova_compute[239456]: 2026-01-29 17:37:23.024 239460 DEBUG nova.compute.manager [req-f8fb1240-fa20-4a96-ac8d-15d181499f2c req-39474824-9212-4043-b6eb-4c131972c105 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Received event network-changed-4e6145b0-826c-49b0-8b2a-28d655d14899 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:37:23 np0005601226 nova_compute[239456]: 2026-01-29 17:37:23.025 239460 DEBUG nova.compute.manager [req-f8fb1240-fa20-4a96-ac8d-15d181499f2c req-39474824-9212-4043-b6eb-4c131972c105 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Refreshing instance network info cache due to event network-changed-4e6145b0-826c-49b0-8b2a-28d655d14899. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 29 12:37:23 np0005601226 nova_compute[239456]: 2026-01-29 17:37:23.026 239460 DEBUG oslo_concurrency.lockutils [req-f8fb1240-fa20-4a96-ac8d-15d181499f2c req-39474824-9212-4043-b6eb-4c131972c105 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "refresh_cache-56cf922f-31d1-4f48-8716-abdd2671978f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:37:23 np0005601226 nova_compute[239456]: 2026-01-29 17:37:23.084 239460 DEBUG nova.network.neutron [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 29 12:37:24 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1935: 305 pgs: 305 active+clean; 309 MiB data, 616 MiB used, 59 GiB / 60 GiB avail; 596 KiB/s rd, 930 KiB/s wr, 68 op/s
Jan 29 12:37:24 np0005601226 nova_compute[239456]: 2026-01-29 17:37:24.168 239460 DEBUG nova.network.neutron [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Updating instance_info_cache with network_info: [{"id": "4e6145b0-826c-49b0-8b2a-28d655d14899", "address": "fa:16:3e:e8:15:aa", "network": {"id": "35a25c0c-d0e7-4163-9f2f-f825549dd56b", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-2027667864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33d35fb946054d9db9235dbdd0d016df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e6145b0-82", "ovs_interfaceid": "4e6145b0-826c-49b0-8b2a-28d655d14899", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:37:24 np0005601226 nova_compute[239456]: 2026-01-29 17:37:24.184 239460 DEBUG oslo_concurrency.lockutils [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Releasing lock "refresh_cache-56cf922f-31d1-4f48-8716-abdd2671978f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:37:24 np0005601226 nova_compute[239456]: 2026-01-29 17:37:24.185 239460 DEBUG nova.compute.manager [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Instance network_info: |[{"id": "4e6145b0-826c-49b0-8b2a-28d655d14899", "address": "fa:16:3e:e8:15:aa", "network": {"id": "35a25c0c-d0e7-4163-9f2f-f825549dd56b", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-2027667864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33d35fb946054d9db9235dbdd0d016df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e6145b0-82", "ovs_interfaceid": "4e6145b0-826c-49b0-8b2a-28d655d14899", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 29 12:37:24 np0005601226 nova_compute[239456]: 2026-01-29 17:37:24.185 239460 DEBUG oslo_concurrency.lockutils [req-f8fb1240-fa20-4a96-ac8d-15d181499f2c req-39474824-9212-4043-b6eb-4c131972c105 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquired lock "refresh_cache-56cf922f-31d1-4f48-8716-abdd2671978f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:37:24 np0005601226 nova_compute[239456]: 2026-01-29 17:37:24.186 239460 DEBUG nova.network.neutron [req-f8fb1240-fa20-4a96-ac8d-15d181499f2c req-39474824-9212-4043-b6eb-4c131972c105 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Refreshing network info cache for port 4e6145b0-826c-49b0-8b2a-28d655d14899 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 29 12:37:24 np0005601226 nova_compute[239456]: 2026-01-29 17:37:24.191 239460 DEBUG nova.virt.libvirt.driver [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Start _get_guest_xml network_info=[{"id": "4e6145b0-826c-49b0-8b2a-28d655d14899", "address": "fa:16:3e:e8:15:aa", "network": {"id": "35a25c0c-d0e7-4163-9f2f-f825549dd56b", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-2027667864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33d35fb946054d9db9235dbdd0d016df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e6145b0-82", "ovs_interfaceid": "4e6145b0-826c-49b0-8b2a-28d655d14899", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-29T17:13:26Z,direct_url=<?>,disk_format='qcow2',id=71879218-5462-43bb-aba6-6319695b24fd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='218d87653c0f4776a3f1900d36945229',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-29T17:13:29Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_options': None, 'encryption_secret_uuid': None, 'encryption_format': None, 'guest_format': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'boot_index': 0, 'encrypted': False, 'image_id': '71879218-5462-43bb-aba6-6319695b24fd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 29 12:37:24 np0005601226 nova_compute[239456]: 2026-01-29 17:37:24.197 239460 WARNING nova.virt.libvirt.driver [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 29 12:37:24 np0005601226 nova_compute[239456]: 2026-01-29 17:37:24.202 239460 DEBUG nova.virt.libvirt.host [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 29 12:37:24 np0005601226 nova_compute[239456]: 2026-01-29 17:37:24.203 239460 DEBUG nova.virt.libvirt.host [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 29 12:37:24 np0005601226 nova_compute[239456]: 2026-01-29 17:37:24.207 239460 DEBUG nova.virt.libvirt.host [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 29 12:37:24 np0005601226 nova_compute[239456]: 2026-01-29 17:37:24.207 239460 DEBUG nova.virt.libvirt.host [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 29 12:37:24 np0005601226 nova_compute[239456]: 2026-01-29 17:37:24.208 239460 DEBUG nova.virt.libvirt.driver [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 29 12:37:24 np0005601226 nova_compute[239456]: 2026-01-29 17:37:24.209 239460 DEBUG nova.virt.hardware [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-29T17:13:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='e367d44e-e23e-4b8e-90d1-56d09c8403b2',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-29T17:13:26Z,direct_url=<?>,disk_format='qcow2',id=71879218-5462-43bb-aba6-6319695b24fd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='218d87653c0f4776a3f1900d36945229',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-29T17:13:29Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 29 12:37:24 np0005601226 nova_compute[239456]: 2026-01-29 17:37:24.210 239460 DEBUG nova.virt.hardware [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 29 12:37:24 np0005601226 nova_compute[239456]: 2026-01-29 17:37:24.210 239460 DEBUG nova.virt.hardware [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 29 12:37:24 np0005601226 nova_compute[239456]: 2026-01-29 17:37:24.210 239460 DEBUG nova.virt.hardware [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 29 12:37:24 np0005601226 nova_compute[239456]: 2026-01-29 17:37:24.211 239460 DEBUG nova.virt.hardware [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 29 12:37:24 np0005601226 nova_compute[239456]: 2026-01-29 17:37:24.211 239460 DEBUG nova.virt.hardware [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 29 12:37:24 np0005601226 nova_compute[239456]: 2026-01-29 17:37:24.212 239460 DEBUG nova.virt.hardware [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 29 12:37:24 np0005601226 nova_compute[239456]: 2026-01-29 17:37:24.212 239460 DEBUG nova.virt.hardware [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 29 12:37:24 np0005601226 nova_compute[239456]: 2026-01-29 17:37:24.213 239460 DEBUG nova.virt.hardware [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 29 12:37:24 np0005601226 nova_compute[239456]: 2026-01-29 17:37:24.213 239460 DEBUG nova.virt.hardware [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 29 12:37:24 np0005601226 nova_compute[239456]: 2026-01-29 17:37:24.213 239460 DEBUG nova.virt.hardware [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 29 12:37:24 np0005601226 nova_compute[239456]: 2026-01-29 17:37:24.218 239460 DEBUG oslo_concurrency.processutils [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:37:24 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:37:24 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3024744382' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:37:24 np0005601226 nova_compute[239456]: 2026-01-29 17:37:24.805 239460 DEBUG oslo_concurrency.processutils [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:37:24 np0005601226 nova_compute[239456]: 2026-01-29 17:37:24.833 239460 DEBUG nova.storage.rbd_utils [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] rbd image 56cf922f-31d1-4f48-8716-abdd2671978f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:37:24 np0005601226 nova_compute[239456]: 2026-01-29 17:37:24.838 239460 DEBUG oslo_concurrency.processutils [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:37:25 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:37:25 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/83035893' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:37:25 np0005601226 nova_compute[239456]: 2026-01-29 17:37:25.359 239460 DEBUG oslo_concurrency.processutils [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:37:25 np0005601226 nova_compute[239456]: 2026-01-29 17:37:25.362 239460 DEBUG nova.virt.libvirt.vif [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-29T17:37:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SnapshotDataIntegrityTests-server-1005365471',display_name='tempest-SnapshotDataIntegrityTests-server-1005365471',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-snapshotdataintegritytests-server-1005365471',id=29,image_ref='71879218-5462-43bb-aba6-6319695b24fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLgMFMfIktUCCKxYvS5fnRhCqfW6HEpOoqw9YPS+GQOTbjTJO0kG7z43BrWxUwymnJBw2tIDGs6YXdt13jdNV8JUGkOTcJ0PN1w+6Dxdc2BghZn+xW+KepwYNzkwsLtcUw==',key_name='tempest-keypair-1811024843',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='33d35fb946054d9db9235dbdd0d016df',ramdisk_id='',reservation_id='r-e8092dzq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='71879218-5462-43bb-aba6-6319695b24fd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SnapshotDataIntegrityTests-564071566',owner_user_name='tempest-SnapshotDataIntegrityTests-564071566-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-29T17:37:20Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a90a68eb18ea403bba234ab459af3366',uuid=56cf922f-31d1-4f48-8716-abdd2671978f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4e6145b0-826c-49b0-8b2a-28d655d14899", "address": "fa:16:3e:e8:15:aa", "network": {"id": "35a25c0c-d0e7-4163-9f2f-f825549dd56b", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-2027667864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33d35fb946054d9db9235dbdd0d016df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e6145b0-82", "ovs_interfaceid": "4e6145b0-826c-49b0-8b2a-28d655d14899", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 29 12:37:25 np0005601226 nova_compute[239456]: 2026-01-29 17:37:25.362 239460 DEBUG nova.network.os_vif_util [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Converting VIF {"id": "4e6145b0-826c-49b0-8b2a-28d655d14899", "address": "fa:16:3e:e8:15:aa", "network": {"id": "35a25c0c-d0e7-4163-9f2f-f825549dd56b", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-2027667864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33d35fb946054d9db9235dbdd0d016df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e6145b0-82", "ovs_interfaceid": "4e6145b0-826c-49b0-8b2a-28d655d14899", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 29 12:37:25 np0005601226 nova_compute[239456]: 2026-01-29 17:37:25.364 239460 DEBUG nova.network.os_vif_util [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:15:aa,bridge_name='br-int',has_traffic_filtering=True,id=4e6145b0-826c-49b0-8b2a-28d655d14899,network=Network(35a25c0c-d0e7-4163-9f2f-f825549dd56b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e6145b0-82') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 29 12:37:25 np0005601226 nova_compute[239456]: 2026-01-29 17:37:25.365 239460 DEBUG nova.objects.instance [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Lazy-loading 'pci_devices' on Instance uuid 56cf922f-31d1-4f48-8716-abdd2671978f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:37:25 np0005601226 nova_compute[239456]: 2026-01-29 17:37:25.390 239460 DEBUG nova.virt.libvirt.driver [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] End _get_guest_xml xml=<domain type="kvm">
Jan 29 12:37:25 np0005601226 nova_compute[239456]:  <uuid>56cf922f-31d1-4f48-8716-abdd2671978f</uuid>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:  <name>instance-0000001d</name>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:  <memory>131072</memory>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:  <vcpu>1</vcpu>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:  <metadata>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 29 12:37:25 np0005601226 nova_compute[239456]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:      <nova:name>tempest-SnapshotDataIntegrityTests-server-1005365471</nova:name>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:      <nova:creationTime>2026-01-29 17:37:24</nova:creationTime>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:      <nova:flavor name="m1.nano">
Jan 29 12:37:25 np0005601226 nova_compute[239456]:        <nova:memory>128</nova:memory>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:        <nova:disk>1</nova:disk>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:        <nova:swap>0</nova:swap>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:        <nova:ephemeral>0</nova:ephemeral>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:        <nova:vcpus>1</nova:vcpus>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:      </nova:flavor>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:      <nova:owner>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:        <nova:user uuid="a90a68eb18ea403bba234ab459af3366">tempest-SnapshotDataIntegrityTests-564071566-project-member</nova:user>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:        <nova:project uuid="33d35fb946054d9db9235dbdd0d016df">tempest-SnapshotDataIntegrityTests-564071566</nova:project>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:      </nova:owner>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:      <nova:root type="image" uuid="71879218-5462-43bb-aba6-6319695b24fd"/>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:      <nova:ports>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:        <nova:port uuid="4e6145b0-826c-49b0-8b2a-28d655d14899">
Jan 29 12:37:25 np0005601226 nova_compute[239456]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:        </nova:port>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:      </nova:ports>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:    </nova:instance>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:  </metadata>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:  <sysinfo type="smbios">
Jan 29 12:37:25 np0005601226 nova_compute[239456]:    <system>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:      <entry name="manufacturer">RDO</entry>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:      <entry name="product">OpenStack Compute</entry>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:      <entry name="serial">56cf922f-31d1-4f48-8716-abdd2671978f</entry>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:      <entry name="uuid">56cf922f-31d1-4f48-8716-abdd2671978f</entry>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:      <entry name="family">Virtual Machine</entry>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:    </system>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:  </sysinfo>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:  <os>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:    <boot dev="hd"/>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:    <smbios mode="sysinfo"/>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:  </os>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:  <features>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:    <acpi/>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:    <apic/>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:    <vmcoreinfo/>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:  </features>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:  <clock offset="utc">
Jan 29 12:37:25 np0005601226 nova_compute[239456]:    <timer name="pit" tickpolicy="delay"/>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:    <timer name="hpet" present="no"/>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:  </clock>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:  <cpu mode="host-model" match="exact">
Jan 29 12:37:25 np0005601226 nova_compute[239456]:    <topology sockets="1" cores="1" threads="1"/>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:  </cpu>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:  <devices>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:    <disk type="network" device="disk">
Jan 29 12:37:25 np0005601226 nova_compute[239456]:      <driver type="raw" cache="none"/>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:      <source protocol="rbd" name="vms/56cf922f-31d1-4f48-8716-abdd2671978f_disk">
Jan 29 12:37:25 np0005601226 nova_compute[239456]:        <host name="192.168.122.100" port="6789"/>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:      </source>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:      <auth username="openstack">
Jan 29 12:37:25 np0005601226 nova_compute[239456]:        <secret type="ceph" uuid="cc5c72e3-31e0-58b9-8731-456117d38f4a"/>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:      </auth>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:      <target dev="vda" bus="virtio"/>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:    </disk>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:    <disk type="network" device="cdrom">
Jan 29 12:37:25 np0005601226 nova_compute[239456]:      <driver type="raw" cache="none"/>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:      <source protocol="rbd" name="vms/56cf922f-31d1-4f48-8716-abdd2671978f_disk.config">
Jan 29 12:37:25 np0005601226 nova_compute[239456]:        <host name="192.168.122.100" port="6789"/>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:      </source>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:      <auth username="openstack">
Jan 29 12:37:25 np0005601226 nova_compute[239456]:        <secret type="ceph" uuid="cc5c72e3-31e0-58b9-8731-456117d38f4a"/>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:      </auth>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:      <target dev="sda" bus="sata"/>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:    </disk>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:    <interface type="ethernet">
Jan 29 12:37:25 np0005601226 nova_compute[239456]:      <mac address="fa:16:3e:e8:15:aa"/>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:      <model type="virtio"/>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:      <driver name="vhost" rx_queue_size="512"/>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:      <mtu size="1442"/>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:      <target dev="tap4e6145b0-82"/>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:    </interface>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:    <serial type="pty">
Jan 29 12:37:25 np0005601226 nova_compute[239456]:      <log file="/var/lib/nova/instances/56cf922f-31d1-4f48-8716-abdd2671978f/console.log" append="off"/>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:    </serial>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:    <video>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:      <model type="virtio"/>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:    </video>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:    <input type="tablet" bus="usb"/>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:    <rng model="virtio">
Jan 29 12:37:25 np0005601226 nova_compute[239456]:      <backend model="random">/dev/urandom</backend>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:    </rng>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root"/>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:    <controller type="pci" model="pcie-root-port"/>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:    <controller type="usb" index="0"/>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:    <memballoon model="virtio">
Jan 29 12:37:25 np0005601226 nova_compute[239456]:      <stats period="10"/>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:    </memballoon>
Jan 29 12:37:25 np0005601226 nova_compute[239456]:  </devices>
Jan 29 12:37:25 np0005601226 nova_compute[239456]: </domain>
Jan 29 12:37:25 np0005601226 nova_compute[239456]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 29 12:37:25 np0005601226 nova_compute[239456]: 2026-01-29 17:37:25.391 239460 DEBUG nova.compute.manager [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Preparing to wait for external event network-vif-plugged-4e6145b0-826c-49b0-8b2a-28d655d14899 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 29 12:37:25 np0005601226 nova_compute[239456]: 2026-01-29 17:37:25.392 239460 DEBUG oslo_concurrency.lockutils [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Acquiring lock "56cf922f-31d1-4f48-8716-abdd2671978f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:37:25 np0005601226 nova_compute[239456]: 2026-01-29 17:37:25.392 239460 DEBUG oslo_concurrency.lockutils [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Lock "56cf922f-31d1-4f48-8716-abdd2671978f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:37:25 np0005601226 nova_compute[239456]: 2026-01-29 17:37:25.393 239460 DEBUG oslo_concurrency.lockutils [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Lock "56cf922f-31d1-4f48-8716-abdd2671978f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:37:25 np0005601226 nova_compute[239456]: 2026-01-29 17:37:25.394 239460 DEBUG nova.virt.libvirt.vif [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-29T17:37:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SnapshotDataIntegrityTests-server-1005365471',display_name='tempest-SnapshotDataIntegrityTests-server-1005365471',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-snapshotdataintegritytests-server-1005365471',id=29,image_ref='71879218-5462-43bb-aba6-6319695b24fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLgMFMfIktUCCKxYvS5fnRhCqfW6HEpOoqw9YPS+GQOTbjTJO0kG7z43BrWxUwymnJBw2tIDGs6YXdt13jdNV8JUGkOTcJ0PN1w+6Dxdc2BghZn+xW+KepwYNzkwsLtcUw==',key_name='tempest-keypair-1811024843',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='33d35fb946054d9db9235dbdd0d016df',ramdisk_id='',reservation_id='r-e8092dzq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='71879218-5462-43bb-aba6-6319695b24fd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SnapshotDataIntegrityTests-564071566',owner_user_name='tempest-SnapshotDataIntegrityTests-564071566-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-29T17:37:20Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a90a68eb18ea403bba234ab459af3366',uuid=56cf922f-31d1-4f48-8716-abdd2671978f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4e6145b0-826c-49b0-8b2a-28d655d14899", "address": "fa:16:3e:e8:15:aa", "network": {"id": "35a25c0c-d0e7-4163-9f2f-f825549dd56b", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-2027667864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33d35fb946054d9db9235dbdd0d016df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e6145b0-82", "ovs_interfaceid": "4e6145b0-826c-49b0-8b2a-28d655d14899", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 29 12:37:25 np0005601226 nova_compute[239456]: 2026-01-29 17:37:25.394 239460 DEBUG nova.network.os_vif_util [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Converting VIF {"id": "4e6145b0-826c-49b0-8b2a-28d655d14899", "address": "fa:16:3e:e8:15:aa", "network": {"id": "35a25c0c-d0e7-4163-9f2f-f825549dd56b", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-2027667864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33d35fb946054d9db9235dbdd0d016df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e6145b0-82", "ovs_interfaceid": "4e6145b0-826c-49b0-8b2a-28d655d14899", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 29 12:37:25 np0005601226 nova_compute[239456]: 2026-01-29 17:37:25.395 239460 DEBUG nova.network.os_vif_util [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:15:aa,bridge_name='br-int',has_traffic_filtering=True,id=4e6145b0-826c-49b0-8b2a-28d655d14899,network=Network(35a25c0c-d0e7-4163-9f2f-f825549dd56b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e6145b0-82') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 29 12:37:25 np0005601226 nova_compute[239456]: 2026-01-29 17:37:25.396 239460 DEBUG os_vif [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:15:aa,bridge_name='br-int',has_traffic_filtering=True,id=4e6145b0-826c-49b0-8b2a-28d655d14899,network=Network(35a25c0c-d0e7-4163-9f2f-f825549dd56b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e6145b0-82') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 29 12:37:25 np0005601226 nova_compute[239456]: 2026-01-29 17:37:25.397 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:37:25 np0005601226 nova_compute[239456]: 2026-01-29 17:37:25.397 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:37:25 np0005601226 nova_compute[239456]: 2026-01-29 17:37:25.398 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 29 12:37:25 np0005601226 nova_compute[239456]: 2026-01-29 17:37:25.401 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:37:25 np0005601226 nova_compute[239456]: 2026-01-29 17:37:25.402 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4e6145b0-82, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:37:25 np0005601226 nova_compute[239456]: 2026-01-29 17:37:25.402 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4e6145b0-82, col_values=(('external_ids', {'iface-id': '4e6145b0-826c-49b0-8b2a-28d655d14899', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e8:15:aa', 'vm-uuid': '56cf922f-31d1-4f48-8716-abdd2671978f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:37:25 np0005601226 nova_compute[239456]: 2026-01-29 17:37:25.404 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:37:25 np0005601226 NetworkManager[49020]: <info>  [1769708245.4057] manager: (tap4e6145b0-82): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/142)
Jan 29 12:37:25 np0005601226 nova_compute[239456]: 2026-01-29 17:37:25.407 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 29 12:37:25 np0005601226 nova_compute[239456]: 2026-01-29 17:37:25.410 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:37:25 np0005601226 nova_compute[239456]: 2026-01-29 17:37:25.413 239460 INFO os_vif [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:15:aa,bridge_name='br-int',has_traffic_filtering=True,id=4e6145b0-826c-49b0-8b2a-28d655d14899,network=Network(35a25c0c-d0e7-4163-9f2f-f825549dd56b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e6145b0-82')#033[00m
Jan 29 12:37:25 np0005601226 nova_compute[239456]: 2026-01-29 17:37:25.485 239460 DEBUG nova.virt.libvirt.driver [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:37:25 np0005601226 nova_compute[239456]: 2026-01-29 17:37:25.485 239460 DEBUG nova.virt.libvirt.driver [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:37:25 np0005601226 nova_compute[239456]: 2026-01-29 17:37:25.486 239460 DEBUG nova.virt.libvirt.driver [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] No VIF found with MAC fa:16:3e:e8:15:aa, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 29 12:37:25 np0005601226 nova_compute[239456]: 2026-01-29 17:37:25.487 239460 INFO nova.virt.libvirt.driver [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Using config drive#033[00m
Jan 29 12:37:25 np0005601226 nova_compute[239456]: 2026-01-29 17:37:25.514 239460 DEBUG nova.storage.rbd_utils [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] rbd image 56cf922f-31d1-4f48-8716-abdd2671978f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:37:26 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1936: 305 pgs: 305 active+clean; 317 MiB data, 619 MiB used, 59 GiB / 60 GiB avail; 257 KiB/s rd, 1.8 MiB/s wr, 72 op/s
Jan 29 12:37:27 np0005601226 nova_compute[239456]: 2026-01-29 17:37:27.232 239460 INFO nova.virt.libvirt.driver [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Creating config drive at /var/lib/nova/instances/56cf922f-31d1-4f48-8716-abdd2671978f/disk.config#033[00m
Jan 29 12:37:27 np0005601226 nova_compute[239456]: 2026-01-29 17:37:27.237 239460 DEBUG oslo_concurrency.processutils [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/56cf922f-31d1-4f48-8716-abdd2671978f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkrtbu8i3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:37:27 np0005601226 nova_compute[239456]: 2026-01-29 17:37:27.364 239460 DEBUG oslo_concurrency.processutils [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/56cf922f-31d1-4f48-8716-abdd2671978f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkrtbu8i3" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:37:27 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e512 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:37:27 np0005601226 nova_compute[239456]: 2026-01-29 17:37:27.399 239460 DEBUG nova.storage.rbd_utils [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] rbd image 56cf922f-31d1-4f48-8716-abdd2671978f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 29 12:37:27 np0005601226 nova_compute[239456]: 2026-01-29 17:37:27.403 239460 DEBUG oslo_concurrency.processutils [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/56cf922f-31d1-4f48-8716-abdd2671978f/disk.config 56cf922f-31d1-4f48-8716-abdd2671978f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:37:27 np0005601226 nova_compute[239456]: 2026-01-29 17:37:27.631 239460 DEBUG oslo_concurrency.processutils [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/56cf922f-31d1-4f48-8716-abdd2671978f/disk.config 56cf922f-31d1-4f48-8716-abdd2671978f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.227s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:37:27 np0005601226 nova_compute[239456]: 2026-01-29 17:37:27.632 239460 INFO nova.virt.libvirt.driver [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Deleting local config drive /var/lib/nova/instances/56cf922f-31d1-4f48-8716-abdd2671978f/disk.config because it was imported into RBD.#033[00m
Jan 29 12:37:27 np0005601226 kernel: tap4e6145b0-82: entered promiscuous mode
Jan 29 12:37:27 np0005601226 NetworkManager[49020]: <info>  [1769708247.6846] manager: (tap4e6145b0-82): new Tun device (/org/freedesktop/NetworkManager/Devices/143)
Jan 29 12:37:27 np0005601226 nova_compute[239456]: 2026-01-29 17:37:27.684 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:37:27 np0005601226 ovn_controller[145556]: 2026-01-29T17:37:27Z|00267|binding|INFO|Claiming lport 4e6145b0-826c-49b0-8b2a-28d655d14899 for this chassis.
Jan 29 12:37:27 np0005601226 ovn_controller[145556]: 2026-01-29T17:37:27Z|00268|binding|INFO|4e6145b0-826c-49b0-8b2a-28d655d14899: Claiming fa:16:3e:e8:15:aa 10.100.0.6
Jan 29 12:37:27 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:37:27.694 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e8:15:aa 10.100.0.6'], port_security=['fa:16:3e:e8:15:aa 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '56cf922f-31d1-4f48-8716-abdd2671978f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-35a25c0c-d0e7-4163-9f2f-f825549dd56b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '33d35fb946054d9db9235dbdd0d016df', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7bca6414-cee1-409a-86e7-358a99d3081b 8e0ce9cf-0c46-4c00-a275-5a6d2fadcaed', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=41b160a0-bb2b-496f-b795-108b47495676, chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], logical_port=4e6145b0-826c-49b0-8b2a-28d655d14899) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:37:27 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:37:27.697 155625 INFO neutron.agent.ovn.metadata.agent [-] Port 4e6145b0-826c-49b0-8b2a-28d655d14899 in datapath 35a25c0c-d0e7-4163-9f2f-f825549dd56b bound to our chassis#033[00m
Jan 29 12:37:27 np0005601226 nova_compute[239456]: 2026-01-29 17:37:27.699 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:37:27 np0005601226 ovn_controller[145556]: 2026-01-29T17:37:27Z|00269|binding|INFO|Setting lport 4e6145b0-826c-49b0-8b2a-28d655d14899 ovn-installed in OVS
Jan 29 12:37:27 np0005601226 ovn_controller[145556]: 2026-01-29T17:37:27Z|00270|binding|INFO|Setting lport 4e6145b0-826c-49b0-8b2a-28d655d14899 up in Southbound
Jan 29 12:37:27 np0005601226 nova_compute[239456]: 2026-01-29 17:37:27.702 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:37:27 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:37:27.701 155625 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 35a25c0c-d0e7-4163-9f2f-f825549dd56b#033[00m
Jan 29 12:37:27 np0005601226 nova_compute[239456]: 2026-01-29 17:37:27.707 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:37:27 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:37:27.714 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[d236eb37-4abe-4361-994a-e687f098b12b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:37:27 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:37:27.715 155625 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap35a25c0c-d1 in ovnmeta-35a25c0c-d0e7-4163-9f2f-f825549dd56b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 29 12:37:27 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:37:27.717 246354 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap35a25c0c-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 29 12:37:27 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:37:27.717 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[597b2131-424f-45e1-9a68-279e0cb1f843]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:37:27 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:37:27.718 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[ffc8392d-9f33-4b98-9ff5-5a89a1a165a6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:37:27 np0005601226 systemd-machined[207561]: New machine qemu-29-instance-0000001d.
Jan 29 12:37:27 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:37:27.728 156164 DEBUG oslo.privsep.daemon [-] privsep: reply[8eb98a27-4870-40f1-9576-f51a6ccd34f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:37:27 np0005601226 systemd[1]: Started Virtual Machine qemu-29-instance-0000001d.
Jan 29 12:37:27 np0005601226 systemd-udevd[275974]: Network interface NamePolicy= disabled on kernel command line.
Jan 29 12:37:27 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:37:27.779 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[cad55454-d34e-4f91-9a43-193e1a02f973]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:37:27 np0005601226 NetworkManager[49020]: <info>  [1769708247.7861] device (tap4e6145b0-82): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 29 12:37:27 np0005601226 NetworkManager[49020]: <info>  [1769708247.7877] device (tap4e6145b0-82): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 29 12:37:27 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:37:27.814 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[43230fcb-7bfe-424a-b13c-c1eb7140e847]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:37:27 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:37:27.821 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[888a0096-faf0-42e1-92ea-f2ebd63f0536]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:37:27 np0005601226 NetworkManager[49020]: <info>  [1769708247.8230] manager: (tap35a25c0c-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/144)
Jan 29 12:37:27 np0005601226 systemd-udevd[275976]: Network interface NamePolicy= disabled on kernel command line.
Jan 29 12:37:27 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:37:27.850 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[f1f10809-f02c-48c2-ab1c-e8c540c2555e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:37:27 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:37:27.852 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[91d6a80d-f755-46d1-93de-b61fe12aef66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:37:27 np0005601226 NetworkManager[49020]: <info>  [1769708247.8756] device (tap35a25c0c-d0): carrier: link connected
Jan 29 12:37:27 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:37:27.881 246674 DEBUG oslo.privsep.daemon [-] privsep: reply[fa1a6f1c-b831-4e98-a03e-ee2e917a825f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:37:27 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:37:27.902 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[a806f88b-1d46-4022-af6d-2c6929fb1a33]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap35a25c0c-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b5:53:d7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 91], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 560137, 'reachable_time': 30812, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276004, 'error': None, 'target': 'ovnmeta-35a25c0c-d0e7-4163-9f2f-f825549dd56b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:37:27 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:37:27.921 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[75fd94a5-40fb-41a7-a563-d97345f03d4b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb5:53d7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 560137, 'tstamp': 560137}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276005, 'error': None, 'target': 'ovnmeta-35a25c0c-d0e7-4163-9f2f-f825549dd56b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:37:27 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:37:27.938 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[f7d19bb4-376c-432d-93e8-b948bc4eb4a5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap35a25c0c-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b5:53:d7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 91], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 560137, 'reachable_time': 30812, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 276006, 'error': None, 'target': 'ovnmeta-35a25c0c-d0e7-4163-9f2f-f825549dd56b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:37:27 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:37:27.966 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[b8e0742e-6e97-4df7-97f1-3892e2782763]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:37:27 np0005601226 nova_compute[239456]: 2026-01-29 17:37:27.970 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:37:28 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:37:28.012 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[472b3cfa-e671-4b94-b997-d377c471a623]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:37:28 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:37:28.014 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap35a25c0c-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:37:28 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:37:28.015 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 29 12:37:28 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:37:28.015 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap35a25c0c-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:37:28 np0005601226 nova_compute[239456]: 2026-01-29 17:37:28.017 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:37:28 np0005601226 NetworkManager[49020]: <info>  [1769708248.0190] manager: (tap35a25c0c-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/145)
Jan 29 12:37:28 np0005601226 kernel: tap35a25c0c-d0: entered promiscuous mode
Jan 29 12:37:28 np0005601226 nova_compute[239456]: 2026-01-29 17:37:28.021 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:37:28 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:37:28.022 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap35a25c0c-d0, col_values=(('external_ids', {'iface-id': 'f375cf42-7216-4fc1-882e-3f57ebe4ca51'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:37:28 np0005601226 nova_compute[239456]: 2026-01-29 17:37:28.023 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:37:28 np0005601226 ovn_controller[145556]: 2026-01-29T17:37:28Z|00271|binding|INFO|Releasing lport f375cf42-7216-4fc1-882e-3f57ebe4ca51 from this chassis (sb_readonly=0)
Jan 29 12:37:28 np0005601226 nova_compute[239456]: 2026-01-29 17:37:28.034 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:37:28 np0005601226 nova_compute[239456]: 2026-01-29 17:37:28.035 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:37:28 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:37:28.036 155625 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/35a25c0c-d0e7-4163-9f2f-f825549dd56b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/35a25c0c-d0e7-4163-9f2f-f825549dd56b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 29 12:37:28 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:37:28.037 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[d98387e3-f8d0-4ad9-aa68-2072c94bc130]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:37:28 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:37:28.038 155625 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 29 12:37:28 np0005601226 ovn_metadata_agent[155620]: global
Jan 29 12:37:28 np0005601226 ovn_metadata_agent[155620]:    log         /dev/log local0 debug
Jan 29 12:37:28 np0005601226 ovn_metadata_agent[155620]:    log-tag     haproxy-metadata-proxy-35a25c0c-d0e7-4163-9f2f-f825549dd56b
Jan 29 12:37:28 np0005601226 ovn_metadata_agent[155620]:    user        root
Jan 29 12:37:28 np0005601226 ovn_metadata_agent[155620]:    group       root
Jan 29 12:37:28 np0005601226 ovn_metadata_agent[155620]:    maxconn     1024
Jan 29 12:37:28 np0005601226 ovn_metadata_agent[155620]:    pidfile     /var/lib/neutron/external/pids/35a25c0c-d0e7-4163-9f2f-f825549dd56b.pid.haproxy
Jan 29 12:37:28 np0005601226 ovn_metadata_agent[155620]:    daemon
Jan 29 12:37:28 np0005601226 ovn_metadata_agent[155620]: 
Jan 29 12:37:28 np0005601226 ovn_metadata_agent[155620]: defaults
Jan 29 12:37:28 np0005601226 ovn_metadata_agent[155620]:    log global
Jan 29 12:37:28 np0005601226 ovn_metadata_agent[155620]:    mode http
Jan 29 12:37:28 np0005601226 ovn_metadata_agent[155620]:    option httplog
Jan 29 12:37:28 np0005601226 ovn_metadata_agent[155620]:    option dontlognull
Jan 29 12:37:28 np0005601226 ovn_metadata_agent[155620]:    option http-server-close
Jan 29 12:37:28 np0005601226 ovn_metadata_agent[155620]:    option forwardfor
Jan 29 12:37:28 np0005601226 ovn_metadata_agent[155620]:    retries                 3
Jan 29 12:37:28 np0005601226 ovn_metadata_agent[155620]:    timeout http-request    30s
Jan 29 12:37:28 np0005601226 ovn_metadata_agent[155620]:    timeout connect         30s
Jan 29 12:37:28 np0005601226 ovn_metadata_agent[155620]:    timeout client          32s
Jan 29 12:37:28 np0005601226 ovn_metadata_agent[155620]:    timeout server          32s
Jan 29 12:37:28 np0005601226 ovn_metadata_agent[155620]:    timeout http-keep-alive 30s
Jan 29 12:37:28 np0005601226 ovn_metadata_agent[155620]: 
Jan 29 12:37:28 np0005601226 ovn_metadata_agent[155620]: 
Jan 29 12:37:28 np0005601226 ovn_metadata_agent[155620]: listen listener
Jan 29 12:37:28 np0005601226 ovn_metadata_agent[155620]:    bind 169.254.169.254:80
Jan 29 12:37:28 np0005601226 ovn_metadata_agent[155620]:    server metadata /var/lib/neutron/metadata_proxy
Jan 29 12:37:28 np0005601226 ovn_metadata_agent[155620]:    http-request add-header X-OVN-Network-ID 35a25c0c-d0e7-4163-9f2f-f825549dd56b
Jan 29 12:37:28 np0005601226 ovn_metadata_agent[155620]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 29 12:37:28 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:37:28.038 155625 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-35a25c0c-d0e7-4163-9f2f-f825549dd56b', 'env', 'PROCESS_TAG=haproxy-35a25c0c-d0e7-4163-9f2f-f825549dd56b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/35a25c0c-d0e7-4163-9f2f-f825549dd56b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 29 12:37:28 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1937: 305 pgs: 305 active+clean; 317 MiB data, 619 MiB used, 59 GiB / 60 GiB avail; 260 KiB/s rd, 1.8 MiB/s wr, 76 op/s
Jan 29 12:37:28 np0005601226 nova_compute[239456]: 2026-01-29 17:37:28.182 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769708248.181705, 56cf922f-31d1-4f48-8716-abdd2671978f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:37:28 np0005601226 nova_compute[239456]: 2026-01-29 17:37:28.182 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] VM Started (Lifecycle Event)#033[00m
Jan 29 12:37:28 np0005601226 nova_compute[239456]: 2026-01-29 17:37:28.216 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:37:28 np0005601226 nova_compute[239456]: 2026-01-29 17:37:28.222 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769708248.1856146, 56cf922f-31d1-4f48-8716-abdd2671978f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:37:28 np0005601226 nova_compute[239456]: 2026-01-29 17:37:28.223 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] VM Paused (Lifecycle Event)#033[00m
Jan 29 12:37:28 np0005601226 nova_compute[239456]: 2026-01-29 17:37:28.263 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:37:28 np0005601226 nova_compute[239456]: 2026-01-29 17:37:28.268 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 29 12:37:28 np0005601226 nova_compute[239456]: 2026-01-29 17:37:28.281 239460 DEBUG nova.network.neutron [req-f8fb1240-fa20-4a96-ac8d-15d181499f2c req-39474824-9212-4043-b6eb-4c131972c105 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Updated VIF entry in instance network info cache for port 4e6145b0-826c-49b0-8b2a-28d655d14899. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 29 12:37:28 np0005601226 nova_compute[239456]: 2026-01-29 17:37:28.282 239460 DEBUG nova.network.neutron [req-f8fb1240-fa20-4a96-ac8d-15d181499f2c req-39474824-9212-4043-b6eb-4c131972c105 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Updating instance_info_cache with network_info: [{"id": "4e6145b0-826c-49b0-8b2a-28d655d14899", "address": "fa:16:3e:e8:15:aa", "network": {"id": "35a25c0c-d0e7-4163-9f2f-f825549dd56b", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-2027667864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33d35fb946054d9db9235dbdd0d016df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e6145b0-82", "ovs_interfaceid": "4e6145b0-826c-49b0-8b2a-28d655d14899", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:37:28 np0005601226 nova_compute[239456]: 2026-01-29 17:37:28.293 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 29 12:37:28 np0005601226 nova_compute[239456]: 2026-01-29 17:37:28.300 239460 DEBUG oslo_concurrency.lockutils [req-f8fb1240-fa20-4a96-ac8d-15d181499f2c req-39474824-9212-4043-b6eb-4c131972c105 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Releasing lock "refresh_cache-56cf922f-31d1-4f48-8716-abdd2671978f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:37:28 np0005601226 podman[276080]: 2026-01-29 17:37:28.413303809 +0000 UTC m=+0.089268824 container create f51cb92f55c4e9ff66a264937d81da593e1aa3e81c47b0f0afdbad38b341d720 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-35a25c0c-d0e7-4163-9f2f-f825549dd56b, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:37:28 np0005601226 podman[276080]: 2026-01-29 17:37:28.360563787 +0000 UTC m=+0.036528862 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 29 12:37:28 np0005601226 systemd[1]: Started libpod-conmon-f51cb92f55c4e9ff66a264937d81da593e1aa3e81c47b0f0afdbad38b341d720.scope.
Jan 29 12:37:28 np0005601226 nova_compute[239456]: 2026-01-29 17:37:28.467 239460 DEBUG nova.compute.manager [req-1b4d00cd-4368-49d9-8298-730577019ba6 req-f13b36a7-0146-46ec-af22-a9ad6ae677ba 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Received event network-vif-plugged-4e6145b0-826c-49b0-8b2a-28d655d14899 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:37:28 np0005601226 nova_compute[239456]: 2026-01-29 17:37:28.468 239460 DEBUG oslo_concurrency.lockutils [req-1b4d00cd-4368-49d9-8298-730577019ba6 req-f13b36a7-0146-46ec-af22-a9ad6ae677ba 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "56cf922f-31d1-4f48-8716-abdd2671978f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:37:28 np0005601226 nova_compute[239456]: 2026-01-29 17:37:28.469 239460 DEBUG oslo_concurrency.lockutils [req-1b4d00cd-4368-49d9-8298-730577019ba6 req-f13b36a7-0146-46ec-af22-a9ad6ae677ba 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "56cf922f-31d1-4f48-8716-abdd2671978f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:37:28 np0005601226 nova_compute[239456]: 2026-01-29 17:37:28.469 239460 DEBUG oslo_concurrency.lockutils [req-1b4d00cd-4368-49d9-8298-730577019ba6 req-f13b36a7-0146-46ec-af22-a9ad6ae677ba 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "56cf922f-31d1-4f48-8716-abdd2671978f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:37:28 np0005601226 nova_compute[239456]: 2026-01-29 17:37:28.470 239460 DEBUG nova.compute.manager [req-1b4d00cd-4368-49d9-8298-730577019ba6 req-f13b36a7-0146-46ec-af22-a9ad6ae677ba 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Processing event network-vif-plugged-4e6145b0-826c-49b0-8b2a-28d655d14899 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 29 12:37:28 np0005601226 nova_compute[239456]: 2026-01-29 17:37:28.471 239460 DEBUG nova.compute.manager [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 29 12:37:28 np0005601226 nova_compute[239456]: 2026-01-29 17:37:28.477 239460 DEBUG nova.virt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Emitting event <LifecycleEvent: 1769708248.4766388, 56cf922f-31d1-4f48-8716-abdd2671978f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:37:28 np0005601226 nova_compute[239456]: 2026-01-29 17:37:28.478 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] VM Resumed (Lifecycle Event)#033[00m
Jan 29 12:37:28 np0005601226 nova_compute[239456]: 2026-01-29 17:37:28.481 239460 DEBUG nova.virt.libvirt.driver [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 29 12:37:28 np0005601226 nova_compute[239456]: 2026-01-29 17:37:28.485 239460 INFO nova.virt.libvirt.driver [-] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Instance spawned successfully.#033[00m
Jan 29 12:37:28 np0005601226 nova_compute[239456]: 2026-01-29 17:37:28.486 239460 DEBUG nova.virt.libvirt.driver [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 29 12:37:28 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:37:28 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/025b717dad1888cbc207048d42fce5801a03eae296b1e4b533d98e0d030fcfe8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 29 12:37:28 np0005601226 nova_compute[239456]: 2026-01-29 17:37:28.502 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:37:28 np0005601226 nova_compute[239456]: 2026-01-29 17:37:28.512 239460 DEBUG nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 29 12:37:28 np0005601226 nova_compute[239456]: 2026-01-29 17:37:28.519 239460 DEBUG nova.virt.libvirt.driver [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:37:28 np0005601226 nova_compute[239456]: 2026-01-29 17:37:28.520 239460 DEBUG nova.virt.libvirt.driver [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:37:28 np0005601226 nova_compute[239456]: 2026-01-29 17:37:28.521 239460 DEBUG nova.virt.libvirt.driver [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:37:28 np0005601226 nova_compute[239456]: 2026-01-29 17:37:28.522 239460 DEBUG nova.virt.libvirt.driver [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:37:28 np0005601226 nova_compute[239456]: 2026-01-29 17:37:28.523 239460 DEBUG nova.virt.libvirt.driver [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:37:28 np0005601226 podman[276080]: 2026-01-29 17:37:28.523673403 +0000 UTC m=+0.199638398 container init f51cb92f55c4e9ff66a264937d81da593e1aa3e81c47b0f0afdbad38b341d720 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-35a25c0c-d0e7-4163-9f2f-f825549dd56b, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 29 12:37:28 np0005601226 nova_compute[239456]: 2026-01-29 17:37:28.524 239460 DEBUG nova.virt.libvirt.driver [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 29 12:37:28 np0005601226 podman[276080]: 2026-01-29 17:37:28.530486948 +0000 UTC m=+0.206451923 container start f51cb92f55c4e9ff66a264937d81da593e1aa3e81c47b0f0afdbad38b341d720 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-35a25c0c-d0e7-4163-9f2f-f825549dd56b, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 29 12:37:28 np0005601226 nova_compute[239456]: 2026-01-29 17:37:28.532 239460 INFO nova.compute.manager [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 29 12:37:28 np0005601226 neutron-haproxy-ovnmeta-35a25c0c-d0e7-4163-9f2f-f825549dd56b[276096]: [NOTICE]   (276100) : New worker (276102) forked
Jan 29 12:37:28 np0005601226 neutron-haproxy-ovnmeta-35a25c0c-d0e7-4163-9f2f-f825549dd56b[276096]: [NOTICE]   (276100) : Loading success.
Jan 29 12:37:28 np0005601226 nova_compute[239456]: 2026-01-29 17:37:28.588 239460 INFO nova.compute.manager [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Took 8.00 seconds to spawn the instance on the hypervisor.#033[00m
Jan 29 12:37:28 np0005601226 nova_compute[239456]: 2026-01-29 17:37:28.589 239460 DEBUG nova.compute.manager [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:37:28 np0005601226 nova_compute[239456]: 2026-01-29 17:37:28.681 239460 INFO nova.compute.manager [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Took 9.05 seconds to build instance.#033[00m
Jan 29 12:37:28 np0005601226 nova_compute[239456]: 2026-01-29 17:37:28.702 239460 DEBUG oslo_concurrency.lockutils [None req-8659abd6-0df4-4e08-b615-bb5fda003c97 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Lock "56cf922f-31d1-4f48-8716-abdd2671978f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.126s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:37:30 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1938: 305 pgs: 305 active+clean; 317 MiB data, 619 MiB used, 59 GiB / 60 GiB avail; 366 KiB/s rd, 1.8 MiB/s wr, 79 op/s
Jan 29 12:37:30 np0005601226 nova_compute[239456]: 2026-01-29 17:37:30.405 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:37:30 np0005601226 nova_compute[239456]: 2026-01-29 17:37:30.543 239460 DEBUG nova.compute.manager [req-004ff09d-a7af-4c91-9544-6b0d7e914a08 req-c0508aed-4fdd-415b-9b47-d1af9fc93354 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Received event network-vif-plugged-4e6145b0-826c-49b0-8b2a-28d655d14899 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:37:30 np0005601226 nova_compute[239456]: 2026-01-29 17:37:30.543 239460 DEBUG oslo_concurrency.lockutils [req-004ff09d-a7af-4c91-9544-6b0d7e914a08 req-c0508aed-4fdd-415b-9b47-d1af9fc93354 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "56cf922f-31d1-4f48-8716-abdd2671978f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:37:30 np0005601226 nova_compute[239456]: 2026-01-29 17:37:30.544 239460 DEBUG oslo_concurrency.lockutils [req-004ff09d-a7af-4c91-9544-6b0d7e914a08 req-c0508aed-4fdd-415b-9b47-d1af9fc93354 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "56cf922f-31d1-4f48-8716-abdd2671978f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:37:30 np0005601226 nova_compute[239456]: 2026-01-29 17:37:30.545 239460 DEBUG oslo_concurrency.lockutils [req-004ff09d-a7af-4c91-9544-6b0d7e914a08 req-c0508aed-4fdd-415b-9b47-d1af9fc93354 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "56cf922f-31d1-4f48-8716-abdd2671978f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:37:30 np0005601226 nova_compute[239456]: 2026-01-29 17:37:30.545 239460 DEBUG nova.compute.manager [req-004ff09d-a7af-4c91-9544-6b0d7e914a08 req-c0508aed-4fdd-415b-9b47-d1af9fc93354 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] No waiting events found dispatching network-vif-plugged-4e6145b0-826c-49b0-8b2a-28d655d14899 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 29 12:37:30 np0005601226 nova_compute[239456]: 2026-01-29 17:37:30.546 239460 WARNING nova.compute.manager [req-004ff09d-a7af-4c91-9544-6b0d7e914a08 req-c0508aed-4fdd-415b-9b47-d1af9fc93354 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Received unexpected event network-vif-plugged-4e6145b0-826c-49b0-8b2a-28d655d14899 for instance with vm_state active and task_state None.#033[00m
Jan 29 12:37:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:37:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1863694494' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:37:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:37:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1863694494' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:37:30 np0005601226 nova_compute[239456]: 2026-01-29 17:37:30.580 239460 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769708235.5787227, 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:37:30 np0005601226 nova_compute[239456]: 2026-01-29 17:37:30.580 239460 INFO nova.compute.manager [-] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] VM Stopped (Lifecycle Event)#033[00m
Jan 29 12:37:30 np0005601226 nova_compute[239456]: 2026-01-29 17:37:30.599 239460 DEBUG nova.compute.manager [None req-88288936-5eaf-4580-bc35-a21a5b257b4e - - - - - -] [instance: 59a43be4-0b8c-4ad3-abba-c60a6c6e9aae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:37:31 np0005601226 ovn_controller[145556]: 2026-01-29T17:37:31Z|00272|binding|INFO|Releasing lport f375cf42-7216-4fc1-882e-3f57ebe4ca51 from this chassis (sb_readonly=0)
Jan 29 12:37:31 np0005601226 nova_compute[239456]: 2026-01-29 17:37:31.641 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:37:31 np0005601226 ovn_controller[145556]: 2026-01-29T17:37:31Z|00273|binding|INFO|Releasing lport f375cf42-7216-4fc1-882e-3f57ebe4ca51 from this chassis (sb_readonly=0)
Jan 29 12:37:31 np0005601226 nova_compute[239456]: 2026-01-29 17:37:31.723 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:37:31 np0005601226 NetworkManager[49020]: <info>  [1769708251.9476] manager: (patch-provnet-87abb107-3ab5-4304-ac4c-e4e3c79e221f-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/146)
Jan 29 12:37:31 np0005601226 NetworkManager[49020]: <info>  [1769708251.9488] manager: (patch-br-int-to-provnet-87abb107-3ab5-4304-ac4c-e4e3c79e221f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/147)
Jan 29 12:37:31 np0005601226 nova_compute[239456]: 2026-01-29 17:37:31.946 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:37:32 np0005601226 nova_compute[239456]: 2026-01-29 17:37:32.010 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:37:32 np0005601226 ovn_controller[145556]: 2026-01-29T17:37:32Z|00274|binding|INFO|Releasing lport f375cf42-7216-4fc1-882e-3f57ebe4ca51 from this chassis (sb_readonly=0)
Jan 29 12:37:32 np0005601226 nova_compute[239456]: 2026-01-29 17:37:32.028 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:37:32 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1939: 305 pgs: 305 active+clean; 317 MiB data, 619 MiB used, 59 GiB / 60 GiB avail; 163 KiB/s rd, 1.8 MiB/s wr, 65 op/s
Jan 29 12:37:32 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e512 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:37:32 np0005601226 nova_compute[239456]: 2026-01-29 17:37:32.635 239460 DEBUG nova.compute.manager [req-574f9313-9d4b-46f2-80a7-e6980c635842 req-b67e47b0-87e0-4cbd-a13c-a7a8883072ce 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Received event network-changed-4e6145b0-826c-49b0-8b2a-28d655d14899 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:37:32 np0005601226 nova_compute[239456]: 2026-01-29 17:37:32.636 239460 DEBUG nova.compute.manager [req-574f9313-9d4b-46f2-80a7-e6980c635842 req-b67e47b0-87e0-4cbd-a13c-a7a8883072ce 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Refreshing instance network info cache due to event network-changed-4e6145b0-826c-49b0-8b2a-28d655d14899. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 29 12:37:32 np0005601226 nova_compute[239456]: 2026-01-29 17:37:32.637 239460 DEBUG oslo_concurrency.lockutils [req-574f9313-9d4b-46f2-80a7-e6980c635842 req-b67e47b0-87e0-4cbd-a13c-a7a8883072ce 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "refresh_cache-56cf922f-31d1-4f48-8716-abdd2671978f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:37:32 np0005601226 nova_compute[239456]: 2026-01-29 17:37:32.637 239460 DEBUG oslo_concurrency.lockutils [req-574f9313-9d4b-46f2-80a7-e6980c635842 req-b67e47b0-87e0-4cbd-a13c-a7a8883072ce 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquired lock "refresh_cache-56cf922f-31d1-4f48-8716-abdd2671978f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:37:32 np0005601226 nova_compute[239456]: 2026-01-29 17:37:32.637 239460 DEBUG nova.network.neutron [req-574f9313-9d4b-46f2-80a7-e6980c635842 req-b67e47b0-87e0-4cbd-a13c-a7a8883072ce 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Refreshing network info cache for port 4e6145b0-826c-49b0-8b2a-28d655d14899 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 29 12:37:33 np0005601226 nova_compute[239456]: 2026-01-29 17:37:33.018 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:37:33 np0005601226 nova_compute[239456]: 2026-01-29 17:37:33.637 239460 DEBUG nova.network.neutron [req-574f9313-9d4b-46f2-80a7-e6980c635842 req-b67e47b0-87e0-4cbd-a13c-a7a8883072ce 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Updated VIF entry in instance network info cache for port 4e6145b0-826c-49b0-8b2a-28d655d14899. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 29 12:37:33 np0005601226 nova_compute[239456]: 2026-01-29 17:37:33.638 239460 DEBUG nova.network.neutron [req-574f9313-9d4b-46f2-80a7-e6980c635842 req-b67e47b0-87e0-4cbd-a13c-a7a8883072ce 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Updating instance_info_cache with network_info: [{"id": "4e6145b0-826c-49b0-8b2a-28d655d14899", "address": "fa:16:3e:e8:15:aa", "network": {"id": "35a25c0c-d0e7-4163-9f2f-f825549dd56b", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-2027667864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33d35fb946054d9db9235dbdd0d016df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e6145b0-82", "ovs_interfaceid": "4e6145b0-826c-49b0-8b2a-28d655d14899", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:37:33 np0005601226 nova_compute[239456]: 2026-01-29 17:37:33.659 239460 DEBUG oslo_concurrency.lockutils [req-574f9313-9d4b-46f2-80a7-e6980c635842 req-b67e47b0-87e0-4cbd-a13c-a7a8883072ce 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Releasing lock "refresh_cache-56cf922f-31d1-4f48-8716-abdd2671978f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:37:34 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1940: 305 pgs: 305 active+clean; 317 MiB data, 619 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 98 op/s
Jan 29 12:37:34 np0005601226 nova_compute[239456]: 2026-01-29 17:37:34.603 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:37:34 np0005601226 nova_compute[239456]: 2026-01-29 17:37:34.604 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 29 12:37:35 np0005601226 nova_compute[239456]: 2026-01-29 17:37:35.408 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:37:35 np0005601226 ovn_controller[145556]: 2026-01-29T17:37:35Z|00275|binding|INFO|Releasing lport f375cf42-7216-4fc1-882e-3f57ebe4ca51 from this chassis (sb_readonly=0)
Jan 29 12:37:35 np0005601226 nova_compute[239456]: 2026-01-29 17:37:35.659 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:37:36 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1941: 305 pgs: 305 active+clean; 317 MiB data, 619 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 995 KiB/s wr, 80 op/s
Jan 29 12:37:37 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e512 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:37:37 np0005601226 nova_compute[239456]: 2026-01-29 17:37:37.600 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:37:38 np0005601226 nova_compute[239456]: 2026-01-29 17:37:38.056 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:37:38 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1942: 305 pgs: 305 active+clean; 317 MiB data, 619 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Jan 29 12:37:39 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 12:37:39 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 12:37:39 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 12:37:39 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 12:37:39 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 12:37:39 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:37:39 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 12:37:39 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 12:37:39 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 12:37:39 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 12:37:39 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 12:37:39 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 12:37:39 np0005601226 podman[276218]: 2026-01-29 17:37:39.306608927 +0000 UTC m=+0.079548640 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 29 12:37:39 np0005601226 podman[276219]: 2026-01-29 17:37:39.348675439 +0000 UTC m=+0.120203474 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 29 12:37:39 np0005601226 podman[276300]: 2026-01-29 17:37:39.591552329 +0000 UTC m=+0.049438312 container create 6c4300bf16d21c0d206fdbe9fe5e6ef366566eeec89b3784f5629331f02b549b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:37:39 np0005601226 nova_compute[239456]: 2026-01-29 17:37:39.603 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:37:39 np0005601226 nova_compute[239456]: 2026-01-29 17:37:39.626 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:37:39 np0005601226 nova_compute[239456]: 2026-01-29 17:37:39.627 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:37:39 np0005601226 nova_compute[239456]: 2026-01-29 17:37:39.627 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:37:39 np0005601226 nova_compute[239456]: 2026-01-29 17:37:39.627 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 29 12:37:39 np0005601226 nova_compute[239456]: 2026-01-29 17:37:39.628 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:37:39 np0005601226 systemd[1]: Started libpod-conmon-6c4300bf16d21c0d206fdbe9fe5e6ef366566eeec89b3784f5629331f02b549b.scope.
Jan 29 12:37:39 np0005601226 podman[276300]: 2026-01-29 17:37:39.569177972 +0000 UTC m=+0.027063925 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:37:39 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:37:39 np0005601226 podman[276300]: 2026-01-29 17:37:39.702329496 +0000 UTC m=+0.160215539 container init 6c4300bf16d21c0d206fdbe9fe5e6ef366566eeec89b3784f5629331f02b549b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_cray, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:37:39 np0005601226 podman[276300]: 2026-01-29 17:37:39.712089331 +0000 UTC m=+0.169975314 container start 6c4300bf16d21c0d206fdbe9fe5e6ef366566eeec89b3784f5629331f02b549b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_cray, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 29 12:37:39 np0005601226 podman[276300]: 2026-01-29 17:37:39.716630034 +0000 UTC m=+0.174516077 container attach 6c4300bf16d21c0d206fdbe9fe5e6ef366566eeec89b3784f5629331f02b549b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_cray, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:37:39 np0005601226 recursing_cray[276316]: 167 167
Jan 29 12:37:39 np0005601226 systemd[1]: libpod-6c4300bf16d21c0d206fdbe9fe5e6ef366566eeec89b3784f5629331f02b549b.scope: Deactivated successfully.
Jan 29 12:37:39 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 12:37:39 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:37:39 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 12:37:39 np0005601226 podman[276300]: 2026-01-29 17:37:39.722662458 +0000 UTC m=+0.180548431 container died 6c4300bf16d21c0d206fdbe9fe5e6ef366566eeec89b3784f5629331f02b549b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_cray, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:37:39 np0005601226 systemd[1]: var-lib-containers-storage-overlay-b65e1deedc68b3717691972e5bec0088a8c3705ef4d0f360b25736686b6b822e-merged.mount: Deactivated successfully.
Jan 29 12:37:39 np0005601226 podman[276300]: 2026-01-29 17:37:39.781172446 +0000 UTC m=+0.239058419 container remove 6c4300bf16d21c0d206fdbe9fe5e6ef366566eeec89b3784f5629331f02b549b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_cray, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:37:39 np0005601226 systemd[1]: libpod-conmon-6c4300bf16d21c0d206fdbe9fe5e6ef366566eeec89b3784f5629331f02b549b.scope: Deactivated successfully.
Jan 29 12:37:39 np0005601226 podman[276359]: 2026-01-29 17:37:39.96925268 +0000 UTC m=+0.058662783 container create 17c7b2a94f4eab2598a8da4fe79f2fb37fe4c929ea239239de16f5736cfeb78c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_jones, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:37:40 np0005601226 systemd[1]: Started libpod-conmon-17c7b2a94f4eab2598a8da4fe79f2fb37fe4c929ea239239de16f5736cfeb78c.scope.
Jan 29 12:37:40 np0005601226 podman[276359]: 2026-01-29 17:37:39.945067864 +0000 UTC m=+0.034478017 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:37:40 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:37:40 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ae1135da3aebbf60c3ceb601e97811c08a2043b5d312715d078d694a1853d48/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:37:40 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ae1135da3aebbf60c3ceb601e97811c08a2043b5d312715d078d694a1853d48/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:37:40 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ae1135da3aebbf60c3ceb601e97811c08a2043b5d312715d078d694a1853d48/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:37:40 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ae1135da3aebbf60c3ceb601e97811c08a2043b5d312715d078d694a1853d48/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:37:40 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ae1135da3aebbf60c3ceb601e97811c08a2043b5d312715d078d694a1853d48/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 12:37:40 np0005601226 podman[276359]: 2026-01-29 17:37:40.075114603 +0000 UTC m=+0.164524706 container init 17c7b2a94f4eab2598a8da4fe79f2fb37fe4c929ea239239de16f5736cfeb78c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_jones, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 12:37:40 np0005601226 podman[276359]: 2026-01-29 17:37:40.083545152 +0000 UTC m=+0.172955215 container start 17c7b2a94f4eab2598a8da4fe79f2fb37fe4c929ea239239de16f5736cfeb78c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_jones, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:37:40 np0005601226 podman[276359]: 2026-01-29 17:37:40.086702947 +0000 UTC m=+0.176113050 container attach 17c7b2a94f4eab2598a8da4fe79f2fb37fe4c929ea239239de16f5736cfeb78c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_jones, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 12:37:40 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1943: 305 pgs: 305 active+clean; 325 MiB data, 623 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 669 KiB/s wr, 77 op/s
Jan 29 12:37:40 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:37:40 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2271506654' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:37:40 np0005601226 nova_compute[239456]: 2026-01-29 17:37:40.243 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.614s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:37:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:37:40.300 155625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:37:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:37:40.300 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:37:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:37:40.301 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:37:40 np0005601226 nova_compute[239456]: 2026-01-29 17:37:40.330 239460 DEBUG nova.virt.libvirt.driver [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] skipping disk for instance-0000001d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 29 12:37:40 np0005601226 nova_compute[239456]: 2026-01-29 17:37:40.331 239460 DEBUG nova.virt.libvirt.driver [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] skipping disk for instance-0000001d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 29 12:37:40 np0005601226 nova_compute[239456]: 2026-01-29 17:37:40.457 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:37:40 np0005601226 ovn_controller[145556]: 2026-01-29T17:37:40Z|00076|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e8:15:aa 10.100.0.6
Jan 29 12:37:40 np0005601226 ovn_controller[145556]: 2026-01-29T17:37:40Z|00077|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e8:15:aa 10.100.0.6
Jan 29 12:37:40 np0005601226 nova_compute[239456]: 2026-01-29 17:37:40.543 239460 WARNING nova.virt.libvirt.driver [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 29 12:37:40 np0005601226 nova_compute[239456]: 2026-01-29 17:37:40.544 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3975MB free_disk=59.9672302370891GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 29 12:37:40 np0005601226 nova_compute[239456]: 2026-01-29 17:37:40.545 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:37:40 np0005601226 nova_compute[239456]: 2026-01-29 17:37:40.545 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:37:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:37:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:37:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:37:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:37:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:37:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:37:40 np0005601226 youthful_jones[276376]: --> passed data devices: 0 physical, 3 LVM
Jan 29 12:37:40 np0005601226 youthful_jones[276376]: --> All data devices are unavailable
Jan 29 12:37:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2026-01-29_17:37:40
Jan 29 12:37:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 12:37:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] do_upmap
Jan 29 12:37:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'cephfs.cephfs.data', '.rgw.root', 'backups', '.mgr', 'volumes', 'images', 'default.rgw.meta', 'vms', 'default.rgw.control', 'default.rgw.log']
Jan 29 12:37:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 12:37:40 np0005601226 systemd[1]: libpod-17c7b2a94f4eab2598a8da4fe79f2fb37fe4c929ea239239de16f5736cfeb78c.scope: Deactivated successfully.
Jan 29 12:37:40 np0005601226 podman[276359]: 2026-01-29 17:37:40.651513725 +0000 UTC m=+0.740923858 container died 17c7b2a94f4eab2598a8da4fe79f2fb37fe4c929ea239239de16f5736cfeb78c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:37:40 np0005601226 nova_compute[239456]: 2026-01-29 17:37:40.656 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Instance 56cf922f-31d1-4f48-8716-abdd2671978f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 29 12:37:40 np0005601226 nova_compute[239456]: 2026-01-29 17:37:40.656 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 29 12:37:40 np0005601226 nova_compute[239456]: 2026-01-29 17:37:40.657 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 29 12:37:40 np0005601226 nova_compute[239456]: 2026-01-29 17:37:40.692 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:37:40 np0005601226 systemd[1]: var-lib-containers-storage-overlay-8ae1135da3aebbf60c3ceb601e97811c08a2043b5d312715d078d694a1853d48-merged.mount: Deactivated successfully.
Jan 29 12:37:40 np0005601226 podman[276359]: 2026-01-29 17:37:40.856786596 +0000 UTC m=+0.946196689 container remove 17c7b2a94f4eab2598a8da4fe79f2fb37fe4c929ea239239de16f5736cfeb78c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_jones, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 29 12:37:40 np0005601226 systemd[1]: libpod-conmon-17c7b2a94f4eab2598a8da4fe79f2fb37fe4c929ea239239de16f5736cfeb78c.scope: Deactivated successfully.
Jan 29 12:37:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 12:37:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 12:37:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:37:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:37:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:37:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:37:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:37:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:37:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:37:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:37:41 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:37:41 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1563583063' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:37:41 np0005601226 nova_compute[239456]: 2026-01-29 17:37:41.275 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:37:41 np0005601226 nova_compute[239456]: 2026-01-29 17:37:41.282 239460 DEBUG nova.compute.provider_tree [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:37:41 np0005601226 nova_compute[239456]: 2026-01-29 17:37:41.301 239460 DEBUG nova.scheduler.client.report [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:37:41 np0005601226 nova_compute[239456]: 2026-01-29 17:37:41.323 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 29 12:37:41 np0005601226 nova_compute[239456]: 2026-01-29 17:37:41.324 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.779s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:37:41 np0005601226 podman[276494]: 2026-01-29 17:37:41.380686773 +0000 UTC m=+0.070676448 container create a5c49231e2dd21242eded59e191947faeaa29602eb167cd8e87268e9800d8ecd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_hellman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 29 12:37:41 np0005601226 systemd[1]: Started libpod-conmon-a5c49231e2dd21242eded59e191947faeaa29602eb167cd8e87268e9800d8ecd.scope.
Jan 29 12:37:41 np0005601226 podman[276494]: 2026-01-29 17:37:41.35327887 +0000 UTC m=+0.043268565 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:37:41 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:37:41 np0005601226 podman[276494]: 2026-01-29 17:37:41.462448942 +0000 UTC m=+0.152438667 container init a5c49231e2dd21242eded59e191947faeaa29602eb167cd8e87268e9800d8ecd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 29 12:37:41 np0005601226 podman[276494]: 2026-01-29 17:37:41.47048607 +0000 UTC m=+0.160475735 container start a5c49231e2dd21242eded59e191947faeaa29602eb167cd8e87268e9800d8ecd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:37:41 np0005601226 podman[276494]: 2026-01-29 17:37:41.474319155 +0000 UTC m=+0.164308820 container attach a5c49231e2dd21242eded59e191947faeaa29602eb167cd8e87268e9800d8ecd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_hellman, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 29 12:37:41 np0005601226 nice_hellman[276510]: 167 167
Jan 29 12:37:41 np0005601226 systemd[1]: libpod-a5c49231e2dd21242eded59e191947faeaa29602eb167cd8e87268e9800d8ecd.scope: Deactivated successfully.
Jan 29 12:37:41 np0005601226 podman[276494]: 2026-01-29 17:37:41.476166675 +0000 UTC m=+0.166156340 container died a5c49231e2dd21242eded59e191947faeaa29602eb167cd8e87268e9800d8ecd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_hellman, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:37:41 np0005601226 systemd[1]: var-lib-containers-storage-overlay-2413d5e695439424c3bc52003a5754757989b36f62bf37384853f1066ad57a08-merged.mount: Deactivated successfully.
Jan 29 12:37:41 np0005601226 podman[276494]: 2026-01-29 17:37:41.527284773 +0000 UTC m=+0.217274438 container remove a5c49231e2dd21242eded59e191947faeaa29602eb167cd8e87268e9800d8ecd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_hellman, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:37:41 np0005601226 systemd[1]: libpod-conmon-a5c49231e2dd21242eded59e191947faeaa29602eb167cd8e87268e9800d8ecd.scope: Deactivated successfully.
Jan 29 12:37:41 np0005601226 podman[276534]: 2026-01-29 17:37:41.706396884 +0000 UTC m=+0.056355402 container create 35c5530ef336a5746387c2d8bb31ea0eda42d256f694cb74ca76c432b558ff18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 29 12:37:41 np0005601226 systemd[1]: Started libpod-conmon-35c5530ef336a5746387c2d8bb31ea0eda42d256f694cb74ca76c432b558ff18.scope.
Jan 29 12:37:41 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:37:41 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f6d761ddf0ddeee69ce6f6de5f6a071d9d382dc8858a2139e220d63b73d5f5e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:37:41 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f6d761ddf0ddeee69ce6f6de5f6a071d9d382dc8858a2139e220d63b73d5f5e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:37:41 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f6d761ddf0ddeee69ce6f6de5f6a071d9d382dc8858a2139e220d63b73d5f5e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:37:41 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f6d761ddf0ddeee69ce6f6de5f6a071d9d382dc8858a2139e220d63b73d5f5e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:37:41 np0005601226 podman[276534]: 2026-01-29 17:37:41.681824926 +0000 UTC m=+0.031783494 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:37:41 np0005601226 podman[276534]: 2026-01-29 17:37:41.793148388 +0000 UTC m=+0.143106976 container init 35c5530ef336a5746387c2d8bb31ea0eda42d256f694cb74ca76c432b558ff18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:37:41 np0005601226 podman[276534]: 2026-01-29 17:37:41.808043101 +0000 UTC m=+0.158001609 container start 35c5530ef336a5746387c2d8bb31ea0eda42d256f694cb74ca76c432b558ff18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:37:41 np0005601226 podman[276534]: 2026-01-29 17:37:41.814537458 +0000 UTC m=+0.164495956 container attach 35c5530ef336a5746387c2d8bb31ea0eda42d256f694cb74ca76c432b558ff18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_fermi, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]: {
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:    "0": [
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:        {
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:            "devices": [
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:                "/dev/loop3"
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:            ],
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:            "lv_name": "ceph_lv0",
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:            "lv_size": "21470642176",
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:            "lv_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:            "name": "ceph_lv0",
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:            "tags": {
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:                "ceph.block_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:                "ceph.cluster_name": "ceph",
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:                "ceph.crush_device_class": "",
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:                "ceph.encrypted": "0",
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:                "ceph.objectstore": "bluestore",
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:                "ceph.osd_fsid": "0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1",
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:                "ceph.osd_id": "0",
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:                "ceph.type": "block",
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:                "ceph.vdo": "0",
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:                "ceph.with_tpm": "0"
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:            },
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:            "type": "block",
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:            "vg_name": "ceph_vg0"
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:        }
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:    ],
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:    "1": [
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:        {
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:            "devices": [
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:                "/dev/loop4"
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:            ],
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:            "lv_name": "ceph_lv1",
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:            "lv_size": "21470642176",
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b59b9ee3-7bef-4274-a8bf-0f9cce011ae7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:            "lv_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:            "name": "ceph_lv1",
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:            "tags": {
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:                "ceph.block_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:                "ceph.cluster_name": "ceph",
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:                "ceph.crush_device_class": "",
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:                "ceph.encrypted": "0",
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:                "ceph.objectstore": "bluestore",
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:                "ceph.osd_fsid": "b59b9ee3-7bef-4274-a8bf-0f9cce011ae7",
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:                "ceph.osd_id": "1",
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:                "ceph.type": "block",
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:                "ceph.vdo": "0",
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:                "ceph.with_tpm": "0"
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:            },
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:            "type": "block",
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:            "vg_name": "ceph_vg1"
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:        }
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:    ],
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:    "2": [
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:        {
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:            "devices": [
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:                "/dev/loop5"
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:            ],
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:            "lv_name": "ceph_lv2",
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:            "lv_size": "21470642176",
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=791d3808-828d-4f85-a3de-28df49f6a6ef,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:            "lv_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:            "name": "ceph_lv2",
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:            "tags": {
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:                "ceph.block_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:                "ceph.cluster_name": "ceph",
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:                "ceph.crush_device_class": "",
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:                "ceph.encrypted": "0",
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:                "ceph.objectstore": "bluestore",
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:                "ceph.osd_fsid": "791d3808-828d-4f85-a3de-28df49f6a6ef",
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:                "ceph.osd_id": "2",
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:                "ceph.type": "block",
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:                "ceph.vdo": "0",
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:                "ceph.with_tpm": "0"
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:            },
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:            "type": "block",
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:            "vg_name": "ceph_vg2"
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:        }
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]:    ]
Jan 29 12:37:42 np0005601226 affectionate_fermi[276551]: }
Jan 29 12:37:42 np0005601226 systemd[1]: libpod-35c5530ef336a5746387c2d8bb31ea0eda42d256f694cb74ca76c432b558ff18.scope: Deactivated successfully.
Jan 29 12:37:42 np0005601226 conmon[276551]: conmon 35c5530ef336a5746387 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-35c5530ef336a5746387c2d8bb31ea0eda42d256f694cb74ca76c432b558ff18.scope/container/memory.events
Jan 29 12:37:42 np0005601226 podman[276534]: 2026-01-29 17:37:42.128135828 +0000 UTC m=+0.478094336 container died 35c5530ef336a5746387c2d8bb31ea0eda42d256f694cb74ca76c432b558ff18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_fermi, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030)
Jan 29 12:37:42 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1944: 305 pgs: 305 active+clean; 325 MiB data, 623 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 655 KiB/s wr, 70 op/s
Jan 29 12:37:42 np0005601226 systemd[1]: var-lib-containers-storage-overlay-1f6d761ddf0ddeee69ce6f6de5f6a071d9d382dc8858a2139e220d63b73d5f5e-merged.mount: Deactivated successfully.
Jan 29 12:37:42 np0005601226 podman[276534]: 2026-01-29 17:37:42.181944768 +0000 UTC m=+0.531903286 container remove 35c5530ef336a5746387c2d8bb31ea0eda42d256f694cb74ca76c432b558ff18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_fermi, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:37:42 np0005601226 systemd[1]: libpod-conmon-35c5530ef336a5746387c2d8bb31ea0eda42d256f694cb74ca76c432b558ff18.scope: Deactivated successfully.
Jan 29 12:37:42 np0005601226 nova_compute[239456]: 2026-01-29 17:37:42.326 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:37:42 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e512 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:37:42 np0005601226 nova_compute[239456]: 2026-01-29 17:37:42.605 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:37:42 np0005601226 podman[276633]: 2026-01-29 17:37:42.660966808 +0000 UTC m=+0.062861436 container create 432a05653b3c5a08c074ab7aede87b4de8fac7012d9fc1c469fed3b45358b46a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_chaplygin, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 29 12:37:42 np0005601226 systemd[1]: Started libpod-conmon-432a05653b3c5a08c074ab7aede87b4de8fac7012d9fc1c469fed3b45358b46a.scope.
Jan 29 12:37:42 np0005601226 podman[276633]: 2026-01-29 17:37:42.624966042 +0000 UTC m=+0.026860700 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:37:42 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:37:42 np0005601226 podman[276633]: 2026-01-29 17:37:42.739457689 +0000 UTC m=+0.141352387 container init 432a05653b3c5a08c074ab7aede87b4de8fac7012d9fc1c469fed3b45358b46a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:37:42 np0005601226 podman[276633]: 2026-01-29 17:37:42.746581872 +0000 UTC m=+0.148476490 container start 432a05653b3c5a08c074ab7aede87b4de8fac7012d9fc1c469fed3b45358b46a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_chaplygin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:37:42 np0005601226 podman[276633]: 2026-01-29 17:37:42.749670186 +0000 UTC m=+0.151564844 container attach 432a05653b3c5a08c074ab7aede87b4de8fac7012d9fc1c469fed3b45358b46a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_chaplygin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 29 12:37:42 np0005601226 systemd[1]: libpod-432a05653b3c5a08c074ab7aede87b4de8fac7012d9fc1c469fed3b45358b46a.scope: Deactivated successfully.
Jan 29 12:37:42 np0005601226 sad_chaplygin[276650]: 167 167
Jan 29 12:37:42 np0005601226 conmon[276650]: conmon 432a05653b3c5a08c074 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-432a05653b3c5a08c074ab7aede87b4de8fac7012d9fc1c469fed3b45358b46a.scope/container/memory.events
Jan 29 12:37:42 np0005601226 podman[276633]: 2026-01-29 17:37:42.75317437 +0000 UTC m=+0.155069028 container died 432a05653b3c5a08c074ab7aede87b4de8fac7012d9fc1c469fed3b45358b46a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 29 12:37:42 np0005601226 systemd[1]: var-lib-containers-storage-overlay-2ba765e82d0b04d754c0a661d6ec5193c136a8683dcddcae337802fb48504fab-merged.mount: Deactivated successfully.
Jan 29 12:37:42 np0005601226 podman[276633]: 2026-01-29 17:37:42.798478071 +0000 UTC m=+0.200372719 container remove 432a05653b3c5a08c074ab7aede87b4de8fac7012d9fc1c469fed3b45358b46a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_chaplygin, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 29 12:37:42 np0005601226 systemd[1]: libpod-conmon-432a05653b3c5a08c074ab7aede87b4de8fac7012d9fc1c469fed3b45358b46a.scope: Deactivated successfully.
Jan 29 12:37:42 np0005601226 podman[276673]: 2026-01-29 17:37:42.96317477 +0000 UTC m=+0.056119974 container create f7aec7d9dcf918bebd1bdc8f6e63d979226845a818eb7febce3be9ab6f58943c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Jan 29 12:37:42 np0005601226 systemd[1]: Started libpod-conmon-f7aec7d9dcf918bebd1bdc8f6e63d979226845a818eb7febce3be9ab6f58943c.scope.
Jan 29 12:37:43 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:37:43 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e269deb9886b8a45511da7501533f910025659d440b61dfb898835cde4bf660a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:37:43 np0005601226 podman[276673]: 2026-01-29 17:37:42.93774648 +0000 UTC m=+0.030691684 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:37:43 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e269deb9886b8a45511da7501533f910025659d440b61dfb898835cde4bf660a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:37:43 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e269deb9886b8a45511da7501533f910025659d440b61dfb898835cde4bf660a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:37:43 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e269deb9886b8a45511da7501533f910025659d440b61dfb898835cde4bf660a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:37:43 np0005601226 podman[276673]: 2026-01-29 17:37:43.051447405 +0000 UTC m=+0.144392649 container init f7aec7d9dcf918bebd1bdc8f6e63d979226845a818eb7febce3be9ab6f58943c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_keller, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:37:43 np0005601226 podman[276673]: 2026-01-29 17:37:43.057628393 +0000 UTC m=+0.150573567 container start f7aec7d9dcf918bebd1bdc8f6e63d979226845a818eb7febce3be9ab6f58943c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_keller, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 29 12:37:43 np0005601226 podman[276673]: 2026-01-29 17:37:43.084979605 +0000 UTC m=+0.177924879 container attach f7aec7d9dcf918bebd1bdc8f6e63d979226845a818eb7febce3be9ab6f58943c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_keller, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 29 12:37:43 np0005601226 nova_compute[239456]: 2026-01-29 17:37:43.084 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:37:43 np0005601226 nova_compute[239456]: 2026-01-29 17:37:43.604 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:37:43 np0005601226 lvm[276767]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 12:37:43 np0005601226 lvm[276767]: VG ceph_vg0 finished
Jan 29 12:37:43 np0005601226 lvm[276769]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 12:37:43 np0005601226 lvm[276769]: VG ceph_vg1 finished
Jan 29 12:37:43 np0005601226 lvm[276771]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 12:37:43 np0005601226 lvm[276771]: VG ceph_vg2 finished
Jan 29 12:37:43 np0005601226 lvm[276772]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 12:37:43 np0005601226 lvm[276772]: VG ceph_vg0 finished
Jan 29 12:37:43 np0005601226 cranky_keller[276690]: {}
Jan 29 12:37:43 np0005601226 systemd[1]: libpod-f7aec7d9dcf918bebd1bdc8f6e63d979226845a818eb7febce3be9ab6f58943c.scope: Deactivated successfully.
Jan 29 12:37:43 np0005601226 systemd[1]: libpod-f7aec7d9dcf918bebd1bdc8f6e63d979226845a818eb7febce3be9ab6f58943c.scope: Consumed 1.003s CPU time.
Jan 29 12:37:43 np0005601226 podman[276673]: 2026-01-29 17:37:43.786013661 +0000 UTC m=+0.878958885 container died f7aec7d9dcf918bebd1bdc8f6e63d979226845a818eb7febce3be9ab6f58943c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_keller, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 29 12:37:43 np0005601226 systemd[1]: var-lib-containers-storage-overlay-e269deb9886b8a45511da7501533f910025659d440b61dfb898835cde4bf660a-merged.mount: Deactivated successfully.
Jan 29 12:37:43 np0005601226 podman[276673]: 2026-01-29 17:37:43.839769319 +0000 UTC m=+0.932714493 container remove f7aec7d9dcf918bebd1bdc8f6e63d979226845a818eb7febce3be9ab6f58943c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 29 12:37:43 np0005601226 systemd[1]: libpod-conmon-f7aec7d9dcf918bebd1bdc8f6e63d979226845a818eb7febce3be9ab6f58943c.scope: Deactivated successfully.
Jan 29 12:37:43 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 12:37:43 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:37:43 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 12:37:43 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:37:44 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1945: 305 pgs: 305 active+clean; 348 MiB data, 641 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 109 op/s
Jan 29 12:37:44 np0005601226 nova_compute[239456]: 2026-01-29 17:37:44.615 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:37:44 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:37:44 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:37:45 np0005601226 nova_compute[239456]: 2026-01-29 17:37:45.505 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:37:45 np0005601226 nova_compute[239456]: 2026-01-29 17:37:45.600 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:37:46 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1946: 305 pgs: 305 active+clean; 350 MiB data, 644 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.1 MiB/s wr, 93 op/s
Jan 29 12:37:47 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e512 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:37:47 np0005601226 nova_compute[239456]: 2026-01-29 17:37:47.603 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:37:47 np0005601226 nova_compute[239456]: 2026-01-29 17:37:47.604 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 29 12:37:47 np0005601226 nova_compute[239456]: 2026-01-29 17:37:47.604 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 29 12:37:48 np0005601226 nova_compute[239456]: 2026-01-29 17:37:48.086 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:37:48 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1947: 305 pgs: 305 active+clean; 350 MiB data, 645 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 29 12:37:48 np0005601226 nova_compute[239456]: 2026-01-29 17:37:48.251 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquiring lock "refresh_cache-56cf922f-31d1-4f48-8716-abdd2671978f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 29 12:37:48 np0005601226 nova_compute[239456]: 2026-01-29 17:37:48.252 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquired lock "refresh_cache-56cf922f-31d1-4f48-8716-abdd2671978f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 29 12:37:48 np0005601226 nova_compute[239456]: 2026-01-29 17:37:48.252 239460 DEBUG nova.network.neutron [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 29 12:37:48 np0005601226 nova_compute[239456]: 2026-01-29 17:37:48.253 239460 DEBUG nova.objects.instance [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 56cf922f-31d1-4f48-8716-abdd2671978f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:37:49 np0005601226 nova_compute[239456]: 2026-01-29 17:37:49.510 239460 DEBUG nova.network.neutron [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Updating instance_info_cache with network_info: [{"id": "4e6145b0-826c-49b0-8b2a-28d655d14899", "address": "fa:16:3e:e8:15:aa", "network": {"id": "35a25c0c-d0e7-4163-9f2f-f825549dd56b", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-2027667864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33d35fb946054d9db9235dbdd0d016df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e6145b0-82", "ovs_interfaceid": "4e6145b0-826c-49b0-8b2a-28d655d14899", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:37:49 np0005601226 nova_compute[239456]: 2026-01-29 17:37:49.531 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Releasing lock "refresh_cache-56cf922f-31d1-4f48-8716-abdd2671978f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 29 12:37:49 np0005601226 nova_compute[239456]: 2026-01-29 17:37:49.532 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 29 12:37:49 np0005601226 nova_compute[239456]: 2026-01-29 17:37:49.603 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:37:50 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1948: 305 pgs: 305 active+clean; 350 MiB data, 645 MiB used, 59 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 29 12:37:50 np0005601226 nova_compute[239456]: 2026-01-29 17:37:50.551 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:37:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 12:37:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:37:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 12:37:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:37:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007616511358137032 of space, bias 1.0, pg target 0.22849534074411096 quantized to 32 (current 32)
Jan 29 12:37:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:37:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002912962693616019 of space, bias 1.0, pg target 0.8738888080848057 quantized to 32 (current 32)
Jan 29 12:37:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:37:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 4.772749653223973e-06 of space, bias 1.0, pg target 0.0014318248959671919 quantized to 32 (current 32)
Jan 29 12:37:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:37:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006669384742175927 of space, bias 1.0, pg target 0.2000815422652778 quantized to 32 (current 32)
Jan 29 12:37:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:37:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4515169012893524e-06 of space, bias 4.0, pg target 0.0017418202815472229 quantized to 16 (current 16)
Jan 29 12:37:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:37:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:37:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:37:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 29 12:37:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:37:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 29 12:37:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:37:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:37:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:37:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 29 12:37:51 np0005601226 nova_compute[239456]: 2026-01-29 17:37:51.840 239460 DEBUG oslo_concurrency.lockutils [None req-d793fa01-cc28-44c3-948b-74431b040cf7 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Acquiring lock "56cf922f-31d1-4f48-8716-abdd2671978f" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:37:51 np0005601226 nova_compute[239456]: 2026-01-29 17:37:51.840 239460 DEBUG oslo_concurrency.lockutils [None req-d793fa01-cc28-44c3-948b-74431b040cf7 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Lock "56cf922f-31d1-4f48-8716-abdd2671978f" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:37:51 np0005601226 nova_compute[239456]: 2026-01-29 17:37:51.858 239460 DEBUG nova.objects.instance [None req-d793fa01-cc28-44c3-948b-74431b040cf7 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Lazy-loading 'flavor' on Instance uuid 56cf922f-31d1-4f48-8716-abdd2671978f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:37:51 np0005601226 nova_compute[239456]: 2026-01-29 17:37:51.901 239460 DEBUG oslo_concurrency.lockutils [None req-d793fa01-cc28-44c3-948b-74431b040cf7 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Lock "56cf922f-31d1-4f48-8716-abdd2671978f" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.061s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:37:52 np0005601226 nova_compute[239456]: 2026-01-29 17:37:52.070 239460 DEBUG oslo_concurrency.lockutils [None req-d793fa01-cc28-44c3-948b-74431b040cf7 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Acquiring lock "56cf922f-31d1-4f48-8716-abdd2671978f" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:37:52 np0005601226 nova_compute[239456]: 2026-01-29 17:37:52.070 239460 DEBUG oslo_concurrency.lockutils [None req-d793fa01-cc28-44c3-948b-74431b040cf7 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Lock "56cf922f-31d1-4f48-8716-abdd2671978f" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:37:52 np0005601226 nova_compute[239456]: 2026-01-29 17:37:52.071 239460 INFO nova.compute.manager [None req-d793fa01-cc28-44c3-948b-74431b040cf7 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Attaching volume e61c6d73-2ede-4f31-9ede-1a3152b961fb to /dev/vdb#033[00m
Jan 29 12:37:52 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1949: 305 pgs: 305 active+clean; 350 MiB data, 645 MiB used, 59 GiB / 60 GiB avail; 309 KiB/s rd, 1.5 MiB/s wr, 57 op/s
Jan 29 12:37:52 np0005601226 nova_compute[239456]: 2026-01-29 17:37:52.213 239460 DEBUG os_brick.utils [None req-d793fa01-cc28-44c3-948b-74431b040cf7 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 29 12:37:52 np0005601226 nova_compute[239456]: 2026-01-29 17:37:52.215 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:37:52 np0005601226 nova_compute[239456]: 2026-01-29 17:37:52.225 249612 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:37:52 np0005601226 nova_compute[239456]: 2026-01-29 17:37:52.226 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[bc29155c-ebec-4102-82c4-653d6457701b]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:37:52 np0005601226 nova_compute[239456]: 2026-01-29 17:37:52.227 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:37:52 np0005601226 nova_compute[239456]: 2026-01-29 17:37:52.232 249612 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:37:52 np0005601226 nova_compute[239456]: 2026-01-29 17:37:52.233 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[ac92191b-c05b-4477-9d33-2c52560f093e]: (4, ('InitiatorName=iqn.1994-05.com.redhat:29fa340538f', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:37:52 np0005601226 nova_compute[239456]: 2026-01-29 17:37:52.234 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:37:52 np0005601226 nova_compute[239456]: 2026-01-29 17:37:52.245 249612 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:37:52 np0005601226 nova_compute[239456]: 2026-01-29 17:37:52.245 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[c380bcb7-30fc-46f2-b07a-313d206a2812]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:37:52 np0005601226 nova_compute[239456]: 2026-01-29 17:37:52.247 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[d41c193b-0138-44be-ba07-5bcbff200527]: (4, '3d58286e-1b14-486e-8cad-0bdb2d2969c4') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:37:52 np0005601226 nova_compute[239456]: 2026-01-29 17:37:52.247 239460 DEBUG oslo_concurrency.processutils [None req-d793fa01-cc28-44c3-948b-74431b040cf7 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:37:52 np0005601226 nova_compute[239456]: 2026-01-29 17:37:52.274 239460 DEBUG oslo_concurrency.processutils [None req-d793fa01-cc28-44c3-948b-74431b040cf7 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] CMD "nvme version" returned: 0 in 0.027s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:37:52 np0005601226 nova_compute[239456]: 2026-01-29 17:37:52.278 239460 DEBUG os_brick.initiator.connectors.lightos [None req-d793fa01-cc28-44c3-948b-74431b040cf7 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 29 12:37:52 np0005601226 nova_compute[239456]: 2026-01-29 17:37:52.278 239460 DEBUG os_brick.initiator.connectors.lightos [None req-d793fa01-cc28-44c3-948b-74431b040cf7 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 29 12:37:52 np0005601226 nova_compute[239456]: 2026-01-29 17:37:52.279 239460 DEBUG os_brick.initiator.connectors.lightos [None req-d793fa01-cc28-44c3-948b-74431b040cf7 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 29 12:37:52 np0005601226 nova_compute[239456]: 2026-01-29 17:37:52.280 239460 DEBUG os_brick.utils [None req-d793fa01-cc28-44c3-948b-74431b040cf7 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] <== get_connector_properties: return (65ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:29fa340538f', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '3d58286e-1b14-486e-8cad-0bdb2d2969c4', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 29 12:37:52 np0005601226 nova_compute[239456]: 2026-01-29 17:37:52.280 239460 DEBUG nova.virt.block_device [None req-d793fa01-cc28-44c3-948b-74431b040cf7 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Updating existing volume attachment record: 7d690c25-b91d-4427-a381-03989fc51ac2 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 29 12:37:52 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e512 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:37:53 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:37:53 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2241999399' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:37:53 np0005601226 nova_compute[239456]: 2026-01-29 17:37:53.126 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:37:53 np0005601226 nova_compute[239456]: 2026-01-29 17:37:53.168 239460 DEBUG nova.objects.instance [None req-d793fa01-cc28-44c3-948b-74431b040cf7 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Lazy-loading 'flavor' on Instance uuid 56cf922f-31d1-4f48-8716-abdd2671978f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:37:53 np0005601226 nova_compute[239456]: 2026-01-29 17:37:53.198 239460 DEBUG nova.virt.libvirt.driver [None req-d793fa01-cc28-44c3-948b-74431b040cf7 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Attempting to attach volume e61c6d73-2ede-4f31-9ede-1a3152b961fb with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Jan 29 12:37:53 np0005601226 nova_compute[239456]: 2026-01-29 17:37:53.204 239460 DEBUG nova.virt.libvirt.guest [None req-d793fa01-cc28-44c3-948b-74431b040cf7 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] attach device xml: <disk type="network" device="disk">
Jan 29 12:37:53 np0005601226 nova_compute[239456]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 29 12:37:53 np0005601226 nova_compute[239456]:  <source protocol="rbd" name="volumes/volume-e61c6d73-2ede-4f31-9ede-1a3152b961fb">
Jan 29 12:37:53 np0005601226 nova_compute[239456]:    <host name="192.168.122.100" port="6789"/>
Jan 29 12:37:53 np0005601226 nova_compute[239456]:  </source>
Jan 29 12:37:53 np0005601226 nova_compute[239456]:  <auth username="openstack">
Jan 29 12:37:53 np0005601226 nova_compute[239456]:    <secret type="ceph" uuid="cc5c72e3-31e0-58b9-8731-456117d38f4a"/>
Jan 29 12:37:53 np0005601226 nova_compute[239456]:  </auth>
Jan 29 12:37:53 np0005601226 nova_compute[239456]:  <target dev="vdb" bus="virtio"/>
Jan 29 12:37:53 np0005601226 nova_compute[239456]:  <serial>e61c6d73-2ede-4f31-9ede-1a3152b961fb</serial>
Jan 29 12:37:53 np0005601226 nova_compute[239456]: </disk>
Jan 29 12:37:53 np0005601226 nova_compute[239456]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Jan 29 12:37:53 np0005601226 nova_compute[239456]: 2026-01-29 17:37:53.325 239460 DEBUG nova.virt.libvirt.driver [None req-d793fa01-cc28-44c3-948b-74431b040cf7 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:37:53 np0005601226 nova_compute[239456]: 2026-01-29 17:37:53.326 239460 DEBUG nova.virt.libvirt.driver [None req-d793fa01-cc28-44c3-948b-74431b040cf7 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:37:53 np0005601226 nova_compute[239456]: 2026-01-29 17:37:53.326 239460 DEBUG nova.virt.libvirt.driver [None req-d793fa01-cc28-44c3-948b-74431b040cf7 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:37:53 np0005601226 nova_compute[239456]: 2026-01-29 17:37:53.326 239460 DEBUG nova.virt.libvirt.driver [None req-d793fa01-cc28-44c3-948b-74431b040cf7 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] No VIF found with MAC fa:16:3e:e8:15:aa, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 29 12:37:53 np0005601226 nova_compute[239456]: 2026-01-29 17:37:53.554 239460 DEBUG oslo_concurrency.lockutils [None req-d793fa01-cc28-44c3-948b-74431b040cf7 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Lock "56cf922f-31d1-4f48-8716-abdd2671978f" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.483s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:37:54 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1950: 305 pgs: 305 active+clean; 350 MiB data, 645 MiB used, 59 GiB / 60 GiB avail; 308 KiB/s rd, 1.5 MiB/s wr, 59 op/s
Jan 29 12:37:55 np0005601226 nova_compute[239456]: 2026-01-29 17:37:55.588 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:37:55 np0005601226 nova_compute[239456]: 2026-01-29 17:37:55.604 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:37:56 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1951: 305 pgs: 305 active+clean; 350 MiB data, 645 MiB used, 59 GiB / 60 GiB avail; 146 KiB/s rd, 44 KiB/s wr, 20 op/s
Jan 29 12:37:56 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e512 do_prune osdmap full prune enabled
Jan 29 12:37:56 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e513 e513: 3 total, 3 up, 3 in
Jan 29 12:37:56 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e513: 3 total, 3 up, 3 in
Jan 29 12:37:57 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e513 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:37:58 np0005601226 nova_compute[239456]: 2026-01-29 17:37:58.160 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:37:58 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1953: 305 pgs: 305 active+clean; 350 MiB data, 662 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 6.6 KiB/s wr, 14 op/s
Jan 29 12:37:58 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e513 do_prune osdmap full prune enabled
Jan 29 12:37:58 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e514 e514: 3 total, 3 up, 3 in
Jan 29 12:37:58 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e514: 3 total, 3 up, 3 in
Jan 29 12:38:00 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1955: 305 pgs: 305 active+clean; 352 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 128 KiB/s wr, 47 op/s
Jan 29 12:38:00 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e514 do_prune osdmap full prune enabled
Jan 29 12:38:00 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e515 e515: 3 total, 3 up, 3 in
Jan 29 12:38:00 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e515: 3 total, 3 up, 3 in
Jan 29 12:38:00 np0005601226 nova_compute[239456]: 2026-01-29 17:38:00.631 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:38:01 np0005601226 nova_compute[239456]: 2026-01-29 17:38:01.618 239460 DEBUG oslo_concurrency.lockutils [None req-6043b8c9-bf3c-403a-aa89-c9371f5366ae a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Acquiring lock "56cf922f-31d1-4f48-8716-abdd2671978f" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:38:01 np0005601226 nova_compute[239456]: 2026-01-29 17:38:01.619 239460 DEBUG oslo_concurrency.lockutils [None req-6043b8c9-bf3c-403a-aa89-c9371f5366ae a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Lock "56cf922f-31d1-4f48-8716-abdd2671978f" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:38:01 np0005601226 nova_compute[239456]: 2026-01-29 17:38:01.635 239460 INFO nova.compute.manager [None req-6043b8c9-bf3c-403a-aa89-c9371f5366ae a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Detaching volume e61c6d73-2ede-4f31-9ede-1a3152b961fb#033[00m
Jan 29 12:38:01 np0005601226 nova_compute[239456]: 2026-01-29 17:38:01.820 239460 INFO nova.virt.block_device [None req-6043b8c9-bf3c-403a-aa89-c9371f5366ae a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Attempting to driver detach volume e61c6d73-2ede-4f31-9ede-1a3152b961fb from mountpoint /dev/vdb#033[00m
Jan 29 12:38:01 np0005601226 nova_compute[239456]: 2026-01-29 17:38:01.831 239460 DEBUG nova.virt.libvirt.driver [None req-6043b8c9-bf3c-403a-aa89-c9371f5366ae a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Attempting to detach device vdb from instance 56cf922f-31d1-4f48-8716-abdd2671978f from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Jan 29 12:38:01 np0005601226 nova_compute[239456]: 2026-01-29 17:38:01.832 239460 DEBUG nova.virt.libvirt.guest [None req-6043b8c9-bf3c-403a-aa89-c9371f5366ae a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] detach device xml: <disk type="network" device="disk">
Jan 29 12:38:01 np0005601226 nova_compute[239456]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 29 12:38:01 np0005601226 nova_compute[239456]:  <source protocol="rbd" name="volumes/volume-e61c6d73-2ede-4f31-9ede-1a3152b961fb">
Jan 29 12:38:01 np0005601226 nova_compute[239456]:    <host name="192.168.122.100" port="6789"/>
Jan 29 12:38:01 np0005601226 nova_compute[239456]:  </source>
Jan 29 12:38:01 np0005601226 nova_compute[239456]:  <target dev="vdb" bus="virtio"/>
Jan 29 12:38:01 np0005601226 nova_compute[239456]:  <serial>e61c6d73-2ede-4f31-9ede-1a3152b961fb</serial>
Jan 29 12:38:01 np0005601226 nova_compute[239456]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 29 12:38:01 np0005601226 nova_compute[239456]: </disk>
Jan 29 12:38:01 np0005601226 nova_compute[239456]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 29 12:38:01 np0005601226 nova_compute[239456]: 2026-01-29 17:38:01.848 239460 INFO nova.virt.libvirt.driver [None req-6043b8c9-bf3c-403a-aa89-c9371f5366ae a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Successfully detached device vdb from instance 56cf922f-31d1-4f48-8716-abdd2671978f from the persistent domain config.#033[00m
Jan 29 12:38:01 np0005601226 nova_compute[239456]: 2026-01-29 17:38:01.849 239460 DEBUG nova.virt.libvirt.driver [None req-6043b8c9-bf3c-403a-aa89-c9371f5366ae a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 56cf922f-31d1-4f48-8716-abdd2671978f from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Jan 29 12:38:01 np0005601226 nova_compute[239456]: 2026-01-29 17:38:01.850 239460 DEBUG nova.virt.libvirt.guest [None req-6043b8c9-bf3c-403a-aa89-c9371f5366ae a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] detach device xml: <disk type="network" device="disk">
Jan 29 12:38:01 np0005601226 nova_compute[239456]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 29 12:38:01 np0005601226 nova_compute[239456]:  <source protocol="rbd" name="volumes/volume-e61c6d73-2ede-4f31-9ede-1a3152b961fb">
Jan 29 12:38:01 np0005601226 nova_compute[239456]:    <host name="192.168.122.100" port="6789"/>
Jan 29 12:38:01 np0005601226 nova_compute[239456]:  </source>
Jan 29 12:38:01 np0005601226 nova_compute[239456]:  <target dev="vdb" bus="virtio"/>
Jan 29 12:38:01 np0005601226 nova_compute[239456]:  <serial>e61c6d73-2ede-4f31-9ede-1a3152b961fb</serial>
Jan 29 12:38:01 np0005601226 nova_compute[239456]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 29 12:38:01 np0005601226 nova_compute[239456]: </disk>
Jan 29 12:38:01 np0005601226 nova_compute[239456]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 29 12:38:01 np0005601226 nova_compute[239456]: 2026-01-29 17:38:01.965 239460 DEBUG nova.virt.libvirt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Received event <DeviceRemovedEvent: 1769708281.9644258, 56cf922f-31d1-4f48-8716-abdd2671978f => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Jan 29 12:38:01 np0005601226 nova_compute[239456]: 2026-01-29 17:38:01.966 239460 DEBUG nova.virt.libvirt.driver [None req-6043b8c9-bf3c-403a-aa89-c9371f5366ae a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 56cf922f-31d1-4f48-8716-abdd2671978f _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Jan 29 12:38:01 np0005601226 nova_compute[239456]: 2026-01-29 17:38:01.970 239460 INFO nova.virt.libvirt.driver [None req-6043b8c9-bf3c-403a-aa89-c9371f5366ae a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Successfully detached device vdb from instance 56cf922f-31d1-4f48-8716-abdd2671978f from the live domain config.#033[00m
Jan 29 12:38:02 np0005601226 nova_compute[239456]: 2026-01-29 17:38:02.130 239460 DEBUG nova.objects.instance [None req-6043b8c9-bf3c-403a-aa89-c9371f5366ae a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Lazy-loading 'flavor' on Instance uuid 56cf922f-31d1-4f48-8716-abdd2671978f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:38:02 np0005601226 nova_compute[239456]: 2026-01-29 17:38:02.167 239460 DEBUG oslo_concurrency.lockutils [None req-6043b8c9-bf3c-403a-aa89-c9371f5366ae a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Lock "56cf922f-31d1-4f48-8716-abdd2671978f" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.547s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:38:02 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1957: 305 pgs: 305 active+clean; 352 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 51 KiB/s rd, 172 KiB/s wr, 57 op/s
Jan 29 12:38:02 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e515 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:38:03 np0005601226 nova_compute[239456]: 2026-01-29 17:38:03.164 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:38:04 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1958: 305 pgs: 305 active+clean; 353 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 77 KiB/s rd, 167 KiB/s wr, 89 op/s
Jan 29 12:38:04 np0005601226 nova_compute[239456]: 2026-01-29 17:38:04.603 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:38:04 np0005601226 nova_compute[239456]: 2026-01-29 17:38:04.604 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 29 12:38:04 np0005601226 nova_compute[239456]: 2026-01-29 17:38:04.627 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 29 12:38:04 np0005601226 nova_compute[239456]: 2026-01-29 17:38:04.904 239460 DEBUG oslo_concurrency.lockutils [None req-bc93ee5b-5ec4-4d38-8cba-3c8514d372b8 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Acquiring lock "56cf922f-31d1-4f48-8716-abdd2671978f" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:38:04 np0005601226 nova_compute[239456]: 2026-01-29 17:38:04.905 239460 DEBUG oslo_concurrency.lockutils [None req-bc93ee5b-5ec4-4d38-8cba-3c8514d372b8 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Lock "56cf922f-31d1-4f48-8716-abdd2671978f" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:38:04 np0005601226 nova_compute[239456]: 2026-01-29 17:38:04.926 239460 DEBUG nova.objects.instance [None req-bc93ee5b-5ec4-4d38-8cba-3c8514d372b8 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Lazy-loading 'flavor' on Instance uuid 56cf922f-31d1-4f48-8716-abdd2671978f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:38:04 np0005601226 nova_compute[239456]: 2026-01-29 17:38:04.972 239460 DEBUG oslo_concurrency.lockutils [None req-bc93ee5b-5ec4-4d38-8cba-3c8514d372b8 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Lock "56cf922f-31d1-4f48-8716-abdd2671978f" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.068s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:38:05 np0005601226 nova_compute[239456]: 2026-01-29 17:38:05.186 239460 DEBUG oslo_concurrency.lockutils [None req-bc93ee5b-5ec4-4d38-8cba-3c8514d372b8 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Acquiring lock "56cf922f-31d1-4f48-8716-abdd2671978f" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:38:05 np0005601226 nova_compute[239456]: 2026-01-29 17:38:05.186 239460 DEBUG oslo_concurrency.lockutils [None req-bc93ee5b-5ec4-4d38-8cba-3c8514d372b8 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Lock "56cf922f-31d1-4f48-8716-abdd2671978f" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:38:05 np0005601226 nova_compute[239456]: 2026-01-29 17:38:05.187 239460 INFO nova.compute.manager [None req-bc93ee5b-5ec4-4d38-8cba-3c8514d372b8 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Attaching volume c89d4692-8be7-49b4-8090-93dd9887d679 to /dev/vdb#033[00m
Jan 29 12:38:05 np0005601226 nova_compute[239456]: 2026-01-29 17:38:05.339 239460 DEBUG os_brick.utils [None req-bc93ee5b-5ec4-4d38-8cba-3c8514d372b8 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 29 12:38:05 np0005601226 nova_compute[239456]: 2026-01-29 17:38:05.340 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:38:05 np0005601226 nova_compute[239456]: 2026-01-29 17:38:05.349 249612 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:38:05 np0005601226 nova_compute[239456]: 2026-01-29 17:38:05.349 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[58720605-8873-4fff-869f-fc5b8ce5c3fc]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:38:05 np0005601226 nova_compute[239456]: 2026-01-29 17:38:05.350 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:38:05 np0005601226 nova_compute[239456]: 2026-01-29 17:38:05.358 249612 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:38:05 np0005601226 nova_compute[239456]: 2026-01-29 17:38:05.358 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[37f6a623-2b36-425d-af21-b633b2c9a89e]: (4, ('InitiatorName=iqn.1994-05.com.redhat:29fa340538f', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:38:05 np0005601226 nova_compute[239456]: 2026-01-29 17:38:05.359 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:38:05 np0005601226 nova_compute[239456]: 2026-01-29 17:38:05.366 249612 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:38:05 np0005601226 nova_compute[239456]: 2026-01-29 17:38:05.366 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[a869c675-7e4f-4c7c-af8f-5afa4e467123]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:38:05 np0005601226 nova_compute[239456]: 2026-01-29 17:38:05.368 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[52f76832-8e3c-498c-92cd-658b75acd752]: (4, '3d58286e-1b14-486e-8cad-0bdb2d2969c4') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:38:05 np0005601226 nova_compute[239456]: 2026-01-29 17:38:05.368 239460 DEBUG oslo_concurrency.processutils [None req-bc93ee5b-5ec4-4d38-8cba-3c8514d372b8 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:38:05 np0005601226 nova_compute[239456]: 2026-01-29 17:38:05.397 239460 DEBUG oslo_concurrency.processutils [None req-bc93ee5b-5ec4-4d38-8cba-3c8514d372b8 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] CMD "nvme version" returned: 0 in 0.028s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:38:05 np0005601226 nova_compute[239456]: 2026-01-29 17:38:05.400 239460 DEBUG os_brick.initiator.connectors.lightos [None req-bc93ee5b-5ec4-4d38-8cba-3c8514d372b8 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 29 12:38:05 np0005601226 nova_compute[239456]: 2026-01-29 17:38:05.401 239460 DEBUG os_brick.initiator.connectors.lightos [None req-bc93ee5b-5ec4-4d38-8cba-3c8514d372b8 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 29 12:38:05 np0005601226 nova_compute[239456]: 2026-01-29 17:38:05.401 239460 DEBUG os_brick.initiator.connectors.lightos [None req-bc93ee5b-5ec4-4d38-8cba-3c8514d372b8 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 29 12:38:05 np0005601226 nova_compute[239456]: 2026-01-29 17:38:05.402 239460 DEBUG os_brick.utils [None req-bc93ee5b-5ec4-4d38-8cba-3c8514d372b8 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] <== get_connector_properties: return (61ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:29fa340538f', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '3d58286e-1b14-486e-8cad-0bdb2d2969c4', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 29 12:38:05 np0005601226 nova_compute[239456]: 2026-01-29 17:38:05.403 239460 DEBUG nova.virt.block_device [None req-bc93ee5b-5ec4-4d38-8cba-3c8514d372b8 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Updating existing volume attachment record: 61b38ec5-4872-4628-9d05-8bcc78c16a8e _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 29 12:38:05 np0005601226 nova_compute[239456]: 2026-01-29 17:38:05.680 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:38:06 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:38:06 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2441124935' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:38:06 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1959: 305 pgs: 305 active+clean; 353 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 66 KiB/s rd, 156 KiB/s wr, 74 op/s
Jan 29 12:38:06 np0005601226 nova_compute[239456]: 2026-01-29 17:38:06.317 239460 DEBUG nova.objects.instance [None req-bc93ee5b-5ec4-4d38-8cba-3c8514d372b8 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Lazy-loading 'flavor' on Instance uuid 56cf922f-31d1-4f48-8716-abdd2671978f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:38:06 np0005601226 nova_compute[239456]: 2026-01-29 17:38:06.341 239460 DEBUG nova.virt.libvirt.driver [None req-bc93ee5b-5ec4-4d38-8cba-3c8514d372b8 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Attempting to attach volume c89d4692-8be7-49b4-8090-93dd9887d679 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Jan 29 12:38:06 np0005601226 nova_compute[239456]: 2026-01-29 17:38:06.344 239460 DEBUG nova.virt.libvirt.guest [None req-bc93ee5b-5ec4-4d38-8cba-3c8514d372b8 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] attach device xml: <disk type="network" device="disk">
Jan 29 12:38:06 np0005601226 nova_compute[239456]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 29 12:38:06 np0005601226 nova_compute[239456]:  <source protocol="rbd" name="volumes/volume-c89d4692-8be7-49b4-8090-93dd9887d679">
Jan 29 12:38:06 np0005601226 nova_compute[239456]:    <host name="192.168.122.100" port="6789"/>
Jan 29 12:38:06 np0005601226 nova_compute[239456]:  </source>
Jan 29 12:38:06 np0005601226 nova_compute[239456]:  <auth username="openstack">
Jan 29 12:38:06 np0005601226 nova_compute[239456]:    <secret type="ceph" uuid="cc5c72e3-31e0-58b9-8731-456117d38f4a"/>
Jan 29 12:38:06 np0005601226 nova_compute[239456]:  </auth>
Jan 29 12:38:06 np0005601226 nova_compute[239456]:  <target dev="vdb" bus="virtio"/>
Jan 29 12:38:06 np0005601226 nova_compute[239456]:  <serial>c89d4692-8be7-49b4-8090-93dd9887d679</serial>
Jan 29 12:38:06 np0005601226 nova_compute[239456]: </disk>
Jan 29 12:38:06 np0005601226 nova_compute[239456]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Jan 29 12:38:06 np0005601226 nova_compute[239456]: 2026-01-29 17:38:06.540 239460 DEBUG nova.virt.libvirt.driver [None req-bc93ee5b-5ec4-4d38-8cba-3c8514d372b8 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:38:06 np0005601226 nova_compute[239456]: 2026-01-29 17:38:06.541 239460 DEBUG nova.virt.libvirt.driver [None req-bc93ee5b-5ec4-4d38-8cba-3c8514d372b8 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:38:06 np0005601226 nova_compute[239456]: 2026-01-29 17:38:06.543 239460 DEBUG nova.virt.libvirt.driver [None req-bc93ee5b-5ec4-4d38-8cba-3c8514d372b8 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:38:06 np0005601226 nova_compute[239456]: 2026-01-29 17:38:06.544 239460 DEBUG nova.virt.libvirt.driver [None req-bc93ee5b-5ec4-4d38-8cba-3c8514d372b8 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] No VIF found with MAC fa:16:3e:e8:15:aa, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 29 12:38:06 np0005601226 ovn_controller[145556]: 2026-01-29T17:38:06Z|00276|memory_trim|INFO|Detected inactivity (last active 30020 ms ago): trimming memory
Jan 29 12:38:06 np0005601226 nova_compute[239456]: 2026-01-29 17:38:06.742 239460 DEBUG oslo_concurrency.lockutils [None req-bc93ee5b-5ec4-4d38-8cba-3c8514d372b8 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Lock "56cf922f-31d1-4f48-8716-abdd2671978f" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.555s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:38:07 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e515 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:38:08 np0005601226 nova_compute[239456]: 2026-01-29 17:38:08.165 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:38:08 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1960: 305 pgs: 305 active+clean; 353 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 58 KiB/s wr, 57 op/s
Jan 29 12:38:09 np0005601226 nova_compute[239456]: 2026-01-29 17:38:09.297 239460 DEBUG oslo_concurrency.lockutils [None req-8791722e-32e0-48e2-b092-7544beb12b20 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Acquiring lock "56cf922f-31d1-4f48-8716-abdd2671978f" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:38:09 np0005601226 nova_compute[239456]: 2026-01-29 17:38:09.297 239460 DEBUG oslo_concurrency.lockutils [None req-8791722e-32e0-48e2-b092-7544beb12b20 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Lock "56cf922f-31d1-4f48-8716-abdd2671978f" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:38:09 np0005601226 nova_compute[239456]: 2026-01-29 17:38:09.318 239460 INFO nova.compute.manager [None req-8791722e-32e0-48e2-b092-7544beb12b20 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Detaching volume c89d4692-8be7-49b4-8090-93dd9887d679#033[00m
Jan 29 12:38:09 np0005601226 nova_compute[239456]: 2026-01-29 17:38:09.431 239460 INFO nova.virt.block_device [None req-8791722e-32e0-48e2-b092-7544beb12b20 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Attempting to driver detach volume c89d4692-8be7-49b4-8090-93dd9887d679 from mountpoint /dev/vdb#033[00m
Jan 29 12:38:09 np0005601226 nova_compute[239456]: 2026-01-29 17:38:09.444 239460 DEBUG nova.virt.libvirt.driver [None req-8791722e-32e0-48e2-b092-7544beb12b20 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Attempting to detach device vdb from instance 56cf922f-31d1-4f48-8716-abdd2671978f from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Jan 29 12:38:09 np0005601226 nova_compute[239456]: 2026-01-29 17:38:09.445 239460 DEBUG nova.virt.libvirt.guest [None req-8791722e-32e0-48e2-b092-7544beb12b20 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] detach device xml: <disk type="network" device="disk">
Jan 29 12:38:09 np0005601226 nova_compute[239456]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 29 12:38:09 np0005601226 nova_compute[239456]:  <source protocol="rbd" name="volumes/volume-c89d4692-8be7-49b4-8090-93dd9887d679">
Jan 29 12:38:09 np0005601226 nova_compute[239456]:    <host name="192.168.122.100" port="6789"/>
Jan 29 12:38:09 np0005601226 nova_compute[239456]:  </source>
Jan 29 12:38:09 np0005601226 nova_compute[239456]:  <target dev="vdb" bus="virtio"/>
Jan 29 12:38:09 np0005601226 nova_compute[239456]:  <serial>c89d4692-8be7-49b4-8090-93dd9887d679</serial>
Jan 29 12:38:09 np0005601226 nova_compute[239456]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 29 12:38:09 np0005601226 nova_compute[239456]: </disk>
Jan 29 12:38:09 np0005601226 nova_compute[239456]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 29 12:38:09 np0005601226 nova_compute[239456]: 2026-01-29 17:38:09.457 239460 INFO nova.virt.libvirt.driver [None req-8791722e-32e0-48e2-b092-7544beb12b20 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Successfully detached device vdb from instance 56cf922f-31d1-4f48-8716-abdd2671978f from the persistent domain config.
Jan 29 12:38:09 np0005601226 nova_compute[239456]: 2026-01-29 17:38:09.458 239460 DEBUG nova.virt.libvirt.driver [None req-8791722e-32e0-48e2-b092-7544beb12b20 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 56cf922f-31d1-4f48-8716-abdd2671978f from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 29 12:38:09 np0005601226 nova_compute[239456]: 2026-01-29 17:38:09.459 239460 DEBUG nova.virt.libvirt.guest [None req-8791722e-32e0-48e2-b092-7544beb12b20 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] detach device xml: <disk type="network" device="disk">
Jan 29 12:38:09 np0005601226 nova_compute[239456]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 29 12:38:09 np0005601226 nova_compute[239456]:  <source protocol="rbd" name="volumes/volume-c89d4692-8be7-49b4-8090-93dd9887d679">
Jan 29 12:38:09 np0005601226 nova_compute[239456]:    <host name="192.168.122.100" port="6789"/>
Jan 29 12:38:09 np0005601226 nova_compute[239456]:  </source>
Jan 29 12:38:09 np0005601226 nova_compute[239456]:  <target dev="vdb" bus="virtio"/>
Jan 29 12:38:09 np0005601226 nova_compute[239456]:  <serial>c89d4692-8be7-49b4-8090-93dd9887d679</serial>
Jan 29 12:38:09 np0005601226 nova_compute[239456]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 29 12:38:09 np0005601226 nova_compute[239456]: </disk>
Jan 29 12:38:09 np0005601226 nova_compute[239456]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 29 12:38:09 np0005601226 nova_compute[239456]: 2026-01-29 17:38:09.581 239460 DEBUG nova.virt.libvirt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Received event <DeviceRemovedEvent: 1769708289.5813224, 56cf922f-31d1-4f48-8716-abdd2671978f => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Jan 29 12:38:09 np0005601226 nova_compute[239456]: 2026-01-29 17:38:09.584 239460 DEBUG nova.virt.libvirt.driver [None req-8791722e-32e0-48e2-b092-7544beb12b20 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 56cf922f-31d1-4f48-8716-abdd2671978f _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Jan 29 12:38:09 np0005601226 nova_compute[239456]: 2026-01-29 17:38:09.588 239460 INFO nova.virt.libvirt.driver [None req-8791722e-32e0-48e2-b092-7544beb12b20 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Successfully detached device vdb from instance 56cf922f-31d1-4f48-8716-abdd2671978f from the live domain config.
Jan 29 12:38:09 np0005601226 nova_compute[239456]: 2026-01-29 17:38:09.603 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 29 12:38:09 np0005601226 nova_compute[239456]: 2026-01-29 17:38:09.603 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 29 12:38:09 np0005601226 nova_compute[239456]: 2026-01-29 17:38:09.767 239460 DEBUG nova.objects.instance [None req-8791722e-32e0-48e2-b092-7544beb12b20 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Lazy-loading 'flavor' on Instance uuid 56cf922f-31d1-4f48-8716-abdd2671978f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 29 12:38:09 np0005601226 nova_compute[239456]: 2026-01-29 17:38:09.813 239460 DEBUG oslo_concurrency.lockutils [None req-8791722e-32e0-48e2-b092-7544beb12b20 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Lock "56cf922f-31d1-4f48-8716-abdd2671978f" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.516s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 12:38:09 np0005601226 podman[276869]: 2026-01-29 17:38:09.922152863 +0000 UTC m=+0.078273405 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Jan 29 12:38:09 np0005601226 podman[276870]: 2026-01-29 17:38:09.953111273 +0000 UTC m=+0.109355709 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible)
Jan 29 12:38:10 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1961: 305 pgs: 305 active+clean; 353 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 109 KiB/s rd, 84 KiB/s wr, 47 op/s
Jan 29 12:38:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:38:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:38:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:38:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:38:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:38:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:38:10 np0005601226 nova_compute[239456]: 2026-01-29 17:38:10.682 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:38:12 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1962: 305 pgs: 305 active+clean; 353 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 93 KiB/s rd, 71 KiB/s wr, 40 op/s
Jan 29 12:38:12 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e515 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:38:12 np0005601226 nova_compute[239456]: 2026-01-29 17:38:12.522 239460 DEBUG oslo_concurrency.lockutils [None req-9cd15548-3357-4ab9-be17-f09c47d5821c a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Acquiring lock "56cf922f-31d1-4f48-8716-abdd2671978f" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 12:38:12 np0005601226 nova_compute[239456]: 2026-01-29 17:38:12.522 239460 DEBUG oslo_concurrency.lockutils [None req-9cd15548-3357-4ab9-be17-f09c47d5821c a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Lock "56cf922f-31d1-4f48-8716-abdd2671978f" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 12:38:12 np0005601226 nova_compute[239456]: 2026-01-29 17:38:12.542 239460 DEBUG nova.objects.instance [None req-9cd15548-3357-4ab9-be17-f09c47d5821c a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Lazy-loading 'flavor' on Instance uuid 56cf922f-31d1-4f48-8716-abdd2671978f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 29 12:38:12 np0005601226 nova_compute[239456]: 2026-01-29 17:38:12.584 239460 DEBUG oslo_concurrency.lockutils [None req-9cd15548-3357-4ab9-be17-f09c47d5821c a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Lock "56cf922f-31d1-4f48-8716-abdd2671978f" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.062s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 12:38:12 np0005601226 nova_compute[239456]: 2026-01-29 17:38:12.856 239460 DEBUG oslo_concurrency.lockutils [None req-9cd15548-3357-4ab9-be17-f09c47d5821c a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Acquiring lock "56cf922f-31d1-4f48-8716-abdd2671978f" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 12:38:12 np0005601226 nova_compute[239456]: 2026-01-29 17:38:12.856 239460 DEBUG oslo_concurrency.lockutils [None req-9cd15548-3357-4ab9-be17-f09c47d5821c a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Lock "56cf922f-31d1-4f48-8716-abdd2671978f" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 12:38:12 np0005601226 nova_compute[239456]: 2026-01-29 17:38:12.857 239460 INFO nova.compute.manager [None req-9cd15548-3357-4ab9-be17-f09c47d5821c a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Attaching volume 7757cfbb-1184-4782-9085-a96fa0bc8359 to /dev/vdb
Jan 29 12:38:13 np0005601226 nova_compute[239456]: 2026-01-29 17:38:13.017 239460 DEBUG os_brick.utils [None req-9cd15548-3357-4ab9-be17-f09c47d5821c a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 29 12:38:13 np0005601226 nova_compute[239456]: 2026-01-29 17:38:13.018 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 29 12:38:13 np0005601226 nova_compute[239456]: 2026-01-29 17:38:13.030 249612 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 29 12:38:13 np0005601226 nova_compute[239456]: 2026-01-29 17:38:13.030 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[9bf9c380-d13f-470d-8b9b-cddc1b260c1b]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 29 12:38:13 np0005601226 nova_compute[239456]: 2026-01-29 17:38:13.032 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 29 12:38:13 np0005601226 nova_compute[239456]: 2026-01-29 17:38:13.039 249612 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 29 12:38:13 np0005601226 nova_compute[239456]: 2026-01-29 17:38:13.040 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[cd9c4f45-5071-4467-b48d-6a7b4ee76eeb]: (4, ('InitiatorName=iqn.1994-05.com.redhat:29fa340538f', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 29 12:38:13 np0005601226 nova_compute[239456]: 2026-01-29 17:38:13.041 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 29 12:38:13 np0005601226 nova_compute[239456]: 2026-01-29 17:38:13.051 249612 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 29 12:38:13 np0005601226 nova_compute[239456]: 2026-01-29 17:38:13.051 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[cd9504ab-7064-4fa3-80ec-d1015356068f]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 29 12:38:13 np0005601226 nova_compute[239456]: 2026-01-29 17:38:13.053 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[eeabe2ce-40ce-4f91-b740-8899d79a6d9d]: (4, '3d58286e-1b14-486e-8cad-0bdb2d2969c4') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 29 12:38:13 np0005601226 nova_compute[239456]: 2026-01-29 17:38:13.053 239460 DEBUG oslo_concurrency.processutils [None req-9cd15548-3357-4ab9-be17-f09c47d5821c a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 29 12:38:13 np0005601226 nova_compute[239456]: 2026-01-29 17:38:13.076 239460 DEBUG oslo_concurrency.processutils [None req-9cd15548-3357-4ab9-be17-f09c47d5821c a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] CMD "nvme version" returned: 0 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 29 12:38:13 np0005601226 nova_compute[239456]: 2026-01-29 17:38:13.079 239460 DEBUG os_brick.initiator.connectors.lightos [None req-9cd15548-3357-4ab9-be17-f09c47d5821c a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 29 12:38:13 np0005601226 nova_compute[239456]: 2026-01-29 17:38:13.080 239460 DEBUG os_brick.initiator.connectors.lightos [None req-9cd15548-3357-4ab9-be17-f09c47d5821c a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 29 12:38:13 np0005601226 nova_compute[239456]: 2026-01-29 17:38:13.080 239460 DEBUG os_brick.initiator.connectors.lightos [None req-9cd15548-3357-4ab9-be17-f09c47d5821c a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 29 12:38:13 np0005601226 nova_compute[239456]: 2026-01-29 17:38:13.080 239460 DEBUG os_brick.utils [None req-9cd15548-3357-4ab9-be17-f09c47d5821c a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] <== get_connector_properties: return (62ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:29fa340538f', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '3d58286e-1b14-486e-8cad-0bdb2d2969c4', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 29 12:38:13 np0005601226 nova_compute[239456]: 2026-01-29 17:38:13.081 239460 DEBUG nova.virt.block_device [None req-9cd15548-3357-4ab9-be17-f09c47d5821c a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Updating existing volume attachment record: 93868c1b-146b-4b9a-973b-59f45e7cbdd9 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 29 12:38:13 np0005601226 nova_compute[239456]: 2026-01-29 17:38:13.197 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:38:13 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:38:13 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/8586551' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:38:13 np0005601226 nova_compute[239456]: 2026-01-29 17:38:13.948 239460 DEBUG nova.objects.instance [None req-9cd15548-3357-4ab9-be17-f09c47d5821c a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Lazy-loading 'flavor' on Instance uuid 56cf922f-31d1-4f48-8716-abdd2671978f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 29 12:38:13 np0005601226 nova_compute[239456]: 2026-01-29 17:38:13.979 239460 DEBUG nova.virt.libvirt.driver [None req-9cd15548-3357-4ab9-be17-f09c47d5821c a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Attempting to attach volume 7757cfbb-1184-4782-9085-a96fa0bc8359 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Jan 29 12:38:13 np0005601226 nova_compute[239456]: 2026-01-29 17:38:13.982 239460 DEBUG nova.virt.libvirt.guest [None req-9cd15548-3357-4ab9-be17-f09c47d5821c a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] attach device xml: <disk type="network" device="disk">
Jan 29 12:38:13 np0005601226 nova_compute[239456]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 29 12:38:13 np0005601226 nova_compute[239456]:  <source protocol="rbd" name="volumes/volume-7757cfbb-1184-4782-9085-a96fa0bc8359">
Jan 29 12:38:13 np0005601226 nova_compute[239456]:    <host name="192.168.122.100" port="6789"/>
Jan 29 12:38:13 np0005601226 nova_compute[239456]:  </source>
Jan 29 12:38:13 np0005601226 nova_compute[239456]:  <auth username="openstack">
Jan 29 12:38:13 np0005601226 nova_compute[239456]:    <secret type="ceph" uuid="cc5c72e3-31e0-58b9-8731-456117d38f4a"/>
Jan 29 12:38:13 np0005601226 nova_compute[239456]:  </auth>
Jan 29 12:38:13 np0005601226 nova_compute[239456]:  <target dev="vdb" bus="virtio"/>
Jan 29 12:38:13 np0005601226 nova_compute[239456]:  <serial>7757cfbb-1184-4782-9085-a96fa0bc8359</serial>
Jan 29 12:38:13 np0005601226 nova_compute[239456]: </disk>
Jan 29 12:38:13 np0005601226 nova_compute[239456]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 29 12:38:14 np0005601226 nova_compute[239456]: 2026-01-29 17:38:14.103 239460 DEBUG nova.virt.libvirt.driver [None req-9cd15548-3357-4ab9-be17-f09c47d5821c a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 29 12:38:14 np0005601226 nova_compute[239456]: 2026-01-29 17:38:14.104 239460 DEBUG nova.virt.libvirt.driver [None req-9cd15548-3357-4ab9-be17-f09c47d5821c a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 29 12:38:14 np0005601226 nova_compute[239456]: 2026-01-29 17:38:14.104 239460 DEBUG nova.virt.libvirt.driver [None req-9cd15548-3357-4ab9-be17-f09c47d5821c a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 29 12:38:14 np0005601226 nova_compute[239456]: 2026-01-29 17:38:14.104 239460 DEBUG nova.virt.libvirt.driver [None req-9cd15548-3357-4ab9-be17-f09c47d5821c a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] No VIF found with MAC fa:16:3e:e8:15:aa, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 29 12:38:14 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1963: 305 pgs: 305 active+clean; 353 MiB data, 682 MiB used, 59 GiB / 60 GiB avail; 106 KiB/s rd, 85 KiB/s wr, 62 op/s
Jan 29 12:38:14 np0005601226 nova_compute[239456]: 2026-01-29 17:38:14.303 239460 DEBUG oslo_concurrency.lockutils [None req-9cd15548-3357-4ab9-be17-f09c47d5821c a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Lock "56cf922f-31d1-4f48-8716-abdd2671978f" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.447s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 29 12:38:15 np0005601226 nova_compute[239456]: 2026-01-29 17:38:15.723 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 29 12:38:16 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1964: 305 pgs: 305 active+clean; 353 MiB data, 682 MiB used, 59 GiB / 60 GiB avail; 82 KiB/s rd, 63 KiB/s wr, 33 op/s
Jan 29 12:38:16 np0005601226 nova_compute[239456]: 2026-01-29 17:38:16.877 239460 DEBUG oslo_concurrency.lockutils [None req-b8ca38e8-15da-45ad-acdf-8a234b5119f9 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Acquiring lock "56cf922f-31d1-4f48-8716-abdd2671978f" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 29 12:38:16 np0005601226 nova_compute[239456]: 2026-01-29 17:38:16.878 239460 DEBUG oslo_concurrency.lockutils [None req-b8ca38e8-15da-45ad-acdf-8a234b5119f9 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Lock "56cf922f-31d1-4f48-8716-abdd2671978f" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 29 12:38:16 np0005601226 nova_compute[239456]: 2026-01-29 17:38:16.896 239460 INFO nova.compute.manager [None req-b8ca38e8-15da-45ad-acdf-8a234b5119f9 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Detaching volume 7757cfbb-1184-4782-9085-a96fa0bc8359
Jan 29 12:38:17 np0005601226 nova_compute[239456]: 2026-01-29 17:38:17.012 239460 INFO nova.virt.block_device [None req-b8ca38e8-15da-45ad-acdf-8a234b5119f9 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Attempting to driver detach volume 7757cfbb-1184-4782-9085-a96fa0bc8359 from mountpoint /dev/vdb
Jan 29 12:38:17 np0005601226 nova_compute[239456]: 2026-01-29 17:38:17.020 239460 DEBUG nova.virt.libvirt.driver [None req-b8ca38e8-15da-45ad-acdf-8a234b5119f9 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Attempting to detach device vdb from instance 56cf922f-31d1-4f48-8716-abdd2671978f from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 29 12:38:17 np0005601226 nova_compute[239456]: 2026-01-29 17:38:17.021 239460 DEBUG nova.virt.libvirt.guest [None req-b8ca38e8-15da-45ad-acdf-8a234b5119f9 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] detach device xml: <disk type="network" device="disk">
Jan 29 12:38:17 np0005601226 nova_compute[239456]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 29 12:38:17 np0005601226 nova_compute[239456]:  <source protocol="rbd" name="volumes/volume-7757cfbb-1184-4782-9085-a96fa0bc8359">
Jan 29 12:38:17 np0005601226 nova_compute[239456]:    <host name="192.168.122.100" port="6789"/>
Jan 29 12:38:17 np0005601226 nova_compute[239456]:  </source>
Jan 29 12:38:17 np0005601226 nova_compute[239456]:  <target dev="vdb" bus="virtio"/>
Jan 29 12:38:17 np0005601226 nova_compute[239456]:  <serial>7757cfbb-1184-4782-9085-a96fa0bc8359</serial>
Jan 29 12:38:17 np0005601226 nova_compute[239456]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 29 12:38:17 np0005601226 nova_compute[239456]: </disk>
Jan 29 12:38:17 np0005601226 nova_compute[239456]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 29 12:38:17 np0005601226 nova_compute[239456]: 2026-01-29 17:38:17.027 239460 INFO nova.virt.libvirt.driver [None req-b8ca38e8-15da-45ad-acdf-8a234b5119f9 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Successfully detached device vdb from instance 56cf922f-31d1-4f48-8716-abdd2671978f from the persistent domain config.
Jan 29 12:38:17 np0005601226 nova_compute[239456]: 2026-01-29 17:38:17.028 239460 DEBUG nova.virt.libvirt.driver [None req-b8ca38e8-15da-45ad-acdf-8a234b5119f9 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 56cf922f-31d1-4f48-8716-abdd2671978f from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 29 12:38:17 np0005601226 nova_compute[239456]: 2026-01-29 17:38:17.028 239460 DEBUG nova.virt.libvirt.guest [None req-b8ca38e8-15da-45ad-acdf-8a234b5119f9 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] detach device xml: <disk type="network" device="disk">
Jan 29 12:38:17 np0005601226 nova_compute[239456]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 29 12:38:17 np0005601226 nova_compute[239456]:  <source protocol="rbd" name="volumes/volume-7757cfbb-1184-4782-9085-a96fa0bc8359">
Jan 29 12:38:17 np0005601226 nova_compute[239456]:    <host name="192.168.122.100" port="6789"/>
Jan 29 12:38:17 np0005601226 nova_compute[239456]:  </source>
Jan 29 12:38:17 np0005601226 nova_compute[239456]:  <target dev="vdb" bus="virtio"/>
Jan 29 12:38:17 np0005601226 nova_compute[239456]:  <serial>7757cfbb-1184-4782-9085-a96fa0bc8359</serial>
Jan 29 12:38:17 np0005601226 nova_compute[239456]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 29 12:38:17 np0005601226 nova_compute[239456]: </disk>
Jan 29 12:38:17 np0005601226 nova_compute[239456]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 29 12:38:17 np0005601226 nova_compute[239456]: 2026-01-29 17:38:17.129 239460 DEBUG nova.virt.libvirt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Received event <DeviceRemovedEvent: 1769708297.1290174, 56cf922f-31d1-4f48-8716-abdd2671978f => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Jan 29 12:38:17 np0005601226 nova_compute[239456]: 2026-01-29 17:38:17.130 239460 DEBUG nova.virt.libvirt.driver [None req-b8ca38e8-15da-45ad-acdf-8a234b5119f9 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 56cf922f-31d1-4f48-8716-abdd2671978f _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Jan 29 12:38:17 np0005601226 nova_compute[239456]: 2026-01-29 17:38:17.132 239460 INFO nova.virt.libvirt.driver [None req-b8ca38e8-15da-45ad-acdf-8a234b5119f9 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Successfully detached device vdb from instance 56cf922f-31d1-4f48-8716-abdd2671978f from the live domain config.#033[00m
Jan 29 12:38:17 np0005601226 nova_compute[239456]: 2026-01-29 17:38:17.260 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:38:17 np0005601226 nova_compute[239456]: 2026-01-29 17:38:17.311 239460 DEBUG nova.objects.instance [None req-b8ca38e8-15da-45ad-acdf-8a234b5119f9 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Lazy-loading 'flavor' on Instance uuid 56cf922f-31d1-4f48-8716-abdd2671978f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:38:17 np0005601226 nova_compute[239456]: 2026-01-29 17:38:17.364 239460 DEBUG oslo_concurrency.lockutils [None req-b8ca38e8-15da-45ad-acdf-8a234b5119f9 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Lock "56cf922f-31d1-4f48-8716-abdd2671978f" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.487s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:38:17 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e515 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:38:18 np0005601226 nova_compute[239456]: 2026-01-29 17:38:18.198 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:38:18 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1965: 305 pgs: 305 active+clean; 353 MiB data, 682 MiB used, 59 GiB / 60 GiB avail; 84 KiB/s rd, 65 KiB/s wr, 35 op/s
Jan 29 12:38:20 np0005601226 nova_compute[239456]: 2026-01-29 17:38:20.078 239460 DEBUG oslo_concurrency.lockutils [None req-5e305cc0-380b-4a81-8978-e51de8257c06 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Acquiring lock "56cf922f-31d1-4f48-8716-abdd2671978f" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:38:20 np0005601226 nova_compute[239456]: 2026-01-29 17:38:20.079 239460 DEBUG oslo_concurrency.lockutils [None req-5e305cc0-380b-4a81-8978-e51de8257c06 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Lock "56cf922f-31d1-4f48-8716-abdd2671978f" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:38:20 np0005601226 nova_compute[239456]: 2026-01-29 17:38:20.099 239460 DEBUG nova.objects.instance [None req-5e305cc0-380b-4a81-8978-e51de8257c06 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Lazy-loading 'flavor' on Instance uuid 56cf922f-31d1-4f48-8716-abdd2671978f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:38:20 np0005601226 nova_compute[239456]: 2026-01-29 17:38:20.159 239460 DEBUG oslo_concurrency.lockutils [None req-5e305cc0-380b-4a81-8978-e51de8257c06 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Lock "56cf922f-31d1-4f48-8716-abdd2671978f" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.080s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:38:20 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1966: 305 pgs: 305 active+clean; 354 MiB data, 682 MiB used, 59 GiB / 60 GiB avail; 157 KiB/s rd, 119 KiB/s wr, 53 op/s
Jan 29 12:38:20 np0005601226 nova_compute[239456]: 2026-01-29 17:38:20.417 239460 DEBUG oslo_concurrency.lockutils [None req-5e305cc0-380b-4a81-8978-e51de8257c06 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Acquiring lock "56cf922f-31d1-4f48-8716-abdd2671978f" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:38:20 np0005601226 nova_compute[239456]: 2026-01-29 17:38:20.417 239460 DEBUG oslo_concurrency.lockutils [None req-5e305cc0-380b-4a81-8978-e51de8257c06 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Lock "56cf922f-31d1-4f48-8716-abdd2671978f" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:38:20 np0005601226 nova_compute[239456]: 2026-01-29 17:38:20.417 239460 INFO nova.compute.manager [None req-5e305cc0-380b-4a81-8978-e51de8257c06 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Attaching volume 58e34862-f844-4551-8902-0d69ad9b8607 to /dev/vdb#033[00m
Jan 29 12:38:20 np0005601226 nova_compute[239456]: 2026-01-29 17:38:20.536 239460 DEBUG os_brick.utils [None req-5e305cc0-380b-4a81-8978-e51de8257c06 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 29 12:38:20 np0005601226 nova_compute[239456]: 2026-01-29 17:38:20.537 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:38:20 np0005601226 nova_compute[239456]: 2026-01-29 17:38:20.544 249612 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:38:20 np0005601226 nova_compute[239456]: 2026-01-29 17:38:20.544 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[8ce5a043-76ba-49b2-a33d-d353ebd1584a]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:38:20 np0005601226 nova_compute[239456]: 2026-01-29 17:38:20.545 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:38:20 np0005601226 nova_compute[239456]: 2026-01-29 17:38:20.548 249612 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.004s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:38:20 np0005601226 nova_compute[239456]: 2026-01-29 17:38:20.549 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[564719bd-a995-48d4-a5a6-d109ede5e90b]: (4, ('InitiatorName=iqn.1994-05.com.redhat:29fa340538f', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:38:20 np0005601226 nova_compute[239456]: 2026-01-29 17:38:20.549 249612 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:38:20 np0005601226 nova_compute[239456]: 2026-01-29 17:38:20.557 249612 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:38:20 np0005601226 nova_compute[239456]: 2026-01-29 17:38:20.557 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[b62f07f7-9404-498e-995a-fdd99eada230]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:38:20 np0005601226 nova_compute[239456]: 2026-01-29 17:38:20.557 249612 DEBUG oslo.privsep.daemon [-] privsep: reply[950ba8ab-a068-4a0a-9829-ec547dfb6b0f]: (4, '3d58286e-1b14-486e-8cad-0bdb2d2969c4') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:38:20 np0005601226 nova_compute[239456]: 2026-01-29 17:38:20.558 239460 DEBUG oslo_concurrency.processutils [None req-5e305cc0-380b-4a81-8978-e51de8257c06 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:38:20 np0005601226 nova_compute[239456]: 2026-01-29 17:38:20.573 239460 DEBUG oslo_concurrency.processutils [None req-5e305cc0-380b-4a81-8978-e51de8257c06 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] CMD "nvme version" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:38:20 np0005601226 nova_compute[239456]: 2026-01-29 17:38:20.583 239460 DEBUG os_brick.initiator.connectors.lightos [None req-5e305cc0-380b-4a81-8978-e51de8257c06 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 29 12:38:20 np0005601226 nova_compute[239456]: 2026-01-29 17:38:20.583 239460 DEBUG os_brick.initiator.connectors.lightos [None req-5e305cc0-380b-4a81-8978-e51de8257c06 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 29 12:38:20 np0005601226 nova_compute[239456]: 2026-01-29 17:38:20.583 239460 DEBUG os_brick.initiator.connectors.lightos [None req-5e305cc0-380b-4a81-8978-e51de8257c06 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 29 12:38:20 np0005601226 nova_compute[239456]: 2026-01-29 17:38:20.584 239460 DEBUG os_brick.utils [None req-5e305cc0-380b-4a81-8978-e51de8257c06 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] <== get_connector_properties: return (46ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:29fa340538f', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '3d58286e-1b14-486e-8cad-0bdb2d2969c4', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 29 12:38:20 np0005601226 nova_compute[239456]: 2026-01-29 17:38:20.584 239460 DEBUG nova.virt.block_device [None req-5e305cc0-380b-4a81-8978-e51de8257c06 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Updating existing volume attachment record: e9f10681-4e94-4a83-b80f-4d00a2c95f74 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 29 12:38:20 np0005601226 nova_compute[239456]: 2026-01-29 17:38:20.726 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:38:21 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 29 12:38:21 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1030078190' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 29 12:38:21 np0005601226 nova_compute[239456]: 2026-01-29 17:38:21.381 239460 DEBUG nova.objects.instance [None req-5e305cc0-380b-4a81-8978-e51de8257c06 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Lazy-loading 'flavor' on Instance uuid 56cf922f-31d1-4f48-8716-abdd2671978f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:38:21 np0005601226 nova_compute[239456]: 2026-01-29 17:38:21.405 239460 DEBUG nova.virt.libvirt.driver [None req-5e305cc0-380b-4a81-8978-e51de8257c06 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Attempting to attach volume 58e34862-f844-4551-8902-0d69ad9b8607 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Jan 29 12:38:21 np0005601226 nova_compute[239456]: 2026-01-29 17:38:21.408 239460 DEBUG nova.virt.libvirt.guest [None req-5e305cc0-380b-4a81-8978-e51de8257c06 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] attach device xml: <disk type="network" device="disk">
Jan 29 12:38:21 np0005601226 nova_compute[239456]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 29 12:38:21 np0005601226 nova_compute[239456]:  <source protocol="rbd" name="volumes/volume-58e34862-f844-4551-8902-0d69ad9b8607">
Jan 29 12:38:21 np0005601226 nova_compute[239456]:    <host name="192.168.122.100" port="6789"/>
Jan 29 12:38:21 np0005601226 nova_compute[239456]:  </source>
Jan 29 12:38:21 np0005601226 nova_compute[239456]:  <auth username="openstack">
Jan 29 12:38:21 np0005601226 nova_compute[239456]:    <secret type="ceph" uuid="cc5c72e3-31e0-58b9-8731-456117d38f4a"/>
Jan 29 12:38:21 np0005601226 nova_compute[239456]:  </auth>
Jan 29 12:38:21 np0005601226 nova_compute[239456]:  <target dev="vdb" bus="virtio"/>
Jan 29 12:38:21 np0005601226 nova_compute[239456]:  <serial>58e34862-f844-4551-8902-0d69ad9b8607</serial>
Jan 29 12:38:21 np0005601226 nova_compute[239456]: </disk>
Jan 29 12:38:21 np0005601226 nova_compute[239456]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Jan 29 12:38:21 np0005601226 nova_compute[239456]: 2026-01-29 17:38:21.527 239460 DEBUG nova.virt.libvirt.driver [None req-5e305cc0-380b-4a81-8978-e51de8257c06 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:38:21 np0005601226 nova_compute[239456]: 2026-01-29 17:38:21.528 239460 DEBUG nova.virt.libvirt.driver [None req-5e305cc0-380b-4a81-8978-e51de8257c06 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:38:21 np0005601226 nova_compute[239456]: 2026-01-29 17:38:21.528 239460 DEBUG nova.virt.libvirt.driver [None req-5e305cc0-380b-4a81-8978-e51de8257c06 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 29 12:38:21 np0005601226 nova_compute[239456]: 2026-01-29 17:38:21.528 239460 DEBUG nova.virt.libvirt.driver [None req-5e305cc0-380b-4a81-8978-e51de8257c06 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] No VIF found with MAC fa:16:3e:e8:15:aa, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 29 12:38:21 np0005601226 nova_compute[239456]: 2026-01-29 17:38:21.723 239460 DEBUG oslo_concurrency.lockutils [None req-5e305cc0-380b-4a81-8978-e51de8257c06 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Lock "56cf922f-31d1-4f48-8716-abdd2671978f" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.306s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:38:22 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1967: 305 pgs: 305 active+clean; 354 MiB data, 682 MiB used, 59 GiB / 60 GiB avail; 91 KiB/s rd, 73 KiB/s wr, 44 op/s
Jan 29 12:38:22 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e515 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:38:23 np0005601226 nova_compute[239456]: 2026-01-29 17:38:23.244 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:38:24 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1968: 305 pgs: 305 active+clean; 354 MiB data, 683 MiB used, 59 GiB / 60 GiB avail; 161 KiB/s rd, 128 KiB/s wr, 59 op/s
Jan 29 12:38:24 np0005601226 nova_compute[239456]: 2026-01-29 17:38:24.285 239460 DEBUG oslo_concurrency.lockutils [None req-ec964f00-61e0-45d6-94aa-b1325a265123 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Acquiring lock "56cf922f-31d1-4f48-8716-abdd2671978f" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:38:24 np0005601226 nova_compute[239456]: 2026-01-29 17:38:24.286 239460 DEBUG oslo_concurrency.lockutils [None req-ec964f00-61e0-45d6-94aa-b1325a265123 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Lock "56cf922f-31d1-4f48-8716-abdd2671978f" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:38:24 np0005601226 nova_compute[239456]: 2026-01-29 17:38:24.301 239460 INFO nova.compute.manager [None req-ec964f00-61e0-45d6-94aa-b1325a265123 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Detaching volume 58e34862-f844-4551-8902-0d69ad9b8607#033[00m
Jan 29 12:38:24 np0005601226 nova_compute[239456]: 2026-01-29 17:38:24.463 239460 INFO nova.virt.block_device [None req-ec964f00-61e0-45d6-94aa-b1325a265123 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Attempting to driver detach volume 58e34862-f844-4551-8902-0d69ad9b8607 from mountpoint /dev/vdb#033[00m
Jan 29 12:38:24 np0005601226 nova_compute[239456]: 2026-01-29 17:38:24.473 239460 DEBUG nova.virt.libvirt.driver [None req-ec964f00-61e0-45d6-94aa-b1325a265123 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Attempting to detach device vdb from instance 56cf922f-31d1-4f48-8716-abdd2671978f from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Jan 29 12:38:24 np0005601226 nova_compute[239456]: 2026-01-29 17:38:24.473 239460 DEBUG nova.virt.libvirt.guest [None req-ec964f00-61e0-45d6-94aa-b1325a265123 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] detach device xml: <disk type="network" device="disk">
Jan 29 12:38:24 np0005601226 nova_compute[239456]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 29 12:38:24 np0005601226 nova_compute[239456]:  <source protocol="rbd" name="volumes/volume-58e34862-f844-4551-8902-0d69ad9b8607">
Jan 29 12:38:24 np0005601226 nova_compute[239456]:    <host name="192.168.122.100" port="6789"/>
Jan 29 12:38:24 np0005601226 nova_compute[239456]:  </source>
Jan 29 12:38:24 np0005601226 nova_compute[239456]:  <target dev="vdb" bus="virtio"/>
Jan 29 12:38:24 np0005601226 nova_compute[239456]:  <serial>58e34862-f844-4551-8902-0d69ad9b8607</serial>
Jan 29 12:38:24 np0005601226 nova_compute[239456]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 29 12:38:24 np0005601226 nova_compute[239456]: </disk>
Jan 29 12:38:24 np0005601226 nova_compute[239456]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 29 12:38:24 np0005601226 nova_compute[239456]: 2026-01-29 17:38:24.485 239460 INFO nova.virt.libvirt.driver [None req-ec964f00-61e0-45d6-94aa-b1325a265123 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Successfully detached device vdb from instance 56cf922f-31d1-4f48-8716-abdd2671978f from the persistent domain config.#033[00m
Jan 29 12:38:24 np0005601226 nova_compute[239456]: 2026-01-29 17:38:24.486 239460 DEBUG nova.virt.libvirt.driver [None req-ec964f00-61e0-45d6-94aa-b1325a265123 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 56cf922f-31d1-4f48-8716-abdd2671978f from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Jan 29 12:38:24 np0005601226 nova_compute[239456]: 2026-01-29 17:38:24.486 239460 DEBUG nova.virt.libvirt.guest [None req-ec964f00-61e0-45d6-94aa-b1325a265123 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] detach device xml: <disk type="network" device="disk">
Jan 29 12:38:24 np0005601226 nova_compute[239456]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 29 12:38:24 np0005601226 nova_compute[239456]:  <source protocol="rbd" name="volumes/volume-58e34862-f844-4551-8902-0d69ad9b8607">
Jan 29 12:38:24 np0005601226 nova_compute[239456]:    <host name="192.168.122.100" port="6789"/>
Jan 29 12:38:24 np0005601226 nova_compute[239456]:  </source>
Jan 29 12:38:24 np0005601226 nova_compute[239456]:  <target dev="vdb" bus="virtio"/>
Jan 29 12:38:24 np0005601226 nova_compute[239456]:  <serial>58e34862-f844-4551-8902-0d69ad9b8607</serial>
Jan 29 12:38:24 np0005601226 nova_compute[239456]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 29 12:38:24 np0005601226 nova_compute[239456]: </disk>
Jan 29 12:38:24 np0005601226 nova_compute[239456]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 29 12:38:24 np0005601226 nova_compute[239456]: 2026-01-29 17:38:24.611 239460 DEBUG nova.virt.libvirt.driver [None req-a9b4baaa-45b5-4555-97f0-759f1d421ab0 - - - - - -] Received event <DeviceRemovedEvent: 1769708304.6104777, 56cf922f-31d1-4f48-8716-abdd2671978f => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Jan 29 12:38:24 np0005601226 nova_compute[239456]: 2026-01-29 17:38:24.614 239460 DEBUG nova.virt.libvirt.driver [None req-ec964f00-61e0-45d6-94aa-b1325a265123 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 56cf922f-31d1-4f48-8716-abdd2671978f _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Jan 29 12:38:24 np0005601226 nova_compute[239456]: 2026-01-29 17:38:24.616 239460 INFO nova.virt.libvirt.driver [None req-ec964f00-61e0-45d6-94aa-b1325a265123 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Successfully detached device vdb from instance 56cf922f-31d1-4f48-8716-abdd2671978f from the live domain config.#033[00m
Jan 29 12:38:24 np0005601226 nova_compute[239456]: 2026-01-29 17:38:24.799 239460 DEBUG nova.objects.instance [None req-ec964f00-61e0-45d6-94aa-b1325a265123 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Lazy-loading 'flavor' on Instance uuid 56cf922f-31d1-4f48-8716-abdd2671978f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:38:24 np0005601226 nova_compute[239456]: 2026-01-29 17:38:24.853 239460 DEBUG oslo_concurrency.lockutils [None req-ec964f00-61e0-45d6-94aa-b1325a265123 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Lock "56cf922f-31d1-4f48-8716-abdd2671978f" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.568s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:38:25 np0005601226 nova_compute[239456]: 2026-01-29 17:38:25.729 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:38:26 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1969: 305 pgs: 305 active+clean; 354 MiB data, 683 MiB used, 59 GiB / 60 GiB avail; 145 KiB/s rd, 113 KiB/s wr, 37 op/s
Jan 29 12:38:26 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:38:26 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1462143310' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:38:26 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:38:26 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1462143310' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:38:27 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e515 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:38:27 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:38:27 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2656530499' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:38:27 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:38:27 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2656530499' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:38:28 np0005601226 nova_compute[239456]: 2026-01-29 17:38:28.279 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:38:28 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1970: 305 pgs: 305 active+clean; 354 MiB data, 683 MiB used, 59 GiB / 60 GiB avail; 153 KiB/s rd, 115 KiB/s wr, 47 op/s
Jan 29 12:38:29 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:38:29 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3886834453' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:38:29 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:38:29 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3886834453' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:38:30 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1971: 305 pgs: 305 active+clean; 353 MiB data, 682 MiB used, 59 GiB / 60 GiB avail; 186 KiB/s rd, 116 KiB/s wr, 89 op/s
Jan 29 12:38:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:38:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3599083874' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:38:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:38:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3599083874' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:38:30 np0005601226 nova_compute[239456]: 2026-01-29 17:38:30.768 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:38:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e515 do_prune osdmap full prune enabled
Jan 29 12:38:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e516 e516: 3 total, 3 up, 3 in
Jan 29 12:38:30 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e516: 3 total, 3 up, 3 in
Jan 29 12:38:31 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e516 do_prune osdmap full prune enabled
Jan 29 12:38:31 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e517 e517: 3 total, 3 up, 3 in
Jan 29 12:38:31 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e517: 3 total, 3 up, 3 in
Jan 29 12:38:32 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1974: 305 pgs: 305 active+clean; 353 MiB data, 682 MiB used, 59 GiB / 60 GiB avail; 64 KiB/s rd, 6.2 KiB/s wr, 82 op/s
Jan 29 12:38:32 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e517 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:38:33 np0005601226 nova_compute[239456]: 2026-01-29 17:38:33.279 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:38:33 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e517 do_prune osdmap full prune enabled
Jan 29 12:38:33 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e518 e518: 3 total, 3 up, 3 in
Jan 29 12:38:33 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e518: 3 total, 3 up, 3 in
Jan 29 12:38:34 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1976: 305 pgs: 2 active+clean+snaptrim, 2 active+clean+snaptrim_wait, 301 active+clean; 353 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 118 KiB/s rd, 9.0 KiB/s wr, 160 op/s
Jan 29 12:38:35 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:38:35 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1017635844' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:38:35 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:38:35 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1017635844' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:38:35 np0005601226 nova_compute[239456]: 2026-01-29 17:38:35.771 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:38:36 np0005601226 ceph-mon[75233]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #78. Immutable memtables: 0.
Jan 29 12:38:36 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:38:36.027598) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 29 12:38:36 np0005601226 ceph-mon[75233]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 78
Jan 29 12:38:36 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769708316027682, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 2268, "num_deletes": 260, "total_data_size": 3545983, "memory_usage": 3600592, "flush_reason": "Manual Compaction"}
Jan 29 12:38:36 np0005601226 ceph-mon[75233]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #79: started
Jan 29 12:38:36 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769708316125257, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 79, "file_size": 3463220, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36842, "largest_seqno": 39109, "table_properties": {"data_size": 3452525, "index_size": 6933, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2693, "raw_key_size": 22049, "raw_average_key_size": 20, "raw_value_size": 3431236, "raw_average_value_size": 3264, "num_data_blocks": 302, "num_entries": 1051, "num_filter_entries": 1051, "num_deletions": 260, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769708131, "oldest_key_time": 1769708131, "file_creation_time": 1769708316, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "affa2982-d59d-4189-b5dd-817a80fada55", "db_session_id": "3LVBT2JQJ5HZ0LRVKGW6", "orig_file_number": 79, "seqno_to_time_mapping": "N/A"}}
Jan 29 12:38:36 np0005601226 ceph-mon[75233]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 97722 microseconds, and 8272 cpu microseconds.
Jan 29 12:38:36 np0005601226 ceph-mon[75233]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 29 12:38:36 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:38:36.125315) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #79: 3463220 bytes OK
Jan 29 12:38:36 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:38:36.125340) [db/memtable_list.cc:519] [default] Level-0 commit table #79 started
Jan 29 12:38:36 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:38:36.127961) [db/memtable_list.cc:722] [default] Level-0 commit table #79: memtable #1 done
Jan 29 12:38:36 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:38:36.127984) EVENT_LOG_v1 {"time_micros": 1769708316127977, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 29 12:38:36 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:38:36.128008) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 29 12:38:36 np0005601226 ceph-mon[75233]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 3536315, prev total WAL file size 3536315, number of live WAL files 2.
Jan 29 12:38:36 np0005601226 ceph-mon[75233]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000075.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 12:38:36 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:38:36.129054) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Jan 29 12:38:36 np0005601226 ceph-mon[75233]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 29 12:38:36 np0005601226 ceph-mon[75233]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [79(3382KB)], [77(10MB)]
Jan 29 12:38:36 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769708316129102, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [79], "files_L6": [77], "score": -1, "input_data_size": 14536958, "oldest_snapshot_seqno": -1}
Jan 29 12:38:36 np0005601226 ceph-mon[75233]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #80: 7252 keys, 12812243 bytes, temperature: kUnknown
Jan 29 12:38:36 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769708316240717, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 80, "file_size": 12812243, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12755671, "index_size": 37324, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18181, "raw_key_size": 182890, "raw_average_key_size": 25, "raw_value_size": 12617586, "raw_average_value_size": 1739, "num_data_blocks": 1487, "num_entries": 7252, "num_filter_entries": 7252, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769705351, "oldest_key_time": 0, "file_creation_time": 1769708316, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "affa2982-d59d-4189-b5dd-817a80fada55", "db_session_id": "3LVBT2JQJ5HZ0LRVKGW6", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Jan 29 12:38:36 np0005601226 ceph-mon[75233]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 29 12:38:36 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:38:36.241175) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 12812243 bytes
Jan 29 12:38:36 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:38:36.275689) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 129.9 rd, 114.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 10.6 +0.0 blob) out(12.2 +0.0 blob), read-write-amplify(7.9) write-amplify(3.7) OK, records in: 7783, records dropped: 531 output_compression: NoCompression
Jan 29 12:38:36 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:38:36.275726) EVENT_LOG_v1 {"time_micros": 1769708316275710, "job": 44, "event": "compaction_finished", "compaction_time_micros": 111868, "compaction_time_cpu_micros": 37916, "output_level": 6, "num_output_files": 1, "total_output_size": 12812243, "num_input_records": 7783, "num_output_records": 7252, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 29 12:38:36 np0005601226 ceph-mon[75233]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000079.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 12:38:36 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769708316276390, "job": 44, "event": "table_file_deletion", "file_number": 79}
Jan 29 12:38:36 np0005601226 ceph-mon[75233]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 29 12:38:36 np0005601226 ceph-mon[75233]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769708316278055, "job": 44, "event": "table_file_deletion", "file_number": 77}
Jan 29 12:38:36 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:38:36.128921) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:38:36 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:38:36.278168) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:38:36 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:38:36.278176) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:38:36 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:38:36.278181) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:38:36 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:38:36.278185) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:38:36 np0005601226 ceph-mon[75233]: rocksdb: (Original Log Time 2026/01/29-17:38:36.278189) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 29 12:38:36 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1977: 305 pgs: 2 active+clean+snaptrim, 2 active+clean+snaptrim_wait, 301 active+clean; 353 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 4.0 KiB/s wr, 70 op/s
Jan 29 12:38:36 np0005601226 nova_compute[239456]: 2026-01-29 17:38:36.456 239460 DEBUG oslo_concurrency.lockutils [None req-49fae32b-e356-4831-80e9-53df1694d381 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Acquiring lock "56cf922f-31d1-4f48-8716-abdd2671978f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:38:36 np0005601226 nova_compute[239456]: 2026-01-29 17:38:36.456 239460 DEBUG oslo_concurrency.lockutils [None req-49fae32b-e356-4831-80e9-53df1694d381 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Lock "56cf922f-31d1-4f48-8716-abdd2671978f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:38:36 np0005601226 nova_compute[239456]: 2026-01-29 17:38:36.457 239460 DEBUG oslo_concurrency.lockutils [None req-49fae32b-e356-4831-80e9-53df1694d381 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Acquiring lock "56cf922f-31d1-4f48-8716-abdd2671978f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:38:36 np0005601226 nova_compute[239456]: 2026-01-29 17:38:36.457 239460 DEBUG oslo_concurrency.lockutils [None req-49fae32b-e356-4831-80e9-53df1694d381 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Lock "56cf922f-31d1-4f48-8716-abdd2671978f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:38:36 np0005601226 nova_compute[239456]: 2026-01-29 17:38:36.457 239460 DEBUG oslo_concurrency.lockutils [None req-49fae32b-e356-4831-80e9-53df1694d381 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Lock "56cf922f-31d1-4f48-8716-abdd2671978f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:38:36 np0005601226 nova_compute[239456]: 2026-01-29 17:38:36.459 239460 INFO nova.compute.manager [None req-49fae32b-e356-4831-80e9-53df1694d381 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Terminating instance#033[00m
Jan 29 12:38:36 np0005601226 nova_compute[239456]: 2026-01-29 17:38:36.461 239460 DEBUG nova.compute.manager [None req-49fae32b-e356-4831-80e9-53df1694d381 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 29 12:38:36 np0005601226 nova_compute[239456]: 2026-01-29 17:38:36.623 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:38:36 np0005601226 nova_compute[239456]: 2026-01-29 17:38:36.624 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 29 12:38:36 np0005601226 kernel: tap4e6145b0-82 (unregistering): left promiscuous mode
Jan 29 12:38:36 np0005601226 NetworkManager[49020]: <info>  [1769708316.6365] device (tap4e6145b0-82): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 29 12:38:36 np0005601226 ovn_controller[145556]: 2026-01-29T17:38:36Z|00277|binding|INFO|Releasing lport 4e6145b0-826c-49b0-8b2a-28d655d14899 from this chassis (sb_readonly=0)
Jan 29 12:38:36 np0005601226 nova_compute[239456]: 2026-01-29 17:38:36.647 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:38:36 np0005601226 ovn_controller[145556]: 2026-01-29T17:38:36Z|00278|binding|INFO|Setting lport 4e6145b0-826c-49b0-8b2a-28d655d14899 down in Southbound
Jan 29 12:38:36 np0005601226 ovn_controller[145556]: 2026-01-29T17:38:36Z|00279|binding|INFO|Removing iface tap4e6145b0-82 ovn-installed in OVS
Jan 29 12:38:36 np0005601226 nova_compute[239456]: 2026-01-29 17:38:36.650 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:38:36 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:38:36.658 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e8:15:aa 10.100.0.6'], port_security=['fa:16:3e:e8:15:aa 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '56cf922f-31d1-4f48-8716-abdd2671978f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-35a25c0c-d0e7-4163-9f2f-f825549dd56b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '33d35fb946054d9db9235dbdd0d016df', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7bca6414-cee1-409a-86e7-358a99d3081b 8e0ce9cf-0c46-4c00-a275-5a6d2fadcaed', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.218'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=41b160a0-bb2b-496f-b795-108b47495676, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>], logical_port=4e6145b0-826c-49b0-8b2a-28d655d14899) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f39551fbb80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:38:36 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:38:36.662 155625 INFO neutron.agent.ovn.metadata.agent [-] Port 4e6145b0-826c-49b0-8b2a-28d655d14899 in datapath 35a25c0c-d0e7-4163-9f2f-f825549dd56b unbound from our chassis#033[00m
Jan 29 12:38:36 np0005601226 nova_compute[239456]: 2026-01-29 17:38:36.663 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:38:36 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:38:36.667 155625 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 35a25c0c-d0e7-4163-9f2f-f825549dd56b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 29 12:38:36 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:38:36.668 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[d9ccd5fe-71e0-4a28-92ef-9b2e579355d7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:38:36 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:38:36.669 155625 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-35a25c0c-d0e7-4163-9f2f-f825549dd56b namespace which is not needed anymore#033[00m
Jan 29 12:38:36 np0005601226 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d0000001d.scope: Deactivated successfully.
Jan 29 12:38:36 np0005601226 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d0000001d.scope: Consumed 16.228s CPU time.
Jan 29 12:38:36 np0005601226 systemd-machined[207561]: Machine qemu-29-instance-0000001d terminated.
Jan 29 12:38:36 np0005601226 nova_compute[239456]: 2026-01-29 17:38:36.904 239460 INFO nova.virt.libvirt.driver [-] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Instance destroyed successfully.#033[00m
Jan 29 12:38:36 np0005601226 nova_compute[239456]: 2026-01-29 17:38:36.905 239460 DEBUG nova.objects.instance [None req-49fae32b-e356-4831-80e9-53df1694d381 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Lazy-loading 'resources' on Instance uuid 56cf922f-31d1-4f48-8716-abdd2671978f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 29 12:38:36 np0005601226 nova_compute[239456]: 2026-01-29 17:38:36.918 239460 DEBUG nova.virt.libvirt.vif [None req-49fae32b-e356-4831-80e9-53df1694d381 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-29T17:37:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SnapshotDataIntegrityTests-server-1005365471',display_name='tempest-SnapshotDataIntegrityTests-server-1005365471',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-snapshotdataintegritytests-server-1005365471',id=29,image_ref='71879218-5462-43bb-aba6-6319695b24fd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLgMFMfIktUCCKxYvS5fnRhCqfW6HEpOoqw9YPS+GQOTbjTJO0kG7z43BrWxUwymnJBw2tIDGs6YXdt13jdNV8JUGkOTcJ0PN1w+6Dxdc2BghZn+xW+KepwYNzkwsLtcUw==',key_name='tempest-keypair-1811024843',keypairs=<?>,launch_index=0,launched_at=2026-01-29T17:37:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='33d35fb946054d9db9235dbdd0d016df',ramdisk_id='',reservation_id='r-e8092dzq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='71879218-5462-43bb-aba6-6319695b24fd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SnapshotDataIntegrityTests-564071566',owner_user_name='tempest-SnapshotDataIntegrityTests-564071566-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-29T17:37:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a90a68eb18ea403bba234ab459af3366',uuid=56cf922f-31d1-4f48-8716-abdd2671978f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4e6145b0-826c-49b0-8b2a-28d655d14899", "address": "fa:16:3e:e8:15:aa", "network": {"id": "35a25c0c-d0e7-4163-9f2f-f825549dd56b", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-2027667864-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33d35fb946054d9db9235dbdd0d016df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e6145b0-82", "ovs_interfaceid": "4e6145b0-826c-49b0-8b2a-28d655d14899", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 29 12:38:36 np0005601226 nova_compute[239456]: 2026-01-29 17:38:36.919 239460 DEBUG nova.network.os_vif_util [None req-49fae32b-e356-4831-80e9-53df1694d381 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Converting VIF {"id": "4e6145b0-826c-49b0-8b2a-28d655d14899", "address": "fa:16:3e:e8:15:aa", "network": {"id": "35a25c0c-d0e7-4163-9f2f-f825549dd56b", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-2027667864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "33d35fb946054d9db9235dbdd0d016df", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4e6145b0-82", "ovs_interfaceid": "4e6145b0-826c-49b0-8b2a-28d655d14899", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 29 12:38:36 np0005601226 nova_compute[239456]: 2026-01-29 17:38:36.920 239460 DEBUG nova.network.os_vif_util [None req-49fae32b-e356-4831-80e9-53df1694d381 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e8:15:aa,bridge_name='br-int',has_traffic_filtering=True,id=4e6145b0-826c-49b0-8b2a-28d655d14899,network=Network(35a25c0c-d0e7-4163-9f2f-f825549dd56b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e6145b0-82') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 29 12:38:36 np0005601226 nova_compute[239456]: 2026-01-29 17:38:36.921 239460 DEBUG os_vif [None req-49fae32b-e356-4831-80e9-53df1694d381 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e8:15:aa,bridge_name='br-int',has_traffic_filtering=True,id=4e6145b0-826c-49b0-8b2a-28d655d14899,network=Network(35a25c0c-d0e7-4163-9f2f-f825549dd56b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e6145b0-82') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 29 12:38:36 np0005601226 nova_compute[239456]: 2026-01-29 17:38:36.923 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:38:36 np0005601226 nova_compute[239456]: 2026-01-29 17:38:36.923 239460 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4e6145b0-82, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:38:36 np0005601226 nova_compute[239456]: 2026-01-29 17:38:36.969 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:38:36 np0005601226 nova_compute[239456]: 2026-01-29 17:38:36.971 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:38:36 np0005601226 neutron-haproxy-ovnmeta-35a25c0c-d0e7-4163-9f2f-f825549dd56b[276096]: [NOTICE]   (276100) : haproxy version is 2.8.14-c23fe91
Jan 29 12:38:36 np0005601226 neutron-haproxy-ovnmeta-35a25c0c-d0e7-4163-9f2f-f825549dd56b[276096]: [NOTICE]   (276100) : path to executable is /usr/sbin/haproxy
Jan 29 12:38:36 np0005601226 neutron-haproxy-ovnmeta-35a25c0c-d0e7-4163-9f2f-f825549dd56b[276096]: [WARNING]  (276100) : Exiting Master process...
Jan 29 12:38:36 np0005601226 neutron-haproxy-ovnmeta-35a25c0c-d0e7-4163-9f2f-f825549dd56b[276096]: [WARNING]  (276100) : Exiting Master process...
Jan 29 12:38:36 np0005601226 neutron-haproxy-ovnmeta-35a25c0c-d0e7-4163-9f2f-f825549dd56b[276096]: [ALERT]    (276100) : Current worker (276102) exited with code 143 (Terminated)
Jan 29 12:38:36 np0005601226 neutron-haproxy-ovnmeta-35a25c0c-d0e7-4163-9f2f-f825549dd56b[276096]: [WARNING]  (276100) : All workers exited. Exiting... (0)
Jan 29 12:38:36 np0005601226 nova_compute[239456]: 2026-01-29 17:38:36.977 239460 INFO os_vif [None req-49fae32b-e356-4831-80e9-53df1694d381 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e8:15:aa,bridge_name='br-int',has_traffic_filtering=True,id=4e6145b0-826c-49b0-8b2a-28d655d14899,network=Network(35a25c0c-d0e7-4163-9f2f-f825549dd56b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4e6145b0-82')#033[00m
Jan 29 12:38:36 np0005601226 systemd[1]: libpod-f51cb92f55c4e9ff66a264937d81da593e1aa3e81c47b0f0afdbad38b341d720.scope: Deactivated successfully.
Jan 29 12:38:36 np0005601226 podman[276998]: 2026-01-29 17:38:36.985310946 +0000 UTC m=+0.235569045 container died f51cb92f55c4e9ff66a264937d81da593e1aa3e81c47b0f0afdbad38b341d720 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-35a25c0c-d0e7-4163-9f2f-f825549dd56b, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 29 12:38:37 np0005601226 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f51cb92f55c4e9ff66a264937d81da593e1aa3e81c47b0f0afdbad38b341d720-userdata-shm.mount: Deactivated successfully.
Jan 29 12:38:37 np0005601226 systemd[1]: var-lib-containers-storage-overlay-025b717dad1888cbc207048d42fce5801a03eae296b1e4b533d98e0d030fcfe8-merged.mount: Deactivated successfully.
Jan 29 12:38:37 np0005601226 podman[276998]: 2026-01-29 17:38:37.034806608 +0000 UTC m=+0.285064677 container cleanup f51cb92f55c4e9ff66a264937d81da593e1aa3e81c47b0f0afdbad38b341d720 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-35a25c0c-d0e7-4163-9f2f-f825549dd56b, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:38:37 np0005601226 systemd[1]: libpod-conmon-f51cb92f55c4e9ff66a264937d81da593e1aa3e81c47b0f0afdbad38b341d720.scope: Deactivated successfully.
Jan 29 12:38:37 np0005601226 podman[277053]: 2026-01-29 17:38:37.127588256 +0000 UTC m=+0.067092682 container remove f51cb92f55c4e9ff66a264937d81da593e1aa3e81c47b0f0afdbad38b341d720 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-35a25c0c-d0e7-4163-9f2f-f825549dd56b, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 29 12:38:37 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:38:37.135 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[29e4070b-27c4-4bf6-926e-ee6a619376db]: (4, ('Thu Jan 29 05:38:36 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-35a25c0c-d0e7-4163-9f2f-f825549dd56b (f51cb92f55c4e9ff66a264937d81da593e1aa3e81c47b0f0afdbad38b341d720)\nf51cb92f55c4e9ff66a264937d81da593e1aa3e81c47b0f0afdbad38b341d720\nThu Jan 29 05:38:37 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-35a25c0c-d0e7-4163-9f2f-f825549dd56b (f51cb92f55c4e9ff66a264937d81da593e1aa3e81c47b0f0afdbad38b341d720)\nf51cb92f55c4e9ff66a264937d81da593e1aa3e81c47b0f0afdbad38b341d720\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:38:37 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:38:37.139 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[3e6cffa9-0788-439a-a007-7e3fffee222f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:38:37 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:38:37.140 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap35a25c0c-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:38:37 np0005601226 nova_compute[239456]: 2026-01-29 17:38:37.143 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:38:37 np0005601226 kernel: tap35a25c0c-d0: left promiscuous mode
Jan 29 12:38:37 np0005601226 nova_compute[239456]: 2026-01-29 17:38:37.154 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:38:37 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:38:37.160 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[249fc7e9-000d-49b8-8a49-a49aa51a5900]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:38:37 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:38:37.174 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[353373a3-10e1-48f9-a4aa-70ef315f5c47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:38:37 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:38:37.176 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[5cbd6dd8-d38f-43fe-95c8-7db8c8e85b29]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:38:37 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:38:37.192 246354 DEBUG oslo.privsep.daemon [-] privsep: reply[09ed60df-fbbb-43fe-991a-2499e0930012]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 560131, 'reachable_time': 38433, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277073, 'error': None, 'target': 'ovnmeta-35a25c0c-d0e7-4163-9f2f-f825549dd56b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:38:37 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:38:37.195 156164 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-35a25c0c-d0e7-4163-9f2f-f825549dd56b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 29 12:38:37 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:38:37.196 156164 DEBUG oslo.privsep.daemon [-] privsep: reply[c3d3c39d-0564-4be6-a367-45f16c524967]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 29 12:38:37 np0005601226 systemd[1]: run-netns-ovnmeta\x2d35a25c0c\x2dd0e7\x2d4163\x2d9f2f\x2df825549dd56b.mount: Deactivated successfully.
Jan 29 12:38:37 np0005601226 nova_compute[239456]: 2026-01-29 17:38:37.330 239460 INFO nova.virt.libvirt.driver [None req-49fae32b-e356-4831-80e9-53df1694d381 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Deleting instance files /var/lib/nova/instances/56cf922f-31d1-4f48-8716-abdd2671978f_del#033[00m
Jan 29 12:38:37 np0005601226 nova_compute[239456]: 2026-01-29 17:38:37.331 239460 INFO nova.virt.libvirt.driver [None req-49fae32b-e356-4831-80e9-53df1694d381 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Deletion of /var/lib/nova/instances/56cf922f-31d1-4f48-8716-abdd2671978f_del complete#033[00m
Jan 29 12:38:37 np0005601226 nova_compute[239456]: 2026-01-29 17:38:37.383 239460 INFO nova.compute.manager [None req-49fae32b-e356-4831-80e9-53df1694d381 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Took 0.92 seconds to destroy the instance on the hypervisor.#033[00m
Jan 29 12:38:37 np0005601226 nova_compute[239456]: 2026-01-29 17:38:37.384 239460 DEBUG oslo.service.loopingcall [None req-49fae32b-e356-4831-80e9-53df1694d381 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 29 12:38:37 np0005601226 nova_compute[239456]: 2026-01-29 17:38:37.384 239460 DEBUG nova.compute.manager [-] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 29 12:38:37 np0005601226 nova_compute[239456]: 2026-01-29 17:38:37.384 239460 DEBUG nova.network.neutron [-] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 29 12:38:37 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e518 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:38:37 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e518 do_prune osdmap full prune enabled
Jan 29 12:38:37 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e519 e519: 3 total, 3 up, 3 in
Jan 29 12:38:37 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e519: 3 total, 3 up, 3 in
Jan 29 12:38:37 np0005601226 nova_compute[239456]: 2026-01-29 17:38:37.510 239460 DEBUG nova.compute.manager [req-9487fca2-2f9d-449c-87ad-c5c64a895a9e req-9be0b873-628c-4287-bd41-59428e3d87a8 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Received event network-vif-unplugged-4e6145b0-826c-49b0-8b2a-28d655d14899 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:38:37 np0005601226 nova_compute[239456]: 2026-01-29 17:38:37.510 239460 DEBUG oslo_concurrency.lockutils [req-9487fca2-2f9d-449c-87ad-c5c64a895a9e req-9be0b873-628c-4287-bd41-59428e3d87a8 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "56cf922f-31d1-4f48-8716-abdd2671978f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:38:37 np0005601226 nova_compute[239456]: 2026-01-29 17:38:37.511 239460 DEBUG oslo_concurrency.lockutils [req-9487fca2-2f9d-449c-87ad-c5c64a895a9e req-9be0b873-628c-4287-bd41-59428e3d87a8 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "56cf922f-31d1-4f48-8716-abdd2671978f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:38:37 np0005601226 nova_compute[239456]: 2026-01-29 17:38:37.511 239460 DEBUG oslo_concurrency.lockutils [req-9487fca2-2f9d-449c-87ad-c5c64a895a9e req-9be0b873-628c-4287-bd41-59428e3d87a8 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "56cf922f-31d1-4f48-8716-abdd2671978f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:38:37 np0005601226 nova_compute[239456]: 2026-01-29 17:38:37.511 239460 DEBUG nova.compute.manager [req-9487fca2-2f9d-449c-87ad-c5c64a895a9e req-9be0b873-628c-4287-bd41-59428e3d87a8 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] No waiting events found dispatching network-vif-unplugged-4e6145b0-826c-49b0-8b2a-28d655d14899 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 29 12:38:37 np0005601226 nova_compute[239456]: 2026-01-29 17:38:37.512 239460 DEBUG nova.compute.manager [req-9487fca2-2f9d-449c-87ad-c5c64a895a9e req-9be0b873-628c-4287-bd41-59428e3d87a8 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Received event network-vif-unplugged-4e6145b0-826c-49b0-8b2a-28d655d14899 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 29 12:38:37 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:38:37.649 155625 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=26, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '7a:bb:ca', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ee:2a:91:86:08:da'}, ipsec=False) old=SB_Global(nb_cfg=25) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 29 12:38:37 np0005601226 nova_compute[239456]: 2026-01-29 17:38:37.650 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:38:37 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:38:37.650 155625 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 29 12:38:38 np0005601226 nova_compute[239456]: 2026-01-29 17:38:38.282 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:38:38 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1979: 305 pgs: 2 active+clean+snaptrim, 2 active+clean+snaptrim_wait, 301 active+clean; 323 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 66 KiB/s rd, 5.1 KiB/s wr, 94 op/s
Jan 29 12:38:39 np0005601226 nova_compute[239456]: 2026-01-29 17:38:39.248 239460 DEBUG nova.network.neutron [-] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 29 12:38:39 np0005601226 nova_compute[239456]: 2026-01-29 17:38:39.271 239460 INFO nova.compute.manager [-] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Took 1.89 seconds to deallocate network for instance.#033[00m
Jan 29 12:38:39 np0005601226 nova_compute[239456]: 2026-01-29 17:38:39.334 239460 DEBUG oslo_concurrency.lockutils [None req-49fae32b-e356-4831-80e9-53df1694d381 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:38:39 np0005601226 nova_compute[239456]: 2026-01-29 17:38:39.334 239460 DEBUG oslo_concurrency.lockutils [None req-49fae32b-e356-4831-80e9-53df1694d381 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:38:39 np0005601226 nova_compute[239456]: 2026-01-29 17:38:39.381 239460 DEBUG nova.compute.manager [req-6cc4d026-17c2-4d0f-81d1-07ba3cef7aa4 req-8a8a8a64-c483-4bba-a7b0-be32e1856cf6 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Received event network-vif-deleted-4e6145b0-826c-49b0-8b2a-28d655d14899 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:38:39 np0005601226 nova_compute[239456]: 2026-01-29 17:38:39.505 239460 DEBUG oslo_concurrency.processutils [None req-49fae32b-e356-4831-80e9-53df1694d381 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:38:39 np0005601226 nova_compute[239456]: 2026-01-29 17:38:39.612 239460 DEBUG nova.compute.manager [req-59648ad6-4917-44d0-bd1a-114e9d9152a4 req-d9d40477-387a-4790-8bf6-daa7b1658939 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Received event network-vif-plugged-4e6145b0-826c-49b0-8b2a-28d655d14899 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 29 12:38:39 np0005601226 nova_compute[239456]: 2026-01-29 17:38:39.613 239460 DEBUG oslo_concurrency.lockutils [req-59648ad6-4917-44d0-bd1a-114e9d9152a4 req-d9d40477-387a-4790-8bf6-daa7b1658939 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Acquiring lock "56cf922f-31d1-4f48-8716-abdd2671978f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:38:39 np0005601226 nova_compute[239456]: 2026-01-29 17:38:39.613 239460 DEBUG oslo_concurrency.lockutils [req-59648ad6-4917-44d0-bd1a-114e9d9152a4 req-d9d40477-387a-4790-8bf6-daa7b1658939 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "56cf922f-31d1-4f48-8716-abdd2671978f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:38:39 np0005601226 nova_compute[239456]: 2026-01-29 17:38:39.614 239460 DEBUG oslo_concurrency.lockutils [req-59648ad6-4917-44d0-bd1a-114e9d9152a4 req-d9d40477-387a-4790-8bf6-daa7b1658939 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] Lock "56cf922f-31d1-4f48-8716-abdd2671978f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:38:39 np0005601226 nova_compute[239456]: 2026-01-29 17:38:39.614 239460 DEBUG nova.compute.manager [req-59648ad6-4917-44d0-bd1a-114e9d9152a4 req-d9d40477-387a-4790-8bf6-daa7b1658939 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] No waiting events found dispatching network-vif-plugged-4e6145b0-826c-49b0-8b2a-28d655d14899 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 29 12:38:39 np0005601226 nova_compute[239456]: 2026-01-29 17:38:39.615 239460 WARNING nova.compute.manager [req-59648ad6-4917-44d0-bd1a-114e9d9152a4 req-d9d40477-387a-4790-8bf6-daa7b1658939 9bdc23332c934342b6fa4b88dd284ff3 f98f0e230844402bbff2d83d2cf3e2b4 - - default default] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Received unexpected event network-vif-plugged-4e6145b0-826c-49b0-8b2a-28d655d14899 for instance with vm_state deleted and task_state None.#033[00m
Jan 29 12:38:39 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:38:39 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3248924993' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:38:39 np0005601226 nova_compute[239456]: 2026-01-29 17:38:39.982 239460 DEBUG oslo_concurrency.processutils [None req-49fae32b-e356-4831-80e9-53df1694d381 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:38:39 np0005601226 nova_compute[239456]: 2026-01-29 17:38:39.988 239460 DEBUG nova.compute.provider_tree [None req-49fae32b-e356-4831-80e9-53df1694d381 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:38:40 np0005601226 nova_compute[239456]: 2026-01-29 17:38:40.008 239460 DEBUG nova.scheduler.client.report [None req-49fae32b-e356-4831-80e9-53df1694d381 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:38:40 np0005601226 nova_compute[239456]: 2026-01-29 17:38:40.040 239460 DEBUG oslo_concurrency.lockutils [None req-49fae32b-e356-4831-80e9-53df1694d381 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.706s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:38:40 np0005601226 nova_compute[239456]: 2026-01-29 17:38:40.122 239460 INFO nova.scheduler.client.report [None req-49fae32b-e356-4831-80e9-53df1694d381 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Deleted allocations for instance 56cf922f-31d1-4f48-8716-abdd2671978f#033[00m
Jan 29 12:38:40 np0005601226 nova_compute[239456]: 2026-01-29 17:38:40.208 239460 DEBUG oslo_concurrency.lockutils [None req-49fae32b-e356-4831-80e9-53df1694d381 a90a68eb18ea403bba234ab459af3366 33d35fb946054d9db9235dbdd0d016df - - default default] Lock "56cf922f-31d1-4f48-8716-abdd2671978f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.752s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:38:40 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1980: 305 pgs: 305 active+clean; 271 MiB data, 646 MiB used, 59 GiB / 60 GiB avail; 83 KiB/s rd, 5.7 KiB/s wr, 118 op/s
Jan 29 12:38:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:38:40.302 155625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:38:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:38:40.302 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:38:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:38:40.303 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:38:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:38:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:38:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:38:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:38:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:38:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:38:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2026-01-29_17:38:40
Jan 29 12:38:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 12:38:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] do_upmap
Jan 29 12:38:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] pools ['cephfs.cephfs.data', 'volumes', '.mgr', 'backups', 'default.rgw.meta', 'default.rgw.control', '.rgw.root', 'images', 'vms', 'default.rgw.log', 'cephfs.cephfs.meta']
Jan 29 12:38:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 12:38:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 12:38:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:38:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 12:38:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:38:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:38:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:38:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:38:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:38:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:38:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:38:40 np0005601226 podman[277097]: 2026-01-29 17:38:40.906238033 +0000 UTC m=+0.065959191 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202)
Jan 29 12:38:40 np0005601226 podman[277098]: 2026-01-29 17:38:40.944302115 +0000 UTC m=+0.100927489 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3)
Jan 29 12:38:41 np0005601226 nova_compute[239456]: 2026-01-29 17:38:41.604 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:38:41 np0005601226 nova_compute[239456]: 2026-01-29 17:38:41.646 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:38:41 np0005601226 nova_compute[239456]: 2026-01-29 17:38:41.646 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:38:41 np0005601226 nova_compute[239456]: 2026-01-29 17:38:41.646 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:38:41 np0005601226 nova_compute[239456]: 2026-01-29 17:38:41.647 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 29 12:38:41 np0005601226 nova_compute[239456]: 2026-01-29 17:38:41.647 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:38:42 np0005601226 nova_compute[239456]: 2026-01-29 17:38:42.019 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:38:42 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:38:42 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3534283950' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:38:42 np0005601226 nova_compute[239456]: 2026-01-29 17:38:42.180 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:38:42 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1981: 305 pgs: 305 active+clean; 271 MiB data, 646 MiB used, 59 GiB / 60 GiB avail; 73 KiB/s rd, 4.8 KiB/s wr, 102 op/s
Jan 29 12:38:42 np0005601226 nova_compute[239456]: 2026-01-29 17:38:42.402 239460 WARNING nova.virt.libvirt.driver [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 29 12:38:42 np0005601226 nova_compute[239456]: 2026-01-29 17:38:42.404 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4290MB free_disk=59.98814365174621GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 29 12:38:42 np0005601226 nova_compute[239456]: 2026-01-29 17:38:42.404 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:38:42 np0005601226 nova_compute[239456]: 2026-01-29 17:38:42.405 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:38:42 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e519 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:38:42 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e519 do_prune osdmap full prune enabled
Jan 29 12:38:42 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e520 e520: 3 total, 3 up, 3 in
Jan 29 12:38:42 np0005601226 ceph-mon[75233]: log_channel(cluster) log [DBG] : osdmap e520: 3 total, 3 up, 3 in
Jan 29 12:38:42 np0005601226 nova_compute[239456]: 2026-01-29 17:38:42.473 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 29 12:38:42 np0005601226 nova_compute[239456]: 2026-01-29 17:38:42.474 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 29 12:38:42 np0005601226 nova_compute[239456]: 2026-01-29 17:38:42.492 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:38:42 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:38:42 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1488097955' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:38:43 np0005601226 nova_compute[239456]: 2026-01-29 17:38:43.010 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:38:43 np0005601226 nova_compute[239456]: 2026-01-29 17:38:43.015 239460 DEBUG nova.compute.provider_tree [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:38:43 np0005601226 nova_compute[239456]: 2026-01-29 17:38:43.031 239460 DEBUG nova.scheduler.client.report [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:38:43 np0005601226 nova_compute[239456]: 2026-01-29 17:38:43.054 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 29 12:38:43 np0005601226 nova_compute[239456]: 2026-01-29 17:38:43.054 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.649s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:38:43 np0005601226 nova_compute[239456]: 2026-01-29 17:38:43.284 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:38:44 np0005601226 nova_compute[239456]: 2026-01-29 17:38:44.055 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:38:44 np0005601226 nova_compute[239456]: 2026-01-29 17:38:44.055 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:38:44 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1983: 305 pgs: 305 active+clean; 271 MiB data, 636 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 2.7 KiB/s wr, 65 op/s
Jan 29 12:38:44 np0005601226 nova_compute[239456]: 2026-01-29 17:38:44.603 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:38:44 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:38:44.653 155625 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ea6bcc65-2563-4fe6-9039-bca7261f4cf7, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '26'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 29 12:38:44 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 12:38:44 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 12:38:44 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 12:38:44 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 12:38:44 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 12:38:44 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:38:44 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 12:38:44 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 12:38:44 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 12:38:44 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 12:38:44 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 12:38:44 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 12:38:45 np0005601226 podman[277332]: 2026-01-29 17:38:45.112864472 +0000 UTC m=+0.050153381 container create a7cef0fdc5d26c604720362461183bb3fcb45cba8fbb2d5b90fb3d65ace437fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_dubinsky, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:38:45 np0005601226 systemd[1]: Started libpod-conmon-a7cef0fdc5d26c604720362461183bb3fcb45cba8fbb2d5b90fb3d65ace437fe.scope.
Jan 29 12:38:45 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:38:45 np0005601226 podman[277332]: 2026-01-29 17:38:45.08547384 +0000 UTC m=+0.022762789 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:38:45 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 12:38:45 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:38:45 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 12:38:45 np0005601226 podman[277332]: 2026-01-29 17:38:45.196919214 +0000 UTC m=+0.134208203 container init a7cef0fdc5d26c604720362461183bb3fcb45cba8fbb2d5b90fb3d65ace437fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_dubinsky, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:38:45 np0005601226 podman[277332]: 2026-01-29 17:38:45.205854697 +0000 UTC m=+0.143143636 container start a7cef0fdc5d26c604720362461183bb3fcb45cba8fbb2d5b90fb3d65ace437fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:38:45 np0005601226 podman[277332]: 2026-01-29 17:38:45.209949168 +0000 UTC m=+0.147238157 container attach a7cef0fdc5d26c604720362461183bb3fcb45cba8fbb2d5b90fb3d65ace437fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_dubinsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 29 12:38:45 np0005601226 gallant_dubinsky[277348]: 167 167
Jan 29 12:38:45 np0005601226 systemd[1]: libpod-a7cef0fdc5d26c604720362461183bb3fcb45cba8fbb2d5b90fb3d65ace437fe.scope: Deactivated successfully.
Jan 29 12:38:45 np0005601226 podman[277332]: 2026-01-29 17:38:45.211742916 +0000 UTC m=+0.149031845 container died a7cef0fdc5d26c604720362461183bb3fcb45cba8fbb2d5b90fb3d65ace437fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_dubinsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 29 12:38:45 np0005601226 systemd[1]: var-lib-containers-storage-overlay-f957d5adcb891e25f1238b39a936f4dc9e2e46521b34544d21f810a15a84ae09-merged.mount: Deactivated successfully.
Jan 29 12:38:45 np0005601226 podman[277332]: 2026-01-29 17:38:45.254007663 +0000 UTC m=+0.191296592 container remove a7cef0fdc5d26c604720362461183bb3fcb45cba8fbb2d5b90fb3d65ace437fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_dubinsky, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 29 12:38:45 np0005601226 systemd[1]: libpod-conmon-a7cef0fdc5d26c604720362461183bb3fcb45cba8fbb2d5b90fb3d65ace437fe.scope: Deactivated successfully.
Jan 29 12:38:45 np0005601226 podman[277371]: 2026-01-29 17:38:45.43960991 +0000 UTC m=+0.055975220 container create 9e0ccea69b98ffd0c0416caf7cc1ee3da5424feea157a3c8ee7dd19c7e5a6d03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_williams, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 29 12:38:45 np0005601226 systemd[1]: Started libpod-conmon-9e0ccea69b98ffd0c0416caf7cc1ee3da5424feea157a3c8ee7dd19c7e5a6d03.scope.
Jan 29 12:38:45 np0005601226 podman[277371]: 2026-01-29 17:38:45.416264427 +0000 UTC m=+0.032629797 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:38:45 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:38:45 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db9f4bfc6fa153e2c249690ae4d39e5922f0f61fef1d90b25aebfab277ed250e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:38:45 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db9f4bfc6fa153e2c249690ae4d39e5922f0f61fef1d90b25aebfab277ed250e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:38:45 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db9f4bfc6fa153e2c249690ae4d39e5922f0f61fef1d90b25aebfab277ed250e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:38:45 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db9f4bfc6fa153e2c249690ae4d39e5922f0f61fef1d90b25aebfab277ed250e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:38:45 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db9f4bfc6fa153e2c249690ae4d39e5922f0f61fef1d90b25aebfab277ed250e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 12:38:45 np0005601226 podman[277371]: 2026-01-29 17:38:45.542101952 +0000 UTC m=+0.158467262 container init 9e0ccea69b98ffd0c0416caf7cc1ee3da5424feea157a3c8ee7dd19c7e5a6d03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_williams, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:38:45 np0005601226 podman[277371]: 2026-01-29 17:38:45.557627993 +0000 UTC m=+0.173993293 container start 9e0ccea69b98ffd0c0416caf7cc1ee3da5424feea157a3c8ee7dd19c7e5a6d03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_williams, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:38:45 np0005601226 podman[277371]: 2026-01-29 17:38:45.562537357 +0000 UTC m=+0.178902667 container attach 9e0ccea69b98ffd0c0416caf7cc1ee3da5424feea157a3c8ee7dd19c7e5a6d03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_williams, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 29 12:38:45 np0005601226 reverent_williams[277387]: --> passed data devices: 0 physical, 3 LVM
Jan 29 12:38:45 np0005601226 reverent_williams[277387]: --> All data devices are unavailable
Jan 29 12:38:46 np0005601226 systemd[1]: libpod-9e0ccea69b98ffd0c0416caf7cc1ee3da5424feea157a3c8ee7dd19c7e5a6d03.scope: Deactivated successfully.
Jan 29 12:38:46 np0005601226 podman[277371]: 2026-01-29 17:38:46.031063071 +0000 UTC m=+0.647428381 container died 9e0ccea69b98ffd0c0416caf7cc1ee3da5424feea157a3c8ee7dd19c7e5a6d03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_williams, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 29 12:38:46 np0005601226 systemd[1]: var-lib-containers-storage-overlay-db9f4bfc6fa153e2c249690ae4d39e5922f0f61fef1d90b25aebfab277ed250e-merged.mount: Deactivated successfully.
Jan 29 12:38:46 np0005601226 podman[277371]: 2026-01-29 17:38:46.081862 +0000 UTC m=+0.698227310 container remove 9e0ccea69b98ffd0c0416caf7cc1ee3da5424feea157a3c8ee7dd19c7e5a6d03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_williams, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:38:46 np0005601226 systemd[1]: libpod-conmon-9e0ccea69b98ffd0c0416caf7cc1ee3da5424feea157a3c8ee7dd19c7e5a6d03.scope: Deactivated successfully.
Jan 29 12:38:46 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1984: 305 pgs: 305 active+clean; 271 MiB data, 636 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 2.5 KiB/s wr, 59 op/s
Jan 29 12:38:46 np0005601226 podman[277482]: 2026-01-29 17:38:46.568317002 +0000 UTC m=+0.053818422 container create d690e4c3c61fcc916dd6a5403c3e24e735817f24d6fc10b657d1788e5d4e1457 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_bose, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 29 12:38:46 np0005601226 systemd[1]: Started libpod-conmon-d690e4c3c61fcc916dd6a5403c3e24e735817f24d6fc10b657d1788e5d4e1457.scope.
Jan 29 12:38:46 np0005601226 podman[277482]: 2026-01-29 17:38:46.544143666 +0000 UTC m=+0.029645126 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:38:46 np0005601226 nova_compute[239456]: 2026-01-29 17:38:46.635 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:38:46 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:38:46 np0005601226 podman[277482]: 2026-01-29 17:38:46.666570238 +0000 UTC m=+0.152071648 container init d690e4c3c61fcc916dd6a5403c3e24e735817f24d6fc10b657d1788e5d4e1457 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_bose, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 29 12:38:46 np0005601226 podman[277482]: 2026-01-29 17:38:46.675386377 +0000 UTC m=+0.160887797 container start d690e4c3c61fcc916dd6a5403c3e24e735817f24d6fc10b657d1788e5d4e1457 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_bose, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:38:46 np0005601226 podman[277482]: 2026-01-29 17:38:46.679132209 +0000 UTC m=+0.164633609 container attach d690e4c3c61fcc916dd6a5403c3e24e735817f24d6fc10b657d1788e5d4e1457 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_bose, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 29 12:38:46 np0005601226 interesting_bose[277498]: 167 167
Jan 29 12:38:46 np0005601226 systemd[1]: libpod-d690e4c3c61fcc916dd6a5403c3e24e735817f24d6fc10b657d1788e5d4e1457.scope: Deactivated successfully.
Jan 29 12:38:46 np0005601226 podman[277482]: 2026-01-29 17:38:46.683435886 +0000 UTC m=+0.168937296 container died d690e4c3c61fcc916dd6a5403c3e24e735817f24d6fc10b657d1788e5d4e1457 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:38:46 np0005601226 nova_compute[239456]: 2026-01-29 17:38:46.725 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:38:46 np0005601226 systemd[1]: var-lib-containers-storage-overlay-01299f31493999a5dc8b59bca87d547997103e0fddfa379b6e8f9b80f2c31295-merged.mount: Deactivated successfully.
Jan 29 12:38:46 np0005601226 podman[277482]: 2026-01-29 17:38:46.75397505 +0000 UTC m=+0.239476470 container remove d690e4c3c61fcc916dd6a5403c3e24e735817f24d6fc10b657d1788e5d4e1457 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_bose, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:38:46 np0005601226 systemd[1]: libpod-conmon-d690e4c3c61fcc916dd6a5403c3e24e735817f24d6fc10b657d1788e5d4e1457.scope: Deactivated successfully.
Jan 29 12:38:46 np0005601226 nova_compute[239456]: 2026-01-29 17:38:46.845 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:38:46 np0005601226 podman[277524]: 2026-01-29 17:38:46.956454625 +0000 UTC m=+0.062806756 container create d8a02a199969b0488b3c1367a5864b250dc3369fd04c4e3494e65ed35b4c70aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 29 12:38:47 np0005601226 systemd[1]: Started libpod-conmon-d8a02a199969b0488b3c1367a5864b250dc3369fd04c4e3494e65ed35b4c70aa.scope.
Jan 29 12:38:47 np0005601226 nova_compute[239456]: 2026-01-29 17:38:47.021 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:38:47 np0005601226 podman[277524]: 2026-01-29 17:38:46.932446214 +0000 UTC m=+0.038798345 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:38:47 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:38:47 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9062754e9edecd449767f4932e556beffaaf3ab78eb3cd8584b782db78a133ca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:38:47 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9062754e9edecd449767f4932e556beffaaf3ab78eb3cd8584b782db78a133ca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:38:47 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9062754e9edecd449767f4932e556beffaaf3ab78eb3cd8584b782db78a133ca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:38:47 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9062754e9edecd449767f4932e556beffaaf3ab78eb3cd8584b782db78a133ca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:38:47 np0005601226 podman[277524]: 2026-01-29 17:38:47.059288386 +0000 UTC m=+0.165640567 container init d8a02a199969b0488b3c1367a5864b250dc3369fd04c4e3494e65ed35b4c70aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 29 12:38:47 np0005601226 podman[277524]: 2026-01-29 17:38:47.068386943 +0000 UTC m=+0.174739034 container start d8a02a199969b0488b3c1367a5864b250dc3369fd04c4e3494e65ed35b4c70aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_stonebraker, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 29 12:38:47 np0005601226 podman[277524]: 2026-01-29 17:38:47.071509167 +0000 UTC m=+0.177861328 container attach d8a02a199969b0488b3c1367a5864b250dc3369fd04c4e3494e65ed35b4c70aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]: {
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:    "0": [
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:        {
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:            "devices": [
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:                "/dev/loop3"
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:            ],
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:            "lv_name": "ceph_lv0",
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:            "lv_size": "21470642176",
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:            "lv_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:            "name": "ceph_lv0",
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:            "tags": {
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:                "ceph.block_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:                "ceph.cluster_name": "ceph",
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:                "ceph.crush_device_class": "",
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:                "ceph.encrypted": "0",
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:                "ceph.objectstore": "bluestore",
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:                "ceph.osd_fsid": "0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1",
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:                "ceph.osd_id": "0",
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:                "ceph.type": "block",
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:                "ceph.vdo": "0",
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:                "ceph.with_tpm": "0"
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:            },
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:            "type": "block",
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:            "vg_name": "ceph_vg0"
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:        }
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:    ],
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:    "1": [
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:        {
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:            "devices": [
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:                "/dev/loop4"
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:            ],
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:            "lv_name": "ceph_lv1",
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:            "lv_size": "21470642176",
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b59b9ee3-7bef-4274-a8bf-0f9cce011ae7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:            "lv_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:            "name": "ceph_lv1",
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:            "tags": {
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:                "ceph.block_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:                "ceph.cluster_name": "ceph",
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:                "ceph.crush_device_class": "",
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:                "ceph.encrypted": "0",
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:                "ceph.objectstore": "bluestore",
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:                "ceph.osd_fsid": "b59b9ee3-7bef-4274-a8bf-0f9cce011ae7",
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:                "ceph.osd_id": "1",
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:                "ceph.type": "block",
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:                "ceph.vdo": "0",
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:                "ceph.with_tpm": "0"
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:            },
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:            "type": "block",
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:            "vg_name": "ceph_vg1"
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:        }
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:    ],
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:    "2": [
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:        {
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:            "devices": [
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:                "/dev/loop5"
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:            ],
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:            "lv_name": "ceph_lv2",
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:            "lv_size": "21470642176",
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=791d3808-828d-4f85-a3de-28df49f6a6ef,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:            "lv_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:            "name": "ceph_lv2",
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:            "tags": {
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:                "ceph.block_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:                "ceph.cluster_name": "ceph",
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:                "ceph.crush_device_class": "",
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:                "ceph.encrypted": "0",
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:                "ceph.objectstore": "bluestore",
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:                "ceph.osd_fsid": "791d3808-828d-4f85-a3de-28df49f6a6ef",
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:                "ceph.osd_id": "2",
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:                "ceph.type": "block",
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:                "ceph.vdo": "0",
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:                "ceph.with_tpm": "0"
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:            },
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:            "type": "block",
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:            "vg_name": "ceph_vg2"
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:        }
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]:    ]
Jan 29 12:38:47 np0005601226 distracted_stonebraker[277540]: }
Jan 29 12:38:47 np0005601226 systemd[1]: libpod-d8a02a199969b0488b3c1367a5864b250dc3369fd04c4e3494e65ed35b4c70aa.scope: Deactivated successfully.
Jan 29 12:38:47 np0005601226 podman[277524]: 2026-01-29 17:38:47.372640969 +0000 UTC m=+0.478993100 container died d8a02a199969b0488b3c1367a5864b250dc3369fd04c4e3494e65ed35b4c70aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_stonebraker, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Jan 29 12:38:47 np0005601226 systemd[1]: var-lib-containers-storage-overlay-9062754e9edecd449767f4932e556beffaaf3ab78eb3cd8584b782db78a133ca-merged.mount: Deactivated successfully.
Jan 29 12:38:47 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e520 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:38:47 np0005601226 podman[277524]: 2026-01-29 17:38:47.424756915 +0000 UTC m=+0.531109046 container remove d8a02a199969b0488b3c1367a5864b250dc3369fd04c4e3494e65ed35b4c70aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_stonebraker, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 29 12:38:47 np0005601226 systemd[1]: libpod-conmon-d8a02a199969b0488b3c1367a5864b250dc3369fd04c4e3494e65ed35b4c70aa.scope: Deactivated successfully.
Jan 29 12:38:47 np0005601226 nova_compute[239456]: 2026-01-29 17:38:47.620 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:38:47 np0005601226 podman[277624]: 2026-01-29 17:38:47.932886073 +0000 UTC m=+0.053818731 container create d5a13f36a7042d916ef3548cf265a44a00ce329958f6acb370c4ec794c1a0b40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_swanson, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 29 12:38:47 np0005601226 systemd[1]: Started libpod-conmon-d5a13f36a7042d916ef3548cf265a44a00ce329958f6acb370c4ec794c1a0b40.scope.
Jan 29 12:38:48 np0005601226 podman[277624]: 2026-01-29 17:38:47.905717707 +0000 UTC m=+0.026650415 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:38:48 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:38:48 np0005601226 podman[277624]: 2026-01-29 17:38:48.021852058 +0000 UTC m=+0.142784706 container init d5a13f36a7042d916ef3548cf265a44a00ce329958f6acb370c4ec794c1a0b40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_swanson, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 12:38:48 np0005601226 podman[277624]: 2026-01-29 17:38:48.030839462 +0000 UTC m=+0.151772120 container start d5a13f36a7042d916ef3548cf265a44a00ce329958f6acb370c4ec794c1a0b40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_swanson, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 29 12:38:48 np0005601226 podman[277624]: 2026-01-29 17:38:48.035289643 +0000 UTC m=+0.156222271 container attach d5a13f36a7042d916ef3548cf265a44a00ce329958f6acb370c4ec794c1a0b40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True)
Jan 29 12:38:48 np0005601226 amazing_swanson[277641]: 167 167
Jan 29 12:38:48 np0005601226 systemd[1]: libpod-d5a13f36a7042d916ef3548cf265a44a00ce329958f6acb370c4ec794c1a0b40.scope: Deactivated successfully.
Jan 29 12:38:48 np0005601226 conmon[277641]: conmon d5a13f36a7042d916ef3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d5a13f36a7042d916ef3548cf265a44a00ce329958f6acb370c4ec794c1a0b40.scope/container/memory.events
Jan 29 12:38:48 np0005601226 podman[277624]: 2026-01-29 17:38:48.038818679 +0000 UTC m=+0.159751337 container died d5a13f36a7042d916ef3548cf265a44a00ce329958f6acb370c4ec794c1a0b40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_swanson, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True)
Jan 29 12:38:48 np0005601226 systemd[1]: var-lib-containers-storage-overlay-ecf67fb60045cd184e3ee43f3f97774284a82e480571c83930b1b6d01f4410ce-merged.mount: Deactivated successfully.
Jan 29 12:38:48 np0005601226 podman[277624]: 2026-01-29 17:38:48.086855852 +0000 UTC m=+0.207788480 container remove d5a13f36a7042d916ef3548cf265a44a00ce329958f6acb370c4ec794c1a0b40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_swanson, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 29 12:38:48 np0005601226 systemd[1]: libpod-conmon-d5a13f36a7042d916ef3548cf265a44a00ce329958f6acb370c4ec794c1a0b40.scope: Deactivated successfully.
Jan 29 12:38:48 np0005601226 podman[277664]: 2026-01-29 17:38:48.24004935 +0000 UTC m=+0.047974473 container create fdf6f42851a95f28405070ca51a9a6616259cea5f4362190d992fb959f708502 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_blackburn, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 29 12:38:48 np0005601226 systemd[1]: Started libpod-conmon-fdf6f42851a95f28405070ca51a9a6616259cea5f4362190d992fb959f708502.scope.
Jan 29 12:38:48 np0005601226 nova_compute[239456]: 2026-01-29 17:38:48.285 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:38:48 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1985: 305 pgs: 305 active+clean; 271 MiB data, 636 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.3 KiB/s wr, 33 op/s
Jan 29 12:38:48 np0005601226 podman[277664]: 2026-01-29 17:38:48.218090044 +0000 UTC m=+0.026015207 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:38:48 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:38:48 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6214476b07fc3f9bf5f791fb196162ed7001fa3122481c6e370de8a1ee39c717/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:38:48 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6214476b07fc3f9bf5f791fb196162ed7001fa3122481c6e370de8a1ee39c717/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:38:48 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6214476b07fc3f9bf5f791fb196162ed7001fa3122481c6e370de8a1ee39c717/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:38:48 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6214476b07fc3f9bf5f791fb196162ed7001fa3122481c6e370de8a1ee39c717/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:38:48 np0005601226 podman[277664]: 2026-01-29 17:38:48.34913023 +0000 UTC m=+0.157055343 container init fdf6f42851a95f28405070ca51a9a6616259cea5f4362190d992fb959f708502 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_blackburn, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:38:48 np0005601226 podman[277664]: 2026-01-29 17:38:48.361006552 +0000 UTC m=+0.168931665 container start fdf6f42851a95f28405070ca51a9a6616259cea5f4362190d992fb959f708502 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_blackburn, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:38:48 np0005601226 podman[277664]: 2026-01-29 17:38:48.365480913 +0000 UTC m=+0.173406076 container attach fdf6f42851a95f28405070ca51a9a6616259cea5f4362190d992fb959f708502 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 29 12:38:48 np0005601226 lvm[277760]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 12:38:48 np0005601226 lvm[277760]: VG ceph_vg1 finished
Jan 29 12:38:48 np0005601226 lvm[277758]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 12:38:48 np0005601226 lvm[277758]: VG ceph_vg0 finished
Jan 29 12:38:48 np0005601226 lvm[277761]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 12:38:48 np0005601226 lvm[277761]: VG ceph_vg2 finished
Jan 29 12:38:49 np0005601226 hardcore_blackburn[277680]: {}
Jan 29 12:38:49 np0005601226 systemd[1]: libpod-fdf6f42851a95f28405070ca51a9a6616259cea5f4362190d992fb959f708502.scope: Deactivated successfully.
Jan 29 12:38:49 np0005601226 systemd[1]: libpod-fdf6f42851a95f28405070ca51a9a6616259cea5f4362190d992fb959f708502.scope: Consumed 1.038s CPU time.
Jan 29 12:38:49 np0005601226 podman[277664]: 2026-01-29 17:38:49.089579645 +0000 UTC m=+0.897504758 container died fdf6f42851a95f28405070ca51a9a6616259cea5f4362190d992fb959f708502 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_blackburn, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 29 12:38:49 np0005601226 systemd[1]: var-lib-containers-storage-overlay-6214476b07fc3f9bf5f791fb196162ed7001fa3122481c6e370de8a1ee39c717-merged.mount: Deactivated successfully.
Jan 29 12:38:49 np0005601226 podman[277664]: 2026-01-29 17:38:49.136353674 +0000 UTC m=+0.944278797 container remove fdf6f42851a95f28405070ca51a9a6616259cea5f4362190d992fb959f708502 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_blackburn, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 12:38:49 np0005601226 systemd[1]: libpod-conmon-fdf6f42851a95f28405070ca51a9a6616259cea5f4362190d992fb959f708502.scope: Deactivated successfully.
Jan 29 12:38:49 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 12:38:49 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:38:49 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 12:38:49 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:38:49 np0005601226 nova_compute[239456]: 2026-01-29 17:38:49.603 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:38:49 np0005601226 nova_compute[239456]: 2026-01-29 17:38:49.604 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 29 12:38:49 np0005601226 nova_compute[239456]: 2026-01-29 17:38:49.604 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 29 12:38:49 np0005601226 nova_compute[239456]: 2026-01-29 17:38:49.623 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 29 12:38:50 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:38:50 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:38:50 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1986: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 29 12:38:50 np0005601226 nova_compute[239456]: 2026-01-29 17:38:50.603 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:38:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 12:38:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:38:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 12:38:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:38:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 2.2937522282581463e-06 of space, bias 1.0, pg target 0.0006881256684774439 quantized to 32 (current 32)
Jan 29 12:38:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:38:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002913302180434357 of space, bias 1.0, pg target 0.8739906541303072 quantized to 32 (current 32)
Jan 29 12:38:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:38:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 4.956675218543776e-06 of space, bias 1.0, pg target 0.0014870025655631326 quantized to 32 (current 32)
Jan 29 12:38:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:38:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006669350587010593 of space, bias 1.0, pg target 0.2000805176103178 quantized to 32 (current 32)
Jan 29 12:38:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:38:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.452153429370564e-06 of space, bias 4.0, pg target 0.0017425841152446768 quantized to 16 (current 16)
Jan 29 12:38:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:38:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:38:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:38:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 29 12:38:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:38:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 29 12:38:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:38:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:38:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:38:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 29 12:38:51 np0005601226 nova_compute[239456]: 2026-01-29 17:38:51.902 239460 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769708316.9014182, 56cf922f-31d1-4f48-8716-abdd2671978f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 29 12:38:51 np0005601226 nova_compute[239456]: 2026-01-29 17:38:51.903 239460 INFO nova.compute.manager [-] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] VM Stopped (Lifecycle Event)#033[00m
Jan 29 12:38:51 np0005601226 nova_compute[239456]: 2026-01-29 17:38:51.923 239460 DEBUG nova.compute.manager [None req-ca7515d3-7451-4631-a952-591be133f0fe - - - - - -] [instance: 56cf922f-31d1-4f48-8716-abdd2671978f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 29 12:38:52 np0005601226 nova_compute[239456]: 2026-01-29 17:38:52.053 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:38:52 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1987: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 29 12:38:52 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e520 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:38:53 np0005601226 nova_compute[239456]: 2026-01-29 17:38:53.287 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:38:54 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1988: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 29 12:38:56 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1989: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 29 12:38:57 np0005601226 nova_compute[239456]: 2026-01-29 17:38:57.107 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:38:57 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e520 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:38:57 np0005601226 nova_compute[239456]: 2026-01-29 17:38:57.604 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:38:58 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1990: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 29 12:38:58 np0005601226 nova_compute[239456]: 2026-01-29 17:38:58.339 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:39:00 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1991: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 29 12:39:02 np0005601226 nova_compute[239456]: 2026-01-29 17:39:02.111 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:39:02 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1992: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 29 12:39:02 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e520 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:39:03 np0005601226 nova_compute[239456]: 2026-01-29 17:39:03.341 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:39:04 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1993: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 29 12:39:06 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1994: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 29 12:39:07 np0005601226 nova_compute[239456]: 2026-01-29 17:39:07.154 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:39:07 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e520 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:39:08 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1995: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 29 12:39:08 np0005601226 nova_compute[239456]: 2026-01-29 17:39:08.385 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:39:10 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1996: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 29 12:39:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:39:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:39:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:39:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:39:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:39:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:39:11 np0005601226 podman[277802]: 2026-01-29 17:39:11.905185213 +0000 UTC m=+0.078537433 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:39:11 np0005601226 podman[277803]: 2026-01-29 17:39:11.936679727 +0000 UTC m=+0.105172975 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 29 12:39:12 np0005601226 nova_compute[239456]: 2026-01-29 17:39:12.196 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:39:12 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1997: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 29 12:39:12 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e520 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:39:13 np0005601226 nova_compute[239456]: 2026-01-29 17:39:13.404 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:39:14 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1998: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 29 12:39:14 np0005601226 ceph-mon[75233]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 29 12:39:14 np0005601226 ceph-mon[75233]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.0 total, 600.0 interval#012Cumulative writes: 8629 writes, 39K keys, 8629 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s#012Cumulative WAL: 8629 writes, 8629 syncs, 1.00 writes per sync, written: 0.05 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1714 writes, 7935 keys, 1714 commit groups, 1.0 writes per commit group, ingest: 10.72 MB, 0.02 MB/s#012Interval WAL: 1714 writes, 1714 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     25.3      1.90              0.11        22    0.086       0      0       0.0       0.0#012  L6      1/0   12.22 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   4.0     65.9     55.4      3.43              0.43        21    0.163    117K    12K       0.0       0.0#012 Sum      1/0   12.22 MB   0.0      0.2     0.0      0.2       0.2      0.1       0.0   5.0     42.4     44.7      5.33              0.53        43    0.124    117K    12K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.4    111.8    116.1      0.67              0.21        12    0.056     43K   3595       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   0.0     65.9     55.4      3.43              0.43        21    0.163    117K    12K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     25.4      1.89              0.11        21    0.090       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      5.3      0.01              0.00         1    0.011       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 3000.0 total, 600.0 interval#012Flush(GB): cumulative 0.047, interval 0.012#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.23 GB write, 0.08 MB/s write, 0.22 GB read, 0.08 MB/s read, 5.3 seconds#012Interval compaction: 0.08 GB write, 0.13 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.7 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55d2b32758d0#2 capacity: 304.00 MB usage: 25.78 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.000263 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1700,24.81 MB,8.16062%) FilterBlock(44,339.55 KB,0.109075%) IndexBlock(44,659.62 KB,0.211896%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 29 12:39:16 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v1999: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 29 12:39:17 np0005601226 nova_compute[239456]: 2026-01-29 17:39:17.199 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:39:17 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e520 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:39:18 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2000: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 29 12:39:18 np0005601226 nova_compute[239456]: 2026-01-29 17:39:18.406 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:39:20 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2001: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 29 12:39:21 np0005601226 ovn_controller[145556]: 2026-01-29T17:39:21Z|00280|memory_trim|INFO|Detected inactivity (last active 30010 ms ago): trimming memory
Jan 29 12:39:22 np0005601226 nova_compute[239456]: 2026-01-29 17:39:22.244 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:39:22 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2002: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 29 12:39:22 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e520 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:39:23 np0005601226 nova_compute[239456]: 2026-01-29 17:39:23.410 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:39:24 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2003: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 29 12:39:26 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2004: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 29 12:39:27 np0005601226 nova_compute[239456]: 2026-01-29 17:39:27.294 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:39:27 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e520 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:39:28 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2005: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 29 12:39:28 np0005601226 nova_compute[239456]: 2026-01-29 17:39:28.410 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:39:30 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2006: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 29 12:39:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 29 12:39:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/911345108' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 29 12:39:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 29 12:39:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/911345108' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 29 12:39:32 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2007: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 29 12:39:32 np0005601226 nova_compute[239456]: 2026-01-29 17:39:32.339 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:39:32 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e520 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:39:33 np0005601226 nova_compute[239456]: 2026-01-29 17:39:33.459 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:39:34 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2008: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 29 12:39:36 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2009: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 29 12:39:36 np0005601226 nova_compute[239456]: 2026-01-29 17:39:36.603 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:39:36 np0005601226 nova_compute[239456]: 2026-01-29 17:39:36.604 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 29 12:39:37 np0005601226 nova_compute[239456]: 2026-01-29 17:39:37.352 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:39:37 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e520 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:39:38 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2010: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 29 12:39:38 np0005601226 nova_compute[239456]: 2026-01-29 17:39:38.508 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:39:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:39:40.303 155625 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:39:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:39:40.304 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:39:40 np0005601226 ovn_metadata_agent[155620]: 2026-01-29 17:39:40.304 155625 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:39:40 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2011: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 29 12:39:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:39:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:39:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:39:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:39:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:39:40 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:39:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Optimize plan auto_2026-01-29_17:39:40
Jan 29 12:39:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 29 12:39:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] do_upmap
Jan 29 12:39:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] pools ['volumes', 'default.rgw.control', 'cephfs.cephfs.data', '.mgr', 'vms', '.rgw.root', 'images', 'backups', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.log']
Jan 29 12:39:40 np0005601226 ceph-mgr[75527]: [balancer INFO root] prepared 0/10 upmap changes
Jan 29 12:39:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 29 12:39:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:39:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 29 12:39:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 29 12:39:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:39:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 29 12:39:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:39:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 29 12:39:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:39:40 np0005601226 ceph-mgr[75527]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 29 12:39:42 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2012: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 29 12:39:42 np0005601226 nova_compute[239456]: 2026-01-29 17:39:42.390 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:39:42 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e520 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:39:42 np0005601226 nova_compute[239456]: 2026-01-29 17:39:42.599 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:39:42 np0005601226 nova_compute[239456]: 2026-01-29 17:39:42.619 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:39:42 np0005601226 nova_compute[239456]: 2026-01-29 17:39:42.642 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:39:42 np0005601226 nova_compute[239456]: 2026-01-29 17:39:42.643 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:39:42 np0005601226 nova_compute[239456]: 2026-01-29 17:39:42.643 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:39:42 np0005601226 nova_compute[239456]: 2026-01-29 17:39:42.643 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 29 12:39:42 np0005601226 nova_compute[239456]: 2026-01-29 17:39:42.644 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:39:42 np0005601226 podman[277870]: 2026-01-29 17:39:42.865183607 +0000 UTC m=+0.040817048 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:39:42 np0005601226 podman[277871]: 2026-01-29 17:39:42.898923893 +0000 UTC m=+0.070477394 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible)
Jan 29 12:39:43 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:39:43 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/726765188' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:39:43 np0005601226 nova_compute[239456]: 2026-01-29 17:39:43.190 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:39:43 np0005601226 nova_compute[239456]: 2026-01-29 17:39:43.320 239460 WARNING nova.virt.libvirt.driver [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 29 12:39:43 np0005601226 nova_compute[239456]: 2026-01-29 17:39:43.321 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4319MB free_disk=59.98814365174621GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 29 12:39:43 np0005601226 nova_compute[239456]: 2026-01-29 17:39:43.322 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 29 12:39:43 np0005601226 nova_compute[239456]: 2026-01-29 17:39:43.322 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 29 12:39:43 np0005601226 nova_compute[239456]: 2026-01-29 17:39:43.374 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 29 12:39:43 np0005601226 nova_compute[239456]: 2026-01-29 17:39:43.375 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 29 12:39:43 np0005601226 nova_compute[239456]: 2026-01-29 17:39:43.510 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:39:43 np0005601226 nova_compute[239456]: 2026-01-29 17:39:43.554 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 29 12:39:44 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 29 12:39:44 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3477057108' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 29 12:39:44 np0005601226 nova_compute[239456]: 2026-01-29 17:39:44.096 239460 DEBUG oslo_concurrency.processutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 29 12:39:44 np0005601226 nova_compute[239456]: 2026-01-29 17:39:44.103 239460 DEBUG nova.compute.provider_tree [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Inventory has not changed in ProviderTree for provider: 79259295-532c-4a51-8f50-027529735b0c update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 29 12:39:44 np0005601226 nova_compute[239456]: 2026-01-29 17:39:44.121 239460 DEBUG nova.scheduler.client.report [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Inventory has not changed for provider 79259295-532c-4a51-8f50-027529735b0c based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 29 12:39:44 np0005601226 nova_compute[239456]: 2026-01-29 17:39:44.123 239460 DEBUG nova.compute.resource_tracker [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 29 12:39:44 np0005601226 nova_compute[239456]: 2026-01-29 17:39:44.124 239460 DEBUG oslo_concurrency.lockutils [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.802s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 29 12:39:44 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2013: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 29 12:39:46 np0005601226 nova_compute[239456]: 2026-01-29 17:39:46.110 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:39:46 np0005601226 nova_compute[239456]: 2026-01-29 17:39:46.110 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:39:46 np0005601226 nova_compute[239456]: 2026-01-29 17:39:46.111 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:39:46 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2014: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 29 12:39:47 np0005601226 nova_compute[239456]: 2026-01-29 17:39:47.391 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:39:47 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e520 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:39:48 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2015: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 29 12:39:48 np0005601226 nova_compute[239456]: 2026-01-29 17:39:48.513 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:39:48 np0005601226 nova_compute[239456]: 2026-01-29 17:39:48.600 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:39:50 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 12:39:50 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 12:39:50 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 29 12:39:50 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 12:39:50 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 29 12:39:50 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:39:50 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 29 12:39:50 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 29 12:39:50 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 29 12:39:50 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 12:39:50 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 12:39:50 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 12:39:50 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 29 12:39:50 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:39:50 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 29 12:39:50 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2016: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 29 12:39:50 np0005601226 podman[278084]: 2026-01-29 17:39:50.491698878 +0000 UTC m=+0.022311857 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:39:50 np0005601226 nova_compute[239456]: 2026-01-29 17:39:50.603 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:39:50 np0005601226 podman[278084]: 2026-01-29 17:39:50.613334329 +0000 UTC m=+0.143947268 container create 739d8e9f2f019b1ded4d37d7222f04cd4f68ea74952769d559e5353f87c847ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:39:50 np0005601226 systemd[1]: Started libpod-conmon-739d8e9f2f019b1ded4d37d7222f04cd4f68ea74952769d559e5353f87c847ab.scope.
Jan 29 12:39:50 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:39:50 np0005601226 podman[278084]: 2026-01-29 17:39:50.755947608 +0000 UTC m=+0.286560587 container init 739d8e9f2f019b1ded4d37d7222f04cd4f68ea74952769d559e5353f87c847ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 29 12:39:50 np0005601226 podman[278084]: 2026-01-29 17:39:50.762032414 +0000 UTC m=+0.292645353 container start 739d8e9f2f019b1ded4d37d7222f04cd4f68ea74952769d559e5353f87c847ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_solomon, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 29 12:39:50 np0005601226 agitated_solomon[278100]: 167 167
Jan 29 12:39:50 np0005601226 systemd[1]: libpod-739d8e9f2f019b1ded4d37d7222f04cd4f68ea74952769d559e5353f87c847ab.scope: Deactivated successfully.
Jan 29 12:39:50 np0005601226 podman[278084]: 2026-01-29 17:39:50.804835975 +0000 UTC m=+0.335448904 container attach 739d8e9f2f019b1ded4d37d7222f04cd4f68ea74952769d559e5353f87c847ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_solomon, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:39:50 np0005601226 podman[278084]: 2026-01-29 17:39:50.808169356 +0000 UTC m=+0.338782305 container died 739d8e9f2f019b1ded4d37d7222f04cd4f68ea74952769d559e5353f87c847ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 29 12:39:51 np0005601226 systemd[1]: var-lib-containers-storage-overlay-558166a420e797c4ad3819b3776d41857fa45875cc1a17f883379aefdadfbaa8-merged.mount: Deactivated successfully.
Jan 29 12:39:51 np0005601226 podman[278084]: 2026-01-29 17:39:51.497579445 +0000 UTC m=+1.028192374 container remove 739d8e9f2f019b1ded4d37d7222f04cd4f68ea74952769d559e5353f87c847ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_solomon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:39:51 np0005601226 systemd[1]: libpod-conmon-739d8e9f2f019b1ded4d37d7222f04cd4f68ea74952769d559e5353f87c847ab.scope: Deactivated successfully.
Jan 29 12:39:51 np0005601226 nova_compute[239456]: 2026-01-29 17:39:51.604 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:39:51 np0005601226 nova_compute[239456]: 2026-01-29 17:39:51.605 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 29 12:39:51 np0005601226 nova_compute[239456]: 2026-01-29 17:39:51.605 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 29 12:39:51 np0005601226 nova_compute[239456]: 2026-01-29 17:39:51.625 239460 DEBUG nova.compute.manager [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 29 12:39:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] _maybe_adjust
Jan 29 12:39:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:39:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 29 12:39:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:39:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 2.2937522282581463e-06 of space, bias 1.0, pg target 0.0006881256684774439 quantized to 32 (current 32)
Jan 29 12:39:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:39:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002913302180434357 of space, bias 1.0, pg target 0.8739906541303072 quantized to 32 (current 32)
Jan 29 12:39:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:39:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 4.956675218543776e-06 of space, bias 1.0, pg target 0.0014870025655631326 quantized to 32 (current 32)
Jan 29 12:39:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:39:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006669350587010593 of space, bias 1.0, pg target 0.2000805176103178 quantized to 32 (current 32)
Jan 29 12:39:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:39:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.452153429370564e-06 of space, bias 4.0, pg target 0.0017425841152446768 quantized to 16 (current 16)
Jan 29 12:39:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:39:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:39:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:39:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 29 12:39:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:39:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 29 12:39:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:39:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 29 12:39:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 29 12:39:51 np0005601226 ceph-mgr[75527]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 29 12:39:51 np0005601226 podman[278125]: 2026-01-29 17:39:51.762392312 +0000 UTC m=+0.108036573 container create 9ceede1a473f32da3533e033331a2ba2e27f3209879847f9b96e2320e5fe24f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:39:51 np0005601226 podman[278125]: 2026-01-29 17:39:51.68788376 +0000 UTC m=+0.033528071 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:39:51 np0005601226 systemd[1]: Started libpod-conmon-9ceede1a473f32da3533e033331a2ba2e27f3209879847f9b96e2320e5fe24f7.scope.
Jan 29 12:39:51 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:39:51 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f63dfb5774c77d4f31243bd3452a275cbfcbff2d8a2f289740a7f4a0a2761c23/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:39:51 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f63dfb5774c77d4f31243bd3452a275cbfcbff2d8a2f289740a7f4a0a2761c23/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:39:51 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f63dfb5774c77d4f31243bd3452a275cbfcbff2d8a2f289740a7f4a0a2761c23/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:39:51 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f63dfb5774c77d4f31243bd3452a275cbfcbff2d8a2f289740a7f4a0a2761c23/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:39:51 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f63dfb5774c77d4f31243bd3452a275cbfcbff2d8a2f289740a7f4a0a2761c23/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 29 12:39:51 np0005601226 podman[278125]: 2026-01-29 17:39:51.977403667 +0000 UTC m=+0.323047978 container init 9ceede1a473f32da3533e033331a2ba2e27f3209879847f9b96e2320e5fe24f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_heisenberg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 29 12:39:51 np0005601226 podman[278125]: 2026-01-29 17:39:51.988360145 +0000 UTC m=+0.334004406 container start 9ceede1a473f32da3533e033331a2ba2e27f3209879847f9b96e2320e5fe24f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_heisenberg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 29 12:39:52 np0005601226 podman[278125]: 2026-01-29 17:39:52.042311208 +0000 UTC m=+0.387955519 container attach 9ceede1a473f32da3533e033331a2ba2e27f3209879847f9b96e2320e5fe24f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_heisenberg, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 29 12:39:52 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2017: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 29 12:39:52 np0005601226 cranky_heisenberg[278141]: --> passed data devices: 0 physical, 3 LVM
Jan 29 12:39:52 np0005601226 cranky_heisenberg[278141]: --> All data devices are unavailable
Jan 29 12:39:52 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e520 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:39:52 np0005601226 nova_compute[239456]: 2026-01-29 17:39:52.438 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:39:52 np0005601226 systemd[1]: libpod-9ceede1a473f32da3533e033331a2ba2e27f3209879847f9b96e2320e5fe24f7.scope: Deactivated successfully.
Jan 29 12:39:52 np0005601226 podman[278125]: 2026-01-29 17:39:52.441954525 +0000 UTC m=+0.787598766 container died 9ceede1a473f32da3533e033331a2ba2e27f3209879847f9b96e2320e5fe24f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_heisenberg, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:39:52 np0005601226 systemd[1]: var-lib-containers-storage-overlay-f63dfb5774c77d4f31243bd3452a275cbfcbff2d8a2f289740a7f4a0a2761c23-merged.mount: Deactivated successfully.
Jan 29 12:39:52 np0005601226 podman[278125]: 2026-01-29 17:39:52.592717096 +0000 UTC m=+0.938361357 container remove 9ceede1a473f32da3533e033331a2ba2e27f3209879847f9b96e2320e5fe24f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_heisenberg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 29 12:39:52 np0005601226 systemd[1]: libpod-conmon-9ceede1a473f32da3533e033331a2ba2e27f3209879847f9b96e2320e5fe24f7.scope: Deactivated successfully.
Jan 29 12:39:53 np0005601226 podman[278235]: 2026-01-29 17:39:53.06966851 +0000 UTC m=+0.046913635 container create f0325921ac22d49b4646587577267a339c3aad2998bd9a7a2e264a7d8a8d11fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_mestorf, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 29 12:39:53 np0005601226 systemd[1]: Started libpod-conmon-f0325921ac22d49b4646587577267a339c3aad2998bd9a7a2e264a7d8a8d11fb.scope.
Jan 29 12:39:53 np0005601226 podman[278235]: 2026-01-29 17:39:53.04572312 +0000 UTC m=+0.022968285 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:39:53 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:39:53 np0005601226 podman[278235]: 2026-01-29 17:39:53.420667866 +0000 UTC m=+0.397913001 container init f0325921ac22d49b4646587577267a339c3aad2998bd9a7a2e264a7d8a8d11fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 29 12:39:53 np0005601226 podman[278235]: 2026-01-29 17:39:53.429085524 +0000 UTC m=+0.406330649 container start f0325921ac22d49b4646587577267a339c3aad2998bd9a7a2e264a7d8a8d11fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:39:53 np0005601226 sweet_mestorf[278251]: 167 167
Jan 29 12:39:53 np0005601226 systemd[1]: libpod-f0325921ac22d49b4646587577267a339c3aad2998bd9a7a2e264a7d8a8d11fb.scope: Deactivated successfully.
Jan 29 12:39:53 np0005601226 podman[278235]: 2026-01-29 17:39:53.477695563 +0000 UTC m=+0.454940698 container attach f0325921ac22d49b4646587577267a339c3aad2998bd9a7a2e264a7d8a8d11fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_mestorf, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle)
Jan 29 12:39:53 np0005601226 podman[278235]: 2026-01-29 17:39:53.478481054 +0000 UTC m=+0.455726189 container died f0325921ac22d49b4646587577267a339c3aad2998bd9a7a2e264a7d8a8d11fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_mestorf, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 29 12:39:53 np0005601226 nova_compute[239456]: 2026-01-29 17:39:53.529 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:39:53 np0005601226 systemd[1]: var-lib-containers-storage-overlay-a6351e1eb3f709862eaee9f7cbc42518749ba2effa3079bb82e4fde426fdbaa8-merged.mount: Deactivated successfully.
Jan 29 12:39:53 np0005601226 podman[278235]: 2026-01-29 17:39:53.85786227 +0000 UTC m=+0.835107395 container remove f0325921ac22d49b4646587577267a339c3aad2998bd9a7a2e264a7d8a8d11fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_mestorf, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 29 12:39:53 np0005601226 systemd[1]: libpod-conmon-f0325921ac22d49b4646587577267a339c3aad2998bd9a7a2e264a7d8a8d11fb.scope: Deactivated successfully.
Jan 29 12:39:54 np0005601226 podman[278277]: 2026-01-29 17:39:54.049906752 +0000 UTC m=+0.052758083 container create 7ee5dfcccad98f31a2ea8aa6ab6133798b01d11ff84478679231f305628fb7f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_feistel, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 29 12:39:54 np0005601226 systemd[1]: Started libpod-conmon-7ee5dfcccad98f31a2ea8aa6ab6133798b01d11ff84478679231f305628fb7f1.scope.
Jan 29 12:39:54 np0005601226 podman[278277]: 2026-01-29 17:39:54.023416373 +0000 UTC m=+0.026267784 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:39:54 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:39:54 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93af5321cfb09f7acebd152a31a1d17df051504e0465da131530d1437965d7ca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:39:54 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93af5321cfb09f7acebd152a31a1d17df051504e0465da131530d1437965d7ca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:39:54 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93af5321cfb09f7acebd152a31a1d17df051504e0465da131530d1437965d7ca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:39:54 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93af5321cfb09f7acebd152a31a1d17df051504e0465da131530d1437965d7ca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:39:54 np0005601226 podman[278277]: 2026-01-29 17:39:54.164768869 +0000 UTC m=+0.167620210 container init 7ee5dfcccad98f31a2ea8aa6ab6133798b01d11ff84478679231f305628fb7f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 29 12:39:54 np0005601226 podman[278277]: 2026-01-29 17:39:54.170849834 +0000 UTC m=+0.173701195 container start 7ee5dfcccad98f31a2ea8aa6ab6133798b01d11ff84478679231f305628fb7f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 29 12:39:54 np0005601226 podman[278277]: 2026-01-29 17:39:54.174502833 +0000 UTC m=+0.177354174 container attach 7ee5dfcccad98f31a2ea8aa6ab6133798b01d11ff84478679231f305628fb7f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_feistel, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 29 12:39:54 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2018: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]: {
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:    "0": [
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:        {
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:            "devices": [
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:                "/dev/loop3"
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:            ],
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:            "lv_name": "ceph_lv0",
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:            "lv_size": "21470642176",
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:            "lv_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:            "name": "ceph_lv0",
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:            "tags": {
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:                "ceph.block_uuid": "QErhvu-vGxB-M6dv-nB4J-6A0p-8cb9-jO73zg",
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:                "ceph.cluster_name": "ceph",
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:                "ceph.crush_device_class": "",
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:                "ceph.encrypted": "0",
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:                "ceph.objectstore": "bluestore",
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:                "ceph.osd_fsid": "0a72b3f7-dd96-4f5e-89bd-a2aa67c7b3b1",
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:                "ceph.osd_id": "0",
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:                "ceph.type": "block",
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:                "ceph.vdo": "0",
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:                "ceph.with_tpm": "0"
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:            },
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:            "type": "block",
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:            "vg_name": "ceph_vg0"
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:        }
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:    ],
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:    "1": [
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:        {
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:            "devices": [
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:                "/dev/loop4"
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:            ],
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:            "lv_name": "ceph_lv1",
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:            "lv_size": "21470642176",
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=b59b9ee3-7bef-4274-a8bf-0f9cce011ae7,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:            "lv_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:            "name": "ceph_lv1",
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:            "path": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:            "tags": {
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:                "ceph.block_uuid": "Pe1vIu-K0d8-cEYP-IVea-UIDd-2EeB-fuhyKy",
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:                "ceph.cluster_name": "ceph",
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:                "ceph.crush_device_class": "",
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:                "ceph.encrypted": "0",
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:                "ceph.objectstore": "bluestore",
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:                "ceph.osd_fsid": "b59b9ee3-7bef-4274-a8bf-0f9cce011ae7",
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:                "ceph.osd_id": "1",
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:                "ceph.type": "block",
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:                "ceph.vdo": "0",
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:                "ceph.with_tpm": "0"
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:            },
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:            "type": "block",
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:            "vg_name": "ceph_vg1"
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:        }
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:    ],
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:    "2": [
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:        {
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:            "devices": [
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:                "/dev/loop5"
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:            ],
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:            "lv_name": "ceph_lv2",
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:            "lv_size": "21470642176",
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cc5c72e3-31e0-58b9-8731-456117d38f4a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=791d3808-828d-4f85-a3de-28df49f6a6ef,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:            "lv_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:            "name": "ceph_lv2",
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:            "path": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:            "tags": {
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:                "ceph.block_uuid": "u7grr7-YOaz-NR6T-oyRR-fOdD-vZZF-uvRrQ6",
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:                "ceph.cephx_lockbox_secret": "",
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:                "ceph.cluster_fsid": "cc5c72e3-31e0-58b9-8731-456117d38f4a",
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:                "ceph.cluster_name": "ceph",
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:                "ceph.crush_device_class": "",
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:                "ceph.encrypted": "0",
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:                "ceph.objectstore": "bluestore",
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:                "ceph.osd_fsid": "791d3808-828d-4f85-a3de-28df49f6a6ef",
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:                "ceph.osd_id": "2",
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:                "ceph.type": "block",
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:                "ceph.vdo": "0",
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:                "ceph.with_tpm": "0"
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:            },
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:            "type": "block",
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:            "vg_name": "ceph_vg2"
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:        }
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]:    ]
Jan 29 12:39:54 np0005601226 quizzical_feistel[278293]: }
Jan 29 12:39:54 np0005601226 systemd[1]: libpod-7ee5dfcccad98f31a2ea8aa6ab6133798b01d11ff84478679231f305628fb7f1.scope: Deactivated successfully.
Jan 29 12:39:54 np0005601226 podman[278277]: 2026-01-29 17:39:54.452181719 +0000 UTC m=+0.455033070 container died 7ee5dfcccad98f31a2ea8aa6ab6133798b01d11ff84478679231f305628fb7f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_feistel, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 29 12:39:54 np0005601226 systemd[1]: var-lib-containers-storage-overlay-93af5321cfb09f7acebd152a31a1d17df051504e0465da131530d1437965d7ca-merged.mount: Deactivated successfully.
Jan 29 12:39:54 np0005601226 podman[278277]: 2026-01-29 17:39:54.495399122 +0000 UTC m=+0.498250503 container remove 7ee5dfcccad98f31a2ea8aa6ab6133798b01d11ff84478679231f305628fb7f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 12:39:54 np0005601226 systemd[1]: libpod-conmon-7ee5dfcccad98f31a2ea8aa6ab6133798b01d11ff84478679231f305628fb7f1.scope: Deactivated successfully.
Jan 29 12:39:54 np0005601226 podman[278374]: 2026-01-29 17:39:54.951147441 +0000 UTC m=+0.045887207 container create 22646a38efe9f5bc637d0fe8442c6c88b9d9764fd9efe123228ff490abac3be2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_easley, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 29 12:39:54 np0005601226 systemd[1]: Started libpod-conmon-22646a38efe9f5bc637d0fe8442c6c88b9d9764fd9efe123228ff490abac3be2.scope.
Jan 29 12:39:55 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:39:55 np0005601226 podman[278374]: 2026-01-29 17:39:55.023822143 +0000 UTC m=+0.118561989 container init 22646a38efe9f5bc637d0fe8442c6c88b9d9764fd9efe123228ff490abac3be2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_easley, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 29 12:39:55 np0005601226 podman[278374]: 2026-01-29 17:39:55.027605435 +0000 UTC m=+0.122345201 container start 22646a38efe9f5bc637d0fe8442c6c88b9d9764fd9efe123228ff490abac3be2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_easley, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 29 12:39:55 np0005601226 confident_easley[278391]: 167 167
Jan 29 12:39:55 np0005601226 systemd[1]: libpod-22646a38efe9f5bc637d0fe8442c6c88b9d9764fd9efe123228ff490abac3be2.scope: Deactivated successfully.
Jan 29 12:39:55 np0005601226 podman[278374]: 2026-01-29 17:39:54.935704822 +0000 UTC m=+0.030444618 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:39:55 np0005601226 podman[278374]: 2026-01-29 17:39:55.031049909 +0000 UTC m=+0.125789665 container attach 22646a38efe9f5bc637d0fe8442c6c88b9d9764fd9efe123228ff490abac3be2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 29 12:39:55 np0005601226 podman[278374]: 2026-01-29 17:39:55.031819069 +0000 UTC m=+0.126558845 container died 22646a38efe9f5bc637d0fe8442c6c88b9d9764fd9efe123228ff490abac3be2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_easley, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 29 12:39:55 np0005601226 systemd[1]: var-lib-containers-storage-overlay-02086d6fbccc3b30d55e015bdfad2f7f930708656bfe38f0b4a30e3852279528-merged.mount: Deactivated successfully.
Jan 29 12:39:55 np0005601226 podman[278374]: 2026-01-29 17:39:55.065323519 +0000 UTC m=+0.160063275 container remove 22646a38efe9f5bc637d0fe8442c6c88b9d9764fd9efe123228ff490abac3be2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_easley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 29 12:39:55 np0005601226 systemd[1]: libpod-conmon-22646a38efe9f5bc637d0fe8442c6c88b9d9764fd9efe123228ff490abac3be2.scope: Deactivated successfully.
Jan 29 12:39:55 np0005601226 podman[278415]: 2026-01-29 17:39:55.24886157 +0000 UTC m=+0.068444629 container create 394fbb0273e22085a96e0d1931a4d0d6c86cd6658912dfaeca3f9e06e8682d30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_volhard, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 29 12:39:55 np0005601226 systemd[1]: Started libpod-conmon-394fbb0273e22085a96e0d1931a4d0d6c86cd6658912dfaeca3f9e06e8682d30.scope.
Jan 29 12:39:55 np0005601226 systemd[1]: Started libcrun container.
Jan 29 12:39:55 np0005601226 podman[278415]: 2026-01-29 17:39:55.220635984 +0000 UTC m=+0.040219123 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 29 12:39:55 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4272a2c26d345b1b1a0954b571f95125640d8a0af8fbe641057f2c8246bb695f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 29 12:39:55 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4272a2c26d345b1b1a0954b571f95125640d8a0af8fbe641057f2c8246bb695f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 29 12:39:55 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4272a2c26d345b1b1a0954b571f95125640d8a0af8fbe641057f2c8246bb695f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 29 12:39:55 np0005601226 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4272a2c26d345b1b1a0954b571f95125640d8a0af8fbe641057f2c8246bb695f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 29 12:39:55 np0005601226 podman[278415]: 2026-01-29 17:39:55.32738513 +0000 UTC m=+0.146968209 container init 394fbb0273e22085a96e0d1931a4d0d6c86cd6658912dfaeca3f9e06e8682d30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_volhard, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3)
Jan 29 12:39:55 np0005601226 podman[278415]: 2026-01-29 17:39:55.335355367 +0000 UTC m=+0.154938406 container start 394fbb0273e22085a96e0d1931a4d0d6c86cd6658912dfaeca3f9e06e8682d30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_volhard, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 29 12:39:55 np0005601226 podman[278415]: 2026-01-29 17:39:55.338340228 +0000 UTC m=+0.157923317 container attach 394fbb0273e22085a96e0d1931a4d0d6c86cd6658912dfaeca3f9e06e8682d30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_volhard, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 29 12:39:55 np0005601226 lvm[278511]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 12:39:55 np0005601226 lvm[278510]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 12:39:55 np0005601226 lvm[278510]: VG ceph_vg0 finished
Jan 29 12:39:55 np0005601226 lvm[278511]: VG ceph_vg1 finished
Jan 29 12:39:55 np0005601226 lvm[278513]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 12:39:55 np0005601226 lvm[278513]: VG ceph_vg2 finished
Jan 29 12:39:56 np0005601226 objective_volhard[278432]: {}
Jan 29 12:39:56 np0005601226 systemd[1]: libpod-394fbb0273e22085a96e0d1931a4d0d6c86cd6658912dfaeca3f9e06e8682d30.scope: Deactivated successfully.
Jan 29 12:39:56 np0005601226 podman[278415]: 2026-01-29 17:39:56.090567872 +0000 UTC m=+0.910150921 container died 394fbb0273e22085a96e0d1931a4d0d6c86cd6658912dfaeca3f9e06e8682d30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_volhard, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 29 12:39:56 np0005601226 systemd[1]: libpod-394fbb0273e22085a96e0d1931a4d0d6c86cd6658912dfaeca3f9e06e8682d30.scope: Consumed 1.186s CPU time.
Jan 29 12:39:56 np0005601226 systemd[1]: var-lib-containers-storage-overlay-4272a2c26d345b1b1a0954b571f95125640d8a0af8fbe641057f2c8246bb695f-merged.mount: Deactivated successfully.
Jan 29 12:39:56 np0005601226 podman[278415]: 2026-01-29 17:39:56.132442009 +0000 UTC m=+0.952025058 container remove 394fbb0273e22085a96e0d1931a4d0d6c86cd6658912dfaeca3f9e06e8682d30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_volhard, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 29 12:39:56 np0005601226 systemd[1]: libpod-conmon-394fbb0273e22085a96e0d1931a4d0d6c86cd6658912dfaeca3f9e06e8682d30.scope: Deactivated successfully.
Jan 29 12:39:56 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 29 12:39:56 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:39:56 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 29 12:39:56 np0005601226 ceph-mon[75233]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:39:56 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2019: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 29 12:39:57 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:39:57 np0005601226 ceph-mon[75233]: from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' 
Jan 29 12:39:57 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e520 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:39:57 np0005601226 nova_compute[239456]: 2026-01-29 17:39:57.440 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:39:58 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2020: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 29 12:39:58 np0005601226 nova_compute[239456]: 2026-01-29 17:39:58.576 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:39:58 np0005601226 nova_compute[239456]: 2026-01-29 17:39:58.603 239460 DEBUG oslo_service.periodic_task [None req-7a90bef2-6b5c-4f9f-bb15-9c7c96061c7f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 29 12:40:00 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2021: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 29 12:40:02 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2022: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 29 12:40:02 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e520 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:40:02 np0005601226 nova_compute[239456]: 2026-01-29 17:40:02.484 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:40:03 np0005601226 nova_compute[239456]: 2026-01-29 17:40:03.576 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:40:04 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2023: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 29 12:40:06 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2024: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 29 12:40:07 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e520 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:40:07 np0005601226 nova_compute[239456]: 2026-01-29 17:40:07.487 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:40:08 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2025: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 29 12:40:08 np0005601226 systemd-logind[823]: New session 52 of user zuul.
Jan 29 12:40:08 np0005601226 systemd[1]: Started Session 52 of User zuul.
Jan 29 12:40:08 np0005601226 nova_compute[239456]: 2026-01-29 17:40:08.577 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:40:10 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2026: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 29 12:40:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:40:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:40:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:40:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:40:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] scanning for idle connections..
Jan 29 12:40:10 np0005601226 ceph-mgr[75527]: [volumes INFO mgr_util] cleaning up connections: []
Jan 29 12:40:11 np0005601226 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.19136 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 12:40:12 np0005601226 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.19138 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 12:40:12 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2027: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 29 12:40:12 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e520 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:40:12 np0005601226 nova_compute[239456]: 2026-01-29 17:40:12.489 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:40:12 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Jan 29 12:40:12 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3765200170' entity='client.admin' cmd={"prefix": "status"} : dispatch
Jan 29 12:40:13 np0005601226 podman[278808]: 2026-01-29 17:40:13.417972471 +0000 UTC m=+0.063442402 container health_status 54434aca3a541b2c4d99fb0dfa6025fedfe39ef8c658db3d16fed70286d2c205 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 29 12:40:13 np0005601226 podman[278809]: 2026-01-29 17:40:13.452858508 +0000 UTC m=+0.098389951 container health_status 887bffc4e380bb451a2d4d6b80c93382eb273d4c381f29ff7b3ec512b083a78a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '21abfa0a87b9b13561913073548a92ec6b2d7c36bea7c9e38cba6e311fc93e7d-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4-27b9a7fb0322f74c1b5a26fab82a86f919468039c5de0497f0e2470e8012cff4'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 29 12:40:13 np0005601226 nova_compute[239456]: 2026-01-29 17:40:13.580 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:40:14 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2028: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 29 12:40:16 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2029: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 29 12:40:17 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e520 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:40:17 np0005601226 nova_compute[239456]: 2026-01-29 17:40:17.493 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:40:18 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2030: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 29 12:40:18 np0005601226 ovs-vsctl[278927]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Jan 29 12:40:18 np0005601226 nova_compute[239456]: 2026-01-29 17:40:18.581 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:40:19 np0005601226 virtqemud[239322]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Jan 29 12:40:19 np0005601226 virtqemud[239322]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Jan 29 12:40:19 np0005601226 virtqemud[239322]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 29 12:40:19 np0005601226 ceph-mds[96568]: mds.cephfs.compute-0.cflubi asok_command: cache status {prefix=cache status} (starting...)
Jan 29 12:40:20 np0005601226 ceph-mds[96568]: mds.cephfs.compute-0.cflubi asok_command: client ls {prefix=client ls} (starting...)
Jan 29 12:40:20 np0005601226 lvm[279250]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 29 12:40:20 np0005601226 lvm[279250]: VG ceph_vg2 finished
Jan 29 12:40:20 np0005601226 lvm[279276]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 29 12:40:20 np0005601226 lvm[279276]: VG ceph_vg0 finished
Jan 29 12:40:20 np0005601226 lvm[279281]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 29 12:40:20 np0005601226 lvm[279281]: VG ceph_vg1 finished
Jan 29 12:40:20 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2031: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 29 12:40:20 np0005601226 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.19142 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 12:40:20 np0005601226 ceph-mds[96568]: mds.cephfs.compute-0.cflubi asok_command: damage ls {prefix=damage ls} (starting...)
Jan 29 12:40:20 np0005601226 ceph-mds[96568]: mds.cephfs.compute-0.cflubi asok_command: dump loads {prefix=dump loads} (starting...)
Jan 29 12:40:20 np0005601226 ceph-mds[96568]: mds.cephfs.compute-0.cflubi asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Jan 29 12:40:20 np0005601226 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.19144 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 12:40:21 np0005601226 ceph-mds[96568]: mds.cephfs.compute-0.cflubi asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Jan 29 12:40:21 np0005601226 ceph-mds[96568]: mds.cephfs.compute-0.cflubi asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Jan 29 12:40:21 np0005601226 ceph-mds[96568]: mds.cephfs.compute-0.cflubi asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Jan 29 12:40:21 np0005601226 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.19148 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 12:40:21 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0)
Jan 29 12:40:21 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/392401506' entity='client.admin' cmd={"prefix": "report"} : dispatch
Jan 29 12:40:21 np0005601226 ceph-mds[96568]: mds.cephfs.compute-0.cflubi asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Jan 29 12:40:21 np0005601226 ceph-mds[96568]: mds.cephfs.compute-0.cflubi asok_command: get subtrees {prefix=get subtrees} (starting...)
Jan 29 12:40:21 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 29 12:40:21 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/775335685' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 29 12:40:21 np0005601226 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.19150 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 12:40:21 np0005601226 ceph-mgr[75527]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 29 12:40:21 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-mgr-compute-0-zvopdr[75523]: 2026-01-29T17:40:21.892+0000 7fa574c41640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 29 12:40:22 np0005601226 ceph-mds[96568]: mds.cephfs.compute-0.cflubi asok_command: ops {prefix=ops} (starting...)
Jan 29 12:40:22 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0)
Jan 29 12:40:22 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3825569983' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Jan 29 12:40:22 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2032: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 29 12:40:22 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Jan 29 12:40:22 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2899517882' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Jan 29 12:40:22 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e520 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:40:22 np0005601226 nova_compute[239456]: 2026-01-29 17:40:22.496 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:40:22 np0005601226 ceph-mds[96568]: mds.cephfs.compute-0.cflubi asok_command: session ls {prefix=session ls} (starting...)
Jan 29 12:40:22 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0)
Jan 29 12:40:22 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/6529250' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Jan 29 12:40:22 np0005601226 ceph-mds[96568]: mds.cephfs.compute-0.cflubi asok_command: status {prefix=status} (starting...)
Jan 29 12:40:22 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Jan 29 12:40:22 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/438502636' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 29 12:40:23 np0005601226 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.19162 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 12:40:23 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Jan 29 12:40:23 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3357812564' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 29 12:40:23 np0005601226 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.19166 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 12:40:23 np0005601226 nova_compute[239456]: 2026-01-29 17:40:23.582 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:40:23 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Jan 29 12:40:23 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/866717250' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 29 12:40:24 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0)
Jan 29 12:40:24 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2474569581' entity='client.admin' cmd={"prefix": "features"} : dispatch
Jan 29 12:40:24 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2033: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 29 12:40:24 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 29 12:40:24 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/456961781' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 29 12:40:24 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Jan 29 12:40:24 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3924409688' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Jan 29 12:40:24 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Jan 29 12:40:24 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3088470903' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Jan 29 12:40:25 np0005601226 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.19178 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 12:40:25 np0005601226 ceph-mgr[75527]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 29 12:40:25 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-mgr-compute-0-zvopdr[75523]: 2026-01-29T17:40:25.054+0000 7fa574c41640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 29 12:40:25 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Jan 29 12:40:25 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1524398887' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Jan 29 12:40:25 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Jan 29 12:40:25 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2967005144' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Jan 29 12:40:25 np0005601226 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.19184 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 174 handle_osd_map epochs [174,175], i have 174, src has [1,175]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 174 handle_osd_map epochs [175,175], i have 175, src has [1,175]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 175 heartbeat osd_stat(store_statfs(0x4f9ffe000/0x0/0x4ffc00000, data 0x1d84e97/0x1e8b000, compress 0x0/0x0/0x0, omap 0x1a6a8, meta 0x3d55958), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 109740032 unmapped: 10887168 heap: 120627200 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 175 handle_osd_map epochs [176,176], i have 175, src has [1,176]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.376245499s of 10.643681526s, submitted: 8
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 122396672 unmapped: 15024128 heap: 137420800 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 110788608 unmapped: 47636480 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 176 heartbeat osd_stat(store_statfs(0x4f73fa000/0x0/0x4ffc00000, data 0x49886e4/0x4a92000, compress 0x0/0x0/0x0, omap 0x1adf5, meta 0x3d5520b), peers [0,1] op hist [0,0,0,0,0,0,0,0,1])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 123437056 unmapped: 34988032 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1678103 data_alloc: 234881024 data_used: 19348933
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 115064832 unmapped: 43360256 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 115097600 unmapped: 43327488 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 176 heartbeat osd_stat(store_statfs(0x4f5ffa000/0x0/0x4ffc00000, data 0x5d886e4/0x5e92000, compress 0x0/0x0/0x0, omap 0x1adf5, meta 0x3d5520b), peers [0,1] op hist [0,0,0,0,0,0,1])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 176 ms_handle_reset con 0x55a68c36e000 session 0x55a68c769500
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 119291904 unmapped: 39133184 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 176 heartbeat osd_stat(store_statfs(0x4f5ff9000/0x0/0x4ffc00000, data 0x5d886f4/0x5e93000, compress 0x0/0x0/0x0, omap 0x1adf5, meta 0x3d5520b), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,1])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 115097600 unmapped: 43327488 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 110919680 unmapped: 47505408 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 176 heartbeat osd_stat(store_statfs(0x4f4ff8000/0x0/0x4ffc00000, data 0x6d88756/0x6e94000, compress 0x0/0x0/0x0, omap 0x1adf5, meta 0x3d5520b), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,1])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1788821 data_alloc: 234881024 data_used: 19348933
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 115113984 unmapped: 43311104 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 110919680 unmapped: 47505408 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 176 handle_osd_map epochs [177,177], i have 176, src has [1,177]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 2.887361288s of 10.154434204s, submitted: 61
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 127868928 unmapped: 30556160 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 177 heartbeat osd_stat(store_statfs(0x4f3ff3000/0x0/0x4ffc00000, data 0x7d8a22c/0x7e97000, compress 0x0/0x0/0x0, omap 0x1b0a7, meta 0x3d54f59), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,1])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 115302400 unmapped: 43122688 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 177 handle_osd_map epochs [177,178], i have 177, src has [1,178]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 115449856 unmapped: 42975232 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2054844 data_alloc: 234881024 data_used: 19349031
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 124059648 unmapped: 34365440 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 119922688 unmapped: 38502400 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 111542272 unmapped: 46882816 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 178 ms_handle_reset con 0x55a68c36f000 session 0x55a68cdff500
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 178 heartbeat osd_stat(store_statfs(0x4f17f2000/0x0/0x4ffc00000, data 0xa58be1f/0xa69a000, compress 0x0/0x0/0x0, omap 0x1b35b, meta 0x3d54ca5), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 124248064 unmapped: 34177024 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 120111104 unmapped: 38313984 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2244916 data_alloc: 234881024 data_used: 19349047
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 115957760 unmapped: 42467328 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 178 handle_osd_map epochs [179,179], i have 178, src has [1,179]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 112001024 unmapped: 46424064 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 179 ms_handle_reset con 0x55a68caa9c00 session 0x55a68d0c7a40
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 2.602899313s of 10.180491447s, submitted: 34
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 112099328 unmapped: 46325760 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 179 ms_handle_reset con 0x55a68cdb8000 session 0x55a68c320380
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 116629504 unmapped: 41795584 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 179 heartbeat osd_stat(store_statfs(0x4ecfed000/0x0/0x4ffc00000, data 0xed8da12/0xee9d000, compress 0x0/0x0/0x0, omap 0x1b611, meta 0x3d549ef), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 179 handle_osd_map epochs [180,180], i have 179, src has [1,180]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 180 ms_handle_reset con 0x55a68cdb8400 session 0x55a68c320700
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 112631808 unmapped: 45793280 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 180 heartbeat osd_stat(store_statfs(0x4eb7ea000/0x0/0x4ffc00000, data 0x1058f659/0x106a0000, compress 0x0/0x0/0x0, omap 0x1b8c9, meta 0x3d54737), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 180 ms_handle_reset con 0x55a68cdb8400 session 0x55a68b6601c0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2747704 data_alloc: 234881024 data_used: 19349047
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 180 heartbeat osd_stat(store_statfs(0x4eb3ea000/0x0/0x4ffc00000, data 0x1098f659/0x10aa0000, compress 0x0/0x0/0x0, omap 0x1b8c9, meta 0x3d54737), peers [0,1] op hist [0,0,0,0,0,1,2,2])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 129548288 unmapped: 28876800 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 112861184 unmapped: 45563904 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 180 handle_osd_map epochs [181,181], i have 180, src has [1,181]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 181 ms_handle_reset con 0x55a68c36e000 session 0x55a68c321dc0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 181 ms_handle_reset con 0x55a68c36f000 session 0x55a68b599180
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 112902144 unmapped: 45522944 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 113115136 unmapped: 45309952 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 181 ms_handle_reset con 0x55a689eb2c00 session 0x55a68d0ed880
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 181 ms_handle_reset con 0x55a68caa9c00 session 0x55a68cdfe1c0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 181 ms_handle_reset con 0x55a68b297400 session 0x55a68d0c6380
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 181 heartbeat osd_stat(store_statfs(0x4e7fe7000/0x0/0x4ffc00000, data 0x13d912a0/0x13ea3000, compress 0x0/0x0/0x0, omap 0x1baa5, meta 0x3d5455b), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 113115136 unmapped: 45309952 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 181 handle_osd_map epochs [181,182], i have 181, src has [1,182]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2976253 data_alloc: 234881024 data_used: 19348933
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 113180672 unmapped: 45244416 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 182 ms_handle_reset con 0x55a689eb2c00 session 0x55a68c28c1c0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 112893952 unmapped: 45531136 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 182 ms_handle_reset con 0x55a68caa9c00 session 0x55a68c3208c0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 182 handle_osd_map epochs [182,183], i have 182, src has [1,183]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 5.791598797s of 10.087923050s, submitted: 168
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 112623616 unmapped: 45801472 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 183 ms_handle_reset con 0x55a68c36e000 session 0x55a68a8008c0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 183 heartbeat osd_stat(store_statfs(0x4f9fe6000/0x0/0x4ffc00000, data 0x1d92d20/0x1ea4000, compress 0x0/0x0/0x0, omap 0x1bf96, meta 0x3d5406a), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 112648192 unmapped: 45776896 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 183 ms_handle_reset con 0x55a68c36f000 session 0x55a68a801180
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 183 handle_osd_map epochs [184,184], i have 183, src has [1,184]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 184 heartbeat osd_stat(store_statfs(0x4f9fe1000/0x0/0x4ffc00000, data 0x1d9492f/0x1ea7000, compress 0x0/0x0/0x0, omap 0x1c253, meta 0x3d53dad), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 112648192 unmapped: 45776896 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1460457 data_alloc: 234881024 data_used: 19348933
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 184 ms_handle_reset con 0x55a689eb2c00 session 0x55a68d0eddc0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 184 handle_osd_map epochs [185,185], i have 184, src has [1,185]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 112648192 unmapped: 45776896 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 112648192 unmapped: 45776896 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 185 ms_handle_reset con 0x55a68b297400 session 0x55a68d0ec8c0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 185 heartbeat osd_stat(store_statfs(0x4f9fdc000/0x0/0x4ffc00000, data 0x1d98078/0x1eae000, compress 0x0/0x0/0x0, omap 0x1c7d3, meta 0x3d5382d), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 185 handle_osd_map epochs [186,186], i have 185, src has [1,186]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 185 handle_osd_map epochs [185,186], i have 186, src has [1,186]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 112664576 unmapped: 45760512 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 186 ms_handle_reset con 0x55a68c36e000 session 0x55a68a3faa80
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 112664576 unmapped: 45760512 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 186 ms_handle_reset con 0x55a68caa9c00 session 0x55a68b532e00
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 112812032 unmapped: 45613056 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 186 heartbeat osd_stat(store_statfs(0x4f9fd7000/0x0/0x4ffc00000, data 0x1d99ccf/0x1eb1000, compress 0x0/0x0/0x0, omap 0x1c9b3, meta 0x3d5364d), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 186 ms_handle_reset con 0x55a68cdb8400 session 0x55a68a822fc0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1466453 data_alloc: 234881024 data_used: 19348949
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 112812032 unmapped: 45613056 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 186 handle_osd_map epochs [187,187], i have 186, src has [1,187]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 187 ms_handle_reset con 0x55a689eb2c00 session 0x55a68b188000
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 112812032 unmapped: 45613056 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 112812032 unmapped: 45613056 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.893760681s of 10.664787292s, submitted: 47
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 187 heartbeat osd_stat(store_statfs(0x4f9fd6000/0x0/0x4ffc00000, data 0x1d9b7b1/0x1eb3000, compress 0x0/0x0/0x0, omap 0x1cc77, meta 0x3d53389), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 112828416 unmapped: 45596672 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 187 ms_handle_reset con 0x55a68b297400 session 0x55a68a800540
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 187 heartbeat osd_stat(store_statfs(0x4f9fd7000/0x0/0x4ffc00000, data 0x1d9b7b1/0x1eb3000, compress 0x0/0x0/0x0, omap 0x1cc77, meta 0x3d53389), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 112812032 unmapped: 45613056 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1468015 data_alloc: 234881024 data_used: 19348933
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 112812032 unmapped: 45613056 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 187 handle_osd_map epochs [188,188], i have 187, src has [1,188]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 112812032 unmapped: 45613056 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 188 heartbeat osd_stat(store_statfs(0x4f9fd4000/0x0/0x4ffc00000, data 0x1d9d287/0x1eb6000, compress 0x0/0x0/0x0, omap 0x1d04d, meta 0x3d52fb3), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 112672768 unmapped: 45752320 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 115204096 unmapped: 43220992 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 113041408 unmapped: 45383680 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 188 heartbeat osd_stat(store_statfs(0x4f9d5f000/0x0/0x4ffc00000, data 0x20132b0/0x212d000, compress 0x0/0x0/0x0, omap 0x1d04d, meta 0x3d52fb3), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1490836 data_alloc: 234881024 data_used: 19348933
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 113065984 unmapped: 45359104 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 188 ms_handle_reset con 0x55a68c36e000 session 0x55a68b660000
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 188 ms_handle_reset con 0x55a68caa9c00 session 0x55a68c8dfa40
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 113065984 unmapped: 45359104 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 188 ms_handle_reset con 0x55a68cdb8000 session 0x55a68a8016c0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 112672768 unmapped: 45752320 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 188 heartbeat osd_stat(store_statfs(0x4f9d5f000/0x0/0x4ffc00000, data 0x20132e9/0x212d000, compress 0x0/0x0/0x0, omap 0x1d04d, meta 0x3d52fb3), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 112672768 unmapped: 45752320 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 112672768 unmapped: 45752320 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1489972 data_alloc: 234881024 data_used: 19348933
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 112672768 unmapped: 45752320 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 112672768 unmapped: 45752320 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 112672768 unmapped: 45752320 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 188 heartbeat osd_stat(store_statfs(0x4f9d5f000/0x0/0x4ffc00000, data 0x20132e9/0x212d000, compress 0x0/0x0/0x0, omap 0x1d04d, meta 0x3d52fb3), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.870173454s of 15.589423180s, submitted: 37
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 112672768 unmapped: 45752320 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 188 ms_handle_reset con 0x55a689eb2c00 session 0x55a68a800c40
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 45719552 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1493414 data_alloc: 234881024 data_used: 19348952
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 112705536 unmapped: 45719552 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 188 ms_handle_reset con 0x55a68c36e000 session 0x55a68d0c7500
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 188 heartbeat osd_stat(store_statfs(0x4f9d5d000/0x0/0x4ffc00000, data 0x201335b/0x212f000, compress 0x0/0x0/0x0, omap 0x1d04d, meta 0x3d52fb3), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,9])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 118349824 unmapped: 40075264 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 112828416 unmapped: 45596672 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 188 ms_handle_reset con 0x55a68b297400 session 0x55a68a8001c0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 188 ms_handle_reset con 0x55a68cdb8800 session 0x55a68d0eda40
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 188 ms_handle_reset con 0x55a68cdb8c00 session 0x55a68d0edc00
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 113082368 unmapped: 45342720 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 188 ms_handle_reset con 0x55a689eb2c00 session 0x55a68d0ec1c0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 188 ms_handle_reset con 0x55a68b297400 session 0x55a68c7696c0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 113098752 unmapped: 45326336 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 188 heartbeat osd_stat(store_statfs(0x4f980f000/0x0/0x4ffc00000, data 0x256234b/0x267d000, compress 0x0/0x0/0x0, omap 0x1d04d, meta 0x3d52fb3), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 188 ms_handle_reset con 0x55a68caa9c00 session 0x55a68d0c68c0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1528963 data_alloc: 234881024 data_used: 19348952
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 113098752 unmapped: 45326336 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 113098752 unmapped: 45326336 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 113106944 unmapped: 45318144 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 188 ms_handle_reset con 0x55a68c36e000 session 0x55a68b188700
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 113106944 unmapped: 45318144 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 113238016 unmapped: 45187072 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1530716 data_alloc: 234881024 data_used: 19348952
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 113238016 unmapped: 45187072 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 188 heartbeat osd_stat(store_statfs(0x4f980f000/0x0/0x4ffc00000, data 0x256230c/0x267d000, compress 0x0/0x0/0x0, omap 0x1d04d, meta 0x3d52fb3), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 113295360 unmapped: 45129728 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 115335168 unmapped: 43089920 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 115335168 unmapped: 43089920 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 115335168 unmapped: 43089920 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 188 heartbeat osd_stat(store_statfs(0x4f980f000/0x0/0x4ffc00000, data 0x256230c/0x267d000, compress 0x0/0x0/0x0, omap 0x1d04d, meta 0x3d52fb3), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1563868 data_alloc: 234881024 data_used: 24903128
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 115335168 unmapped: 43089920 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 115335168 unmapped: 43089920 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 115335168 unmapped: 43089920 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 115335168 unmapped: 43089920 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 115335168 unmapped: 43089920 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1564252 data_alloc: 234881024 data_used: 24915416
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 115335168 unmapped: 43089920 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 188 heartbeat osd_stat(store_statfs(0x4f980f000/0x0/0x4ffc00000, data 0x256230c/0x267d000, compress 0x0/0x0/0x0, omap 0x1d04d, meta 0x3d52fb3), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.498607635s of 22.997758865s, submitted: 23
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 116465664 unmapped: 41959424 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 40763392 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 188 heartbeat osd_stat(store_statfs(0x4f9399000/0x0/0x4ffc00000, data 0x29d830c/0x2af3000, compress 0x0/0x0/0x0, omap 0x1d04d, meta 0x3d52fb3), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 117661696 unmapped: 40763392 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 117817344 unmapped: 40607744 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 188 heartbeat osd_stat(store_statfs(0x4f9336000/0x0/0x4ffc00000, data 0x2a3a31c/0x2b56000, compress 0x0/0x0/0x0, omap 0x1d04d, meta 0x3d52fb3), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 188 ms_handle_reset con 0x55a68cdb9400 session 0x55a68c769dc0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1594412 data_alloc: 234881024 data_used: 24919512
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 117538816 unmapped: 40886272 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 188 heartbeat osd_stat(store_statfs(0x4f9335000/0x0/0x4ffc00000, data 0x2a3a37e/0x2b57000, compress 0x0/0x0/0x0, omap 0x1d04d, meta 0x3d52fb3), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 117645312 unmapped: 40779776 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 117645312 unmapped: 40779776 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 188 heartbeat osd_stat(store_statfs(0x4f9335000/0x0/0x4ffc00000, data 0x2a3a37e/0x2b57000, compress 0x0/0x0/0x0, omap 0x1d04d, meta 0x3d52fb3), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 117645312 unmapped: 40779776 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 117645312 unmapped: 40779776 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 188 heartbeat osd_stat(store_statfs(0x4f9335000/0x0/0x4ffc00000, data 0x2a3a37e/0x2b57000, compress 0x0/0x0/0x0, omap 0x1d04d, meta 0x3d52fb3), peers [0,1] op hist [0,0,0,0,0,0,0,0,1])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1596586 data_alloc: 234881024 data_used: 25116120
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 117645312 unmapped: 40779776 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.741673946s of 10.074924469s, submitted: 38
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 117645312 unmapped: 40779776 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 129400832 unmapped: 29024256 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 118030336 unmapped: 40394752 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 188 ms_handle_reset con 0x55a68cdb9400 session 0x55a68c3208c0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 188 ms_handle_reset con 0x55a689eb2c00 session 0x55a68d0ed500
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 188 ms_handle_reset con 0x55a68b297400 session 0x55a68b5328c0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 117563392 unmapped: 40861696 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 188 ms_handle_reset con 0x55a68c36e000 session 0x55a68a801180
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 188 ms_handle_reset con 0x55a68caa9c00 session 0x55a68c321180
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1664221 data_alloc: 234881024 data_used: 25116120
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 117563392 unmapped: 40861696 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 188 heartbeat osd_stat(store_statfs(0x4f8848000/0x0/0x4ffc00000, data 0x352737e/0x3644000, compress 0x0/0x0/0x0, omap 0x1d29d, meta 0x3d52d63), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 117563392 unmapped: 40861696 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 117563392 unmapped: 40861696 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 117563392 unmapped: 40861696 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 117563392 unmapped: 40861696 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 188 ms_handle_reset con 0x55a68caa9c00 session 0x55a68b660a80
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 188 heartbeat osd_stat(store_statfs(0x4f8847000/0x0/0x4ffc00000, data 0x35273a1/0x3645000, compress 0x0/0x0/0x0, omap 0x1d29d, meta 0x3d52d63), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1665530 data_alloc: 234881024 data_used: 25116120
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 117596160 unmapped: 40828928 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 188 heartbeat osd_stat(store_statfs(0x4f8847000/0x0/0x4ffc00000, data 0x35273a1/0x3645000, compress 0x0/0x0/0x0, omap 0x1d29d, meta 0x3d52d63), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 129138688 unmapped: 29286400 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 130195456 unmapped: 28229632 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 188 heartbeat osd_stat(store_statfs(0x4f8847000/0x0/0x4ffc00000, data 0x35273a1/0x3645000, compress 0x0/0x0/0x0, omap 0x1d29d, meta 0x3d52d63), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 130228224 unmapped: 28196864 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.752234459s of 12.218365669s, submitted: 24
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 188 ms_handle_reset con 0x55a689eb2c00 session 0x55a68d0ec540
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 188 ms_handle_reset con 0x55a68b297400 session 0x55a68a801880
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 130244608 unmapped: 28180480 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 188 ms_handle_reset con 0x55a68c36e000 session 0x55a68edd7500
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1605816 data_alloc: 234881024 data_used: 25116120
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 122789888 unmapped: 35635200 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 122789888 unmapped: 35635200 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 122789888 unmapped: 35635200 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 122789888 unmapped: 35635200 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 188 heartbeat osd_stat(store_statfs(0x4f932e000/0x0/0x4ffc00000, data 0x2a4037e/0x2b5d000, compress 0x0/0x0/0x0, omap 0x1d215, meta 0x3d52deb), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 122789888 unmapped: 35635200 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 188 handle_osd_map epochs [189,189], i have 188, src has [1,189]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 189 ms_handle_reset con 0x55a68cdb9400 session 0x55a68ceda000
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1609390 data_alloc: 234881024 data_used: 25120181
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 122445824 unmapped: 35979264 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 189 handle_osd_map epochs [190,190], i have 189, src has [1,190]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 190 ms_handle_reset con 0x55a68cdb9400 session 0x55a689e05180
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 122454016 unmapped: 35971072 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 190 handle_osd_map epochs [190,191], i have 190, src has [1,191]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 135831552 unmapped: 22593536 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 191 ms_handle_reset con 0x55a68b297400 session 0x55a68d0c61c0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 191 ms_handle_reset con 0x55a689eb2c00 session 0x55a689e05c00
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 191 ms_handle_reset con 0x55a68c36e000 session 0x55a68ce99880
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 191 ms_handle_reset con 0x55a68caa9c00 session 0x55a68ce99c00
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 191 ms_handle_reset con 0x55a689eb2c00 session 0x55a68edd61c0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 191 ms_handle_reset con 0x55a68b297400 session 0x55a68edd6540
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 191 ms_handle_reset con 0x55a68c36e000 session 0x55a68edd6e00
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 123109376 unmapped: 35315712 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 191 heartbeat osd_stat(store_statfs(0x4f8796000/0x0/0x4ffc00000, data 0x35d1757/0x36f2000, compress 0x0/0x0/0x0, omap 0x1d98b, meta 0x3d52675), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 191 handle_osd_map epochs [191,192], i have 191, src has [1,192]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.394016266s of 10.877507210s, submitted: 57
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 192 ms_handle_reset con 0x55a68cdb9400 session 0x55a68edd76c0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 123158528 unmapped: 35266560 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1686670 data_alloc: 234881024 data_used: 25120279
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 123158528 unmapped: 35266560 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 192 ms_handle_reset con 0x55a68cdb9800 session 0x55a68edd7c00
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 122888192 unmapped: 35536896 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 192 handle_osd_map epochs [193,193], i have 192, src has [1,193]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 193 ms_handle_reset con 0x55a689eb2c00 session 0x55a68ce981c0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 193 ms_handle_reset con 0x55a68b297400 session 0x55a689e05a40
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 193 ms_handle_reset con 0x55a68c36e000 session 0x55a68f8af180
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 123355136 unmapped: 35069952 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 193 handle_osd_map epochs [193,194], i have 193, src has [1,194]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 194 ms_handle_reset con 0x55a68cdb9400 session 0x55a68b599180
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 194 heartbeat osd_stat(store_statfs(0x4f876e000/0x0/0x4ffc00000, data 0x35f8ff5/0x371c000, compress 0x0/0x0/0x0, omap 0x1de3d, meta 0x3d521c3), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 194 heartbeat osd_stat(store_statfs(0x4f876e000/0x0/0x4ffc00000, data 0x35f8ff5/0x371c000, compress 0x0/0x0/0x0, omap 0x1de3d, meta 0x3d521c3), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 126803968 unmapped: 31621120 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 129540096 unmapped: 28884992 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 194 ms_handle_reset con 0x55a68cdb8800 session 0x55a68d0c7dc0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 194 ms_handle_reset con 0x55a68cdb9000 session 0x55a68c769a40
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 194 ms_handle_reset con 0x55a68cdb9c00 session 0x55a68a9328c0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 194 ms_handle_reset con 0x55a68c36e400 session 0x55a68a3fa540
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 194 handle_osd_map epochs [194,195], i have 194, src has [1,195]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1764109 data_alloc: 251658240 data_used: 36013082
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 129564672 unmapped: 28860416 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 195 ms_handle_reset con 0x55a689eb2c00 session 0x55a68edd6fc0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 39337984 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 195 ms_handle_reset con 0x55a68c36e000 session 0x55a690504fc0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 195 heartbeat osd_stat(store_statfs(0x4f9d21000/0x0/0x4ffc00000, data 0x204372e/0x2169000, compress 0x0/0x0/0x0, omap 0x1e3df, meta 0x3d51c21), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 39337984 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 39337984 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 39337984 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 195 ms_handle_reset con 0x55a689eb2c00 session 0x55a68b6601c0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1539533 data_alloc: 234881024 data_used: 19353642
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 39337984 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 195 handle_osd_map epochs [196,196], i have 195, src has [1,196]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.201879501s of 11.165215492s, submitted: 93
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 127565824 unmapped: 30859264 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 196 ms_handle_reset con 0x55a68c36e400 session 0x55a68a800e00
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 119136256 unmapped: 39288832 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 196 heartbeat osd_stat(store_statfs(0x4f7da8000/0x0/0x4ffc00000, data 0x3dab18f/0x3ed1000, compress 0x0/0x0/0x0, omap 0x1ea55, meta 0x3d515ab), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 196 handle_osd_map epochs [197,197], i have 196, src has [1,197]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 118915072 unmapped: 39510016 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 118915072 unmapped: 39510016 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 197 ms_handle_reset con 0x55a68cdb9c00 session 0x55a68b1888c0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1863017 data_alloc: 234881024 data_used: 19361701
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 118865920 unmapped: 39559168 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 197 handle_osd_map epochs [197,198], i have 197, src has [1,198]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 118915072 unmapped: 39510016 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 198 handle_osd_map epochs [199,199], i have 198, src has [1,199]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 199 ms_handle_reset con 0x55a68cdb9000 session 0x55a68b26ca80
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 118865920 unmapped: 39559168 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 199 heartbeat osd_stat(store_statfs(0x4f4faf000/0x0/0x4ffc00000, data 0x6db063a/0x6edb000, compress 0x0/0x0/0x0, omap 0x1f1ed, meta 0x3d50e13), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 118865920 unmapped: 39559168 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 199 heartbeat osd_stat(store_statfs(0x4f3faf000/0x0/0x4ffc00000, data 0x7db063a/0x7edb000, compress 0x0/0x0/0x0, omap 0x1f1ed, meta 0x3d50e13), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 118824960 unmapped: 39600128 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 199 handle_osd_map epochs [199,200], i have 199, src has [1,200]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2123487 data_alloc: 234881024 data_used: 19362899
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 127164416 unmapped: 31260672 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.251257896s of 10.038728714s, submitted: 63
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 118726656 unmapped: 39698432 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 200 heartbeat osd_stat(store_statfs(0x4f1fac000/0x0/0x4ffc00000, data 0x9db2110/0x9ede000, compress 0x0/0x0/0x0, omap 0x1f42b, meta 0x3d50bd5), peers [0,1] op hist [0,0,0,0,0,0,1])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 200 heartbeat osd_stat(store_statfs(0x4f1fac000/0x0/0x4ffc00000, data 0x9db2110/0x9ede000, compress 0x0/0x0/0x0, omap 0x1f42b, meta 0x3d50bd5), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 118734848 unmapped: 39690240 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 200 heartbeat osd_stat(store_statfs(0x4f07ac000/0x0/0x4ffc00000, data 0xb5b2110/0xb6de000, compress 0x0/0x0/0x0, omap 0x1f42b, meta 0x3d50bd5), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 118652928 unmapped: 39772160 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 118652928 unmapped: 39772160 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2679899 data_alloc: 234881024 data_used: 19362899
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 118677504 unmapped: 39747584 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 118595584 unmapped: 39829504 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 200 ms_handle_reset con 0x55a68b297400 session 0x55a68ce988c0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 200 ms_handle_reset con 0x55a689eb2c00 session 0x55a68b188380
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 200 ms_handle_reset con 0x55a68b297400 session 0x55a68c321dc0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 200 ms_handle_reset con 0x55a68c36e400 session 0x55a68b532e00
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 118636544 unmapped: 39788544 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 200 ms_handle_reset con 0x55a68cdb9000 session 0x55a68cdf2380
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 200 ms_handle_reset con 0x55a68cdb9c00 session 0x55a68c320fc0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 200 ms_handle_reset con 0x55a689eb2c00 session 0x55a68f93cc40
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 200 ms_handle_reset con 0x55a68b297400 session 0x55a68d0ec8c0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 200 ms_handle_reset con 0x55a68c36e400 session 0x55a68cdfe700
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 119980032 unmapped: 38445056 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 200 heartbeat osd_stat(store_statfs(0x4ea190000/0x0/0x4ffc00000, data 0x11bd0110/0x11cfc000, compress 0x0/0x0/0x0, omap 0x1f42b, meta 0x3d50bd5), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 38371328 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2951077 data_alloc: 234881024 data_used: 19362899
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 120061952 unmapped: 38363136 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.242252350s of 10.904802322s, submitted: 28
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 120102912 unmapped: 38322176 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 128655360 unmapped: 29769728 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 120389632 unmapped: 38035456 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 200 heartbeat osd_stat(store_statfs(0x4e5990000/0x0/0x4ffc00000, data 0x163d0110/0x164fc000, compress 0x0/0x0/0x0, omap 0x1f42b, meta 0x3d50bd5), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 200 ms_handle_reset con 0x55a68cdb9000 session 0x55a68f93c380
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 128843776 unmapped: 29581312 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3369110 data_alloc: 234881024 data_used: 19363411
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 37593088 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 121413632 unmapped: 37011456 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 200 heartbeat osd_stat(store_statfs(0x4e3990000/0x0/0x4ffc00000, data 0x183d0110/0x184fc000, compress 0x0/0x0/0x0, omap 0x1fa37, meta 0x3d505c9), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 200 handle_osd_map epochs [201,201], i have 200, src has [1,201]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 201 ms_handle_reset con 0x55a68c36ec00 session 0x55a68f93c8c0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 121438208 unmapped: 36986880 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 121438208 unmapped: 36986880 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 201 handle_osd_map epochs [202,202], i have 201, src has [1,202]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 121462784 unmapped: 36962304 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 202 ms_handle_reset con 0x55a689eb2c00 session 0x55a68f93d6c0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 202 ms_handle_reset con 0x55a68b297400 session 0x55a6905048c0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 202 ms_handle_reset con 0x55a68c36e400 session 0x55a68f93ddc0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1674584 data_alloc: 234881024 data_used: 24651404
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 121421824 unmapped: 37003264 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 202 heartbeat osd_stat(store_statfs(0x4f998b000/0x0/0x4ffc00000, data 0x23d38c8/0x24ff000, compress 0x0/0x0/0x0, omap 0x20063, meta 0x3d4ff9d), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 121421824 unmapped: 37003264 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 121421824 unmapped: 37003264 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 121421824 unmapped: 37003264 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 121421824 unmapped: 37003264 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 202 handle_osd_map epochs [203,203], i have 202, src has [1,203]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.863451004s of 13.878046036s, submitted: 88
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1677358 data_alloc: 234881024 data_used: 24651404
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 122462208 unmapped: 35962880 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 124305408 unmapped: 34119680 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 203 heartbeat osd_stat(store_statfs(0x4f9507000/0x0/0x4ffc00000, data 0x285639e/0x2983000, compress 0x0/0x0/0x0, omap 0x2032b, meta 0x3d4fcd5), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 123084800 unmapped: 35340288 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 123084800 unmapped: 35340288 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 123084800 unmapped: 35340288 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1713006 data_alloc: 234881024 data_used: 25396876
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 123084800 unmapped: 35340288 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 123084800 unmapped: 35340288 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 203 heartbeat osd_stat(store_statfs(0x4f94fe000/0x0/0x4ffc00000, data 0x286039e/0x298d000, compress 0x0/0x0/0x0, omap 0x2032b, meta 0x3d4fcd5), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 123084800 unmapped: 35340288 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 123084800 unmapped: 35340288 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 123084800 unmapped: 35340288 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 203 heartbeat osd_stat(store_statfs(0x4f94fe000/0x0/0x4ffc00000, data 0x286039e/0x298d000, compress 0x0/0x0/0x0, omap 0x2032b, meta 0x3d4fcd5), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.676471710s of 10.001405716s, submitted: 48
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 203 ms_handle_reset con 0x55a68cdb9000 session 0x55a68edd6000
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1714712 data_alloc: 234881024 data_used: 25396876
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 123084800 unmapped: 35340288 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 123084800 unmapped: 35340288 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 123084800 unmapped: 35340288 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 123084800 unmapped: 35340288 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 123084800 unmapped: 35340288 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 203 heartbeat osd_stat(store_statfs(0x4f94fd000/0x0/0x4ffc00000, data 0x2860400/0x298e000, compress 0x0/0x0/0x0, omap 0x2032b, meta 0x3d4fcd5), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 203 ms_handle_reset con 0x55a68d329400 session 0x55a690505500
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 203 ms_handle_reset con 0x55a689eb2c00 session 0x55a68d0ec380
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 203 ms_handle_reset con 0x55a68b297400 session 0x55a690092000
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1714712 data_alloc: 234881024 data_used: 25396876
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 203 ms_handle_reset con 0x55a68c36e400 session 0x55a68fdf7340
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 123084800 unmapped: 35340288 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 135290880 unmapped: 23134208 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 203 ms_handle_reset con 0x55a68cdb9000 session 0x55a68fdf6fc0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 203 ms_handle_reset con 0x55a68d32fc00 session 0x55a68d0ec700
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 203 ms_handle_reset con 0x55a689eb2c00 session 0x55a68cedbdc0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 203 ms_handle_reset con 0x55a68b297400 session 0x55a68a822000
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 203 ms_handle_reset con 0x55a68c36e400 session 0x55a68fdf6700
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 123150336 unmapped: 35274752 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 123150336 unmapped: 35274752 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 203 heartbeat osd_stat(store_statfs(0x4f8986000/0x0/0x4ffc00000, data 0x33d8400/0x3506000, compress 0x0/0x0/0x0, omap 0x204b3, meta 0x3d4fb4d), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 203 heartbeat osd_stat(store_statfs(0x4f8986000/0x0/0x4ffc00000, data 0x33d8400/0x3506000, compress 0x0/0x0/0x0, omap 0x204b3, meta 0x3d4fb4d), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 123150336 unmapped: 35274752 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1782939 data_alloc: 234881024 data_used: 25396876
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 123150336 unmapped: 35274752 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 203 ms_handle_reset con 0x55a68cdb9000 session 0x55a68edd6c40
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 123150336 unmapped: 35274752 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 203 heartbeat osd_stat(store_statfs(0x4f8986000/0x0/0x4ffc00000, data 0x33d8400/0x3506000, compress 0x0/0x0/0x0, omap 0x204b3, meta 0x3d4fb4d), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 203 ms_handle_reset con 0x55a68d32f800 session 0x55a68fdf76c0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 123281408 unmapped: 35143680 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 203 ms_handle_reset con 0x55a689eb2c00 session 0x55a690093180
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 203 heartbeat osd_stat(store_statfs(0x4f8986000/0x0/0x4ffc00000, data 0x33d8400/0x3506000, compress 0x0/0x0/0x0, omap 0x204b3, meta 0x3d4fb4d), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.111794472s of 12.502414703s, submitted: 22
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 203 ms_handle_reset con 0x55a68b297400 session 0x55a690092c40
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 123658240 unmapped: 34766848 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 123453440 unmapped: 34971648 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1858711 data_alloc: 251658240 data_used: 34399884
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 131448832 unmapped: 26976256 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 131448832 unmapped: 26976256 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 131448832 unmapped: 26976256 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 203 heartbeat osd_stat(store_statfs(0x4f895b000/0x0/0x4ffc00000, data 0x3402410/0x3531000, compress 0x0/0x0/0x0, omap 0x204b3, meta 0x3d4fb4d), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 131448832 unmapped: 26976256 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 203 handle_osd_map epochs [204,204], i have 203, src has [1,204]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 203 handle_osd_map epochs [203,204], i have 204, src has [1,204]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 131448832 unmapped: 26976256 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1862205 data_alloc: 251658240 data_used: 34399884
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 131448832 unmapped: 26976256 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 204 ms_handle_reset con 0x55a68d32f400 session 0x55a690504380
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 204 handle_osd_map epochs [204,205], i have 204, src has [1,205]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 205 ms_handle_reset con 0x55a68d32f000 session 0x55a68edd7180
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 131481600 unmapped: 26943488 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 205 handle_osd_map epochs [206,206], i have 205, src has [1,206]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 206 heartbeat osd_stat(store_statfs(0x4f8950000/0x0/0x4ffc00000, data 0x3405c06/0x3538000, compress 0x0/0x0/0x0, omap 0x20a77, meta 0x3d4f589), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 206 ms_handle_reset con 0x55a68d32ec00 session 0x55a68ce996c0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 206 ms_handle_reset con 0x55a689eb2c00 session 0x55a6900928c0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 26722304 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 206 ms_handle_reset con 0x55a68b297400 session 0x55a68edd6700
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.991404533s of 10.332911491s, submitted: 19
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 206 heartbeat osd_stat(store_statfs(0x4f8950000/0x0/0x4ffc00000, data 0x340783d/0x353a000, compress 0x0/0x0/0x0, omap 0x20c65, meta 0x3d4f39b), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 26722304 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 206 ms_handle_reset con 0x55a68d32f000 session 0x55a68cdf36c0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 206 handle_osd_map epochs [207,207], i have 206, src has [1,207]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 133079040 unmapped: 25346048 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895529 data_alloc: 251658240 data_used: 34422997
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 136552448 unmapped: 21872640 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 207 ms_handle_reset con 0x55a68d32f400 session 0x55a68c769c00
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 137494528 unmapped: 20930560 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 207 heartbeat osd_stat(store_statfs(0x4f832d000/0x0/0x4ffc00000, data 0x3a1e432/0x3b51000, compress 0x0/0x0/0x0, omap 0x20f4b, meta 0x3d4f0b5), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 207 handle_osd_map epochs [208,208], i have 207, src has [1,208]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 208 heartbeat osd_stat(store_statfs(0x4f832d000/0x0/0x4ffc00000, data 0x3a1e432/0x3b51000, compress 0x0/0x0/0x0, omap 0x20f4b, meta 0x3d4f0b5), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 137641984 unmapped: 20783104 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 136429568 unmapped: 21995520 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 208 heartbeat osd_stat(store_statfs(0x4f8321000/0x0/0x4ffc00000, data 0x3a2f079/0x3b63000, compress 0x0/0x0/0x0, omap 0x2113a, meta 0x3d4eec6), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 208 ms_handle_reset con 0x55a68d320000 session 0x55a68f93d340
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 208 ms_handle_reset con 0x55a68d321c00 session 0x55a68ceda540
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 136658944 unmapped: 21766144 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 208 handle_osd_map epochs [209,209], i have 208, src has [1,209]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1833119 data_alloc: 251658240 data_used: 32321749
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 135872512 unmapped: 22552576 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 209 ms_handle_reset con 0x55a689eb2c00 session 0x55a68b6616c0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 135905280 unmapped: 22519808 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 135905280 unmapped: 22519808 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 135905280 unmapped: 22519808 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.605309486s of 11.502936363s, submitted: 119
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 135913472 unmapped: 22511616 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 209 handle_osd_map epochs [209,210], i have 209, src has [1,210]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 209 heartbeat osd_stat(store_statfs(0x4f8dcf000/0x0/0x4ffc00000, data 0x2f87b8b/0x30bd000, compress 0x0/0x0/0x0, omap 0x21687, meta 0x3d4e979), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1831969 data_alloc: 251658240 data_used: 32322037
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 135929856 unmapped: 22495232 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 210 heartbeat osd_stat(store_statfs(0x4f8dca000/0x0/0x4ffc00000, data 0x2f89661/0x30c0000, compress 0x0/0x0/0x0, omap 0x219d8, meta 0x3d4e628), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 210 ms_handle_reset con 0x55a68b297400 session 0x55a68a8008c0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 135962624 unmapped: 22462464 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 135962624 unmapped: 22462464 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 210 handle_osd_map epochs [211,211], i have 210, src has [1,211]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 211 ms_handle_reset con 0x55a68d32f000 session 0x55a68f8ae1c0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 137035776 unmapped: 21389312 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 211 heartbeat osd_stat(store_statfs(0x4f8dc5000/0x0/0x4ffc00000, data 0x2f8b2c6/0x30c5000, compress 0x0/0x0/0x0, omap 0x21ec9, meta 0x3d4e137), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1800.3 total, 600.0 interval
Cumulative writes: 8694 writes, 39K keys, 8694 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
Cumulative WAL: 8694 writes, 2022 syncs, 4.30 writes per sync, written: 0.03 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 2925 writes, 15K keys, 2925 commit groups, 1.0 writes per commit group, ingest: 7.55 MB, 0.01 MB/s
Interval WAL: 2925 writes, 1148 syncs, 2.55 writes per sync, written: 0.01 GB, 0.01 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 137068544 unmapped: 21356544 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 211 heartbeat osd_stat(store_statfs(0x4f8dc5000/0x0/0x4ffc00000, data 0x2f8b2c6/0x30c5000, compress 0x0/0x0/0x0, omap 0x21ec9, meta 0x3d4e137), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 211 ms_handle_reset con 0x55a68d32f400 session 0x55a690504540
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1839632 data_alloc: 251658240 data_used: 32322053
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 211 ms_handle_reset con 0x55a689eb2c00 session 0x55a68c3216c0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 137076736 unmapped: 21348352 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 211 ms_handle_reset con 0x55a68b297400 session 0x55a68b532a80
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 137101312 unmapped: 21323776 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 211 ms_handle_reset con 0x55a68d321c00 session 0x55a68cdf3340
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 211 heartbeat osd_stat(store_statfs(0x4f8dc5000/0x0/0x4ffc00000, data 0x2f8b2c6/0x30c5000, compress 0x0/0x0/0x0, omap 0x22009, meta 0x3d4dff7), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 211 ms_handle_reset con 0x55a68d32f000 session 0x55a68b182c40
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 137125888 unmapped: 21299200 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 211 handle_osd_map epochs [212,212], i have 211, src has [1,212]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 212 ms_handle_reset con 0x55a68d32e800 session 0x55a68fcfae00
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 137142272 unmapped: 21282816 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 212 ms_handle_reset con 0x55a689eb2c00 session 0x55a68cdf2a80
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 212 ms_handle_reset con 0x55a68b297400 session 0x55a68b188700
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 212 heartbeat osd_stat(store_statfs(0x4f8dc4000/0x0/0x4ffc00000, data 0x2f8ce9b/0x30c6000, compress 0x0/0x0/0x0, omap 0x22552, meta 0x3d4daae), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 137183232 unmapped: 21241856 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.667724609s of 10.825621605s, submitted: 100
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 212 ms_handle_reset con 0x55a68d321c00 session 0x55a68ce99500
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1843590 data_alloc: 251658240 data_used: 32322037
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 137183232 unmapped: 21241856 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 137183232 unmapped: 21241856 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 137183232 unmapped: 21241856 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 212 handle_osd_map epochs [213,213], i have 212, src has [1,213]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 213 ms_handle_reset con 0x55a68d32f000 session 0x55a690504700
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 213 heartbeat osd_stat(store_statfs(0x4f8dc5000/0x0/0x4ffc00000, data 0x2f8ceab/0x30c7000, compress 0x0/0x0/0x0, omap 0x22674, meta 0x3d4d98c), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 137191424 unmapped: 21233664 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 137191424 unmapped: 21233664 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 213 ms_handle_reset con 0x55a68b20dc00 session 0x55a68f93dc00
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 213 ms_handle_reset con 0x55a68d32e400 session 0x55a690505dc0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 213 handle_osd_map epochs [213,214], i have 213, src has [1,214]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1851025 data_alloc: 251658240 data_used: 32326149
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 137199616 unmapped: 21225472 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 214 ms_handle_reset con 0x55a68b297400 session 0x55a68f93c1c0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 214 ms_handle_reset con 0x55a68d321c00 session 0x55a68fcfb180
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 137207808 unmapped: 21217280 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 214 ms_handle_reset con 0x55a68d32f000 session 0x55a68fcfb6c0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 214 handle_osd_map epochs [214,215], i have 214, src has [1,215]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 215 ms_handle_reset con 0x55a68d52b000 session 0x55a68a933880
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 215 ms_handle_reset con 0x55a68d321c00 session 0x55a68f8ae000
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 215 heartbeat osd_stat(store_statfs(0x4f8db9000/0x0/0x4ffc00000, data 0x2f92193/0x30d1000, compress 0x0/0x0/0x0, omap 0x23391, meta 0x3d4cc6f), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 137388032 unmapped: 21037056 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 215 handle_osd_map epochs [215,216], i have 215, src has [1,216]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 216 ms_handle_reset con 0x55a68d32e400 session 0x55a68b1881c0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 216 ms_handle_reset con 0x55a68b297400 session 0x55a68d0ed180
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 216 ms_handle_reset con 0x55a68d52bc00 session 0x55a68b63d6c0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 136970240 unmapped: 21454848 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 216 ms_handle_reset con 0x55a68d32f000 session 0x55a68b63d500
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 216 ms_handle_reset con 0x55a68d32f000 session 0x55a68ce98380
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 137003008 unmapped: 21422080 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 216 handle_osd_map epochs [217,217], i have 216, src has [1,217]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.625836372s of 10.226813316s, submitted: 68
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1859418 data_alloc: 251658240 data_used: 32326522
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 137003008 unmapped: 21422080 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 217 ms_handle_reset con 0x55a68b297400 session 0x55a68c320e00
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 137019392 unmapped: 21405696 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 217 ms_handle_reset con 0x55a689eb2c00 session 0x55a68fcfa380
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 217 heartbeat osd_stat(store_statfs(0x4f8db5000/0x0/0x4ffc00000, data 0x2f95b72/0x30d5000, compress 0x0/0x0/0x0, omap 0x23d65, meta 0x3d4c29b), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 137019392 unmapped: 21405696 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 217 ms_handle_reset con 0x55a68c36e400 session 0x55a68c8dfc00
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 217 ms_handle_reset con 0x55a68cdb9000 session 0x55a68f93d500
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 127295488 unmapped: 31129600 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 217 ms_handle_reset con 0x55a689eb2c00 session 0x55a68cdf3500
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 127295488 unmapped: 31129600 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1671553 data_alloc: 234881024 data_used: 19365684
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 127295488 unmapped: 31129600 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 127295488 unmapped: 31129600 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 217 heartbeat osd_stat(store_statfs(0x4f9f7e000/0x0/0x4ffc00000, data 0x1dcfb62/0x1f0e000, compress 0x0/0x0/0x0, omap 0x23f30, meta 0x3d4c0d0), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 217 handle_osd_map epochs [218,218], i have 217, src has [1,218]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 217 handle_osd_map epochs [218,218], i have 218, src has [1,218]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 127295488 unmapped: 31129600 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 127295488 unmapped: 31129600 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 218 ms_handle_reset con 0x55a68b297400 session 0x55a68b533500
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 127303680 unmapped: 31121408 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 218 handle_osd_map epochs [219,219], i have 218, src has [1,219]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 219 ms_handle_reset con 0x55a68d32e400 session 0x55a68cdf2000
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 219 heartbeat osd_stat(store_statfs(0x4f9f75000/0x0/0x4ffc00000, data 0x1dd3319/0x1f15000, compress 0x0/0x0/0x0, omap 0x24444, meta 0x3d4bbbc), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.068937302s of 10.007222176s, submitted: 82
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1685938 data_alloc: 234881024 data_used: 19365684
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 219 ms_handle_reset con 0x55a68d52b000 session 0x55a68a3fafc0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 219 ms_handle_reset con 0x55a68d52a400 session 0x55a68d0c6380
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 127320064 unmapped: 31105024 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 219 handle_osd_map epochs [220,220], i have 219, src has [1,220]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 220 ms_handle_reset con 0x55a68d52bc00 session 0x55a68cdf2540
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 220 ms_handle_reset con 0x55a689eb2c00 session 0x55a690093180
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 127344640 unmapped: 31080448 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 220 handle_osd_map epochs [221,221], i have 220, src has [1,221]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 221 ms_handle_reset con 0x55a68d52a400 session 0x55a68d0ed880
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 221 ms_handle_reset con 0x55a68b297400 session 0x55a68fcfafc0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 127361024 unmapped: 31064064 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 221 heartbeat osd_stat(store_statfs(0x4f9f6f000/0x0/0x4ffc00000, data 0x1dd6c3a/0x1f1b000, compress 0x0/0x0/0x0, omap 0x24b96, meta 0x3d4b46a), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 221 handle_osd_map epochs [222,222], i have 221, src has [1,222]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 222 ms_handle_reset con 0x55a68d32e400 session 0x55a68b63cc40
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 222 ms_handle_reset con 0x55a689eb2c00 session 0x55a68cdff180
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 127401984 unmapped: 31023104 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 222 ms_handle_reset con 0x55a68b297400 session 0x55a68f8ae1c0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 222 ms_handle_reset con 0x55a68d52b000 session 0x55a68fcfa8c0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 128835584 unmapped: 29589504 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 222 ms_handle_reset con 0x55a68d447000 session 0x55a68b188540
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 222 ms_handle_reset con 0x55a68d52a400 session 0x55a690092c40
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 222 ms_handle_reset con 0x55a689eb2c00 session 0x55a690505180
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 222 ms_handle_reset con 0x55a68d447000 session 0x55a68d0c6fc0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 222 handle_osd_map epochs [223,223], i have 222, src has [1,223]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1699931 data_alloc: 234881024 data_used: 19366898
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 223 ms_handle_reset con 0x55a68d52b000 session 0x55a68b63cfc0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 223 ms_handle_reset con 0x55a68b297400 session 0x55a68d0eda40
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 223 ms_handle_reset con 0x55a68d447400 session 0x55a68f93ca80
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 128868352 unmapped: 29556736 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 223 ms_handle_reset con 0x55a68d52bc00 session 0x55a68b63ddc0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 223 ms_handle_reset con 0x55a68b297400 session 0x55a68a801c00
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 128868352 unmapped: 29556736 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 223 ms_handle_reset con 0x55a68d447000 session 0x55a68edd7880
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 223 handle_osd_map epochs [223,224], i have 223, src has [1,224]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 224 ms_handle_reset con 0x55a689eb2c00 session 0x55a68a933180
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 128884736 unmapped: 29540352 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 224 handle_osd_map epochs [225,225], i have 224, src has [1,225]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 225 heartbeat osd_stat(store_statfs(0x4f9f63000/0x0/0x4ffc00000, data 0x1ddc11f/0x1f25000, compress 0x0/0x0/0x0, omap 0x2551a, meta 0x3d4aae6), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 128884736 unmapped: 29540352 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 225 ms_handle_reset con 0x55a68d52b000 session 0x55a68a822e00
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 128892928 unmapped: 29532160 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 225 handle_osd_map epochs [225,226], i have 225, src has [1,226]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 226 ms_handle_reset con 0x55a689eb2c00 session 0x55a68b189dc0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 226 ms_handle_reset con 0x55a68b297400 session 0x55a68c769dc0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.024145126s of 10.002042770s, submitted: 119
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1708388 data_alloc: 234881024 data_used: 19367483
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 226 handle_osd_map epochs [227,227], i have 226, src has [1,227]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 128925696 unmapped: 29499392 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 132489216 unmapped: 25935872 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 227 ms_handle_reset con 0x55a68d52bc00 session 0x55a68d0c76c0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 227 handle_osd_map epochs [228,228], i have 227, src has [1,228]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 228 ms_handle_reset con 0x55a68d446c00 session 0x55a68b189180
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 228 ms_handle_reset con 0x55a68d446800 session 0x55a68d0c7500
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 228 ms_handle_reset con 0x55a68d447000 session 0x55a68b5336c0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 127844352 unmapped: 30580736 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 228 heartbeat osd_stat(store_statfs(0x4f96bc000/0x0/0x4ffc00000, data 0x268014a/0x27cc000, compress 0x0/0x0/0x0, omap 0x2722b, meta 0x3d48dd5), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 127582208 unmapped: 30842880 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 228 ms_handle_reset con 0x55a689eb2c00 session 0x55a68a822fc0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 228 ms_handle_reset con 0x55a68b297400 session 0x55a68a8016c0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 228 handle_osd_map epochs [228,229], i have 228, src has [1,229]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 127213568 unmapped: 31211520 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 229 ms_handle_reset con 0x55a68d446c00 session 0x55a68cedbc00
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 229 handle_osd_map epochs [229,230], i have 229, src has [1,230]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1774085 data_alloc: 234881024 data_used: 19367982
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 127221760 unmapped: 31203328 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 230 handle_osd_map epochs [230,231], i have 230, src has [1,231]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 231 ms_handle_reset con 0x55a68d52bc00 session 0x55a690092e00
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 127098880 unmapped: 31326208 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 231 ms_handle_reset con 0x55a689eb2c00 session 0x55a68d0c6a80
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 231 ms_handle_reset con 0x55a68d447c00 session 0x55a68b660700
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 231 handle_osd_map epochs [232,232], i have 231, src has [1,232]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 232 ms_handle_reset con 0x55a68b297400 session 0x55a68c28c1c0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 232 ms_handle_reset con 0x55a68d446800 session 0x55a68cdf2e00
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 127148032 unmapped: 31277056 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 232 heartbeat osd_stat(store_statfs(0x4f9122000/0x0/0x4ffc00000, data 0x2c144f0/0x2d64000, compress 0x0/0x0/0x0, omap 0x28249, meta 0x3d47db7), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 127148032 unmapped: 31277056 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 232 ms_handle_reset con 0x55a68d446c00 session 0x55a68b5996c0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 232 ms_handle_reset con 0x55a68b297400 session 0x55a68edd7dc0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 127180800 unmapped: 31244288 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 232 handle_osd_map epochs [232,233], i have 232, src has [1,233]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 232 handle_osd_map epochs [233,233], i have 233, src has [1,233]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 233 ms_handle_reset con 0x55a68d446800 session 0x55a68fdf7c00
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 233 ms_handle_reset con 0x55a68d446400 session 0x55a68a822540
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 233 ms_handle_reset con 0x55a68d52bc00 session 0x55a68c28d6c0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1825604 data_alloc: 234881024 data_used: 19368014
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 127729664 unmapped: 30695424 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 233 handle_osd_map epochs [234,234], i have 233, src has [1,234]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 6.711951256s of 10.551823616s, submitted: 137
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 127664128 unmapped: 30760960 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 234 ms_handle_reset con 0x55a68ce8f000 session 0x55a68fcfb880
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 234 ms_handle_reset con 0x55a68d447c00 session 0x55a68c320700
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 127713280 unmapped: 30711808 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 234 ms_handle_reset con 0x55a689eb2c00 session 0x55a68cdf2c40
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 234 heartbeat osd_stat(store_statfs(0x4f89fc000/0x0/0x4ffc00000, data 0x33370de/0x348c000, compress 0x0/0x0/0x0, omap 0x22577, meta 0x3d4da89), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 127713280 unmapped: 30711808 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 234 heartbeat osd_stat(store_statfs(0x4f89fc000/0x0/0x4ffc00000, data 0x33370de/0x348c000, compress 0x0/0x0/0x0, omap 0x22577, meta 0x3d4da89), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 234 handle_osd_map epochs [235,235], i have 234, src has [1,235]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 235 heartbeat osd_stat(store_statfs(0x4f89fa000/0x0/0x4ffc00000, data 0x3338d33/0x3490000, compress 0x0/0x0/0x0, omap 0x22577, meta 0x3d4da89), peers [0,1] op hist [0,0,0,0,0,0,0,2])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 235 ms_handle_reset con 0x55a68b297400 session 0x55a68b183a40
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 235 ms_handle_reset con 0x55a68d446000 session 0x55a68a3fbdc0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 127238144 unmapped: 31186944 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 235 ms_handle_reset con 0x55a68d446800 session 0x55a68a933500
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 235 ms_handle_reset con 0x55a68d446800 session 0x55a690504e00
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 235 handle_osd_map epochs [235,236], i have 235, src has [1,236]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 235 handle_osd_map epochs [236,236], i have 236, src has [1,236]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 236 ms_handle_reset con 0x55a68d447000 session 0x55a68fdf7880
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1879084 data_alloc: 234881024 data_used: 19369614
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 127238144 unmapped: 31186944 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 127262720 unmapped: 31162368 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 127279104 unmapped: 31145984 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 236 handle_osd_map epochs [236,237], i have 236, src has [1,237]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 236 handle_osd_map epochs [237,237], i have 237, src has [1,237]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 127279104 unmapped: 31145984 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 237 ms_handle_reset con 0x55a689eb2c00 session 0x55a68cdf2fc0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 237 heartbeat osd_stat(store_statfs(0x4f89f5000/0x0/0x4ffc00000, data 0x333c724/0x3495000, compress 0x0/0x0/0x0, omap 0x22027, meta 0x3d4dfd9), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 127311872 unmapped: 31113216 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 237 ms_handle_reset con 0x55a68d446400 session 0x55a68b63d340
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1839546 data_alloc: 234881024 data_used: 19369598
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 127361024 unmapped: 31064064 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 2.557184219s of 10.018120766s, submitted: 135
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 237 handle_osd_map epochs [238,238], i have 237, src has [1,238]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 237 handle_osd_map epochs [237,238], i have 238, src has [1,238]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 238 ms_handle_reset con 0x55a68b297400 session 0x55a68fcfae00
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 127426560 unmapped: 30998528 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 238 handle_osd_map epochs [239,239], i have 238, src has [1,239]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 239 ms_handle_reset con 0x55a68d446000 session 0x55a68a8001c0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 239 ms_handle_reset con 0x55a68b297400 session 0x55a690505880
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 239 ms_handle_reset con 0x55a689eb2c00 session 0x55a68cdffdc0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 239 ms_handle_reset con 0x55a68d446400 session 0x55a68b533c00
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 127639552 unmapped: 30785536 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 127647744 unmapped: 30777344 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 239 handle_osd_map epochs [240,240], i have 239, src has [1,240]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 240 ms_handle_reset con 0x55a68d446800 session 0x55a68d0ed500
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 240 ms_handle_reset con 0x55a689eb2c00 session 0x55a68d0c7340
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 127655936 unmapped: 30769152 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 240 heartbeat osd_stat(store_statfs(0x4f969e000/0x0/0x4ffc00000, data 0x26939d2/0x27ec000, compress 0x0/0x0/0x0, omap 0x20a94, meta 0x3d4f56c), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 240 ms_handle_reset con 0x55a68b297400 session 0x55a68cedb180
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1767264 data_alloc: 234881024 data_used: 19367982
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 127655936 unmapped: 30769152 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 240 handle_osd_map epochs [240,241], i have 240, src has [1,241]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 127655936 unmapped: 30769152 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 127655936 unmapped: 30769152 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 241 ms_handle_reset con 0x55a68d446400 session 0x55a688dd3180
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 241 heartbeat osd_stat(store_statfs(0x4f9f35000/0x0/0x4ffc00000, data 0x1dfa1fd/0x1f57000, compress 0x0/0x0/0x0, omap 0x21000, meta 0x3d4f000), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 127655936 unmapped: 30769152 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 241 heartbeat osd_stat(store_statfs(0x4f9f35000/0x0/0x4ffc00000, data 0x1dfa1fd/0x1f57000, compress 0x0/0x0/0x0, omap 0x21000, meta 0x3d4f000), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 241 handle_osd_map epochs [242,242], i have 241, src has [1,242]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 242 ms_handle_reset con 0x55a68d447000 session 0x55a68cdf3dc0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 127655936 unmapped: 30769152 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 242 heartbeat osd_stat(store_statfs(0x4f9f35000/0x0/0x4ffc00000, data 0x1dfa1fd/0x1f57000, compress 0x0/0x0/0x0, omap 0x21000, meta 0x3d4f000), peers [0,1] op hist [0,0,1,0,0,1])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 242 ms_handle_reset con 0x55a68d52bc00 session 0x55a690093a40
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 242 handle_osd_map epochs [242,243], i have 242, src has [1,243]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1778897 data_alloc: 234881024 data_used: 19367998
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 127705088 unmapped: 30720000 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 243 ms_handle_reset con 0x55a68d447c00 session 0x55a68c8dea80
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.446841240s of 10.017427444s, submitted: 206
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 243 ms_handle_reset con 0x55a689eb2c00 session 0x55a68b661dc0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 243 handle_osd_map epochs [244,244], i have 243, src has [1,244]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 244 ms_handle_reset con 0x55a68d446000 session 0x55a68c7696c0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 128802816 unmapped: 29622272 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 244 heartbeat osd_stat(store_statfs(0x4f9f29000/0x0/0x4ffc00000, data 0x1dffaad/0x1f61000, compress 0x0/0x0/0x0, omap 0x20a74, meta 0x3d4f58c), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 128802816 unmapped: 29622272 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 244 ms_handle_reset con 0x55a68b297400 session 0x55a68fcfa540
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 128802816 unmapped: 29622272 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 244 ms_handle_reset con 0x55a68d446400 session 0x55a68cdf21c0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 244 ms_handle_reset con 0x55a68b297400 session 0x55a68a3fb880
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 244 ms_handle_reset con 0x55a689eb2c00 session 0x55a68b532fc0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 128811008 unmapped: 29614080 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 244 ms_handle_reset con 0x55a68d446400 session 0x55a68c8dfdc0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1786407 data_alloc: 234881024 data_used: 19369168
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 128811008 unmapped: 29614080 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 244 handle_osd_map epochs [245,245], i have 244, src has [1,245]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 245 ms_handle_reset con 0x55a68ce8e000 session 0x55a68ce98000
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 245 ms_handle_reset con 0x55a68ce8ec00 session 0x55a68f8aea80
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 128827392 unmapped: 29597696 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 245 ms_handle_reset con 0x55a68d446000 session 0x55a68fdf6000
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 245 heartbeat osd_stat(store_statfs(0x4f9f26000/0x0/0x4ffc00000, data 0x1e01764/0x1f66000, compress 0x0/0x0/0x0, omap 0x20b84, meta 0x3d4f47c), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 128835584 unmapped: 29589504 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 245 heartbeat osd_stat(store_statfs(0x4f9f26000/0x0/0x4ffc00000, data 0x1e01764/0x1f66000, compress 0x0/0x0/0x0, omap 0x20b84, meta 0x3d4f47c), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 128835584 unmapped: 29589504 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 245 ms_handle_reset con 0x55a68d447c00 session 0x55a689e05340
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 245 ms_handle_reset con 0x55a68d447000 session 0x55a690092540
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 128835584 unmapped: 29589504 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 245 handle_osd_map epochs [246,246], i have 245, src has [1,246]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1790585 data_alloc: 234881024 data_used: 19371109
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 128843776 unmapped: 29581312 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 128843776 unmapped: 29581312 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 246 heartbeat osd_stat(store_statfs(0x4f9f23000/0x0/0x4ffc00000, data 0x1e032e7/0x1f67000, compress 0x0/0x0/0x0, omap 0x20b84, meta 0x3d4f47c), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 128843776 unmapped: 29581312 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 246 heartbeat osd_stat(store_statfs(0x4f9f23000/0x0/0x4ffc00000, data 0x1e032e7/0x1f67000, compress 0x0/0x0/0x0, omap 0x20b84, meta 0x3d4f47c), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 246 ms_handle_reset con 0x55a689eb2c00 session 0x55a68fdf7a40
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 246 ms_handle_reset con 0x55a68b297400 session 0x55a68b661880
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 128843776 unmapped: 29581312 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.673930168s of 13.273561478s, submitted: 61
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 128843776 unmapped: 29581312 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 246 ms_handle_reset con 0x55a68d447000 session 0x55a68f8aec40
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 246 handle_osd_map epochs [247,247], i have 246, src has [1,247]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 247 ms_handle_reset con 0x55a68d446000 session 0x55a68fdf6700
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 247 ms_handle_reset con 0x55a68ce8e000 session 0x55a68b26d340
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 247 ms_handle_reset con 0x55a68d446400 session 0x55a68d0edc00
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1874565 data_alloc: 234881024 data_used: 19371207
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 247 ms_handle_reset con 0x55a68ce8e800 session 0x55a68a823180
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 247 ms_handle_reset con 0x55a68ce8e000 session 0x55a68b63c540
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 129122304 unmapped: 29302784 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 247 ms_handle_reset con 0x55a68d447c00 session 0x55a68ce99dc0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 247 handle_osd_map epochs [247,248], i have 247, src has [1,248]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 248 ms_handle_reset con 0x55a689eb2c00 session 0x55a68cdff880
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 248 ms_handle_reset con 0x55a68d446400 session 0x55a68a800fc0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 248 heartbeat osd_stat(store_statfs(0x4f913b000/0x0/0x4ffc00000, data 0x2be8a3d/0x2d4f000, compress 0x0/0x0/0x0, omap 0x1fc55, meta 0x3d503ab), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 129064960 unmapped: 29360128 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 248 handle_osd_map epochs [248,249], i have 248, src has [1,249]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 249 ms_handle_reset con 0x55a68d447000 session 0x55a68b26ddc0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 249 ms_handle_reset con 0x55a68d446000 session 0x55a68c8dfa40
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 249 ms_handle_reset con 0x55a689eb2c00 session 0x55a68a822c40
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 130113536 unmapped: 28311552 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 249 ms_handle_reset con 0x55a68ce8e000 session 0x55a68fcfb340
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 130113536 unmapped: 28311552 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 130113536 unmapped: 28311552 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1882504 data_alloc: 234881024 data_used: 19369737
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 130113536 unmapped: 28311552 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 249 ms_handle_reset con 0x55a68d446400 session 0x55a68c7d7c00
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 249 heartbeat osd_stat(store_statfs(0x4f9137000/0x0/0x4ffc00000, data 0x2bea154/0x2d51000, compress 0x0/0x0/0x0, omap 0x1fd65, meta 0x3d5029b), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 249 handle_osd_map epochs [250,250], i have 249, src has [1,250]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 249 handle_osd_map epochs [249,250], i have 250, src has [1,250]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 250 ms_handle_reset con 0x55a68d447c00 session 0x55a68f93cfc0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 129990656 unmapped: 28434432 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 250 heartbeat osd_stat(store_statfs(0x4f9134000/0x0/0x4ffc00000, data 0x2bebc79/0x2d56000, compress 0x0/0x0/0x0, omap 0x1fdcf, meta 0x3d50231), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 136364032 unmapped: 22061056 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 250 heartbeat osd_stat(store_statfs(0x4f9134000/0x0/0x4ffc00000, data 0x2bebc79/0x2d56000, compress 0x0/0x0/0x0, omap 0x1fdcf, meta 0x3d50231), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 250 ms_handle_reset con 0x55a68d446000 session 0x55a68b188e00
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 136396800 unmapped: 22028288 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 250 heartbeat osd_stat(store_statfs(0x4f9134000/0x0/0x4ffc00000, data 0x2bebc79/0x2d56000, compress 0x0/0x0/0x0, omap 0x1fdcf, meta 0x3d50231), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 250 handle_osd_map epochs [251,251], i have 250, src has [1,251]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 250 handle_osd_map epochs [251,251], i have 251, src has [1,251]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.209948540s of 10.511286736s, submitted: 102
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 250 handle_osd_map epochs [251,251], i have 251, src has [1,251]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 136445952 unmapped: 21979136 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 251 ms_handle_reset con 0x55a68d446400 session 0x55a689e04540
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 251 ms_handle_reset con 0x55a68ce8e400 session 0x55a68f8af340
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 251 handle_osd_map epochs [251,252], i have 251, src has [1,252]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 252 ms_handle_reset con 0x55a68dbd0c00 session 0x55a68d0ec540
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 252 ms_handle_reset con 0x55a68dbd0000 session 0x55a68b599dc0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 252 ms_handle_reset con 0x55a68d447c00 session 0x55a690505340
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1976789 data_alloc: 251658240 data_used: 33732873
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 252 ms_handle_reset con 0x55a68ce8e400 session 0x55a68b63c1c0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 136200192 unmapped: 22224896 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 252 heartbeat osd_stat(store_statfs(0x4f912f000/0x0/0x4ffc00000, data 0x2bed888/0x2d59000, compress 0x0/0x0/0x0, omap 0x1fedf, meta 0x3d50121), peers [0,1] op hist [])
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 252 ms_handle_reset con 0x55a68d446000 session 0x55a68d0c6e00
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 252 ms_handle_reset con 0x55a68d446400 session 0x55a68c8defc0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 252 handle_osd_map epochs [253,253], i have 252, src has [1,253]
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 253 ms_handle_reset con 0x55a68dbd0400 session 0x55a68a3fa1c0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 253 ms_handle_reset con 0x55a68dbd0c00 session 0x55a689e04e00
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 253 ms_handle_reset con 0x55a68ce8e400 session 0x55a68cedba40
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 253 ms_handle_reset con 0x55a68d446000 session 0x55a68fdf76c0
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: osd.2 253 ms_handle_reset con 0x55a68d446400 session 0x55a68b26dc00
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 136552448 unmapped: 21872640 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:25 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 136552448 unmapped: 21872640 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 253 heartbeat osd_stat(store_statfs(0x4f8b55000/0x0/0x4ffc00000, data 0x31c4fd1/0x3333000, compress 0x0/0x0/0x0, omap 0x20059, meta 0x3d4ffa7), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 253 ms_handle_reset con 0x55a68d447c00 session 0x55a68a822a80
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 136552448 unmapped: 21872640 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 253 ms_handle_reset con 0x55a68ce8e400 session 0x55a68ce98540
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 136552448 unmapped: 21872640 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 253 ms_handle_reset con 0x55a68d446000 session 0x55a690505c00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 253 ms_handle_reset con 0x55a68d446400 session 0x55a689e04a80
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2018015 data_alloc: 251658240 data_used: 33733458
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 136552448 unmapped: 21872640 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 253 handle_osd_map epochs [254,254], i have 253, src has [1,254]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 142974976 unmapped: 15450112 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 254 heartbeat osd_stat(store_statfs(0x4f8b53000/0x0/0x4ffc00000, data 0x31c6ab7/0x3337000, compress 0x0/0x0/0x0, omap 0x201d3, meta 0x3d4fe2d), peers [0,1] op hist [0,0,0,0,0,1,5])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 148045824 unmapped: 10379264 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 148586496 unmapped: 9838592 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 147972096 unmapped: 10452992 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2118855 data_alloc: 251658240 data_used: 40900946
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 147972096 unmapped: 10452992 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 254 heartbeat osd_stat(store_statfs(0x4f81fb000/0x0/0x4ffc00000, data 0x3b20ab7/0x3c91000, compress 0x0/0x0/0x0, omap 0x201d3, meta 0x3d4fe2d), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 148004864 unmapped: 10420224 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 148004864 unmapped: 10420224 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 148004864 unmapped: 10420224 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.708168983s of 15.052724838s, submitted: 131
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 254 ms_handle_reset con 0x55a68dbd1800 session 0x55a68fcfbdc0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 148004864 unmapped: 10420224 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 254 heartbeat osd_stat(store_statfs(0x4f81d8000/0x0/0x4ffc00000, data 0x3b43ab7/0x3cb4000, compress 0x0/0x0/0x0, omap 0x20330, meta 0x3d4fcd0), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 254 handle_osd_map epochs [255,255], i have 254, src has [1,255]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 254 handle_osd_map epochs [254,255], i have 255, src has [1,255]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2122453 data_alloc: 251658240 data_used: 40909138
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 148299776 unmapped: 10125312 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 255 ms_handle_reset con 0x55a68dbd0800 session 0x55a68cdff180
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 255 ms_handle_reset con 0x55a68dbd1000 session 0x55a68a3fba40
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 255 handle_osd_map epochs [256,256], i have 255, src has [1,256]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 256 ms_handle_reset con 0x55a68dbd0800 session 0x55a68b26ce00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 148316160 unmapped: 10108928 heap: 158425088 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 151142400 unmapped: 8339456 heap: 159481856 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 256 heartbeat osd_stat(store_statfs(0x4f798c000/0x0/0x4ffc00000, data 0x4388791/0x44fc000, compress 0x0/0x0/0x0, omap 0x20440, meta 0x3d4fbc0), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 256 handle_osd_map epochs [257,257], i have 256, src has [1,257]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 150888448 unmapped: 8593408 heap: 159481856 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 257 ms_handle_reset con 0x55a68ce8e400 session 0x55a68f8aefc0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 257 ms_handle_reset con 0x55a68d446000 session 0x55a68c28c540
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 150962176 unmapped: 8519680 heap: 159481856 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 257 heartbeat osd_stat(store_statfs(0x4f7967000/0x0/0x4ffc00000, data 0x43aeee4/0x4523000, compress 0x0/0x0/0x0, omap 0x205de, meta 0x3d4fa22), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2182567 data_alloc: 251658240 data_used: 41310546
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 150994944 unmapped: 8486912 heap: 159481856 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 257 handle_osd_map epochs [257,258], i have 257, src has [1,258]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 258 heartbeat osd_stat(store_statfs(0x4f7967000/0x0/0x4ffc00000, data 0x43aeee4/0x4523000, compress 0x0/0x0/0x0, omap 0x205de, meta 0x3d4fa22), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 151011328 unmapped: 8470528 heap: 159481856 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 258 ms_handle_reset con 0x55a68d446400 session 0x55a689e041c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 151011328 unmapped: 8470528 heap: 159481856 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 258 ms_handle_reset con 0x55a689eb2c00 session 0x55a689e056c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 258 ms_handle_reset con 0x55a68ce8e000 session 0x55a68a801340
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 151019520 unmapped: 8462336 heap: 159481856 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 151035904 unmapped: 8445952 heap: 159481856 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.713325500s of 10.158923149s, submitted: 98
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 258 heartbeat osd_stat(store_statfs(0x4f7966000/0x0/0x4ffc00000, data 0x43b0b47/0x4526000, compress 0x0/0x0/0x0, omap 0x205de, meta 0x3d4fa22), peers [0,1] op hist [0,0,0,0,0,0,1,1])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1965665 data_alloc: 234881024 data_used: 25891767
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 141557760 unmapped: 17924096 heap: 159481856 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 258 handle_osd_map epochs [259,259], i have 258, src has [1,259]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 141565952 unmapped: 17915904 heap: 159481856 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 259 ms_handle_reset con 0x55a68ce8e400 session 0x55a689e04700
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 259 heartbeat osd_stat(store_statfs(0x4f90bf000/0x0/0x4ffc00000, data 0x2c56616/0x2dcc000, compress 0x0/0x0/0x0, omap 0x1ff90, meta 0x3d50070), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 141565952 unmapped: 17915904 heap: 159481856 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 259 heartbeat osd_stat(store_statfs(0x4f90bf000/0x0/0x4ffc00000, data 0x2c56616/0x2dcc000, compress 0x0/0x0/0x0, omap 0x1ff90, meta 0x3d50070), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 141565952 unmapped: 17915904 heap: 159481856 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 259 ms_handle_reset con 0x55a68dbd0800 session 0x55a68a932700
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 140910592 unmapped: 18571264 heap: 159481856 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1968810 data_alloc: 234881024 data_used: 25887687
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 259 handle_osd_map epochs [260,260], i have 259, src has [1,260]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 140910592 unmapped: 18571264 heap: 159481856 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 260 heartbeat osd_stat(store_statfs(0x4f90be000/0x0/0x4ffc00000, data 0x2c56678/0x2dcd000, compress 0x0/0x0/0x0, omap 0x200f3, meta 0x3d4ff0d), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 260 ms_handle_reset con 0x55a68dbd1800 session 0x55a68cdf2540
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 260 heartbeat osd_stat(store_statfs(0x4f90b9000/0x0/0x4ffc00000, data 0x2c582e9/0x2dd1000, compress 0x0/0x0/0x0, omap 0x20203, meta 0x3d4fdfd), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 260 handle_osd_map epochs [261,261], i have 260, src has [1,261]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 141041664 unmapped: 18440192 heap: 159481856 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 261 handle_osd_map epochs [262,262], i have 261, src has [1,262]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 140361728 unmapped: 19120128 heap: 159481856 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 262 ms_handle_reset con 0x55a68dbd1000 session 0x55a68cdfe540
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 140361728 unmapped: 19120128 heap: 159481856 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 262 ms_handle_reset con 0x55a68d446400 session 0x55a68c3208c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 262 heartbeat osd_stat(store_statfs(0x4f90ae000/0x0/0x4ffc00000, data 0x2c5ba14/0x2dd8000, compress 0x0/0x0/0x0, omap 0x263ff, meta 0x3d49c01), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 140361728 unmapped: 19120128 heap: 159481856 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.668067932s of 10.085347176s, submitted: 60
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1979871 data_alloc: 234881024 data_used: 25887750
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 262 ms_handle_reset con 0x55a68ce8e400 session 0x55a68d0ec700
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 139608064 unmapped: 19873792 heap: 159481856 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 262 ms_handle_reset con 0x55a689eb2c00 session 0x55a68b26d6c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 262 handle_osd_map epochs [263,263], i have 262, src has [1,263]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 139608064 unmapped: 19873792 heap: 159481856 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 263 ms_handle_reset con 0x55a68dbd0800 session 0x55a68c321180
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 139616256 unmapped: 19865600 heap: 159481856 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 263 handle_osd_map epochs [264,264], i have 263, src has [1,264]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 264 ms_handle_reset con 0x55a68dbd1400 session 0x55a68c28ca80
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 264 heartbeat osd_stat(store_statfs(0x4f90ab000/0x0/0x4ffc00000, data 0x2c5f197/0x2ddd000, compress 0x0/0x0/0x0, omap 0x2669f, meta 0x3d49961), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 264 ms_handle_reset con 0x55a68ce8e000 session 0x55a68ce99a40
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 139616256 unmapped: 19865600 heap: 159481856 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 139616256 unmapped: 19865600 heap: 159481856 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 264 heartbeat osd_stat(store_statfs(0x4f90ab000/0x0/0x4ffc00000, data 0x2c5f197/0x2ddd000, compress 0x0/0x0/0x0, omap 0x2669f, meta 0x3d49961), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 264 handle_osd_map epochs [264,265], i have 264, src has [1,265]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1988977 data_alloc: 234881024 data_used: 25892332
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 139616256 unmapped: 19865600 heap: 159481856 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 265 ms_handle_reset con 0x55a689eb2c00 session 0x55a68cdf3dc0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 139616256 unmapped: 19865600 heap: 159481856 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 265 ms_handle_reset con 0x55a68d446400 session 0x55a68cdf28c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 139255808 unmapped: 20226048 heap: 159481856 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 139255808 unmapped: 20226048 heap: 159481856 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 265 handle_osd_map epochs [266,266], i have 265, src has [1,266]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 266 ms_handle_reset con 0x55a68dbd1000 session 0x55a68fcfbc00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 139255808 unmapped: 20226048 heap: 159481856 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 266 handle_osd_map epochs [266,267], i have 266, src has [1,267]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.082992554s of 10.337272644s, submitted: 27
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 266 handle_osd_map epochs [267,267], i have 267, src has [1,267]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 267 ms_handle_reset con 0x55a68f588000 session 0x55a68b599a40
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1994525 data_alloc: 234881024 data_used: 25892332
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 267 ms_handle_reset con 0x55a68ce8e400 session 0x55a68cedaa80
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 139255808 unmapped: 20226048 heap: 159481856 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 267 heartbeat osd_stat(store_statfs(0x4f90a7000/0x0/0x4ffc00000, data 0x2c629ed/0x2de3000, compress 0x0/0x0/0x0, omap 0x267ef, meta 0x3d49811), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 139255808 unmapped: 20226048 heap: 159481856 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 139264000 unmapped: 20217856 heap: 159481856 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 267 handle_osd_map epochs [269,269], i have 267, src has [1,269]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 267 handle_osd_map epochs [268,269], i have 267, src has [1,269]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 269 ms_handle_reset con 0x55a68ce8e000 session 0x55a689e04000
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 269 ms_handle_reset con 0x55a68d446400 session 0x55a68c3201c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 269 ms_handle_reset con 0x55a68dbd1400 session 0x55a68a801c00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 139264000 unmapped: 20217856 heap: 159481856 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 269 ms_handle_reset con 0x55a68ce8e000 session 0x55a68f93c000
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 269 ms_handle_reset con 0x55a689eb2c00 session 0x55a68c320e00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 269 heartbeat osd_stat(store_statfs(0x4f909d000/0x0/0x4ffc00000, data 0x2c67d29/0x2ded000, compress 0x0/0x0/0x0, omap 0x26975, meta 0x3d4968b), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 139264000 unmapped: 20217856 heap: 159481856 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2041997 data_alloc: 234881024 data_used: 25892917
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 269 ms_handle_reset con 0x55a68ce8e400 session 0x55a68f8af6c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 269 ms_handle_reset con 0x55a68f588000 session 0x55a68cdf2000
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 269 ms_handle_reset con 0x55a68f588400 session 0x55a68a801880
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 140214272 unmapped: 27664384 heap: 167878656 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 269 ms_handle_reset con 0x55a689eb2c00 session 0x55a68edd6fc0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 269 ms_handle_reset con 0x55a68ce8e000 session 0x55a690504c40
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 140214272 unmapped: 27664384 heap: 167878656 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 269 handle_osd_map epochs [270,270], i have 269, src has [1,270]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 269 handle_osd_map epochs [270,270], i have 270, src has [1,270]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 140230656 unmapped: 27648000 heap: 167878656 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 28385280 heap: 167878656 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 270 ms_handle_reset con 0x55a68d446400 session 0x55a68cdf3340
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 270 heartbeat osd_stat(store_statfs(0x4f8afc000/0x0/0x4ffc00000, data 0x320891e/0x338e000, compress 0x0/0x0/0x0, omap 0x26b55, meta 0x3d494ab), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 270 heartbeat osd_stat(store_statfs(0x4f8afd000/0x0/0x4ffc00000, data 0x32088bc/0x338d000, compress 0x0/0x0/0x0, omap 0x26b55, meta 0x3d494ab), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 28385280 heap: 167878656 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2042049 data_alloc: 234881024 data_used: 25892819
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 28385280 heap: 167878656 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 270 ms_handle_reset con 0x55a68d446000 session 0x55a68f8ae380
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 270 ms_handle_reset con 0x55a68ce8e400 session 0x55a68b26ddc0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 28385280 heap: 167878656 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 270 heartbeat osd_stat(store_statfs(0x4f8afd000/0x0/0x4ffc00000, data 0x32088bc/0x338d000, compress 0x0/0x0/0x0, omap 0x26b55, meta 0x3d494ab), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 270 handle_osd_map epochs [270,271], i have 270, src has [1,271]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.583072662s of 12.355546951s, submitted: 63
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 139501568 unmapped: 28377088 heap: 167878656 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 271 ms_handle_reset con 0x55a68dbd0c00 session 0x55a68a801180
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 271 ms_handle_reset con 0x55a68dbd0400 session 0x55a68c28ce00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 139501568 unmapped: 28377088 heap: 167878656 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 139501568 unmapped: 28377088 heap: 167878656 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2044333 data_alloc: 234881024 data_used: 25892835
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 139501568 unmapped: 28377088 heap: 167878656 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 139501568 unmapped: 28377088 heap: 167878656 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 271 heartbeat osd_stat(store_statfs(0x4f8afb000/0x0/0x4ffc00000, data 0x320a39e/0x338f000, compress 0x0/0x0/0x0, omap 0x26bc1, meta 0x3d4943f), peers [0,1] op hist [0,0,0,0,0,0,0,0,2])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 138100736 unmapped: 29777920 heap: 167878656 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 271 handle_osd_map epochs [272,272], i have 271, src has [1,272]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 271 handle_osd_map epochs [271,272], i have 272, src has [1,272]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 137502720 unmapped: 30375936 heap: 167878656 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 272 heartbeat osd_stat(store_statfs(0x4f9935000/0x0/0x4ffc00000, data 0x23cee74/0x2555000, compress 0x0/0x0/0x0, omap 0x26d7d, meta 0x3d49283), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 137502720 unmapped: 30375936 heap: 167878656 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1964213 data_alloc: 234881024 data_used: 24986083
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 137502720 unmapped: 30375936 heap: 167878656 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 137502720 unmapped: 30375936 heap: 167878656 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 272 heartbeat osd_stat(store_statfs(0x4f9935000/0x0/0x4ffc00000, data 0x23cee74/0x2555000, compress 0x0/0x0/0x0, omap 0x26d7d, meta 0x3d49283), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.665476322s of 10.079894066s, submitted: 32
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 137502720 unmapped: 30375936 heap: 167878656 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 137502720 unmapped: 30375936 heap: 167878656 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 272 ms_handle_reset con 0x55a68d446000 session 0x55a68ce996c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 137502720 unmapped: 30375936 heap: 167878656 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1963588 data_alloc: 234881024 data_used: 24986083
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 137502720 unmapped: 30375936 heap: 167878656 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 137502720 unmapped: 30375936 heap: 167878656 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 272 heartbeat osd_stat(store_statfs(0x4f9936000/0x0/0x4ffc00000, data 0x23cee64/0x2554000, compress 0x0/0x0/0x0, omap 0x26d7d, meta 0x3d49283), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 142131200 unmapped: 25747456 heap: 167878656 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 272 heartbeat osd_stat(store_statfs(0x4f956e000/0x0/0x4ffc00000, data 0x2796e64/0x291c000, compress 0x0/0x0/0x0, omap 0x26d7d, meta 0x3d49283), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,19,22])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 143179776 unmapped: 24698880 heap: 167878656 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 140632064 unmapped: 27246592 heap: 167878656 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2022472 data_alloc: 234881024 data_used: 24986083
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 143671296 unmapped: 24207360 heap: 167878656 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 272 heartbeat osd_stat(store_statfs(0x4f904c000/0x0/0x4ffc00000, data 0x2cbae64/0x2e40000, compress 0x0/0x0/0x0, omap 0x26d7d, meta 0x3d49283), peers [0,1] op hist [0,0,0,0,0,0,1])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 141393920 unmapped: 26484736 heap: 167878656 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 272 ms_handle_reset con 0x55a68f588000 session 0x55a68cdf3a40
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 141606912 unmapped: 26271744 heap: 167878656 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 272 handle_osd_map epochs [273,273], i have 272, src has [1,273]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 6.354522705s of 11.108489037s, submitted: 85
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 141664256 unmapped: 26214400 heap: 167878656 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 141606912 unmapped: 26271744 heap: 167878656 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 273 heartbeat osd_stat(store_statfs(0x4f8f1c000/0x0/0x4ffc00000, data 0x2de8a49/0x2f6e000, compress 0x0/0x0/0x0, omap 0x26d7d, meta 0x3d49283), peers [0,1] op hist [0,0,0,0,0,0,2])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 273 handle_osd_map epochs [274,274], i have 273, src has [1,274]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2031002 data_alloc: 234881024 data_used: 26251747
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 141402112 unmapped: 26476544 heap: 167878656 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 274 heartbeat osd_stat(store_statfs(0x4f8f0e000/0x0/0x4ffc00000, data 0x2df3690/0x2f7a000, compress 0x0/0x0/0x0, omap 0x26ecd, meta 0x3d49133), peers [0,1] op hist [0,0,0,0,0,0,0,1,1])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 141828096 unmapped: 26050560 heap: 167878656 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 274 handle_osd_map epochs [274,275], i have 274, src has [1,275]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 275 handle_osd_map epochs [275,275], i have 275, src has [1,275]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 150929408 unmapped: 16949248 heap: 167878656 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 146825216 unmapped: 21053440 heap: 167878656 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 275 heartbeat osd_stat(store_statfs(0x4f872a000/0x0/0x4ffc00000, data 0x35d82d7/0x3760000, compress 0x0/0x0/0x0, omap 0x2701d, meta 0x3d48fe3), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,2,1,3])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 275 ms_handle_reset con 0x55a68d446400 session 0x55a68fdf6000
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 275 ms_handle_reset con 0x55a68f588800 session 0x55a68fdf7a40
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 275 ms_handle_reset con 0x55a68d446400 session 0x55a68b533500
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 275 ms_handle_reset con 0x55a68d446000 session 0x55a68b5336c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 143097856 unmapped: 24780800 heap: 167878656 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 275 ms_handle_reset con 0x55a68dbd0400 session 0x55a68f8aec40
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 275 ms_handle_reset con 0x55a68f588000 session 0x55a68f8afc00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 275 heartbeat osd_stat(store_statfs(0x4f872a000/0x0/0x4ffc00000, data 0x35d82d7/0x3760000, compress 0x0/0x0/0x0, omap 0x2701d, meta 0x3d48fe3), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2080752 data_alloc: 234881024 data_used: 26345939
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 143097856 unmapped: 24780800 heap: 167878656 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 143097856 unmapped: 24780800 heap: 167878656 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 275 heartbeat osd_stat(store_statfs(0x4f872d000/0x0/0x4ffc00000, data 0x35d82c7/0x375f000, compress 0x0/0x0/0x0, omap 0x2701d, meta 0x3d48fe3), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 275 ms_handle_reset con 0x55a68dbd0c00 session 0x55a68a823180
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 143097856 unmapped: 24780800 heap: 167878656 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 143097856 unmapped: 24780800 heap: 167878656 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 275 handle_osd_map epochs [275,276], i have 275, src has [1,276]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 5.419381142s of 10.241317749s, submitted: 61
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 276 ms_handle_reset con 0x55a68d446000 session 0x55a68a3fb880
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 143237120 unmapped: 24641536 heap: 167878656 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2084726 data_alloc: 234881024 data_used: 26345939
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 143245312 unmapped: 24633344 heap: 167878656 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 146030592 unmapped: 21848064 heap: 167878656 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 276 heartbeat osd_stat(store_statfs(0x4f8705000/0x0/0x4ffc00000, data 0x35fedb9/0x3787000, compress 0x0/0x0/0x0, omap 0x27089, meta 0x3d48f77), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 146030592 unmapped: 21848064 heap: 167878656 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 276 handle_osd_map epochs [277,277], i have 276, src has [1,277]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 146046976 unmapped: 21831680 heap: 167878656 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 277 heartbeat osd_stat(store_statfs(0x4f8705000/0x0/0x4ffc00000, data 0x35fedb9/0x3787000, compress 0x0/0x0/0x0, omap 0x27089, meta 0x3d48f77), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 146079744 unmapped: 21798912 heap: 167878656 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2134620 data_alloc: 251658240 data_used: 34283235
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 146079744 unmapped: 21798912 heap: 167878656 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 277 ms_handle_reset con 0x55a68f588800 session 0x55a68fdf6a80
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 277 ms_handle_reset con 0x55a68f588c00 session 0x55a68a932fc0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 147390464 unmapped: 20488192 heap: 167878656 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 277 heartbeat osd_stat(store_statfs(0x4f8700000/0x0/0x4ffc00000, data 0x360088f/0x378a000, compress 0x0/0x0/0x0, omap 0x27245, meta 0x3d48dbb), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 147447808 unmapped: 20430848 heap: 167878656 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 277 ms_handle_reset con 0x55a68f589000 session 0x55a68b63c000
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 147603456 unmapped: 20275200 heap: 167878656 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 147603456 unmapped: 20275200 heap: 167878656 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 277 heartbeat osd_stat(store_statfs(0x4f86fc000/0x0/0x4ffc00000, data 0x360589f/0x3790000, compress 0x0/0x0/0x0, omap 0x27245, meta 0x3d48dbb), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 277 ms_handle_reset con 0x55a68d446000 session 0x55a68c320a80
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.685840607s of 11.794902802s, submitted: 29
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2133216 data_alloc: 251658240 data_used: 35331811
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 277 ms_handle_reset con 0x55a68dbd0c00 session 0x55a68a3fb6c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 148914176 unmapped: 18964480 heap: 167878656 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 277 ms_handle_reset con 0x55a68f588800 session 0x55a68fdf6380
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 277 ms_handle_reset con 0x55a68f588c00 session 0x55a68b63dc00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 153493504 unmapped: 14385152 heap: 167878656 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 277 heartbeat osd_stat(store_statfs(0x4f7796000/0x0/0x4ffc00000, data 0x456389e/0x46ee000, compress 0x0/0x0/0x0, omap 0x27461, meta 0x3d48b9f), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 277 handle_osd_map epochs [278,278], i have 277, src has [1,278]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 278 ms_handle_reset con 0x55a68f589400 session 0x55a68d0ece00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 151625728 unmapped: 16252928 heap: 167878656 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 278 ms_handle_reset con 0x55a689eb2c00 session 0x55a68b188380
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 278 ms_handle_reset con 0x55a68ce8e000 session 0x55a68ce99880
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 278 ms_handle_reset con 0x55a68d446000 session 0x55a68f8ae000
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 278 ms_handle_reset con 0x55a68dbd0c00 session 0x55a690505a40
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 149233664 unmapped: 18644992 heap: 167878656 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 278 ms_handle_reset con 0x55a68f588800 session 0x55a68fdf68c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 278 heartbeat osd_stat(store_statfs(0x4f858a000/0x0/0x4ffc00000, data 0x3544481/0x36cf000, compress 0x0/0x0/0x0, omap 0x27c11, meta 0x3d483ef), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 150282240 unmapped: 17596416 heap: 167878656 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 278 ms_handle_reset con 0x55a689eb2c00 session 0x55a68edd6a80
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 278 ms_handle_reset con 0x55a68ce8e000 session 0x55a68fdf6e00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2111194 data_alloc: 251658240 data_used: 29133011
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 150290432 unmapped: 17588224 heap: 167878656 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 150290432 unmapped: 17588224 heap: 167878656 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 278 heartbeat osd_stat(store_statfs(0x4f87be000/0x0/0x4ffc00000, data 0x3544472/0x36ce000, compress 0x0/0x0/0x0, omap 0x27c11, meta 0x3d483ef), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 278 handle_osd_map epochs [278,279], i have 278, src has [1,279]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 151339008 unmapped: 16539648 heap: 167878656 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 279 ms_handle_reset con 0x55a68dbd0c00 session 0x55a68b188a80
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 279 ms_handle_reset con 0x55a68d446000 session 0x55a68cedb6c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 151347200 unmapped: 16531456 heap: 167878656 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 279 ms_handle_reset con 0x55a68f588c00 session 0x55a68b5328c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 151347200 unmapped: 16531456 heap: 167878656 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 279 ms_handle_reset con 0x55a689eb2c00 session 0x55a68a822a80
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2115493 data_alloc: 251658240 data_used: 29141203
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 151347200 unmapped: 16531456 heap: 167878656 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 279 heartbeat osd_stat(store_statfs(0x4f87ba000/0x0/0x4ffc00000, data 0x3546065/0x36d1000, compress 0x0/0x0/0x0, omap 0x27d61, meta 0x3d4829f), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 151347200 unmapped: 16531456 heap: 167878656 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 151347200 unmapped: 16531456 heap: 167878656 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.422993660s of 12.318307877s, submitted: 157
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 151347200 unmapped: 16531456 heap: 167878656 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 279 heartbeat osd_stat(store_statfs(0x4f87b8000/0x0/0x4ffc00000, data 0x3547065/0x36d2000, compress 0x0/0x0/0x0, omap 0x27d61, meta 0x3d4829f), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 151347200 unmapped: 16531456 heap: 167878656 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2116221 data_alloc: 251658240 data_used: 29157587
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 151347200 unmapped: 16531456 heap: 167878656 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 279 handle_osd_map epochs [280,280], i have 279, src has [1,280]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 151363584 unmapped: 16515072 heap: 167878656 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 280 heartbeat osd_stat(store_statfs(0x4f87b5000/0x0/0x4ffc00000, data 0x3548c58/0x36d5000, compress 0x0/0x0/0x0, omap 0x27d61, meta 0x3d4829f), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 280 ms_handle_reset con 0x55a68d446000 session 0x55a68edd68c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 280 ms_handle_reset con 0x55a68ce8e000 session 0x55a68b189340
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 151396352 unmapped: 16482304 heap: 167878656 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 280 ms_handle_reset con 0x55a68dbd0c00 session 0x55a68a933a40
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 280 handle_osd_map epochs [281,281], i have 280, src has [1,281]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 281 ms_handle_reset con 0x55a68f588c00 session 0x55a68ce98fc0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 151232512 unmapped: 16646144 heap: 167878656 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 281 ms_handle_reset con 0x55a68ce8e000 session 0x55a68b6616c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 281 ms_handle_reset con 0x55a689eb2c00 session 0x55a690093dc0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 281 ms_handle_reset con 0x55a68dbd0c00 session 0x55a68fcfa000
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 281 ms_handle_reset con 0x55a68d446000 session 0x55a68b26c700
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 166469632 unmapped: 26730496 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 281 handle_osd_map epochs [282,282], i have 281, src has [1,282]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 281 handle_osd_map epochs [281,282], i have 282, src has [1,282]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 282 ms_handle_reset con 0x55a68f588c00 session 0x55a68a823c00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2282141 data_alloc: 251658240 data_used: 41908435
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 166486016 unmapped: 26714112 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 282 handle_osd_map epochs [282,283], i have 282, src has [1,283]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 283 ms_handle_reset con 0x55a689eb2c00 session 0x55a68d0ec1c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 166510592 unmapped: 26689536 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 283 ms_handle_reset con 0x55a68d446000 session 0x55a68b63ce00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 283 ms_handle_reset con 0x55a68ce8e000 session 0x55a690093500
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 283 ms_handle_reset con 0x55a68dbd0c00 session 0x55a68a8221c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 283 ms_handle_reset con 0x55a68f589800 session 0x55a6900936c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 166305792 unmapped: 26894336 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 283 heartbeat osd_stat(store_statfs(0x4f71bb000/0x0/0x4ffc00000, data 0x4b3b165/0x4ccd000, compress 0x0/0x0/0x0, omap 0x2c9b7, meta 0x3d43649), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.621680260s of 10.604098320s, submitted: 73
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 283 ms_handle_reset con 0x55a689eb2c00 session 0x55a68a3fb180
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 161824768 unmapped: 31375360 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 283 ms_handle_reset con 0x55a68ce8e000 session 0x55a690504e00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 161824768 unmapped: 31375360 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 283 ms_handle_reset con 0x55a68d446000 session 0x55a68c8dec40
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 283 ms_handle_reset con 0x55a68dbd0c00 session 0x55a68d0c7a40
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2283280 data_alloc: 251658240 data_used: 41908435
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 161824768 unmapped: 31375360 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 161824768 unmapped: 31375360 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 283 heartbeat osd_stat(store_statfs(0x4f7586000/0x0/0x4ffc00000, data 0x4773113/0x4905000, compress 0x0/0x0/0x0, omap 0x2cc24, meta 0x3d433dc), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 283 ms_handle_reset con 0x55a68b620800 session 0x55a68cdf2380
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 283 handle_osd_map epochs [284,284], i have 283, src has [1,284]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 161857536 unmapped: 31342592 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 284 ms_handle_reset con 0x55a68f589800 session 0x55a68ce98a80
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 284 ms_handle_reset con 0x55a68b620800 session 0x55a68b661dc0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 284 ms_handle_reset con 0x55a689eb2c00 session 0x55a68b26d500
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 284 ms_handle_reset con 0x55a68ce8e000 session 0x55a68ce99340
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 284 ms_handle_reset con 0x55a68d446000 session 0x55a68c8dfc00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 284 ms_handle_reset con 0x55a689eb2c00 session 0x55a68d0c6c40
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 161865728 unmapped: 31334400 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 284 ms_handle_reset con 0x55a68f589c00 session 0x55a68fdf7180
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 284 heartbeat osd_stat(store_statfs(0x4f757f000/0x0/0x4ffc00000, data 0x4774da4/0x490b000, compress 0x0/0x0/0x0, omap 0x2d427, meta 0x3d42bd9), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 161865728 unmapped: 31334400 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 284 handle_osd_map epochs [285,285], i have 284, src has [1,285]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 285 ms_handle_reset con 0x55a68b620800 session 0x55a68b661a40
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2295064 data_alloc: 251658240 data_used: 41908549
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 161865728 unmapped: 31334400 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 285 ms_handle_reset con 0x55a68ce8e000 session 0x55a690093340
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 285 ms_handle_reset con 0x55a68f589800 session 0x55a690092fc0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 161865728 unmapped: 31334400 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 285 ms_handle_reset con 0x55a689eb2c00 session 0x55a68b532fc0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 285 ms_handle_reset con 0x55a68b620800 session 0x55a68c28c700
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 161882112 unmapped: 31318016 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 285 ms_handle_reset con 0x55a68f589800 session 0x55a68cedae00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 285 ms_handle_reset con 0x55a68f589c00 session 0x55a68f8afdc0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 285 handle_osd_map epochs [285,286], i have 285, src has [1,286]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.457339287s of 10.018418312s, submitted: 56
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 286 ms_handle_reset con 0x55a68ce8e000 session 0x55a68ceda8c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 286 ms_handle_reset con 0x55a68dbd0c00 session 0x55a68fcfa700
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 161882112 unmapped: 31318016 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 286 ms_handle_reset con 0x55a689eb2c00 session 0x55a68ce98e00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 286 ms_handle_reset con 0x55a68f589c00 session 0x55a68fcfa1c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 286 ms_handle_reset con 0x55a68b620c00 session 0x55a68fdf7500
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 161882112 unmapped: 31318016 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 286 ms_handle_reset con 0x55a68ce8f800 session 0x55a68b598700
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 286 ms_handle_reset con 0x55a689eb2c00 session 0x55a68ce98700
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 286 heartbeat osd_stat(store_statfs(0x4f757b000/0x0/0x4ffc00000, data 0x47785f6/0x490f000, compress 0x0/0x0/0x0, omap 0x2d9b7, meta 0x3d42649), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2331215 data_alloc: 251658240 data_used: 42542487
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 162406400 unmapped: 30793728 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 162406400 unmapped: 30793728 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 162406400 unmapped: 30793728 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 286 handle_osd_map epochs [286,287], i have 286, src has [1,287]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 287 ms_handle_reset con 0x55a68b620c00 session 0x55a68a8001c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 287 heartbeat osd_stat(store_statfs(0x4f757c000/0x0/0x4ffc00000, data 0x4778606/0x4910000, compress 0x0/0x0/0x0, omap 0x2d9b7, meta 0x3d42649), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 287 ms_handle_reset con 0x55a68dbd0c00 session 0x55a68b26c700
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 159629312 unmapped: 33570816 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 159629312 unmapped: 33570816 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 287 ms_handle_reset con 0x55a68f589c00 session 0x55a68b533880
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2327139 data_alloc: 251658240 data_used: 42543511
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 159629312 unmapped: 33570816 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 159629312 unmapped: 33570816 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 287 ms_handle_reset con 0x55a68ce8fc00 session 0x55a68cdf2000
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 159637504 unmapped: 33562624 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 287 ms_handle_reset con 0x55a690ae2400 session 0x55a68b661880
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 287 ms_handle_reset con 0x55a689eb2c00 session 0x55a68fcfa1c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 287 handle_osd_map epochs [288,288], i have 287, src has [1,288]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.580755234s of 10.038787842s, submitted: 66
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 288 ms_handle_reset con 0x55a68b620c00 session 0x55a68a822540
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 158588928 unmapped: 34611200 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 288 heartbeat osd_stat(store_statfs(0x4f6fbe000/0x0/0x4ffc00000, data 0x4d2fc40/0x4ecc000, compress 0x0/0x0/0x0, omap 0x2d9b1, meta 0x3d4264f), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 158621696 unmapped: 34578432 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 288 ms_handle_reset con 0x55a68dbd0c00 session 0x55a68c8df340
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2390079 data_alloc: 251658240 data_used: 43832215
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 158736384 unmapped: 34463744 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 288 ms_handle_reset con 0x55a68f589c00 session 0x55a68b189dc0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 288 ms_handle_reset con 0x55a689eb2c00 session 0x55a68a3faa80
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 159301632 unmapped: 33898496 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 288 heartbeat osd_stat(store_statfs(0x4f6fba000/0x0/0x4ffc00000, data 0x4d34c40/0x4ed1000, compress 0x0/0x0/0x0, omap 0x2d892, meta 0x3d4276e), peers [0,1] op hist [0,1,1,5])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 288 ms_handle_reset con 0x55a68b620c00 session 0x55a68f8afa40
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 159825920 unmapped: 33374208 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 288 ms_handle_reset con 0x55a68dbd0c00 session 0x55a68b599340
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 288 ms_handle_reset con 0x55a690ae2400 session 0x55a68d0ec000
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 288 ms_handle_reset con 0x55a690ae2800 session 0x55a68b598380
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 288 ms_handle_reset con 0x55a68b620c00 session 0x55a68b5321c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 288 ms_handle_reset con 0x55a689eb2c00 session 0x55a68cdffc00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 288 ms_handle_reset con 0x55a68dbd0c00 session 0x55a68c28c380
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 159965184 unmapped: 33234944 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 288 heartbeat osd_stat(store_statfs(0x4f6f16000/0x0/0x4ffc00000, data 0x4dd8ca2/0x4f76000, compress 0x0/0x0/0x0, omap 0x2d892, meta 0x3d4276e), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 288 ms_handle_reset con 0x55a690ae2400 session 0x55a68fcfba40
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 288 ms_handle_reset con 0x55a690ae2c00 session 0x55a68b63d340
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 159997952 unmapped: 33202176 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 288 ms_handle_reset con 0x55a689eb2c00 session 0x55a68cdf2a80
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2400547 data_alloc: 251658240 data_used: 44071319
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 160301056 unmapped: 32899072 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 165552128 unmapped: 27648000 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 288 heartbeat osd_stat(store_statfs(0x4f6ef2000/0x0/0x4ffc00000, data 0x4dfcca2/0x4f9a000, compress 0x0/0x0/0x0, omap 0x2d892, meta 0x3d4276e), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 165584896 unmapped: 27615232 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 288 ms_handle_reset con 0x55a68b620c00 session 0x55a68c7d7c00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 288 ms_handle_reset con 0x55a68dbd0c00 session 0x55a690092000
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 288 heartbeat osd_stat(store_statfs(0x4f6ef2000/0x0/0x4ffc00000, data 0x4dfcca2/0x4f9a000, compress 0x0/0x0/0x0, omap 0x2d892, meta 0x3d4276e), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.795190811s of 10.055949211s, submitted: 72
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 288 ms_handle_reset con 0x55a690ae2400 session 0x55a68a823500
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 163323904 unmapped: 29876224 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 163323904 unmapped: 29876224 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2364493 data_alloc: 251658240 data_used: 44069271
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 163323904 unmapped: 29876224 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 288 ms_handle_reset con 0x55a690ae3800 session 0x55a68a3fafc0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 288 handle_osd_map epochs [289,289], i have 288, src has [1,289]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 288 handle_osd_map epochs [288,289], i have 289, src has [1,289]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 289 ms_handle_reset con 0x55a690ae3c00 session 0x55a68b189340
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 164569088 unmapped: 28631040 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 289 heartbeat osd_stat(store_statfs(0x4f7568000/0x0/0x4ffc00000, data 0x47828b8/0x4922000, compress 0x0/0x0/0x0, omap 0x2da22, meta 0x3d425de), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 289 ms_handle_reset con 0x55a689eb2c00 session 0x55a68f93ddc0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 164569088 unmapped: 28631040 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 289 handle_osd_map epochs [290,290], i have 289, src has [1,290]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 164659200 unmapped: 28540928 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 290 ms_handle_reset con 0x55a68dbd0c00 session 0x55a68b1836c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 290 ms_handle_reset con 0x55a68b620c00 session 0x55a689e04000
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 290 handle_osd_map epochs [291,291], i have 290, src has [1,291]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 164700160 unmapped: 28499968 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 291 ms_handle_reset con 0x55a690ae2400 session 0x55a68c320380
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 291 heartbeat osd_stat(store_statfs(0x4f7563000/0x0/0x4ffc00000, data 0x4784581/0x4928000, compress 0x0/0x0/0x0, omap 0x2db26, meta 0x3d424da), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2383283 data_alloc: 251658240 data_used: 44073882
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 164700160 unmapped: 28499968 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 291 handle_osd_map epochs [292,292], i have 291, src has [1,292]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 164708352 unmapped: 28491776 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 292 ms_handle_reset con 0x55a689eb2c00 session 0x55a68a822000
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 164708352 unmapped: 28491776 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 292 ms_handle_reset con 0x55a68b620c00 session 0x55a68b183a40
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 292 handle_osd_map epochs [292,293], i have 292, src has [1,293]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.230588913s of 10.057716370s, submitted: 94
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 164716544 unmapped: 28483584 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 293 ms_handle_reset con 0x55a690ae3c00 session 0x55a68b1888c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 293 heartbeat osd_stat(store_statfs(0x4f755b000/0x0/0x4ffc00000, data 0x4789886/0x492e000, compress 0x0/0x0/0x0, omap 0x2de7e, meta 0x3d42182), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 164732928 unmapped: 28467200 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 293 ms_handle_reset con 0x55a68dbd0c00 session 0x55a689e05180
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 293 ms_handle_reset con 0x55a6916e4000 session 0x55a68cdfe700
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2382927 data_alloc: 251658240 data_used: 44078556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 164749312 unmapped: 28450816 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 164749312 unmapped: 28450816 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 293 heartbeat osd_stat(store_statfs(0x4f7561000/0x0/0x4ffc00000, data 0x47897b2/0x492b000, compress 0x0/0x0/0x0, omap 0x2de7e, meta 0x3d42182), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 164749312 unmapped: 28450816 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 293 ms_handle_reset con 0x55a6916e4000 session 0x55a6905041c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 165822464 unmapped: 27377664 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 293 handle_osd_map epochs [294,294], i have 293, src has [1,294]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 165838848 unmapped: 27361280 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 294 heartbeat osd_stat(store_statfs(0x4f7562000/0x0/0x4ffc00000, data 0x4789750/0x492a000, compress 0x0/0x0/0x0, omap 0x2de7e, meta 0x3d42182), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2385824 data_alloc: 251658240 data_used: 44078556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 294 ms_handle_reset con 0x55a689eb2c00 session 0x55a68b660a80
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 165838848 unmapped: 27361280 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 294 heartbeat osd_stat(store_statfs(0x4f755d000/0x0/0x4ffc00000, data 0x478b226/0x492d000, compress 0x0/0x0/0x0, omap 0x2e07e, meta 0x3d41f82), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 165838848 unmapped: 27361280 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 294 ms_handle_reset con 0x55a68b620c00 session 0x55a68a3fb6c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 165838848 unmapped: 27361280 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 294 ms_handle_reset con 0x55a690ae3c00 session 0x55a68f93d340
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 165863424 unmapped: 27336704 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 294 handle_osd_map epochs [295,295], i have 294, src has [1,295]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.258124352s of 10.622159004s, submitted: 61
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 295 ms_handle_reset con 0x55a6916e4800 session 0x55a68b188700
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 295 ms_handle_reset con 0x55a689eb2c00 session 0x55a68ce98000
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 165404672 unmapped: 27795456 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 295 handle_osd_map epochs [296,296], i have 295, src has [1,296]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 296 ms_handle_reset con 0x55a6916e4400 session 0x55a68cdffdc0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 296 ms_handle_reset con 0x55a68b620c00 session 0x55a68a801880
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 296 heartbeat osd_stat(store_statfs(0x4f7356000/0x0/0x4ffc00000, data 0x498e08b/0x4b34000, compress 0x0/0x0/0x0, omap 0x2e1a1, meta 0x3d41e5f), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 296 ms_handle_reset con 0x55a68dbd0c00 session 0x55a68b183a40
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2416727 data_alloc: 251658240 data_used: 44787692
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 169050112 unmapped: 24150016 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 296 handle_osd_map epochs [296,297], i have 296, src has [1,297]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 297 ms_handle_reset con 0x55a690ae3c00 session 0x55a689e05500
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 297 ms_handle_reset con 0x55a6916e4000 session 0x55a68ce99880
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 169140224 unmapped: 24059904 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 297 ms_handle_reset con 0x55a68b620800 session 0x55a68f93da40
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 297 ms_handle_reset con 0x55a68f589800 session 0x55a690505880
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 297 ms_handle_reset con 0x55a689eb2c00 session 0x55a690093c00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 297 ms_handle_reset con 0x55a68dbd0c00 session 0x55a68a3fbdc0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 297 ms_handle_reset con 0x55a689eb2c00 session 0x55a68a932fc0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 297 ms_handle_reset con 0x55a68b620c00 session 0x55a68cdf36c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 297 ms_handle_reset con 0x55a68b620800 session 0x55a68a9336c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 166928384 unmapped: 26271744 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 166936576 unmapped: 26263552 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 297 handle_osd_map epochs [297,298], i have 297, src has [1,298]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 298 ms_handle_reset con 0x55a6916e4000 session 0x55a68b532c40
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 298 ms_handle_reset con 0x55a68f589800 session 0x55a68b6601c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 298 ms_handle_reset con 0x55a689eb2c00 session 0x55a68a822a80
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 298 ms_handle_reset con 0x55a68b620800 session 0x55a68edd7c00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 166993920 unmapped: 26206208 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 298 handle_osd_map epochs [299,299], i have 298, src has [1,299]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 299 ms_handle_reset con 0x55a6916e4000 session 0x55a68cdfea80
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 299 ms_handle_reset con 0x55a690ae3c00 session 0x55a68ce99c00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 299 heartbeat osd_stat(store_statfs(0x4f734e000/0x0/0x4ffc00000, data 0x498e796/0x4b3c000, compress 0x0/0x0/0x0, omap 0x2e0e6, meta 0x3d41f1a), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 299 ms_handle_reset con 0x55a6916e4400 session 0x55a68b188000
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2420201 data_alloc: 268435456 data_used: 45938238
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 299 ms_handle_reset con 0x55a689eb2c00 session 0x55a68b660000
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 167010304 unmapped: 26189824 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 299 ms_handle_reset con 0x55a68b620800 session 0x55a68cdf3dc0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 299 handle_osd_map epochs [300,300], i have 299, src has [1,300]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 300 ms_handle_reset con 0x55a68b620c00 session 0x55a68a822e00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 300 ms_handle_reset con 0x55a690ae3c00 session 0x55a68c769500
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 167018496 unmapped: 26181632 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 300 handle_osd_map epochs [301,301], i have 300, src has [1,301]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 301 heartbeat osd_stat(store_statfs(0x4f754b000/0x0/0x4ffc00000, data 0x47911c2/0x493d000, compress 0x0/0x0/0x0, omap 0x2e1fe, meta 0x3d41e02), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 301 ms_handle_reset con 0x55a6916e4c00 session 0x55a68f8ae8c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 301 ms_handle_reset con 0x55a6916e4000 session 0x55a690092700
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 301 ms_handle_reset con 0x55a689eb2c00 session 0x55a68cedb880
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 301 heartbeat osd_stat(store_statfs(0x4f876d000/0x0/0x4ffc00000, data 0x356d9fc/0x3718000, compress 0x0/0x0/0x0, omap 0x2e6b6, meta 0x3d4194a), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 301 ms_handle_reset con 0x55a68b620800 session 0x55a68fdf76c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 160030720 unmapped: 33169408 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 301 handle_osd_map epochs [302,302], i have 301, src has [1,302]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 302 handle_osd_map epochs [303,303], i have 302, src has [1,303]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 303 ms_handle_reset con 0x55a68b620c00 session 0x55a689e04c40
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 303 ms_handle_reset con 0x55a690ae3c00 session 0x55a68a3fa540
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 303 ms_handle_reset con 0x55a689eb2c00 session 0x55a68cdf2fc0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 159883264 unmapped: 33316864 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 303 heartbeat osd_stat(store_statfs(0x4f8768000/0x0/0x4ffc00000, data 0x3571217/0x3720000, compress 0x0/0x0/0x0, omap 0x2deaa, meta 0x3d42156), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 303 heartbeat osd_stat(store_statfs(0x4f876a000/0x0/0x4ffc00000, data 0x35711a6/0x371e000, compress 0x0/0x0/0x0, omap 0x2deaa, meta 0x3d42156), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 159883264 unmapped: 33316864 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 303 ms_handle_reset con 0x55a68d446400 session 0x55a68cdff500
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 303 ms_handle_reset con 0x55a68dbd0400 session 0x55a68d0ec8c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.510498047s of 11.210500717s, submitted: 245
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 303 ms_handle_reset con 0x55a68b620c00 session 0x55a68b660e00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2269161 data_alloc: 251658240 data_used: 34578399
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 303 heartbeat osd_stat(store_statfs(0x4f876a000/0x0/0x4ffc00000, data 0x35711a6/0x371e000, compress 0x0/0x0/0x0, omap 0x2dfd1, meta 0x3d4202f), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 142057472 unmapped: 51142656 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 303 ms_handle_reset con 0x55a6916e4000 session 0x55a68b599340
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 303 handle_osd_map epochs [303,304], i have 303, src has [1,304]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 303 handle_osd_map epochs [304,304], i have 304, src has [1,304]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 304 ms_handle_reset con 0x55a689eb2c00 session 0x55a68f8af880
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 142057472 unmapped: 51142656 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 304 heartbeat osd_stat(store_statfs(0x4f9be6000/0x0/0x4ffc00000, data 0x20f51a6/0x22a2000, compress 0x0/0x0/0x0, omap 0x2dfd1, meta 0x3d4202f), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 304 ms_handle_reset con 0x55a68b620c00 session 0x55a68cedafc0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 304 handle_osd_map epochs [305,305], i have 304, src has [1,305]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 305 ms_handle_reset con 0x55a68b620800 session 0x55a689e056c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 305 ms_handle_reset con 0x55a68d446400 session 0x55a690505880
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 142065664 unmapped: 51134464 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 305 heartbeat osd_stat(store_statfs(0x4f9be5000/0x0/0x4ffc00000, data 0x20f6ded/0x22a5000, compress 0x0/0x0/0x0, omap 0x2de35, meta 0x3d421cb), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 305 ms_handle_reset con 0x55a6916e5000 session 0x55a68fdf61c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 305 ms_handle_reset con 0x55a689eb2c00 session 0x55a68b533180
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 142049280 unmapped: 51150848 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 305 handle_osd_map epochs [305,306], i have 305, src has [1,306]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 305 handle_osd_map epochs [306,306], i have 306, src has [1,306]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 306 ms_handle_reset con 0x55a68b620800 session 0x55a68b533c00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 142065664 unmapped: 51134464 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 306 handle_osd_map epochs [307,307], i have 306, src has [1,307]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 306 handle_osd_map epochs [306,307], i have 307, src has [1,307]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 307 ms_handle_reset con 0x55a68b620c00 session 0x55a68b188380
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 307 ms_handle_reset con 0x55a68dbd0400 session 0x55a68b1836c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2085794 data_alloc: 234881024 data_used: 13100972
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 142065664 unmapped: 51134464 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 307 heartbeat osd_stat(store_statfs(0x4f9bde000/0x0/0x4ffc00000, data 0x20fa838/0x22ac000, compress 0x0/0x0/0x0, omap 0x2dfd1, meta 0x3d4202f), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 142065664 unmapped: 51134464 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 307 ms_handle_reset con 0x55a6916e5400 session 0x55a68c320380
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 307 heartbeat osd_stat(store_statfs(0x4f9bd9000/0x0/0x4ffc00000, data 0x20fc48f/0x22af000, compress 0x0/0x0/0x0, omap 0x2dfd1, meta 0x3d4202f), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 142106624 unmapped: 51093504 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 307 ms_handle_reset con 0x55a689eb2c00 session 0x55a68a8236c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 307 handle_osd_map epochs [307,308], i have 307, src has [1,308]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 308 ms_handle_reset con 0x55a68b620800 session 0x55a68f93da40
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 308 ms_handle_reset con 0x55a68dbd0400 session 0x55a68f8afa40
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 308 ms_handle_reset con 0x55a6916e5800 session 0x55a68d0c6540
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 142196736 unmapped: 51003392 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 308 ms_handle_reset con 0x55a68b620c00 session 0x55a68c321c00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 308 handle_osd_map epochs [309,309], i have 308, src has [1,309]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 309 ms_handle_reset con 0x55a68d446400 session 0x55a68b660000
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 142196736 unmapped: 51003392 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2094834 data_alloc: 234881024 data_used: 13100972
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 142196736 unmapped: 51003392 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.607498169s of 10.792164803s, submitted: 85
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 309 ms_handle_reset con 0x55a689eb2c00 session 0x55a68b661180
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 309 handle_osd_map epochs [310,310], i have 309, src has [1,310]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 310 ms_handle_reset con 0x55a68b620800 session 0x55a68f93c1c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 310 ms_handle_reset con 0x55a6916e5800 session 0x55a689e04000
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 310 heartbeat osd_stat(store_statfs(0x4f9bd0000/0x0/0x4ffc00000, data 0x21019a2/0x22ba000, compress 0x0/0x0/0x0, omap 0x2e271, meta 0x3d41d8f), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 310 ms_handle_reset con 0x55a6916e5c00 session 0x55a690093500
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 141524992 unmapped: 51675136 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 310 handle_osd_map epochs [310,311], i have 310, src has [1,311]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 311 ms_handle_reset con 0x55a68dbd0400 session 0x55a68d0ec380
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 311 ms_handle_reset con 0x55a68b620c00 session 0x55a68c8defc0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 141524992 unmapped: 51675136 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 311 handle_osd_map epochs [312,312], i have 311, src has [1,312]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 312 ms_handle_reset con 0x55a689eb2c00 session 0x55a68fdf7340
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 141524992 unmapped: 51675136 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 312 handle_osd_map epochs [313,313], i have 312, src has [1,313]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 313 ms_handle_reset con 0x55a68b620800 session 0x55a68cedb880
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 313 ms_handle_reset con 0x55a68d446400 session 0x55a689e05500
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 141549568 unmapped: 51650560 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2105150 data_alloc: 234881024 data_used: 13101557
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 313 heartbeat osd_stat(store_statfs(0x4f9bc6000/0x0/0x4ffc00000, data 0x2106e13/0x22c2000, compress 0x0/0x0/0x0, omap 0x2e40d, meta 0x3d41bf3), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 141549568 unmapped: 51650560 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 313 handle_osd_map epochs [314,314], i have 313, src has [1,314]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 314 ms_handle_reset con 0x55a689eb2c00 session 0x55a68a801880
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 141590528 unmapped: 51609600 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 141590528 unmapped: 51609600 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 314 handle_osd_map epochs [315,315], i have 314, src has [1,315]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 315 ms_handle_reset con 0x55a68b620800 session 0x55a68b661180
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 141598720 unmapped: 51601408 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 141598720 unmapped: 51601408 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 315 handle_osd_map epochs [316,316], i have 315, src has [1,316]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2112895 data_alloc: 234881024 data_used: 13102865
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 141631488 unmapped: 51568640 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 316 heartbeat osd_stat(store_statfs(0x4f9bc5000/0x0/0x4ffc00000, data 0x210a6c9/0x22c7000, compress 0x0/0x0/0x0, omap 0x2e445, meta 0x3d41bbb), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 316 handle_osd_map epochs [317,317], i have 316, src has [1,317]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.062288284s of 10.393405914s, submitted: 85
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 141443072 unmapped: 51757056 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 317 handle_osd_map epochs [318,318], i have 317, src has [1,318]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 141467648 unmapped: 51732480 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 318 handle_osd_map epochs [319,319], i have 318, src has [1,319]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 319 ms_handle_reset con 0x55a68b620c00 session 0x55a68b599340
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 319 ms_handle_reset con 0x55a68dbd0400 session 0x55a689e04000
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 141549568 unmapped: 51650560 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 319 handle_osd_map epochs [319,320], i have 319, src has [1,320]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 141549568 unmapped: 51650560 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2121926 data_alloc: 234881024 data_used: 13103294
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 320 heartbeat osd_stat(store_statfs(0x4f9bb3000/0x0/0x4ffc00000, data 0x21133e4/0x22d3000, compress 0x0/0x0/0x0, omap 0x2cdbd, meta 0x3d43243), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 141549568 unmapped: 51650560 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 320 handle_osd_map epochs [321,321], i have 320, src has [1,321]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 320 handle_osd_map epochs [320,321], i have 321, src has [1,321]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 321 ms_handle_reset con 0x55a6916e5800 session 0x55a690093880
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 141426688 unmapped: 51773440 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 141426688 unmapped: 51773440 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 321 heartbeat osd_stat(store_statfs(0x4f9bb4000/0x0/0x4ffc00000, data 0x2115057/0x22d6000, compress 0x0/0x0/0x0, omap 0x2cf59, meta 0x3d430a7), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 321 handle_osd_map epochs [322,322], i have 321, src has [1,322]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 322 ms_handle_reset con 0x55a689eb2c00 session 0x55a690093500
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 141426688 unmapped: 51773440 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 141426688 unmapped: 51773440 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2128464 data_alloc: 234881024 data_used: 13105133
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 141426688 unmapped: 51773440 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 322 ms_handle_reset con 0x55a68b620c00 session 0x55a68b189180
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 322 handle_osd_map epochs [323,323], i have 322, src has [1,323]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.967049599s of 10.222849846s, submitted: 121
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 323 ms_handle_reset con 0x55a68dbd0400 session 0x55a68a933180
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 323 ms_handle_reset con 0x55a68d446000 session 0x55a690092700
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 145489920 unmapped: 47710208 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 323 ms_handle_reset con 0x55a68dbd1400 session 0x55a68b63dc00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 323 handle_osd_map epochs [324,324], i have 323, src has [1,324]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 324 ms_handle_reset con 0x55a68dbd1000 session 0x55a68cedb6c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 324 ms_handle_reset con 0x55a68b620800 session 0x55a68f8aea80
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 145539072 unmapped: 47661056 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 324 handle_osd_map epochs [325,325], i have 324, src has [1,325]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 325 ms_handle_reset con 0x55a689eb2c00 session 0x55a68cdf2a80
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 325 ms_handle_reset con 0x55a68b620c00 session 0x55a68a800700
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 325 heartbeat osd_stat(store_statfs(0x4f9ba2000/0x0/0x4ffc00000, data 0x211c03c/0x22e4000, compress 0x0/0x0/0x0, omap 0x2e879, meta 0x3d41787), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 144449536 unmapped: 48750592 heap: 193200128 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 325 ms_handle_reset con 0x55a68dbd0c00 session 0x55a68b26d340
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 325 ms_handle_reset con 0x55a68dbd0400 session 0x55a68c321c00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 157171712 unmapped: 44433408 heap: 201605120 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 325 ms_handle_reset con 0x55a68b620800 session 0x55a68edd7500
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 325 ms_handle_reset con 0x55a68b620c00 session 0x55a690092000
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2331527 data_alloc: 234881024 data_used: 19396987
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 144621568 unmapped: 56983552 heap: 201605120 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 325 ms_handle_reset con 0x55a68dbd1c00 session 0x55a68fdf6380
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 325 ms_handle_reset con 0x55a68dbd1000 session 0x55a68cdf36c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 150405120 unmapped: 51200000 heap: 201605120 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 325 ms_handle_reset con 0x55a68b620c00 session 0x55a68f93dc00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 325 ms_handle_reset con 0x55a68b620800 session 0x55a68edd6fc0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 144203776 unmapped: 57401344 heap: 201605120 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 325 ms_handle_reset con 0x55a68dbd0400 session 0x55a68a8228c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 325 heartbeat osd_stat(store_statfs(0x4f1dc3000/0x0/0x4ffc00000, data 0x9f0103c/0xa0c9000, compress 0x0/0x0/0x0, omap 0x2eb4e, meta 0x3d414b2), peers [0,1] op hist [0,0,0,0,0,0,0,4])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 144236544 unmapped: 57368576 heap: 201605120 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 144252928 unmapped: 57352192 heap: 201605120 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 325 handle_osd_map epochs [326,326], i have 325, src has [1,326]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 325 handle_osd_map epochs [325,326], i have 326, src has [1,326]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3074837 data_alloc: 234881024 data_used: 19396889
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 153739264 unmapped: 47865856 heap: 201605120 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 326 heartbeat osd_stat(store_statfs(0x4ef5be000/0x0/0x4ffc00000, data 0xc702c83/0xc8cc000, compress 0x0/0x0/0x0, omap 0x2ed16, meta 0x3d412ea), peers [0,1] op hist [0,0,0,0,0,2,1,1])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 326 ms_handle_reset con 0x55a68dbd1c00 session 0x55a68a932fc0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 6.691368580s of 10.025777817s, submitted: 143
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 153804800 unmapped: 52002816 heap: 205807616 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 326 heartbeat osd_stat(store_statfs(0x4eddbe000/0x0/0x4ffc00000, data 0xdf02c83/0xe0cc000, compress 0x0/0x0/0x0, omap 0x2ef17, meta 0x3d410e9), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 145506304 unmapped: 60301312 heap: 205807616 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 326 ms_handle_reset con 0x55a68dbd0000 session 0x55a68b533180
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 326 handle_osd_map epochs [326,327], i have 326, src has [1,327]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 160595968 unmapped: 45211648 heap: 205807616 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 327 ms_handle_reset con 0x55a68b620800 session 0x55a68b6601c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 327 ms_handle_reset con 0x55a689eb2c00 session 0x55a68f8af340
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 327 ms_handle_reset con 0x55a68d446000 session 0x55a68a801c00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 327 ms_handle_reset con 0x55a68dbd0000 session 0x55a68fcfb180
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 327 heartbeat osd_stat(store_statfs(0x4ea9ba000/0x0/0x4ffc00000, data 0x113048a2/0x114d0000, compress 0x0/0x0/0x0, omap 0x2f082, meta 0x3d40f7e), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 327 handle_osd_map epochs [328,328], i have 327, src has [1,328]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 327 handle_osd_map epochs [328,328], i have 328, src has [1,328]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 328 heartbeat osd_stat(store_statfs(0x4e9009000/0x0/0x4ffc00000, data 0x12cb24b1/0x12e7f000, compress 0x0/0x0/0x0, omap 0x2f21e, meta 0x3d40de2), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 144007168 unmapped: 61800448 heap: 205807616 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 328 handle_osd_map epochs [329,329], i have 328, src has [1,329]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 328 handle_osd_map epochs [328,329], i have 329, src has [1,329]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 329 ms_handle_reset con 0x55a68b620c00 session 0x55a68fdf61c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3617052 data_alloc: 234881024 data_used: 19396889
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 143851520 unmapped: 61956096 heap: 205807616 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 329 ms_handle_reset con 0x55a689eb2c00 session 0x55a68d0ed880
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 329 heartbeat osd_stat(store_statfs(0x4e9006000/0x0/0x4ffc00000, data 0x12cb40a2/0x12e80000, compress 0x0/0x0/0x0, omap 0x2f0b3, meta 0x3d40f4d), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 329 ms_handle_reset con 0x55a68d34c000 session 0x55a68c8df500
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 143671296 unmapped: 62136320 heap: 205807616 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 329 ms_handle_reset con 0x55a68d34d000 session 0x55a68b660700
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 329 ms_handle_reset con 0x55a68d34c400 session 0x55a68a823500
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 143671296 unmapped: 62136320 heap: 205807616 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 329 ms_handle_reset con 0x55a68d34d400 session 0x55a689e04c40
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 329 ms_handle_reset con 0x55a68d34dc00 session 0x55a68a823340
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 329 handle_osd_map epochs [330,330], i have 329, src has [1,330]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 330 ms_handle_reset con 0x55a68d34c000 session 0x55a68d0c61c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 330 ms_handle_reset con 0x55a68d34d400 session 0x55a68d0c7180
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 145203200 unmapped: 60604416 heap: 205807616 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 330 handle_osd_map epochs [331,331], i have 330, src has [1,331]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 331 ms_handle_reset con 0x55a68d34c400 session 0x55a68b533500
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 331 ms_handle_reset con 0x55a68b297400 session 0x55a68b63da40
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 331 ms_handle_reset con 0x55a68d34c000 session 0x55a68b1836c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 331 ms_handle_reset con 0x55a68d34c400 session 0x55a68f8af500
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 331 heartbeat osd_stat(store_statfs(0x4e88a0000/0x0/0x4ffc00000, data 0x13419867/0x135ea000, compress 0x0/0x0/0x0, omap 0x30609, meta 0x3d3f9f7), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 331 ms_handle_reset con 0x55a689eb2c00 session 0x55a68b660e00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 145162240 unmapped: 60645376 heap: 205807616 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 331 handle_osd_map epochs [332,332], i have 331, src has [1,332]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 332 ms_handle_reset con 0x55a68d34d400 session 0x55a68f8ae700
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 332 ms_handle_reset con 0x55a68d34dc00 session 0x55a68fdf61c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 332 heartbeat osd_stat(store_statfs(0x4e889a000/0x0/0x4ffc00000, data 0x1341b4d8/0x135ee000, compress 0x0/0x0/0x0, omap 0x307a5, meta 0x3d3f85b), peers [0,1] op hist [0,0,0,0,1,0,1])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 332 ms_handle_reset con 0x55a68d34d000 session 0x55a68a9336c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3689289 data_alloc: 234881024 data_used: 19398201
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 145162240 unmapped: 60645376 heap: 205807616 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 332 ms_handle_reset con 0x55a689eb2c00 session 0x55a68f8ae8c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.316457748s of 10.133997917s, submitted: 144
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 332 ms_handle_reset con 0x55a68d34c400 session 0x55a68cdffdc0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 145170432 unmapped: 60637184 heap: 205807616 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 332 handle_osd_map epochs [333,333], i have 332, src has [1,333]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 333 ms_handle_reset con 0x55a68d34d400 session 0x55a68a3fb6c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 333 ms_handle_reset con 0x55a690ae2c00 session 0x55a68cdff6c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 145203200 unmapped: 60604416 heap: 205807616 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 333 ms_handle_reset con 0x55a689eb2c00 session 0x55a68b660000
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 333 handle_osd_map epochs [334,334], i have 333, src has [1,334]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 334 ms_handle_reset con 0x55a690ae2800 session 0x55a68a933500
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 334 ms_handle_reset con 0x55a68d34c400 session 0x55a68a932fc0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 334 ms_handle_reset con 0x55a68d34c000 session 0x55a68c8df340
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 145227776 unmapped: 60579840 heap: 205807616 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 145227776 unmapped: 60579840 heap: 205807616 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 334 ms_handle_reset con 0x55a690ae3800 session 0x55a689e048c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 334 ms_handle_reset con 0x55a68d34d400 session 0x55a68f93c8c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3695487 data_alloc: 234881024 data_used: 19398201
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 157261824 unmapped: 48545792 heap: 205807616 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 334 heartbeat osd_stat(store_statfs(0x4e6c96000/0x0/0x4ffc00000, data 0x1501f266/0x151f6000, compress 0x0/0x0/0x0, omap 0x30f4c, meta 0x3d3f0b4), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,4,2])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 144752640 unmapped: 69451776 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 334 heartbeat osd_stat(store_statfs(0x4e5c96000/0x0/0x4ffc00000, data 0x1601f266/0x161f6000, compress 0x0/0x0/0x0, omap 0x30f4c, meta 0x3d3f0b4), peers [0,1] op hist [0,0,0,0,0,0,1])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 153378816 unmapped: 60825600 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 146128896 unmapped: 68075520 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 334 handle_osd_map epochs [335,335], i have 334, src has [1,335]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 335 ms_handle_reset con 0x55a689eb2c00 session 0x55a68b533c00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 150487040 unmapped: 63717376 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4280531 data_alloc: 234881024 data_used: 19398786
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 335 ms_handle_reset con 0x55a68d34c000 session 0x55a68a3fa540
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 154796032 unmapped: 59408384 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 335 ms_handle_reset con 0x55a68d34c400 session 0x55a68d0c6540
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 6.400494576s of 10.244714737s, submitted: 99
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 146464768 unmapped: 67739648 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 335 ms_handle_reset con 0x55a690ae3c00 session 0x55a68b63c000
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 335 ms_handle_reset con 0x55a689eb2c00 session 0x55a68ce981c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 335 ms_handle_reset con 0x55a690ae2800 session 0x55a690092c40
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 335 ms_handle_reset con 0x55a68d34c000 session 0x55a68cdf2000
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 335 ms_handle_reset con 0x55a68d34c400 session 0x55a68a8236c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 167804928 unmapped: 46399488 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 335 ms_handle_reset con 0x55a68d34d400 session 0x55a68a822000
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 335 ms_handle_reset con 0x55a68d34d400 session 0x55a68a8016c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 335 heartbeat osd_stat(store_statfs(0x4e0c81000/0x0/0x4ffc00000, data 0x1b020e8c/0x1b1fb000, compress 0x0/0x0/0x0, omap 0x30f4c, meta 0x3d4f0b4), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 146989056 unmapped: 67215360 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 156917760 unmapped: 57286656 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 335 ms_handle_reset con 0x55a68d34c000 session 0x55a68ce99500
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4792706 data_alloc: 234881024 data_used: 25622747
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 149700608 unmapped: 64503808 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 153968640 unmapped: 60235776 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 335 handle_osd_map epochs [336,336], i have 335, src has [1,336]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 335 handle_osd_map epochs [335,336], i have 336, src has [1,336]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 162471936 unmapped: 51732480 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 162553856 unmapped: 51650560 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 336 heartbeat osd_stat(store_statfs(0x4d9eec000/0x0/0x4ffc00000, data 0x20c22ac0/0x20dfe000, compress 0x0/0x0/0x0, omap 0x310e8, meta 0x4edef18), peers [0,1] op hist [0,0,0,0,0,0,0,1,0,0,1])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 336 ms_handle_reset con 0x55a690ae3800 session 0x55a68b533a40
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 150044672 unmapped: 64159744 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 336 ms_handle_reset con 0x55a690ae3000 session 0x55a68b63dc00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 336 ms_handle_reset con 0x55a68d34d000 session 0x55a688dd3c00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 336 ms_handle_reset con 0x55a690ae2800 session 0x55a68fdf7340
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5011132 data_alloc: 234881024 data_used: 25623997
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 150183936 unmapped: 64020480 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 336 handle_osd_map epochs [337,337], i have 336, src has [1,337]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 336 handle_osd_map epochs [337,337], i have 337, src has [1,337]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 150216704 unmapped: 63987712 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 4.568481922s of 10.192760468s, submitted: 66
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 337 ms_handle_reset con 0x55a68d34c000 session 0x55a690093180
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 337 handle_osd_map epochs [338,338], i have 337, src has [1,338]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 338 ms_handle_reset con 0x55a68d34d400 session 0x55a68d0c7180
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 338 ms_handle_reset con 0x55a690ae3000 session 0x55a68fcfba40
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 150241280 unmapped: 63963136 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 338 ms_handle_reset con 0x55a690ae2000 session 0x55a68c320a80
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 338 heartbeat osd_stat(store_statfs(0x4d8aea000/0x0/0x4ffc00000, data 0x2202618f/0x22202000, compress 0x0/0x0/0x0, omap 0x31284, meta 0x4eded7c), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 150241280 unmapped: 63963136 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 338 ms_handle_reset con 0x55a68d34c000 session 0x55a68b63d180
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 338 handle_osd_map epochs [338,339], i have 338, src has [1,339]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 339 ms_handle_reset con 0x55a68d34d400 session 0x55a68fdf7c00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 339 ms_handle_reset con 0x55a690ae3800 session 0x55a68f8afc00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 161021952 unmapped: 53182464 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 339 heartbeat osd_stat(store_statfs(0x4d86ec000/0x0/0x4ffc00000, data 0x226cdd90/0x225fe000, compress 0x0/0x0/0x0, omap 0x315da, meta 0x4edea26), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 339 handle_osd_map epochs [340,340], i have 339, src has [1,340]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 340 ms_handle_reset con 0x55a690ae2800 session 0x55a68c8defc0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5197374 data_alloc: 251658240 data_used: 27320634
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 160653312 unmapped: 53551104 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 340 ms_handle_reset con 0x55a690ae3000 session 0x55a68cdff6c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 340 ms_handle_reset con 0x55a68d34c000 session 0x55a68ceda1c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 160391168 unmapped: 53813248 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 340 heartbeat osd_stat(store_statfs(0x4d7dd5000/0x0/0x4ffc00000, data 0x2345d9bb/0x22ef9000, compress 0x0/0x0/0x0, omap 0x32b5f, meta 0x4edd4a1), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 340 ms_handle_reset con 0x55a68d34d400 session 0x55a68b599340
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 340 ms_handle_reset con 0x55a690ae2800 session 0x55a68b26ddc0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 160735232 unmapped: 53469184 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 340 ms_handle_reset con 0x55a690ae3800 session 0x55a689e05500
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 340 handle_osd_map epochs [341,341], i have 340, src has [1,341]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 341 heartbeat osd_stat(store_statfs(0x4d7dd0000/0x0/0x4ffc00000, data 0x2345f4ad/0x22efc000, compress 0x0/0x0/0x0, omap 0x32dfc, meta 0x4edd204), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 160743424 unmapped: 53460992 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 341 handle_osd_map epochs [341,342], i have 341, src has [1,342]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 341 handle_osd_map epochs [342,342], i have 342, src has [1,342]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 342 ms_handle_reset con 0x55a68d52bc00 session 0x55a68edd7500
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 342 ms_handle_reset con 0x55a68d34c000 session 0x55a68b532c40
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 160759808 unmapped: 53444608 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5203703 data_alloc: 251658240 data_used: 28545338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 160882688 unmapped: 53321728 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 342 ms_handle_reset con 0x55a68d34d400 session 0x55a68b63c8c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 342 ms_handle_reset con 0x55a690ae2800 session 0x55a690092a80
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 342 ms_handle_reset con 0x55a68d52b800 session 0x55a68b183180
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 342 ms_handle_reset con 0x55a690ae3800 session 0x55a68b661180
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 342 ms_handle_reset con 0x55a68d34c000 session 0x55a68c8df340
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 342 ms_handle_reset con 0x55a68d34d400 session 0x55a689e048c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 161824768 unmapped: 52379648 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.148654938s of 10.019592285s, submitted: 254
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 342 ms_handle_reset con 0x55a68d52b800 session 0x55a68c321180
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 342 ms_handle_reset con 0x55a690ae2800 session 0x55a689e04700
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 161406976 unmapped: 52797440 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 342 handle_osd_map epochs [342,343], i have 342, src has [1,343]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 342 heartbeat osd_stat(store_statfs(0x4d7650000/0x0/0x4ffc00000, data 0x23bfb11e/0x2369c000, compress 0x0/0x0/0x0, omap 0x3327b, meta 0x4edcd85), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 343 ms_handle_reset con 0x55a689eb2c00 session 0x55a68ce98540
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 343 ms_handle_reset con 0x55a68d34c400 session 0x55a68a800700
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 343 ms_handle_reset con 0x55a68d34c000 session 0x55a68cdf28c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 161415168 unmapped: 52789248 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 343 heartbeat osd_stat(store_statfs(0x4d764b000/0x0/0x4ffc00000, data 0x23bfcbf4/0x2369f000, compress 0x0/0x0/0x0, omap 0x332b3, meta 0x4edcd4d), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 343 ms_handle_reset con 0x55a68d52b800 session 0x55a68b63da40
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 343 handle_osd_map epochs [343,344], i have 343, src has [1,344]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 344 ms_handle_reset con 0x55a690ae3800 session 0x55a68b189180
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 344 ms_handle_reset con 0x55a690ae2800 session 0x55a68b533c00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 161947648 unmapped: 52256768 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 344 handle_osd_map epochs [345,345], i have 344, src has [1,345]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 345 ms_handle_reset con 0x55a68d52ac00 session 0x55a68c8df340
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 345 ms_handle_reset con 0x55a68d34d400 session 0x55a68ce988c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5325209 data_alloc: 251658240 data_used: 28553900
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 161972224 unmapped: 52232192 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 345 ms_handle_reset con 0x55a68d34c400 session 0x55a68cdfe700
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 345 ms_handle_reset con 0x55a68d34c000 session 0x55a68b660700
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 161972224 unmapped: 52232192 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 345 ms_handle_reset con 0x55a68d52b800 session 0x55a6900928c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 345 ms_handle_reset con 0x55a68a63f000 session 0x55a689e048c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 345 handle_osd_map epochs [345,346], i have 345, src has [1,346]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 346 ms_handle_reset con 0x55a68d34c000 session 0x55a68b532c40
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 346 ms_handle_reset con 0x55a690ae2400 session 0x55a68c321a40
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 346 ms_handle_reset con 0x55a68d34c400 session 0x55a68ce99500
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 155508736 unmapped: 58695680 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 346 ms_handle_reset con 0x55a68d34d400 session 0x55a68a823340
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 346 heartbeat osd_stat(store_statfs(0x4d915f000/0x0/0x4ffc00000, data 0x219a427e/0x21b8d000, compress 0x0/0x0/0x0, omap 0x33dcb, meta 0x4edc235), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 155500544 unmapped: 58703872 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 346 handle_osd_map epochs [346,347], i have 346, src has [1,347]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 347 ms_handle_reset con 0x55a68a63f000 session 0x55a68fdf7c00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 156557312 unmapped: 57647104 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 347 heartbeat osd_stat(store_statfs(0x4d9225000/0x0/0x4ffc00000, data 0x218daef1/0x21ac5000, compress 0x0/0x0/0x0, omap 0x34203, meta 0x4edbdfd), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4997642 data_alloc: 234881024 data_used: 20450293
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 347 ms_handle_reset con 0x55a68d34c000 session 0x55a68ceda1c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 156565504 unmapped: 57638912 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 347 ms_handle_reset con 0x55a68d34c400 session 0x55a68b599340
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 347 handle_osd_map epochs [348,348], i have 347, src has [1,348]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 348 ms_handle_reset con 0x55a690ae2400 session 0x55a68b183c00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 155860992 unmapped: 58343424 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 348 heartbeat osd_stat(store_statfs(0x4d9224000/0x0/0x4ffc00000, data 0x218dcb2e/0x21ac6000, compress 0x0/0x0/0x0, omap 0x34630, meta 0x4edb9d0), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 155860992 unmapped: 58343424 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 348 ms_handle_reset con 0x55a68d52ac00 session 0x55a68edd7500
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 348 ms_handle_reset con 0x55a68a63f000 session 0x55a68f8ae380
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 155860992 unmapped: 58343424 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 155860992 unmapped: 58343424 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.323131561s of 13.362374306s, submitted: 192
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 348 ms_handle_reset con 0x55a68d34c000 session 0x55a689e04fc0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 348 ms_handle_reset con 0x55a68d34c400 session 0x55a68b660e00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 348 ms_handle_reset con 0x55a690ae2400 session 0x55a68cdf21c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 348 ms_handle_reset con 0x55a690ae3800 session 0x55a68a822000
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5040040 data_alloc: 234881024 data_used: 20451193
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 348 ms_handle_reset con 0x55a68a63f000 session 0x55a68cdf2a80
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 348 heartbeat osd_stat(store_statfs(0x4d8cd1000/0x0/0x4ffc00000, data 0x21e31b2e/0x2201b000, compress 0x0/0x0/0x0, omap 0x34630, meta 0x4edb9d0), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 156606464 unmapped: 57597952 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 156606464 unmapped: 57597952 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 348 ms_handle_reset con 0x55a68d34c000 session 0x55a68b5996c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 348 ms_handle_reset con 0x55a68d34c400 session 0x55a689e05340
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 348 ms_handle_reset con 0x55a690ae2400 session 0x55a68b598380
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 156606464 unmapped: 57597952 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 348 handle_osd_map epochs [349,349], i have 348, src has [1,349]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 349 ms_handle_reset con 0x55a68dbd0000 session 0x55a68d0ec1c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 349 ms_handle_reset con 0x55a68a63f000 session 0x55a690505a40
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 156639232 unmapped: 57565184 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 349 ms_handle_reset con 0x55a68dbd0400 session 0x55a68c3201c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 157597696 unmapped: 56606720 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5074921 data_alloc: 234881024 data_used: 24536953
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 157728768 unmapped: 56475648 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 349 handle_osd_map epochs [350,350], i have 349, src has [1,350]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 350 ms_handle_reset con 0x55a68dbd1c00 session 0x55a68f93ddc0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 350 heartbeat osd_stat(store_statfs(0x4d8cc5000/0x0/0x4ffc00000, data 0x21e35319/0x22025000, compress 0x0/0x0/0x0, omap 0x34b71, meta 0x4edb48f), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 157761536 unmapped: 56442880 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 350 ms_handle_reset con 0x55a68dbd1800 session 0x55a68b63c700
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 158023680 unmapped: 56180736 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 350 ms_handle_reset con 0x55a68dbd0800 session 0x55a68d0eddc0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 350 handle_osd_map epochs [351,351], i have 350, src has [1,351]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 351 ms_handle_reset con 0x55a68a63f000 session 0x55a68a933a40
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 351 ms_handle_reset con 0x55a68dbd0400 session 0x55a68ce996c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 157630464 unmapped: 56573952 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 351 heartbeat osd_stat(store_statfs(0x4d8cc4000/0x0/0x4ffc00000, data 0x21e36eee/0x22026000, compress 0x0/0x0/0x0, omap 0x34d0d, meta 0x4edb2f3), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 157630464 unmapped: 56573952 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 351 ms_handle_reset con 0x55a68dbd1800 session 0x55a68edd6fc0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 351 ms_handle_reset con 0x55a68dbd1c00 session 0x55a68b26d6c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5078797 data_alloc: 234881024 data_used: 24536953
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 157630464 unmapped: 56573952 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 351 heartbeat osd_stat(store_statfs(0x4d8cc5000/0x0/0x4ffc00000, data 0x21e36ede/0x22025000, compress 0x0/0x0/0x0, omap 0x34d0d, meta 0x4edb2f3), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 157630464 unmapped: 56573952 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 157630464 unmapped: 56573952 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 157630464 unmapped: 56573952 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.792860985s of 13.992515564s, submitted: 77
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 161103872 unmapped: 53100544 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 351 ms_handle_reset con 0x55a68d329c00 session 0x55a68cdf2e00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5219587 data_alloc: 234881024 data_used: 25716089
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 161341440 unmapped: 52862976 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 161341440 unmapped: 52862976 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 351 heartbeat osd_stat(store_statfs(0x4d8296000/0x0/0x4ffc00000, data 0x22fddeee/0x22a56000, compress 0x0/0x0/0x0, omap 0x35425, meta 0x4edabdb), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 351 ms_handle_reset con 0x55a68dbd0400 session 0x55a68b533180
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 351 handle_osd_map epochs [352,352], i have 351, src has [1,352]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 160374784 unmapped: 53829632 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 352 ms_handle_reset con 0x55a68dbd1c00 session 0x55a68ce98000
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 352 heartbeat osd_stat(store_statfs(0x4d8291000/0x0/0x4ffc00000, data 0x22fdfb19/0x22a59000, compress 0x0/0x0/0x0, omap 0x3578c, meta 0x4eda874), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 160374784 unmapped: 53829632 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 352 handle_osd_map epochs [352,353], i have 352, src has [1,353]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 353 ms_handle_reset con 0x55a68dbd1800 session 0x55a68f93c380
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 353 ms_handle_reset con 0x55a68a63f000 session 0x55a68a8236c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 353 heartbeat osd_stat(store_statfs(0x4d828c000/0x0/0x4ffc00000, data 0x22fe170c/0x22a5c000, compress 0x0/0x0/0x0, omap 0x359bc, meta 0x4eda644), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 160382976 unmapped: 53821440 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5223699 data_alloc: 234881024 data_used: 25802121
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 160391168 unmapped: 53813248 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 353 handle_osd_map epochs [354,354], i have 353, src has [1,354]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 353 handle_osd_map epochs [353,354], i have 354, src has [1,354]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 354 ms_handle_reset con 0x55a68d328c00 session 0x55a68cdff6c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 160423936 unmapped: 53780480 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 160423936 unmapped: 53780480 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 354 ms_handle_reset con 0x55a68a63f000 session 0x55a68c769c00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 354 handle_osd_map epochs [355,355], i have 354, src has [1,355]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 355 ms_handle_reset con 0x55a68dbd0400 session 0x55a68f93ca80
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 160440320 unmapped: 53764096 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 355 heartbeat osd_stat(store_statfs(0x4d8289000/0x0/0x4ffc00000, data 0x22fe4fa6/0x22a61000, compress 0x0/0x0/0x0, omap 0x36042, meta 0x4ed9fbe), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 355 ms_handle_reset con 0x55a68d34c000 session 0x55a68fdf7340
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 355 ms_handle_reset con 0x55a68d34c400 session 0x55a68cedafc0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 160440320 unmapped: 53764096 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.258296967s of 10.698248863s, submitted: 141
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 355 ms_handle_reset con 0x55a690ae2400 session 0x55a68f8af880
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 355 ms_handle_reset con 0x55a68a63f000 session 0x55a68f8af500
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5166651 data_alloc: 234881024 data_used: 21680578
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 157876224 unmapped: 56328192 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 355 heartbeat osd_stat(store_statfs(0x4d87df000/0x0/0x4ffc00000, data 0x22a8ff96/0x2250b000, compress 0x0/0x0/0x0, omap 0x36091, meta 0x4ed9f6f), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 157876224 unmapped: 56328192 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 157876224 unmapped: 56328192 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 157876224 unmapped: 56328192 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 355 ms_handle_reset con 0x55a68d34c000 session 0x55a68b183a40
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 157876224 unmapped: 56328192 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 355 ms_handle_reset con 0x55a68dbd0400 session 0x55a68c769a40
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 355 handle_osd_map epochs [356,356], i have 355, src has [1,356]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5283005 data_alloc: 234881024 data_used: 21680594
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 356 ms_handle_reset con 0x55a68dbd1800 session 0x55a690092c40
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 159866880 unmapped: 54337536 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 356 handle_osd_map epochs [356,357], i have 356, src has [1,357]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 357 ms_handle_reset con 0x55a68dbd1c00 session 0x55a68c321dc0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 357 heartbeat osd_stat(store_statfs(0x4d7e4f000/0x0/0x4ffc00000, data 0x23c36912/0x22e99000, compress 0x0/0x0/0x0, omap 0x36b02, meta 0x4ed94fe), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 357 ms_handle_reset con 0x55a68a63f000 session 0x55a690092a80
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 357 ms_handle_reset con 0x55a68d34c400 session 0x55a689e05500
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 357 ms_handle_reset con 0x55a68d34c000 session 0x55a68a801c00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 159907840 unmapped: 54296576 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 357 handle_osd_map epochs [358,358], i have 357, src has [1,358]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 358 ms_handle_reset con 0x55a68d329000 session 0x55a68cdffdc0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 358 handle_osd_map epochs [359,359], i have 358, src has [1,359]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 159801344 unmapped: 54403072 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 359 ms_handle_reset con 0x55a68dbd1800 session 0x55a68f8af340
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 359 ms_handle_reset con 0x55a68d329400 session 0x55a690093500
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 359 ms_handle_reset con 0x55a68a63f000 session 0x55a68f93d340
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 359 ms_handle_reset con 0x55a68dbd0400 session 0x55a68a933500
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 159801344 unmapped: 54403072 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 159809536 unmapped: 54394880 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5321601 data_alloc: 234881024 data_used: 21680790
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.609950066s of 10.960894585s, submitted: 128
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 159809536 unmapped: 54394880 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 359 handle_osd_map epochs [359,360], i have 359, src has [1,360]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 360 ms_handle_reset con 0x55a68d329000 session 0x55a68ce98c40
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 159850496 unmapped: 54353920 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 360 ms_handle_reset con 0x55a68d34c000 session 0x55a68cdfe700
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 360 heartbeat osd_stat(store_statfs(0x4d8fed000/0x0/0x4ffc00000, data 0x22279e3b/0x21cfd000, compress 0x0/0x0/0x0, omap 0x3d5a6, meta 0x4ed2a5a), peers [0,1] op hist [0,1])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 360 ms_handle_reset con 0x55a68a63f000 session 0x55a68b532c40
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 360 heartbeat osd_stat(store_statfs(0x4d8fed000/0x0/0x4ffc00000, data 0x22279e3b/0x21cfd000, compress 0x0/0x0/0x0, omap 0x3d5a6, meta 0x4ed2a5a), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 159875072 unmapped: 54329344 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 360 handle_osd_map epochs [360,361], i have 360, src has [1,361]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 361 ms_handle_reset con 0x55a68d329400 session 0x55a68c28c380
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 159907840 unmapped: 54296576 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 361 handle_osd_map epochs [362,362], i have 361, src has [1,362]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 362 ms_handle_reset con 0x55a68d34c000 session 0x55a68cdf2c40
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 362 ms_handle_reset con 0x55a68dbd0400 session 0x55a688dd3c00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 362 ms_handle_reset con 0x55a68d329000 session 0x55a68c321a40
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 159932416 unmapped: 54272000 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 362 ms_handle_reset con 0x55a68d34c000 session 0x55a68b63da40
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 362 ms_handle_reset con 0x55a68d329400 session 0x55a68cdff6c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 362 heartbeat osd_stat(store_statfs(0x4d8fe3000/0x0/0x4ffc00000, data 0x2227d892/0x21d05000, compress 0x0/0x0/0x0, omap 0x3cc5a, meta 0x4ed33a6), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5164383 data_alloc: 234881024 data_used: 21682126
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 159948800 unmapped: 54255616 heap: 214204416 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 362 handle_osd_map epochs [363,363], i have 362, src has [1,363]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 363 ms_handle_reset con 0x55a68a63f000 session 0x55a68a823340
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 363 ms_handle_reset con 0x55a68d34cc00 session 0x55a68ceda1c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 363 heartbeat osd_stat(store_statfs(0x4d83e2000/0x0/0x4ffc00000, data 0x22e7f505/0x22908000, compress 0x0/0x0/0x0, omap 0x3cbe8, meta 0x4ed3418), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,6,1])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 189423616 unmapped: 33185792 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 363 handle_osd_map epochs [363,364], i have 363, src has [1,364]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 161144832 unmapped: 61464576 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 364 ms_handle_reset con 0x55a68ca93c00 session 0x55a68b599340
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 364 ms_handle_reset con 0x55a68a63f000 session 0x55a68edd7500
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 169795584 unmapped: 52813824 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 364 ms_handle_reset con 0x55a68ca93c00 session 0x55a68f8ae380
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 165675008 unmapped: 56934400 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 364 ms_handle_reset con 0x55a68d34c000 session 0x55a68a8236c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 364 ms_handle_reset con 0x55a68d329400 session 0x55a68cdf3340
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5711219 data_alloc: 234881024 data_used: 21683735
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.224813461s of 10.009304047s, submitted: 123
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 170000384 unmapped: 52609024 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 364 handle_osd_map epochs [365,365], i have 364, src has [1,365]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 165879808 unmapped: 56729600 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 365 ms_handle_reset con 0x55a68b297000 session 0x55a68d0c6a80
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 365 heartbeat osd_stat(store_statfs(0x4d23d9000/0x0/0x4ffc00000, data 0x28e82cf9/0x28911000, compress 0x0/0x0/0x0, omap 0x3c7d2, meta 0x4ed382e), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 162021376 unmapped: 60588032 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 365 handle_osd_map epochs [366,366], i have 365, src has [1,366]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 366 ms_handle_reset con 0x55a68b619400 session 0x55a689e04700
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 366 ms_handle_reset con 0x55a68a63f000 session 0x55a68d0ece00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 366 ms_handle_reset con 0x55a68d34cc00 session 0x55a68d0eddc0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 166445056 unmapped: 56164352 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 162570240 unmapped: 60039168 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 366 handle_osd_map epochs [367,367], i have 366, src has [1,367]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 366 handle_osd_map epochs [366,367], i have 367, src has [1,367]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 367 ms_handle_reset con 0x55a68ca93c00 session 0x55a68c769c00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6386792 data_alloc: 234881024 data_used: 21683833
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 168026112 unmapped: 54583296 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 367 ms_handle_reset con 0x55a68d329400 session 0x55a68ce996c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 367 ms_handle_reset con 0x55a68b618800 session 0x55a68cedafc0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 168173568 unmapped: 54435840 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 367 heartbeat osd_stat(store_statfs(0x4c8bd3000/0x0/0x4ffc00000, data 0x326869f0/0x32117000, compress 0x0/0x0/0x0, omap 0x3c9ae, meta 0x4ed3652), peers [0,1] op hist [0,0,1])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 367 ms_handle_reset con 0x55a68d34c400 session 0x55a68b182e00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 367 handle_osd_map epochs [367,368], i have 367, src has [1,368]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 368 ms_handle_reset con 0x55a68b619400 session 0x55a690092c40
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 164282368 unmapped: 58327040 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 368 ms_handle_reset con 0x55a68dbd0400 session 0x55a68c321180
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 368 ms_handle_reset con 0x55a68a63f000 session 0x55a68f8af340
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 368 ms_handle_reset con 0x55a68b618800 session 0x55a68b26d340
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 368 ms_handle_reset con 0x55a68b619400 session 0x55a68b661340
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 368 ms_handle_reset con 0x55a68d34c400 session 0x55a68cdf3dc0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 368 handle_osd_map epochs [368,369], i have 368, src has [1,369]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 164380672 unmapped: 58228736 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 369 ms_handle_reset con 0x55a68dbd0400 session 0x55a68f93d340
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 369 ms_handle_reset con 0x55a68ca93c00 session 0x55a68cdf28c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 369 heartbeat osd_stat(store_statfs(0x4d8fd1000/0x0/0x4ffc00000, data 0x22289d28/0x21d19000, compress 0x0/0x0/0x0, omap 0x3c300, meta 0x4ed3d00), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 369 handle_osd_map epochs [369,370], i have 369, src has [1,370]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 162897920 unmapped: 59711488 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 370 ms_handle_reset con 0x55a68b618800 session 0x55a68f8af880
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 370 ms_handle_reset con 0x55a68b619400 session 0x55a68b533500
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5272494 data_alloc: 234881024 data_used: 21683539
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 370 handle_osd_map epochs [370,371], i have 370, src has [1,371]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 5.348814964s of 10.112453461s, submitted: 272
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 371 ms_handle_reset con 0x55a68ca93c00 session 0x55a68f93d340
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 162930688 unmapped: 59678720 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 371 ms_handle_reset con 0x55a68dbd0400 session 0x55a68b532c40
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 371 ms_handle_reset con 0x55a68d34cc00 session 0x55a68b26d340
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 162996224 unmapped: 59613184 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 371 handle_osd_map epochs [372,372], i have 371, src has [1,372]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 372 ms_handle_reset con 0x55a68d34c400 session 0x55a68d0c6e00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 372 ms_handle_reset con 0x55a68b618800 session 0x55a68fcfb180
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 372 handle_osd_map epochs [372,373], i have 372, src has [1,373]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 163012608 unmapped: 59596800 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 373 ms_handle_reset con 0x55a68ca93c00 session 0x55a68c320a80
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 373 ms_handle_reset con 0x55a68dbd0400 session 0x55a689e05500
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 162004992 unmapped: 60604416 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 373 handle_osd_map epochs [373,374], i have 373, src has [1,374]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 374 ms_handle_reset con 0x55a68b619400 session 0x55a68a800700
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 374 ms_handle_reset con 0x55a68b618800 session 0x55a68c321180
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 161767424 unmapped: 60841984 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 374 heartbeat osd_stat(store_statfs(0x4da172000/0x0/0x4ffc00000, data 0x2097290b/0x20b78000, compress 0x0/0x0/0x0, omap 0x3bfcf, meta 0x4ed4031), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5108494 data_alloc: 234881024 data_used: 19407177
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 161775616 unmapped: 60833792 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 374 handle_osd_map epochs [375,375], i have 374, src has [1,375]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 161775616 unmapped: 60833792 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 375 handle_osd_map epochs [376,376], i have 375, src has [1,376]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 376 ms_handle_reset con 0x55a68ca93c00 session 0x55a68cdf3340
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 376 ms_handle_reset con 0x55a68d34c400 session 0x55a68b183c00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 162521088 unmapped: 60088320 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 376 handle_osd_map epochs [377,377], i have 376, src has [1,377]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 377 ms_handle_reset con 0x55a68dbd0400 session 0x55a68a801c00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 377 heartbeat osd_stat(store_statfs(0x4e8d6d000/0x0/0x4ffc00000, data 0x11d761cf/0x11f7d000, compress 0x0/0x0/0x0, omap 0x3c57d, meta 0x4ed3a83), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 377 ms_handle_reset con 0x55a68d34c000 session 0x55a690092c40
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 162013184 unmapped: 60596224 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 377 handle_osd_map epochs [378,378], i have 377, src has [1,378]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 378 ms_handle_reset con 0x55a68b618800 session 0x55a68b188a80
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 378 ms_handle_reset con 0x55a68ca93c00 session 0x55a68b182e00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 159629312 unmapped: 62980096 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 378 handle_osd_map epochs [379,379], i have 378, src has [1,379]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 379 ms_handle_reset con 0x55a68d34c400 session 0x55a689e05340
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2670236 data_alloc: 234881024 data_used: 19408403
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 159637504 unmapped: 62971904 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 159637504 unmapped: 62971904 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 379 handle_osd_map epochs [379,380], i have 379, src has [1,380]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.963670731s of 11.977459908s, submitted: 367
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 159637504 unmapped: 62971904 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 380 heartbeat osd_stat(store_statfs(0x4f85df000/0x0/0x4ffc00000, data 0x24fe6c3/0x270b000, compress 0x0/0x0/0x0, omap 0x3cc87, meta 0x4ed3379), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 159637504 unmapped: 62971904 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 159637504 unmapped: 62971904 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2675482 data_alloc: 234881024 data_used: 19409316
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 159113216 unmapped: 63496192 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 380 ms_handle_reset con 0x55a68dbd0400 session 0x55a689e04700
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 380 ms_handle_reset con 0x55a68a7fec00 session 0x55a68cdfe700
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 380 ms_handle_reset con 0x55a68c25a400 session 0x55a68c3201c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 380 heartbeat osd_stat(store_statfs(0x4f85dc000/0x0/0x4ffc00000, data 0x250028b/0x2710000, compress 0x0/0x0/0x0, omap 0x3c82a, meta 0x4ed37d6), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 159121408 unmapped: 63488000 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 380 ms_handle_reset con 0x55a68b618800 session 0x55a68f8aea80
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 159121408 unmapped: 63488000 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 380 handle_osd_map epochs [381,381], i have 380, src has [1,381]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 381 ms_handle_reset con 0x55a68ca93c00 session 0x55a68cdffdc0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 63479808 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 381 handle_osd_map epochs [381,382], i have 381, src has [1,382]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 382 ms_handle_reset con 0x55a68d34c400 session 0x55a68b63ddc0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 382 ms_handle_reset con 0x55a68a7fec00 session 0x55a68b63d340
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 63479808 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 382 heartbeat osd_stat(store_statfs(0x4f85d3000/0x0/0x4ffc00000, data 0x2503ad3/0x2717000, compress 0x0/0x0/0x0, omap 0x3cb3a, meta 0x4ed34c6), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2686041 data_alloc: 234881024 data_used: 19409430
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 63479808 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 382 handle_osd_map epochs [382,383], i have 382, src has [1,383]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 383 ms_handle_reset con 0x55a68b618800 session 0x55a68f8ae380
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 63479808 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 383 ms_handle_reset con 0x55a68ca93c00 session 0x55a68f8af6c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 383 handle_osd_map epochs [384,384], i have 383, src has [1,384]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 384 ms_handle_reset con 0x55a68c25a400 session 0x55a68fcfa1c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 384 ms_handle_reset con 0x55a68d34c400 session 0x55a68edd6000
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 159473664 unmapped: 63135744 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 384 heartbeat osd_stat(store_statfs(0x4f85ab000/0x0/0x4ffc00000, data 0x252b30b/0x273f000, compress 0x0/0x0/0x0, omap 0x3cf7e, meta 0x4ed3082), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 159473664 unmapped: 63135744 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 159473664 unmapped: 63135744 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2714059 data_alloc: 234881024 data_used: 23082063
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 159473664 unmapped: 63135744 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 384 ms_handle_reset con 0x55a68dbd0400 session 0x55a68a9336c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.855018616s of 13.452609062s, submitted: 74
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 384 ms_handle_reset con 0x55a68a7ff800 session 0x55a68f93da40
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 384 ms_handle_reset con 0x55a68b618800 session 0x55a68c8defc0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 159473664 unmapped: 63135744 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 384 heartbeat osd_stat(store_statfs(0x4f85d1000/0x0/0x4ffc00000, data 0x250730b/0x271b000, compress 0x0/0x0/0x0, omap 0x3cf72, meta 0x4ed308e), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 159473664 unmapped: 63135744 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 384 handle_osd_map epochs [385,385], i have 384, src has [1,385]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 384 handle_osd_map epochs [384,385], i have 385, src has [1,385]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 159473664 unmapped: 63135744 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 385 ms_handle_reset con 0x55a68c25a400 session 0x55a6905041c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 385 ms_handle_reset con 0x55a68ca93c00 session 0x55a68fcfba40
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 159490048 unmapped: 63119360 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2715179 data_alloc: 234881024 data_used: 23080726
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 385 ms_handle_reset con 0x55a68ce8e400 session 0x55a68d0ed340
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 159498240 unmapped: 63111168 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 385 ms_handle_reset con 0x55a68d34c400 session 0x55a68b661880
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 385 ms_handle_reset con 0x55a68a7ff800 session 0x55a68cedafc0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 385 ms_handle_reset con 0x55a68b618800 session 0x55a68a3fa8c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 159498240 unmapped: 63111168 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 385 ms_handle_reset con 0x55a68ca93c00 session 0x55a68a801340
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 385 heartbeat osd_stat(store_statfs(0x4f85cb000/0x0/0x4ffc00000, data 0x2508e7f/0x2721000, compress 0x0/0x0/0x0, omap 0x3c448, meta 0x4ed3bb8), peers [0,1] op hist [0,0,0,0,1])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 385 handle_osd_map epochs [386,386], i have 385, src has [1,386]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 159506432 unmapped: 63102976 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 386 ms_handle_reset con 0x55a68c25a400 session 0x55a68a8228c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 386 ms_handle_reset con 0x55a68b618800 session 0x55a68a822540
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 386 handle_osd_map epochs [387,387], i have 386, src has [1,387]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 387 ms_handle_reset con 0x55a68ce8e400 session 0x55a68a800fc0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 159522816 unmapped: 63086592 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 387 handle_osd_map epochs [388,388], i have 387, src has [1,388]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 388 ms_handle_reset con 0x55a68a7ff800 session 0x55a68b63ce00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 388 ms_handle_reset con 0x55a68ca93c00 session 0x55a68a823340
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 388 ms_handle_reset con 0x55a68ce8ec00 session 0x55a68fdf7340
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 159588352 unmapped: 63021056 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2689513 data_alloc: 234881024 data_used: 19410884
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 158892032 unmapped: 63717376 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 388 ms_handle_reset con 0x55a68a7ff800 session 0x55a68f93c8c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 388 ms_handle_reset con 0x55a68d34c400 session 0x55a688dd3c00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 388 heartbeat osd_stat(store_statfs(0x4f8945000/0x0/0x4ffc00000, data 0x218b586/0x23a4000, compress 0x0/0x0/0x0, omap 0x3c7c5, meta 0x4ed383b), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 158892032 unmapped: 63717376 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 388 handle_osd_map epochs [389,389], i have 388, src has [1,389]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 6.038699627s of 10.818477631s, submitted: 115
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 389 ms_handle_reset con 0x55a68b618800 session 0x55a68a9336c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 389 handle_osd_map epochs [389,390], i have 389, src has [1,390]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 158892032 unmapped: 63717376 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 390 ms_handle_reset con 0x55a68ca93c00 session 0x55a68c8defc0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 158892032 unmapped: 63717376 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 390 ms_handle_reset con 0x55a68ce8e400 session 0x55a689e048c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 390 ms_handle_reset con 0x55a68a7ff800 session 0x55a68b532a80
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 390 ms_handle_reset con 0x55a68ca93c00 session 0x55a68ce996c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 158507008 unmapped: 64102400 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 390 handle_osd_map epochs [391,391], i have 390, src has [1,391]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 391 heartbeat osd_stat(store_statfs(0x4f893d000/0x0/0x4ffc00000, data 0x218eda4/0x23aa000, compress 0x0/0x0/0x0, omap 0x3c9a1, meta 0x4ed365f), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 391 ms_handle_reset con 0x55a68b618800 session 0x55a68cedb880
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 391 ms_handle_reset con 0x55a68d34c400 session 0x55a68c8df500
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2699361 data_alloc: 234881024 data_used: 19415767
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 160636928 unmapped: 61972480 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 160636928 unmapped: 61972480 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 391 ms_handle_reset con 0x55a68ce8ec00 session 0x55a68c28c380
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 160636928 unmapped: 61972480 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 391 handle_osd_map epochs [392,392], i have 391, src has [1,392]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 392 ms_handle_reset con 0x55a68ca93c00 session 0x55a68f8af880
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 392 ms_handle_reset con 0x55a68a7ff800 session 0x55a68cdffc00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 166699008 unmapped: 55910400 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 392 heartbeat osd_stat(store_statfs(0x4f7799000/0x0/0x4ffc00000, data 0x21926b0/0x23b1000, compress 0x0/0x0/0x0, omap 0x3cf20, meta 0x60730e0), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 392 ms_handle_reset con 0x55a68d34c400 session 0x55a68d0eca80
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 392 handle_osd_map epochs [393,393], i have 392, src has [1,393]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 393 ms_handle_reset con 0x55a68b618800 session 0x55a68a3fa1c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 393 ms_handle_reset con 0x55a68ca90800 session 0x55a68cdfe1c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 161144832 unmapped: 61464576 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2754416 data_alloc: 234881024 data_used: 19416352
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 161144832 unmapped: 61464576 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 393 handle_osd_map epochs [393,394], i have 393, src has [1,394]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 394 ms_handle_reset con 0x55a68a7ff800 session 0x55a68d0ed880
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 394 ms_handle_reset con 0x55a68b618800 session 0x55a68ce98540
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 161226752 unmapped: 61382656 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.315821648s of 10.115729332s, submitted: 113
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 394 ms_handle_reset con 0x55a68ca93c00 session 0x55a68a822fc0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 161226752 unmapped: 61382656 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 394 handle_osd_map epochs [395,395], i have 394, src has [1,395]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 161234944 unmapped: 61374464 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 395 ms_handle_reset con 0x55a68ca91400 session 0x55a68edd6000
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 161251328 unmapped: 61358080 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 395 heartbeat osd_stat(store_statfs(0x4f7055000/0x0/0x4ffc00000, data 0x28d4bbd/0x2af7000, compress 0x0/0x0/0x0, omap 0x3d8da, meta 0x6072726), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 395 handle_osd_map epochs [395,396], i have 395, src has [1,396]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 396 ms_handle_reset con 0x55a68cf05000 session 0x55a68cdfe700
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2763690 data_alloc: 234881024 data_used: 19416352
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 161251328 unmapped: 61358080 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 396 handle_osd_map epochs [397,397], i have 396, src has [1,397]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 397 ms_handle_reset con 0x55a68f7a8000 session 0x55a68f8af340
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 397 ms_handle_reset con 0x55a68a7ff800 session 0x55a690093880
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 397 ms_handle_reset con 0x55a68d34c400 session 0x55a68f8af180
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 161267712 unmapped: 61341696 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 160301056 unmapped: 62308352 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 397 handle_osd_map epochs [398,398], i have 397, src has [1,398]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 398 ms_handle_reset con 0x55a68b618800 session 0x55a68cdfe700
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 398 ms_handle_reset con 0x55a68ca91400 session 0x55a68cedafc0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 160358400 unmapped: 62251008 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 398 heartbeat osd_stat(store_statfs(0x4f7048000/0x0/0x4ffc00000, data 0x28da621/0x2b02000, compress 0x0/0x0/0x0, omap 0x3e06d, meta 0x6071f93), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 398 ms_handle_reset con 0x55a68a7ff800 session 0x55a68a801c00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 160358400 unmapped: 62251008 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2773193 data_alloc: 234881024 data_used: 19417606
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 160358400 unmapped: 62251008 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 398 ms_handle_reset con 0x55a68b618800 session 0x55a68b598540
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 160358400 unmapped: 62251008 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.814270020s of 10.025816917s, submitted: 82
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 398 ms_handle_reset con 0x55a68f7a8000 session 0x55a68cdf2a80
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 398 heartbeat osd_stat(store_statfs(0x4f704a000/0x0/0x4ffc00000, data 0x28da621/0x2b02000, compress 0x0/0x0/0x0, omap 0x3e3b1, meta 0x6071c4f), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 160358400 unmapped: 62251008 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 398 handle_osd_map epochs [399,399], i have 398, src has [1,399]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 398 handle_osd_map epochs [398,399], i have 399, src has [1,399]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 399 ms_handle_reset con 0x55a68b5df400 session 0x55a68c8df180
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 160620544 unmapped: 61988864 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 399 handle_osd_map epochs [400,400], i have 399, src has [1,400]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 400 ms_handle_reset con 0x55a68a7d1400 session 0x55a68cdf3340
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 400 ms_handle_reset con 0x55a68d34c400 session 0x55a68cdfe1c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 160628736 unmapped: 61980672 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 400 handle_osd_map epochs [401,401], i have 400, src has [1,401]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 401 ms_handle_reset con 0x55a68a7ff800 session 0x55a690092e00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 401 ms_handle_reset con 0x55a68b5df400 session 0x55a68c7d7c00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2783866 data_alloc: 234881024 data_used: 19418118
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 160686080 unmapped: 61923328 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 401 heartbeat osd_stat(store_statfs(0x4f703a000/0x0/0x4ffc00000, data 0x28dfae8/0x2b0c000, compress 0x0/0x0/0x0, omap 0x3e7ed, meta 0x6071813), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 401 ms_handle_reset con 0x55a68b618800 session 0x55a68d0c6e00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 163192832 unmapped: 59416576 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 163192832 unmapped: 59416576 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 401 handle_osd_map epochs [402,402], i have 401, src has [1,402]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 402 heartbeat osd_stat(store_statfs(0x4f703f000/0x0/0x4ffc00000, data 0x28dfb4a/0x2b0d000, compress 0x0/0x0/0x0, omap 0x43a30, meta 0x606c5d0), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 402 ms_handle_reset con 0x55a68ca97800 session 0x55a68b532c40
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 402 ms_handle_reset con 0x55a68a7d0800 session 0x55a68b63c700
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 402 ms_handle_reset con 0x55a68a7ff800 session 0x55a68b660700
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 163340288 unmapped: 59269120 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 402 handle_osd_map epochs [403,403], i have 402, src has [1,403]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 403 ms_handle_reset con 0x55a68b61e400 session 0x55a688dd3c00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 403 ms_handle_reset con 0x55a68c25b800 session 0x55a68b189dc0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 403 ms_handle_reset con 0x55a68c395c00 session 0x55a68fcfbc00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 403 ms_handle_reset con 0x55a68f7a8000 session 0x55a68d0ece00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 163348480 unmapped: 59260928 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 403 handle_osd_map epochs [404,404], i have 403, src has [1,404]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 404 ms_handle_reset con 0x55a68a7d0800 session 0x55a68cedb880
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 404 ms_handle_reset con 0x55a68a7ff800 session 0x55a68d0ec700
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2832904 data_alloc: 234881024 data_used: 25796363
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 164438016 unmapped: 58171392 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 404 handle_osd_map epochs [404,405], i have 404, src has [1,405]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 405 ms_handle_reset con 0x55a68b61e400 session 0x55a68fdf7dc0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 164438016 unmapped: 58171392 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.667203903s of 10.058185577s, submitted: 135
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 164438016 unmapped: 58171392 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 405 heartbeat osd_stat(store_statfs(0x4f7036000/0x0/0x4ffc00000, data 0x28e69cf/0x2b16000, compress 0x0/0x0/0x0, omap 0x430ac, meta 0x606cf54), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 405 handle_osd_map epochs [405,406], i have 405, src has [1,406]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 406 ms_handle_reset con 0x55a68c25b800 session 0x55a68cdfea80
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 406 ms_handle_reset con 0x55a68a7d0800 session 0x55a68d0ecc40
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 164470784 unmapped: 58138624 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 164470784 unmapped: 58138624 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 406 handle_osd_map epochs [407,407], i have 406, src has [1,407]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 407 ms_handle_reset con 0x55a68a7ff800 session 0x55a68b26c380
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 407 ms_handle_reset con 0x55a68b61e400 session 0x55a68d0ec380
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2834560 data_alloc: 234881024 data_used: 25795621
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 407 ms_handle_reset con 0x55a68f7a8000 session 0x55a689e048c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 407 heartbeat osd_stat(store_statfs(0x4f7031000/0x0/0x4ffc00000, data 0x28e9d76/0x2b19000, compress 0x0/0x0/0x0, omap 0x427be, meta 0x606d842), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 164642816 unmapped: 57966592 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 407 ms_handle_reset con 0x55a68c395c00 session 0x55a68b26ce00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 170016768 unmapped: 52592640 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 407 handle_osd_map epochs [408,408], i have 407, src has [1,408]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 170377216 unmapped: 52232192 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 408 handle_osd_map epochs [409,409], i have 408, src has [1,409]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 409 ms_handle_reset con 0x55a68a7d0800 session 0x55a68edd7500
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 169639936 unmapped: 52969472 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 169639936 unmapped: 52969472 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2880456 data_alloc: 251658240 data_used: 27569189
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 169639936 unmapped: 52969472 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 409 heartbeat osd_stat(store_statfs(0x4f6c32000/0x0/0x4ffc00000, data 0x2ce8175/0x2f18000, compress 0x0/0x0/0x0, omap 0x42abe, meta 0x606d542), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 169656320 unmapped: 52953088 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.524926186s of 10.005352020s, submitted: 178
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 169664512 unmapped: 52944896 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 409 handle_osd_map epochs [410,410], i have 409, src has [1,410]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 410 ms_handle_reset con 0x55a68a7ff800 session 0x55a68a3fb880
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 169664512 unmapped: 52944896 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 410 ms_handle_reset con 0x55a68b61e400 session 0x55a68fcfb180
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 169664512 unmapped: 52944896 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2892738 data_alloc: 251658240 data_used: 27582090
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 410 heartbeat osd_stat(store_statfs(0x4f6bee000/0x0/0x4ffc00000, data 0x2d29d3c/0x2f5e000, compress 0x0/0x0/0x0, omap 0x42dd3, meta 0x606d22d), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 169664512 unmapped: 52944896 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 410 ms_handle_reset con 0x55a68b296000 session 0x55a68c8df500
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 169672704 unmapped: 52936704 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 410 handle_osd_map epochs [411,411], i have 410, src has [1,411]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 411 ms_handle_reset con 0x55a68b61f400 session 0x55a68a932540
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 169680896 unmapped: 52928512 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 411 handle_osd_map epochs [412,412], i have 411, src has [1,412]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 412 heartbeat osd_stat(store_statfs(0x4f6be8000/0x0/0x4ffc00000, data 0x2d2b9ad/0x2f62000, compress 0x0/0x0/0x0, omap 0x43085, meta 0x606cf7b), peers [0,1] op hist [0,1,0,1])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 412 ms_handle_reset con 0x55a68a63ec00 session 0x55a690093c00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 412 ms_handle_reset con 0x55a68a7d0800 session 0x55a68a823180
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 412 ms_handle_reset con 0x55a68f7a8000 session 0x55a689e04c40
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 169697280 unmapped: 52912128 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 412 handle_osd_map epochs [413,413], i have 412, src has [1,413]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 413 ms_handle_reset con 0x55a68a7ff800 session 0x55a68a3fa1c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 169705472 unmapped: 52903936 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 413 handle_osd_map epochs [414,414], i have 413, src has [1,414]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 414 ms_handle_reset con 0x55a68b296000 session 0x55a68b533c00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 414 ms_handle_reset con 0x55a68a63ec00 session 0x55a68b532c40
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2910338 data_alloc: 251658240 data_used: 27582090
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 169738240 unmapped: 52871168 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 169738240 unmapped: 52871168 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 414 heartbeat osd_stat(store_statfs(0x4f6bdf000/0x0/0x4ffc00000, data 0x2d30f4f/0x2f6b000, compress 0x0/0x0/0x0, omap 0x4814f, meta 0x6067eb1), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 169738240 unmapped: 52871168 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 169754624 unmapped: 52854784 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 169754624 unmapped: 52854784 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.098485947s of 13.431160927s, submitted: 99
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 414 ms_handle_reset con 0x55a68a7d0800 session 0x55a68d0ed340
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 414 ms_handle_reset con 0x55a68a7ff800 session 0x55a68b189dc0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2906707 data_alloc: 251658240 data_used: 27580146
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 414 heartbeat osd_stat(store_statfs(0x4f6be1000/0x0/0x4ffc00000, data 0x2d30f4f/0x2f6b000, compress 0x0/0x0/0x0, omap 0x4814f, meta 0x6067eb1), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 174047232 unmapped: 48562176 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 414 ms_handle_reset con 0x55a68f7a8000 session 0x55a68b1881c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 414 ms_handle_reset con 0x55a68b296000 session 0x55a68cdfe540
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 170262528 unmapped: 52346880 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 414 ms_handle_reset con 0x55a68a63ec00 session 0x55a68c28ce00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 414 heartbeat osd_stat(store_statfs(0x4f5020000/0x0/0x4ffc00000, data 0x48f0f26/0x4b2c000, compress 0x0/0x0/0x0, omap 0x4869e, meta 0x6067962), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 170278912 unmapped: 52330496 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 414 heartbeat osd_stat(store_statfs(0x4f3a27000/0x0/0x4ffc00000, data 0x5ee9f5e/0x6125000, compress 0x0/0x0/0x0, omap 0x4869e, meta 0x6067962), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 414 handle_osd_map epochs [415,415], i have 414, src has [1,415]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 414 handle_osd_map epochs [415,415], i have 415, src has [1,415]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 170278912 unmapped: 52330496 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 415 heartbeat osd_stat(store_statfs(0x4f3a27000/0x0/0x4ffc00000, data 0x5ee9f5e/0x6125000, compress 0x0/0x0/0x0, omap 0x4869e, meta 0x6067962), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 170278912 unmapped: 52330496 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184700 data_alloc: 251658240 data_used: 27584143
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 170278912 unmapped: 52330496 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 415 ms_handle_reset con 0x55a68a7d0800 session 0x55a68b63d180
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 415 ms_handle_reset con 0x55a68a7ff800 session 0x55a68a932c40
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 170278912 unmapped: 52330496 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 415 ms_handle_reset con 0x55a68f7a8000 session 0x55a68f93cfc0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 415 ms_handle_reset con 0x55a68b61e400 session 0x55a68fcfba40
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 415 heartbeat osd_stat(store_statfs(0x4f3a21000/0x0/0x4ffc00000, data 0x5eeba7c/0x6129000, compress 0x0/0x0/0x0, omap 0x48782, meta 0x606787e), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 170278912 unmapped: 52330496 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 170278912 unmapped: 52330496 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 170303488 unmapped: 52305920 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3185554 data_alloc: 251658240 data_used: 27637903
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 415 ms_handle_reset con 0x55a68a7ff800 session 0x55a68b661880
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 170303488 unmapped: 52305920 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 415 heartbeat osd_stat(store_statfs(0x4f3a23000/0x0/0x4ffc00000, data 0x5eeba7c/0x6129000, compress 0x0/0x0/0x0, omap 0x48782, meta 0x606787e), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 170303488 unmapped: 52305920 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 415 heartbeat osd_stat(store_statfs(0x4f3a23000/0x0/0x4ffc00000, data 0x5eeba7c/0x6129000, compress 0x0/0x0/0x0, omap 0x48782, meta 0x606787e), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 415 ms_handle_reset con 0x55a68b61e400 session 0x55a68c8df340
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 415 heartbeat osd_stat(store_statfs(0x4f3a23000/0x0/0x4ffc00000, data 0x5eeba7c/0x6129000, compress 0x0/0x0/0x0, omap 0x48782, meta 0x606787e), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 415 ms_handle_reset con 0x55a68f7a8000 session 0x55a68cdf2a80
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.404269218s of 12.390002251s, submitted: 101
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 415 ms_handle_reset con 0x55a68c36e000 session 0x55a68a801c00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 170336256 unmapped: 52273152 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 170336256 unmapped: 52273152 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 170336256 unmapped: 52273152 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 415 heartbeat osd_stat(store_statfs(0x4f3a1f000/0x0/0x4ffc00000, data 0x61f6aaf/0x612d000, compress 0x0/0x0/0x0, omap 0x48a19, meta 0x60675e7), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3218016 data_alloc: 251658240 data_used: 27642015
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 170336256 unmapped: 52273152 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 170336256 unmapped: 52273152 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 176570368 unmapped: 46039040 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 177651712 unmapped: 44957696 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 177651712 unmapped: 44957696 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 415 heartbeat osd_stat(store_statfs(0x4f3a1f000/0x0/0x4ffc00000, data 0x61f6aaf/0x612d000, compress 0x0/0x0/0x0, omap 0x48a19, meta 0x60675e7), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3261152 data_alloc: 251658240 data_used: 34998431
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 177659904 unmapped: 44949504 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 180068352 unmapped: 42541056 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 180297728 unmapped: 42311680 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 180297728 unmapped: 42311680 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 415 heartbeat osd_stat(store_statfs(0x4f39d8000/0x0/0x4ffc00000, data 0x623daaf/0x6174000, compress 0x0/0x0/0x0, omap 0x48a19, meta 0x60675e7), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 180297728 unmapped: 42311680 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3274087 data_alloc: 251658240 data_used: 38688927
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 180297728 unmapped: 42311680 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 180297728 unmapped: 42311680 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 415 ms_handle_reset con 0x55a68b20dc00 session 0x55a68a3fba40
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.926557541s of 15.023276329s, submitted: 17
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 182231040 unmapped: 40378368 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 182345728 unmapped: 40263680 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 415 ms_handle_reset con 0x55a68a7ff800 session 0x55a68a933180
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 415 ms_handle_reset con 0x55a68b20dc00 session 0x55a68ce988c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 194666496 unmapped: 27942912 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 415 ms_handle_reset con 0x55a68b61e400 session 0x55a68b63da40
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3317160 data_alloc: 251658240 data_used: 39260319
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 415 heartbeat osd_stat(store_statfs(0x4f33cb000/0x0/0x4ffc00000, data 0x62aaaaf/0x61e1000, compress 0x0/0x0/0x0, omap 0x48c18, meta 0x60673e8), peers [0,1] op hist [0,1])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 191348736 unmapped: 31260672 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 191758336 unmapped: 30851072 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 191758336 unmapped: 30851072 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 415 heartbeat osd_stat(store_statfs(0x4f3186000/0x0/0x4ffc00000, data 0x6a2faaf/0x6966000, compress 0x0/0x0/0x0, omap 0x48c18, meta 0x60673e8), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 415 ms_handle_reset con 0x55a68c36e000 session 0x55a68b63d500
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 191758336 unmapped: 30851072 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 191807488 unmapped: 30801920 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 415 ms_handle_reset con 0x55a68f7a8000 session 0x55a68b26c380
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3330906 data_alloc: 251658240 data_used: 39264415
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 191807488 unmapped: 30801920 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 415 ms_handle_reset con 0x55a68a7ff800 session 0x55a689e048c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 191930368 unmapped: 30679040 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 191930368 unmapped: 30679040 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 191930368 unmapped: 30679040 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 415 heartbeat osd_stat(store_statfs(0x4f31e3000/0x0/0x4ffc00000, data 0x6a31aaf/0x6968000, compress 0x0/0x0/0x0, omap 0x48c18, meta 0x60673e8), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 191930368 unmapped: 30679040 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3330901 data_alloc: 251658240 data_used: 39264415
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 191930368 unmapped: 30679040 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 191930368 unmapped: 30679040 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 191930368 unmapped: 30679040 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 415 ms_handle_reset con 0x55a68a63ec00 session 0x55a68f8af880
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.487350464s of 15.981354713s, submitted: 133
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 415 ms_handle_reset con 0x55a68a7d0800 session 0x55a68fcfbc00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 191938560 unmapped: 30670848 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 415 ms_handle_reset con 0x55a68b20dc00 session 0x55a68a800fc0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 191938560 unmapped: 30670848 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 415 heartbeat osd_stat(store_statfs(0x4f31e5000/0x0/0x4ffc00000, data 0x6a31a9f/0x6967000, compress 0x0/0x0/0x0, omap 0x48c18, meta 0x60673e8), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 415 ms_handle_reset con 0x55a68b61e400 session 0x55a68c8df180
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3327810 data_alloc: 251658240 data_used: 39264415
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 415 ms_handle_reset con 0x55a68a63ec00 session 0x55a68a801c00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 191963136 unmapped: 30646272 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 415 ms_handle_reset con 0x55a68a7d0800 session 0x55a6905056c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 191963136 unmapped: 30646272 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 415 handle_osd_map epochs [416,416], i have 415, src has [1,416]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 416 ms_handle_reset con 0x55a68ca93c00 session 0x55a68b189180
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 416 ms_handle_reset con 0x55a68f7a9400 session 0x55a68f8ae700
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 188325888 unmapped: 34283520 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 416 ms_handle_reset con 0x55a68a7ff800 session 0x55a68d0c7500
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 188350464 unmapped: 34258944 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 188366848 unmapped: 34242560 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 416 heartbeat osd_stat(store_statfs(0x4f3299000/0x0/0x4ffc00000, data 0x66726d6/0x68b1000, compress 0x0/0x0/0x0, omap 0x49268, meta 0x6066d98), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 416 ms_handle_reset con 0x55a68b20d800 session 0x55a68f93d500
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3170811 data_alloc: 251658240 data_used: 27360927
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183345152 unmapped: 39264256 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183345152 unmapped: 39264256 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 416 heartbeat osd_stat(store_statfs(0x4f3dd3000/0x0/0x4ffc00000, data 0x5b3a6d6/0x5d79000, compress 0x0/0x0/0x0, omap 0x49268, meta 0x6066d98), peers [0,1] op hist [0,0,0,1])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183345152 unmapped: 39264256 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 416 ms_handle_reset con 0x55a68f7a4000 session 0x55a68ce98c40
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 416 handle_osd_map epochs [417,417], i have 416, src has [1,417]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 417 ms_handle_reset con 0x55a68d447400 session 0x55a68edd7340
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.327075005s of 10.244176865s, submitted: 93
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183353344 unmapped: 39256064 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 417 handle_osd_map epochs [418,418], i have 417, src has [1,418]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 418 ms_handle_reset con 0x55a68cdb8c00 session 0x55a68cedb880
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 418 ms_handle_reset con 0x55a68d325c00 session 0x55a68edd6fc0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183410688 unmapped: 39198720 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3173727 data_alloc: 251658240 data_used: 27356815
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183418880 unmapped: 39190528 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183418880 unmapped: 39190528 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 418 heartbeat osd_stat(store_statfs(0x4f3dcd000/0x0/0x4ffc00000, data 0x5b3dd74/0x5d7a000, compress 0x0/0x0/0x0, omap 0x489a5, meta 0x606765b), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183418880 unmapped: 39190528 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 418 heartbeat osd_stat(store_statfs(0x4f3dcd000/0x0/0x4ffc00000, data 0x5b3dd74/0x5d7a000, compress 0x0/0x0/0x0, omap 0x489a5, meta 0x606765b), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183418880 unmapped: 39190528 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183418880 unmapped: 39190528 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 418 ms_handle_reset con 0x55a68b20d800 session 0x55a68b63ca80
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 418 handle_osd_map epochs [418,419], i have 418, src has [1,419]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3175253 data_alloc: 251658240 data_used: 28409452
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 419 ms_handle_reset con 0x55a68cdb8c00 session 0x55a68b188000
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183877632 unmapped: 38731776 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183877632 unmapped: 38731776 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 419 ms_handle_reset con 0x55a68d325c00 session 0x55a68a3fb880
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 419 heartbeat osd_stat(store_statfs(0x4f3dce000/0x0/0x4ffc00000, data 0x5b3f993/0x5d7e000, compress 0x0/0x0/0x0, omap 0x48b50, meta 0x60674b0), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 419 ms_handle_reset con 0x55a68d447400 session 0x55a690093880
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183885824 unmapped: 38723584 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 419 handle_osd_map epochs [420,420], i have 419, src has [1,420]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183885824 unmapped: 38723584 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 420 ms_handle_reset con 0x55a68f7a4000 session 0x55a6905041c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183885824 unmapped: 38723584 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.234514236s of 11.211992264s, submitted: 88
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 420 ms_handle_reset con 0x55a68b20d800 session 0x55a68ce981c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3181514 data_alloc: 251658240 data_used: 28409724
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 420 ms_handle_reset con 0x55a68cdb8c00 session 0x55a68b533c00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183894016 unmapped: 38715392 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 420 ms_handle_reset con 0x55a68d447400 session 0x55a68a932c40
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 420 heartbeat osd_stat(store_statfs(0x4f3dcb000/0x0/0x4ffc00000, data 0x5b41469/0x5d81000, compress 0x0/0x0/0x0, omap 0x48e6e, meta 0x6067192), peers [0,1] op hist [0,0,0,0,1,0,1])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 420 ms_handle_reset con 0x55a68d325c00 session 0x55a68f8af6c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 420 ms_handle_reset con 0x55a68b61e800 session 0x55a68fcfaa80
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 420 handle_osd_map epochs [421,421], i have 420, src has [1,421]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 184213504 unmapped: 38395904 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 421 ms_handle_reset con 0x55a68c36f400 session 0x55a68f93cfc0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 184221696 unmapped: 38387712 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 421 ms_handle_reset con 0x55a68b20d800 session 0x55a68cedafc0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 421 ms_handle_reset con 0x55a68b61e800 session 0x55a68f8af880
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 184254464 unmapped: 38354944 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 421 heartbeat osd_stat(store_statfs(0x4f3824000/0x0/0x4ffc00000, data 0x60e50b0/0x6326000, compress 0x0/0x0/0x0, omap 0x4919d, meta 0x6066e63), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 184254464 unmapped: 38354944 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 421 heartbeat osd_stat(store_statfs(0x4f3824000/0x0/0x4ffc00000, data 0x60e50b0/0x6326000, compress 0x0/0x0/0x0, omap 0x4919d, meta 0x6066e63), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3223799 data_alloc: 251658240 data_used: 28409724
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 184254464 unmapped: 38354944 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 421 heartbeat osd_stat(store_statfs(0x4f3824000/0x0/0x4ffc00000, data 0x60e50b0/0x6326000, compress 0x0/0x0/0x0, omap 0x4919d, meta 0x6066e63), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 184254464 unmapped: 38354944 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 184254464 unmapped: 38354944 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 421 handle_osd_map epochs [422,422], i have 421, src has [1,422]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 184254464 unmapped: 38354944 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 422 ms_handle_reset con 0x55a68cdb8c00 session 0x55a68a801180
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 422 ms_handle_reset con 0x55a68d325c00 session 0x55a68b183a40
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183320576 unmapped: 39288832 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 422 ms_handle_reset con 0x55a68b61e800 session 0x55a68a800700
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3226573 data_alloc: 251658240 data_used: 28409724
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183320576 unmapped: 39288832 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 422 ms_handle_reset con 0x55a68c36f400 session 0x55a68ce98000
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.464663506s of 11.136178970s, submitted: 71
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 422 ms_handle_reset con 0x55a68cdb8c00 session 0x55a690093180
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183328768 unmapped: 39280640 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 422 heartbeat osd_stat(store_statfs(0x4f3823000/0x0/0x4ffc00000, data 0x60e6b86/0x6329000, compress 0x0/0x0/0x0, omap 0x48d55, meta 0x60672ab), peers [0,1] op hist [1])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183386112 unmapped: 39223296 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 422 heartbeat osd_stat(store_statfs(0x4f3823000/0x0/0x4ffc00000, data 0x60e6b86/0x6329000, compress 0x0/0x0/0x0, omap 0x48d55, meta 0x60672ab), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183386112 unmapped: 39223296 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183386112 unmapped: 39223296 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3262998 data_alloc: 251658240 data_used: 34301836
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183386112 unmapped: 39223296 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.3 total, 600.0 interval#012Cumulative writes: 18K writes, 80K keys, 18K commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s#012Cumulative WAL: 18K writes, 6141 syncs, 3.06 writes per sync, written: 0.05 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 10K writes, 40K keys, 10K commit groups, 1.0 writes per commit group, ingest: 26.74 MB, 0.04 MB/s#012Interval WAL: 10K writes, 4119 syncs, 2.46 writes per sync, written: 0.03 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183386112 unmapped: 39223296 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183386112 unmapped: 39223296 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183386112 unmapped: 39223296 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 422 heartbeat osd_stat(store_statfs(0x4f3823000/0x0/0x4ffc00000, data 0x60e6b86/0x6329000, compress 0x0/0x0/0x0, omap 0x48d55, meta 0x60672ab), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183386112 unmapped: 39223296 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3262998 data_alloc: 251658240 data_used: 34301836
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183386112 unmapped: 39223296 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183386112 unmapped: 39223296 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.106136322s of 11.114269257s, submitted: 6
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 187809792 unmapped: 34799616 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 422 heartbeat osd_stat(store_statfs(0x4f344a000/0x0/0x4ffc00000, data 0x64bfb86/0x6702000, compress 0x0/0x0/0x0, omap 0x48d55, meta 0x60672ab), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 188022784 unmapped: 34586624 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 188022784 unmapped: 34586624 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3309534 data_alloc: 251658240 data_used: 35981196
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 188022784 unmapped: 34586624 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 188022784 unmapped: 34586624 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 188129280 unmapped: 34480128 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 422 heartbeat osd_stat(store_statfs(0x4f3126000/0x0/0x4ffc00000, data 0x67e3b86/0x6a26000, compress 0x0/0x0/0x0, omap 0x48d55, meta 0x60672ab), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 188129280 unmapped: 34480128 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 188129280 unmapped: 34480128 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3309422 data_alloc: 251658240 data_used: 35978124
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 188129280 unmapped: 34480128 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 188129280 unmapped: 34480128 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 188129280 unmapped: 34480128 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.751027107s of 11.030795097s, submitted: 66
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 188153856 unmapped: 34455552 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 422 ms_handle_reset con 0x55a68d447400 session 0x55a68c28c1c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 422 ms_handle_reset con 0x55a68c98d400 session 0x55a689e04e00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 422 heartbeat osd_stat(store_statfs(0x4f3126000/0x0/0x4ffc00000, data 0x67e3b86/0x6a26000, compress 0x0/0x0/0x0, omap 0x48d55, meta 0x60672ab), peers [0,1] op hist [0,0,0,0,1])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 422 ms_handle_reset con 0x55a68b61e800 session 0x55a68c321c00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 188252160 unmapped: 34357248 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3308534 data_alloc: 251658240 data_used: 35982220
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 188252160 unmapped: 34357248 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 188252160 unmapped: 34357248 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 188252160 unmapped: 34357248 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 188252160 unmapped: 34357248 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 422 heartbeat osd_stat(store_statfs(0x4f3126000/0x0/0x4ffc00000, data 0x67e3b86/0x6a26000, compress 0x0/0x0/0x0, omap 0x48641, meta 0x60679bf), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 188252160 unmapped: 34357248 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3308534 data_alloc: 251658240 data_used: 35982220
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 188252160 unmapped: 34357248 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 188252160 unmapped: 34357248 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 187211776 unmapped: 35397632 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 187211776 unmapped: 35397632 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 187211776 unmapped: 35397632 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 422 ms_handle_reset con 0x55a68c36f400 session 0x55a68edd6e00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 422 heartbeat osd_stat(store_statfs(0x4f3126000/0x0/0x4ffc00000, data 0x67e3b86/0x6a26000, compress 0x0/0x0/0x0, omap 0x48641, meta 0x60679bf), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3308246 data_alloc: 251658240 data_used: 35982220
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 187211776 unmapped: 35397632 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 422 ms_handle_reset con 0x55a68cdb8c00 session 0x55a68b63ddc0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 422 ms_handle_reset con 0x55a68d447400 session 0x55a68f93d6c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 187211776 unmapped: 35397632 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 422 ms_handle_reset con 0x55a68d4e7c00 session 0x55a68b598540
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 187211776 unmapped: 35397632 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 187211776 unmapped: 35397632 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 422 ms_handle_reset con 0x55a68b20d800 session 0x55a68edd7500
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.249162674s of 16.714628220s, submitted: 21
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 187588608 unmapped: 35020800 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 422 ms_handle_reset con 0x55a68caa9c00 session 0x55a68d0ec700
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 422 ms_handle_reset con 0x55a68cdb8c00 session 0x55a68ce98000
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3315342 data_alloc: 251658240 data_used: 37722508
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 422 heartbeat osd_stat(store_statfs(0x4f3126000/0x0/0x4ffc00000, data 0x67e3b86/0x6a26000, compress 0x0/0x0/0x0, omap 0x48641, meta 0x60679bf), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 187588608 unmapped: 35020800 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 187588608 unmapped: 35020800 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 187588608 unmapped: 35020800 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 187588608 unmapped: 35020800 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 187588608 unmapped: 35020800 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3315342 data_alloc: 251658240 data_used: 37722508
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 187588608 unmapped: 35020800 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 422 heartbeat osd_stat(store_statfs(0x4f3126000/0x0/0x4ffc00000, data 0x67e3b86/0x6a26000, compress 0x0/0x0/0x0, omap 0x48641, meta 0x60679bf), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 422 ms_handle_reset con 0x55a68d447400 session 0x55a68c28ddc0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 185516032 unmapped: 37093376 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 422 heartbeat osd_stat(store_statfs(0x4f6969000/0x0/0x4ffc00000, data 0x2e65b24/0x30a7000, compress 0x0/0x0/0x0, omap 0x48641, meta 0x60679bf), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 185516032 unmapped: 37093376 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 185516032 unmapped: 37093376 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 422 heartbeat osd_stat(store_statfs(0x4f6969000/0x0/0x4ffc00000, data 0x2e65b24/0x30a7000, compress 0x0/0x0/0x0, omap 0x48641, meta 0x60679bf), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 185516032 unmapped: 37093376 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2968534 data_alloc: 251658240 data_used: 27956620
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 185516032 unmapped: 37093376 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 185516032 unmapped: 37093376 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 185516032 unmapped: 37093376 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 422 ms_handle_reset con 0x55a68b5df400 session 0x55a68cdf2fc0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 422 ms_handle_reset con 0x55a68b20d800 session 0x55a68c28c380
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 185712640 unmapped: 36896768 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 185712640 unmapped: 36896768 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 422 heartbeat osd_stat(store_statfs(0x4f6969000/0x0/0x4ffc00000, data 0x2e65b24/0x30a7000, compress 0x0/0x0/0x0, omap 0x48641, meta 0x60679bf), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2969558 data_alloc: 251658240 data_used: 27956620
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 185712640 unmapped: 36896768 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 185712640 unmapped: 36896768 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 185712640 unmapped: 36896768 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 422 heartbeat osd_stat(store_statfs(0x4f6969000/0x0/0x4ffc00000, data 0x2e65b24/0x30a7000, compress 0x0/0x0/0x0, omap 0x48641, meta 0x60679bf), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.550436020s of 18.770658493s, submitted: 29
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 185712640 unmapped: 36896768 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 185745408 unmapped: 36864000 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2972738 data_alloc: 251658240 data_used: 27960618
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 185835520 unmapped: 36773888 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 422 heartbeat osd_stat(store_statfs(0x4f6aa4000/0x0/0x4ffc00000, data 0x2e65b34/0x30a8000, compress 0x0/0x0/0x0, omap 0x4d1ad, meta 0x6062e53), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 422 ms_handle_reset con 0x55a68b5df400 session 0x55a68cdfe380
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 422 heartbeat osd_stat(store_statfs(0x4f6aa4000/0x0/0x4ffc00000, data 0x2e65b34/0x30a8000, compress 0x0/0x0/0x0, omap 0x4d1ad, meta 0x6062e53), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 185843712 unmapped: 36765696 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 422 heartbeat osd_stat(store_statfs(0x4f6aa4000/0x0/0x4ffc00000, data 0x2e65b34/0x30a8000, compress 0x0/0x0/0x0, omap 0x4d1ad, meta 0x6062e53), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 185843712 unmapped: 36765696 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 185843712 unmapped: 36765696 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 185843712 unmapped: 36765696 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2972371 data_alloc: 251658240 data_used: 27960618
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 185843712 unmapped: 36765696 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 422 heartbeat osd_stat(store_statfs(0x4f6aa4000/0x0/0x4ffc00000, data 0x2e65b34/0x30a8000, compress 0x0/0x0/0x0, omap 0x4d1ad, meta 0x6062e53), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 185843712 unmapped: 36765696 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 185843712 unmapped: 36765696 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 185843712 unmapped: 36765696 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 185843712 unmapped: 36765696 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2972371 data_alloc: 251658240 data_used: 27960618
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 185843712 unmapped: 36765696 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 422 heartbeat osd_stat(store_statfs(0x4f6aa4000/0x0/0x4ffc00000, data 0x2e65b34/0x30a8000, compress 0x0/0x0/0x0, omap 0x4d1ad, meta 0x6062e53), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 185843712 unmapped: 36765696 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.274096489s of 14.163348198s, submitted: 112
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 422 ms_handle_reset con 0x55a68caa9c00 session 0x55a68b533c00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 422 ms_handle_reset con 0x55a68cdb8c00 session 0x55a68ce996c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 190046208 unmapped: 32563200 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 422 ms_handle_reset con 0x55a68d447400 session 0x55a68c320e00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 185851904 unmapped: 36757504 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 422 ms_handle_reset con 0x55a68b20d800 session 0x55a68cdf2e00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 422 handle_osd_map epochs [423,423], i have 422, src has [1,423]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 422 handle_osd_map epochs [422,423], i have 423, src has [1,423]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 423 ms_handle_reset con 0x55a68b5df400 session 0x55a68fcfa1c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 423 heartbeat osd_stat(store_statfs(0x4f4ea4000/0x0/0x4ffc00000, data 0x4a65b34/0x4ca8000, compress 0x0/0x0/0x0, omap 0x4c9d9, meta 0x6063627), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 185860096 unmapped: 36749312 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3130885 data_alloc: 251658240 data_used: 28898618
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 185860096 unmapped: 36749312 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 423 ms_handle_reset con 0x55a68cdb8c00 session 0x55a68b532c40
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 423 ms_handle_reset con 0x55a68caa9c00 session 0x55a68f8af880
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 185860096 unmapped: 36749312 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 185860096 unmapped: 36749312 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 185860096 unmapped: 36749312 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 423 heartbeat osd_stat(store_statfs(0x4f4e9d000/0x0/0x4ffc00000, data 0x4a67799/0x4cad000, compress 0x0/0x0/0x0, omap 0x4c9d9, meta 0x6063627), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 185860096 unmapped: 36749312 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3132637 data_alloc: 251658240 data_used: 28898618
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 185860096 unmapped: 36749312 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 423 heartbeat osd_stat(store_statfs(0x4f4e9d000/0x0/0x4ffc00000, data 0x4a67799/0x4cad000, compress 0x0/0x0/0x0, omap 0x4c9d9, meta 0x6063627), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 185860096 unmapped: 36749312 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 423 ms_handle_reset con 0x55a6916e4000 session 0x55a68c321c00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 185860096 unmapped: 36749312 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 185860096 unmapped: 36749312 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 423 ms_handle_reset con 0x55a68b20d800 session 0x55a68c28ca80
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 185860096 unmapped: 36749312 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 423 heartbeat osd_stat(store_statfs(0x4f4e9d000/0x0/0x4ffc00000, data 0x4a67799/0x4cad000, compress 0x0/0x0/0x0, omap 0x4c9d9, meta 0x6063627), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 423 ms_handle_reset con 0x55a68b5df400 session 0x55a68c28c1c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.609284401s of 13.047460556s, submitted: 29
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3136583 data_alloc: 251658240 data_used: 28898618
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 423 ms_handle_reset con 0x55a68caa9c00 session 0x55a690093180
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 186163200 unmapped: 36446208 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 423 ms_handle_reset con 0x55a68b5dc800 session 0x55a68c28ce00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 186163200 unmapped: 36446208 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 186441728 unmapped: 36167680 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 186449920 unmapped: 36159488 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 187965440 unmapped: 34643968 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3179280 data_alloc: 251658240 data_used: 35906476
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 187965440 unmapped: 34643968 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 423 heartbeat osd_stat(store_statfs(0x4f4e79000/0x0/0x4ffc00000, data 0x4a8b7cc/0x4cd3000, compress 0x0/0x0/0x0, omap 0x4c9d9, meta 0x6063627), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 187965440 unmapped: 34643968 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 187965440 unmapped: 34643968 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 423 heartbeat osd_stat(store_statfs(0x4f4e79000/0x0/0x4ffc00000, data 0x4a8b7cc/0x4cd3000, compress 0x0/0x0/0x0, omap 0x4c9d9, meta 0x6063627), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 187965440 unmapped: 34643968 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 187965440 unmapped: 34643968 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3179280 data_alloc: 251658240 data_used: 35906476
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 423 heartbeat osd_stat(store_statfs(0x4f4e79000/0x0/0x4ffc00000, data 0x4a8b7cc/0x4cd3000, compress 0x0/0x0/0x0, omap 0x4c9d9, meta 0x6063627), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 187965440 unmapped: 34643968 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 187965440 unmapped: 34643968 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 187965440 unmapped: 34643968 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.005826950s of 13.029021263s, submitted: 6
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 189325312 unmapped: 33284096 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 204496896 unmapped: 18112512 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 423 heartbeat osd_stat(store_statfs(0x4f4c2b000/0x0/0x4ffc00000, data 0x4cd97cc/0x4f21000, compress 0x0/0x0/0x0, omap 0x4c9d9, meta 0x6063627), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3216532 data_alloc: 251658240 data_used: 37217708
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 197378048 unmapped: 25231360 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 197378048 unmapped: 25231360 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 197378048 unmapped: 25231360 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 423 heartbeat osd_stat(store_statfs(0x4f49e3000/0x0/0x4ffc00000, data 0x4f217cc/0x5169000, compress 0x0/0x0/0x0, omap 0x4c9d9, meta 0x6063627), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 197378048 unmapped: 25231360 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 197378048 unmapped: 25231360 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3225176 data_alloc: 251658240 data_used: 37361068
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 197378048 unmapped: 25231360 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 423 heartbeat osd_stat(store_statfs(0x4f49e3000/0x0/0x4ffc00000, data 0x4f217cc/0x5169000, compress 0x0/0x0/0x0, omap 0x4c9d9, meta 0x6063627), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 197378048 unmapped: 25231360 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 197378048 unmapped: 25231360 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 197378048 unmapped: 25231360 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.928483009s of 10.753151894s, submitted: 68
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 197410816 unmapped: 25198592 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 423 ms_handle_reset con 0x55a68cdb8c00 session 0x55a68fcfaa80
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 423 ms_handle_reset con 0x55a6916e4000 session 0x55a68cdff500
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 423 ms_handle_reset con 0x55a68b20d800 session 0x55a68c28d180
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3242761 data_alloc: 251658240 data_used: 37236140
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 423 heartbeat osd_stat(store_statfs(0x4f4a03000/0x0/0x4ffc00000, data 0x52087bc/0x5149000, compress 0x0/0x0/0x0, omap 0x4ca6d, meta 0x6063593), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 423 ms_handle_reset con 0x55a68d324800 session 0x55a68d0c6380
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 423 ms_handle_reset con 0x55a68b621000 session 0x55a68d0ec1c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 197410816 unmapped: 25198592 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 423 ms_handle_reset con 0x55a68b5dc800 session 0x55a689e05180
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 197156864 unmapped: 25452544 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 197156864 unmapped: 25452544 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 423 heartbeat osd_stat(store_statfs(0x4f4a03000/0x0/0x4ffc00000, data 0x5208799/0x5148000, compress 0x0/0x0/0x0, omap 0x4ca6d, meta 0x6063593), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 197165056 unmapped: 25444352 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 423 heartbeat osd_stat(store_statfs(0x4f4a03000/0x0/0x4ffc00000, data 0x5208799/0x5148000, compress 0x0/0x0/0x0, omap 0x4ca6d, meta 0x6063593), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 423 ms_handle_reset con 0x55a68b20d800 session 0x55a68b26c8c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 197165056 unmapped: 25444352 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 423 ms_handle_reset con 0x55a68b621000 session 0x55a68a3fb180
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 423 heartbeat osd_stat(store_statfs(0x4f4a03000/0x0/0x4ffc00000, data 0x5208799/0x5148000, compress 0x0/0x0/0x0, omap 0x4ca6d, meta 0x6063593), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3237457 data_alloc: 251658240 data_used: 37117305
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 423 ms_handle_reset con 0x55a68d324800 session 0x55a68b661c00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 197165056 unmapped: 25444352 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 423 handle_osd_map epochs [424,424], i have 423, src has [1,424]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 424 ms_handle_reset con 0x55a6916e4000 session 0x55a68edd6a80
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 197173248 unmapped: 25436160 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 424 ms_handle_reset con 0x55a68b61e800 session 0x55a689e05500
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 424 ms_handle_reset con 0x55a68c36f400 session 0x55a68a823c00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 424 ms_handle_reset con 0x55a68b20d800 session 0x55a68fdf7340
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 197173248 unmapped: 25436160 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 424 heartbeat osd_stat(store_statfs(0x4f4a4c000/0x0/0x4ffc00000, data 0x4eb936e/0x50fe000, compress 0x0/0x0/0x0, omap 0x4cb01, meta 0x60634ff), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 197189632 unmapped: 25419776 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 197189632 unmapped: 25419776 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3224031 data_alloc: 251658240 data_used: 37117207
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 197189632 unmapped: 25419776 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 424 heartbeat osd_stat(store_statfs(0x4f4a4c000/0x0/0x4ffc00000, data 0x4eb936e/0x50fe000, compress 0x0/0x0/0x0, omap 0x4cb01, meta 0x60634ff), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 424 ms_handle_reset con 0x55a68b621000 session 0x55a689e05180
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.088283539s of 12.368220329s, submitted: 28
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 424 ms_handle_reset con 0x55a68d324800 session 0x55a68d0ed500
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 191512576 unmapped: 31096832 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 424 handle_osd_map epochs [425,425], i have 424, src has [1,425]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 425 heartbeat osd_stat(store_statfs(0x4f56e9000/0x0/0x4ffc00000, data 0x421be34/0x4461000, compress 0x0/0x0/0x0, omap 0x4ce0d, meta 0x60631f3), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 191512576 unmapped: 31096832 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 191512576 unmapped: 31096832 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 425 ms_handle_reset con 0x55a6916e4000 session 0x55a68f8ae540
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 191512576 unmapped: 31096832 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 425 ms_handle_reset con 0x55a68b20d800 session 0x55a68fdf6700
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3090517 data_alloc: 251658240 data_used: 27455751
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 191512576 unmapped: 31096832 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 425 ms_handle_reset con 0x55a68b621000 session 0x55a68fdf7500
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 425 ms_handle_reset con 0x55a68c36f400 session 0x55a68b598700
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 191512576 unmapped: 31096832 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 191512576 unmapped: 31096832 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 425 heartbeat osd_stat(store_statfs(0x4f56eb000/0x0/0x4ffc00000, data 0x421be34/0x4461000, compress 0x0/0x0/0x0, omap 0x4c639, meta 0x60639c7), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 191512576 unmapped: 31096832 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 191537152 unmapped: 31072256 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3091742 data_alloc: 251658240 data_used: 27503879
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 191537152 unmapped: 31072256 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 191537152 unmapped: 31072256 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 191537152 unmapped: 31072256 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 191537152 unmapped: 31072256 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 425 heartbeat osd_stat(store_statfs(0x4f56eb000/0x0/0x4ffc00000, data 0x421be34/0x4461000, compress 0x0/0x0/0x0, omap 0x4c639, meta 0x60639c7), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 191537152 unmapped: 31072256 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 425 heartbeat osd_stat(store_statfs(0x4f56eb000/0x0/0x4ffc00000, data 0x421be34/0x4461000, compress 0x0/0x0/0x0, omap 0x4c639, meta 0x60639c7), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3091742 data_alloc: 251658240 data_used: 27503879
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 191537152 unmapped: 31072256 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 191537152 unmapped: 31072256 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 425 heartbeat osd_stat(store_statfs(0x4f56eb000/0x0/0x4ffc00000, data 0x421be34/0x4461000, compress 0x0/0x0/0x0, omap 0x4c639, meta 0x60639c7), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 425 handle_osd_map epochs [426,426], i have 425, src has [1,426]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.151979446s of 16.191749573s, submitted: 24
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 191545344 unmapped: 31064064 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 426 heartbeat osd_stat(store_statfs(0x4f56e6000/0x0/0x4ffc00000, data 0x421da27/0x4464000, compress 0x0/0x0/0x0, omap 0x4c639, meta 0x60639c7), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 426 ms_handle_reset con 0x55a68caa9c00 session 0x55a68c320e00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 191545344 unmapped: 31064064 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 426 ms_handle_reset con 0x55a690ae2000 session 0x55a68d0ec380
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 426 ms_handle_reset con 0x55a690ae2000 session 0x55a68c320fc0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 426 ms_handle_reset con 0x55a68b20d800 session 0x55a68ce981c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 191545344 unmapped: 31064064 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 426 ms_handle_reset con 0x55a68b621000 session 0x55a68b26ddc0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 426 ms_handle_reset con 0x55a68c36f400 session 0x55a68b63c8c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3106723 data_alloc: 251658240 data_used: 28056839
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 426 heartbeat osd_stat(store_statfs(0x4f56e4000/0x0/0x4ffc00000, data 0x421da99/0x4466000, compress 0x0/0x0/0x0, omap 0x4cacb, meta 0x6063535), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 191545344 unmapped: 31064064 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 426 ms_handle_reset con 0x55a68caa9c00 session 0x55a68b63ddc0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 191561728 unmapped: 31047680 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 426 ms_handle_reset con 0x55a68b20d800 session 0x55a68b63dc00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 191569920 unmapped: 31039488 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 426 ms_handle_reset con 0x55a68b621000 session 0x55a68ce988c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 191569920 unmapped: 31039488 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 426 ms_handle_reset con 0x55a68c36f400 session 0x55a68b63c1c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 426 ms_handle_reset con 0x55a690ae2000 session 0x55a68cdf2c40
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 426 heartbeat osd_stat(store_statfs(0x4f56e8000/0x0/0x4ffc00000, data 0x421da27/0x4464000, compress 0x0/0x0/0x0, omap 0x4cd65, meta 0x606329b), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 191586304 unmapped: 31023104 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3104962 data_alloc: 251658240 data_used: 28785927
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 191586304 unmapped: 31023104 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 191586304 unmapped: 31023104 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 426 ms_handle_reset con 0x55a68d34d800 session 0x55a68a933180
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 426 handle_osd_map epochs [427,427], i have 426, src has [1,427]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.281746864s of 10.434130669s, submitted: 57
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 191586304 unmapped: 31023104 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 191586304 unmapped: 31023104 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 191586304 unmapped: 31023104 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3107736 data_alloc: 251658240 data_used: 28785927
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 427 heartbeat osd_stat(store_statfs(0x4f56e5000/0x0/0x4ffc00000, data 0x421f66e/0x4467000, compress 0x0/0x0/0x0, omap 0x4d1f9, meta 0x6062e07), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 191586304 unmapped: 31023104 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 191586304 unmapped: 31023104 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 427 heartbeat osd_stat(store_statfs(0x4f56e5000/0x0/0x4ffc00000, data 0x421f66e/0x4467000, compress 0x0/0x0/0x0, omap 0x4d1f9, meta 0x6062e07), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 191586304 unmapped: 31023104 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 191586304 unmapped: 31023104 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 191586304 unmapped: 31023104 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3107640 data_alloc: 251658240 data_used: 28777735
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 191586304 unmapped: 31023104 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 427 handle_osd_map epochs [427,428], i have 427, src has [1,428]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 191594496 unmapped: 31014912 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 428 heartbeat osd_stat(store_statfs(0x4f56d8000/0x0/0x4ffc00000, data 0x422127d/0x446a000, compress 0x0/0x0/0x0, omap 0x4d1f9, meta 0x6062e07), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 191594496 unmapped: 31014912 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 191594496 unmapped: 31014912 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 191594496 unmapped: 31014912 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3111134 data_alloc: 251658240 data_used: 28777735
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 191594496 unmapped: 31014912 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 191594496 unmapped: 31014912 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 428 heartbeat osd_stat(store_statfs(0x4f56d8000/0x0/0x4ffc00000, data 0x422127d/0x446a000, compress 0x0/0x0/0x0, omap 0x4d1f9, meta 0x6062e07), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 191594496 unmapped: 31014912 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 191594496 unmapped: 31014912 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 191594496 unmapped: 31014912 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3111134 data_alloc: 251658240 data_used: 28777735
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 191594496 unmapped: 31014912 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 428 handle_osd_map epochs [429,429], i have 428, src has [1,429]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.125097275s of 18.320951462s, submitted: 12
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 429 heartbeat osd_stat(store_statfs(0x4f56dd000/0x0/0x4ffc00000, data 0x4222d53/0x446d000, compress 0x0/0x0/0x0, omap 0x4d269, meta 0x6062d97), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 429 ms_handle_reset con 0x55a68d324800 session 0x55a68f93ce00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 429 ms_handle_reset con 0x55a68b5df400 session 0x55a68d0c6380
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 429 ms_handle_reset con 0x55a68b20d800 session 0x55a68c321c00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 191750144 unmapped: 30859264 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 191750144 unmapped: 30859264 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 429 handle_osd_map epochs [430,430], i have 429, src has [1,430]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 429 handle_osd_map epochs [429,430], i have 430, src has [1,430]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 191758336 unmapped: 30851072 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 191758336 unmapped: 30851072 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3120626 data_alloc: 251658240 data_used: 29818119
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 191758336 unmapped: 30851072 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 430 heartbeat osd_stat(store_statfs(0x4f56d8000/0x0/0x4ffc00000, data 0x4224946/0x4470000, compress 0x0/0x0/0x0, omap 0x4cd31, meta 0x60632cf), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 430 handle_osd_map epochs [431,431], i have 430, src has [1,431]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 191840256 unmapped: 30769152 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 431 ms_handle_reset con 0x55a68c36f400 session 0x55a68a933c00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 431 ms_handle_reset con 0x55a68b621000 session 0x55a68b598540
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 431 ms_handle_reset con 0x55a68b20d800 session 0x55a68cdfe380
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183271424 unmapped: 39337984 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 431 ms_handle_reset con 0x55a68b618400 session 0x55a68a800fc0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183271424 unmapped: 39337984 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 431 ms_handle_reset con 0x55a68caa8800 session 0x55a68b26c380
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183271424 unmapped: 39337984 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2905948 data_alloc: 234881024 data_used: 20485879
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 431 heartbeat osd_stat(store_statfs(0x4f7728000/0x0/0x4ffc00000, data 0x21d659b/0x2424000, compress 0x0/0x0/0x0, omap 0x4cd31, meta 0x60632cf), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183271424 unmapped: 39337984 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 431 ms_handle_reset con 0x55a68d320000 session 0x55a68c28c380
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.035780907s of 10.182981491s, submitted: 44
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 431 ms_handle_reset con 0x55a68d323000 session 0x55a68c28c1c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 431 heartbeat osd_stat(store_statfs(0x4f7728000/0x0/0x4ffc00000, data 0x21d659b/0x2424000, compress 0x0/0x0/0x0, omap 0x4cd31, meta 0x60632cf), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183271424 unmapped: 39337984 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 431 ms_handle_reset con 0x55a68b20d800 session 0x55a68fcfbdc0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 431 handle_osd_map epochs [432,432], i have 431, src has [1,432]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183271424 unmapped: 39337984 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 432 ms_handle_reset con 0x55a68b618400 session 0x55a689e048c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183271424 unmapped: 39337984 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 432 heartbeat osd_stat(store_statfs(0x4f7722000/0x0/0x4ffc00000, data 0x21d81f2/0x2428000, compress 0x0/0x0/0x0, omap 0x4cfcd, meta 0x6063033), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 432 handle_osd_map epochs [432,433], i have 432, src has [1,433]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 433 ms_handle_reset con 0x55a68caa8800 session 0x55a68c28c1c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183279616 unmapped: 39329792 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2913824 data_alloc: 234881024 data_used: 20485977
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183279616 unmapped: 39329792 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 433 ms_handle_reset con 0x55a68d320000 session 0x55a690092e00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 433 ms_handle_reset con 0x55a68d324400 session 0x55a68b598540
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183279616 unmapped: 39329792 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 433 heartbeat osd_stat(store_statfs(0x4f7720000/0x0/0x4ffc00000, data 0x21d9e29/0x242a000, compress 0x0/0x0/0x0, omap 0x4cfcd, meta 0x6063033), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183279616 unmapped: 39329792 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 433 heartbeat osd_stat(store_statfs(0x4f7720000/0x0/0x4ffc00000, data 0x21d9e29/0x242a000, compress 0x0/0x0/0x0, omap 0x4cfcd, meta 0x6063033), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183279616 unmapped: 39329792 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 433 handle_osd_map epochs [433,434], i have 433, src has [1,434]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183279616 unmapped: 39329792 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2915978 data_alloc: 234881024 data_used: 20485977
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183279616 unmapped: 39329792 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 434 heartbeat osd_stat(store_statfs(0x4f771d000/0x0/0x4ffc00000, data 0x21dba38/0x242d000, compress 0x0/0x0/0x0, omap 0x4cfcd, meta 0x6063033), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183279616 unmapped: 39329792 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 434 handle_osd_map epochs [435,435], i have 434, src has [1,435]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.298319817s of 11.351326942s, submitted: 25
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183271424 unmapped: 39337984 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 435 handle_osd_map epochs [436,436], i have 435, src has [1,436]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 436 heartbeat osd_stat(store_statfs(0x4f771a000/0x0/0x4ffc00000, data 0x21dd50e/0x2430000, compress 0x0/0x0/0x0, omap 0x4d2d9, meta 0x6062d27), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183271424 unmapped: 39337984 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 436 ms_handle_reset con 0x55a68b20d800 session 0x55a68c8dfdc0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183279616 unmapped: 39329792 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 436 heartbeat osd_stat(store_statfs(0x4f7715000/0x0/0x4ffc00000, data 0x21df101/0x2433000, compress 0x0/0x0/0x0, omap 0x4d2d2, meta 0x6062d2e), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 436 ms_handle_reset con 0x55a68b618400 session 0x55a68b26ddc0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2923524 data_alloc: 234881024 data_used: 20485977
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183279616 unmapped: 39329792 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 436 ms_handle_reset con 0x55a68caa8800 session 0x55a68b598700
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183279616 unmapped: 39329792 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183279616 unmapped: 39329792 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 436 ms_handle_reset con 0x55a68d320000 session 0x55a689e05180
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183279616 unmapped: 39329792 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 436 handle_osd_map epochs [437,437], i have 436, src has [1,437]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 437 ms_handle_reset con 0x55a68d324400 session 0x55a68c321c00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 182558720 unmapped: 40050688 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2925834 data_alloc: 234881024 data_used: 20485977
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 437 ms_handle_reset con 0x55a68b20d800 session 0x55a68d0ec540
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 437 heartbeat osd_stat(store_statfs(0x4f7714000/0x0/0x4ffc00000, data 0x21e0d48/0x2436000, compress 0x0/0x0/0x0, omap 0x4d2d2, meta 0x6062d2e), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 182566912 unmapped: 40042496 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 437 handle_osd_map epochs [438,438], i have 437, src has [1,438]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 438 ms_handle_reset con 0x55a68b618400 session 0x55a68ce98c40
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 438 ms_handle_reset con 0x55a68caa8800 session 0x55a68cdf2e00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 438 ms_handle_reset con 0x55a68d320000 session 0x55a68c321dc0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 182583296 unmapped: 40026112 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 182583296 unmapped: 40026112 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 182583296 unmapped: 40026112 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 182583296 unmapped: 40026112 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 438 handle_osd_map epochs [439,439], i have 438, src has [1,439]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.423294067s of 12.549748421s, submitted: 60
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 438 handle_osd_map epochs [438,439], i have 439, src has [1,439]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2931273 data_alloc: 234881024 data_used: 20485977
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 439 heartbeat osd_stat(store_statfs(0x4f7711000/0x0/0x4ffc00000, data 0x21e298f/0x2439000, compress 0x0/0x0/0x0, omap 0x4d8da, meta 0x6062726), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 439 handle_osd_map epochs [439,440], i have 439, src has [1,440]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 182591488 unmapped: 40017920 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 440 ms_handle_reset con 0x55a68d324400 session 0x55a68ce99880
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183640064 unmapped: 38969344 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 440 ms_handle_reset con 0x55a68b20d800 session 0x55a68f8afc00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 440 heartbeat osd_stat(store_statfs(0x4f7707000/0x0/0x4ffc00000, data 0x21e6101/0x2441000, compress 0x0/0x0/0x0, omap 0x4dbae, meta 0x6062452), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183640064 unmapped: 38969344 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 440 ms_handle_reset con 0x55a68b618400 session 0x55a68a801180
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183640064 unmapped: 38969344 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 440 handle_osd_map epochs [441,441], i have 440, src has [1,441]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 441 ms_handle_reset con 0x55a68caa8800 session 0x55a68f8af180
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183656448 unmapped: 38952960 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 441 ms_handle_reset con 0x55a68d320000 session 0x55a68a801340
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2941330 data_alloc: 234881024 data_used: 20486834
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183656448 unmapped: 38952960 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 441 handle_osd_map epochs [442,442], i have 441, src has [1,442]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183656448 unmapped: 38952960 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 442 handle_osd_map epochs [443,443], i have 442, src has [1,443]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 443 ms_handle_reset con 0x55a68d32c800 session 0x55a68f8ae8c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183443456 unmapped: 39165952 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 443 ms_handle_reset con 0x55a68b20d800 session 0x55a68c8df500
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 443 heartbeat osd_stat(store_statfs(0x4f76fe000/0x0/0x4ffc00000, data 0x21eb449/0x244a000, compress 0x0/0x0/0x0, omap 0x4d1ae, meta 0x6062e52), peers [0,1] op hist [0,0,0,0,1])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 443 ms_handle_reset con 0x55a68b618400 session 0x55a68b598380
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183443456 unmapped: 39165952 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 443 ms_handle_reset con 0x55a68caa8800 session 0x55a68cdf3340
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 443 ms_handle_reset con 0x55a68d320000 session 0x55a68cdff6c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183443456 unmapped: 39165952 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2946437 data_alloc: 234881024 data_used: 20487691
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 443 handle_osd_map epochs [444,444], i have 443, src has [1,444]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 443 handle_osd_map epochs [443,444], i have 444, src has [1,444]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.516530037s of 10.912390709s, submitted: 52
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183459840 unmapped: 39149568 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 444 ms_handle_reset con 0x55a68d320c00 session 0x55a68f8af500
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 444 ms_handle_reset con 0x55a68b20d800 session 0x55a68a3fbc00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183459840 unmapped: 39149568 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183459840 unmapped: 39149568 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183459840 unmapped: 39149568 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 444 heartbeat osd_stat(store_statfs(0x4f7700000/0x0/0x4ffc00000, data 0x21ecebe/0x244c000, compress 0x0/0x0/0x0, omap 0x4d21e, meta 0x6062de2), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183459840 unmapped: 39149568 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 444 ms_handle_reset con 0x55a68b618400 session 0x55a689e05a40
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2949860 data_alloc: 234881024 data_used: 20487963
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183459840 unmapped: 39149568 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183459840 unmapped: 39149568 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 444 handle_osd_map epochs [445,445], i have 444, src has [1,445]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 445 ms_handle_reset con 0x55a68caa8800 session 0x55a68fdf7c00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183459840 unmapped: 39149568 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 445 heartbeat osd_stat(store_statfs(0x4f76fb000/0x0/0x4ffc00000, data 0x21eeab1/0x244f000, compress 0x0/0x0/0x0, omap 0x4d21e, meta 0x6062de2), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183459840 unmapped: 39149568 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 445 ms_handle_reset con 0x55a68d320000 session 0x55a68f93c8c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183459840 unmapped: 39149568 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 445 handle_osd_map epochs [446,446], i have 445, src has [1,446]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2955984 data_alloc: 234881024 data_used: 20487963
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 446 heartbeat osd_stat(store_statfs(0x4f76fb000/0x0/0x4ffc00000, data 0x21eeab1/0x244f000, compress 0x0/0x0/0x0, omap 0x4d21e, meta 0x6062de2), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183459840 unmapped: 39149568 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183459840 unmapped: 39149568 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183459840 unmapped: 39149568 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 446 ms_handle_reset con 0x55a68b5e8800 session 0x55a6900928c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 446 heartbeat osd_stat(store_statfs(0x4f76f8000/0x0/0x4ffc00000, data 0x21f06a4/0x2452000, compress 0x0/0x0/0x0, omap 0x4d21e, meta 0x6062de2), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183459840 unmapped: 39149568 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 446 handle_osd_map epochs [446,447], i have 446, src has [1,447]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.672989845s of 13.712483406s, submitted: 22
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183459840 unmapped: 39149568 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2958758 data_alloc: 234881024 data_used: 20487963
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183459840 unmapped: 39149568 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 447 heartbeat osd_stat(store_statfs(0x4f76f5000/0x0/0x4ffc00000, data 0x21f2297/0x2455000, compress 0x0/0x0/0x0, omap 0x4d4ba, meta 0x6062b46), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183459840 unmapped: 39149568 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 447 ms_handle_reset con 0x55a68b20d800 session 0x55a68b63d180
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 447 handle_osd_map epochs [448,448], i have 447, src has [1,448]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183459840 unmapped: 39149568 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 448 ms_handle_reset con 0x55a68b5e8800 session 0x55a689e041c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 448 ms_handle_reset con 0x55a68b618400 session 0x55a68b660700
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 448 ms_handle_reset con 0x55a68caa8800 session 0x55a689e04e00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 448 ms_handle_reset con 0x55a68d320000 session 0x55a68a823340
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 448 ms_handle_reset con 0x55a68b20d800 session 0x55a68a801c00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183738368 unmapped: 38871040 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 448 ms_handle_reset con 0x55a68b5e8800 session 0x55a68fcfb180
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183738368 unmapped: 38871040 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3000919 data_alloc: 234881024 data_used: 20488548
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 448 heartbeat osd_stat(store_statfs(0x4f70a9000/0x0/0x4ffc00000, data 0x283bf40/0x2aa1000, compress 0x0/0x0/0x0, omap 0x4d4ba, meta 0x6062b46), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 448 handle_osd_map epochs [449,449], i have 448, src has [1,449]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 448 handle_osd_map epochs [449,449], i have 449, src has [1,449]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183672832 unmapped: 38936576 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 449 ms_handle_reset con 0x55a68b618400 session 0x55a68d0ed500
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183672832 unmapped: 38936576 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 449 ms_handle_reset con 0x55a68caa8800 session 0x55a68cdfe000
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183672832 unmapped: 38936576 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 449 heartbeat osd_stat(store_statfs(0x4f70a6000/0x0/0x4ffc00000, data 0x283db87/0x2aa4000, compress 0x0/0x0/0x0, omap 0x4d4ba, meta 0x6062b46), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 449 ms_handle_reset con 0x55a68c36fc00 session 0x55a68b183180
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 449 ms_handle_reset con 0x55a68b20d800 session 0x55a68c8defc0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 184025088 unmapped: 38584320 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 184025088 unmapped: 38584320 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3007845 data_alloc: 234881024 data_used: 20573129
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 449 handle_osd_map epochs [450,450], i have 449, src has [1,450]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.175423622s of 11.398298264s, submitted: 54
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 184025088 unmapped: 38584320 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 450 heartbeat osd_stat(store_statfs(0x4f707d000/0x0/0x4ffc00000, data 0x2867b97/0x2acf000, compress 0x0/0x0/0x0, omap 0x4d4ba, meta 0x6062b46), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 184025088 unmapped: 38584320 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 450 heartbeat osd_stat(store_statfs(0x4f7078000/0x0/0x4ffc00000, data 0x286966d/0x2ad2000, compress 0x0/0x0/0x0, omap 0x4d78e, meta 0x6062872), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 184025088 unmapped: 38584320 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 450 ms_handle_reset con 0x55a68caa8800 session 0x55a68fcfba40
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 184057856 unmapped: 38551552 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 450 handle_osd_map epochs [451,451], i have 450, src has [1,451]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 451 ms_handle_reset con 0x55a68a7c8800 session 0x55a68a8228c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 184074240 unmapped: 38535168 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 451 handle_osd_map epochs [451,452], i have 451, src has [1,452]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 452 ms_handle_reset con 0x55a68a7c9800 session 0x55a689e05dc0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3058256 data_alloc: 234881024 data_used: 26875609
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 452 ms_handle_reset con 0x55a68d329c00 session 0x55a68b63dc00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 184082432 unmapped: 38526976 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 452 ms_handle_reset con 0x55a68a7c8800 session 0x55a68f8ae380
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 452 heartbeat osd_stat(store_statfs(0x4f706d000/0x0/0x4ffc00000, data 0x286d3ba/0x2adb000, compress 0x0/0x0/0x0, omap 0x4d820, meta 0x60627e0), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 184090624 unmapped: 38518784 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 452 handle_osd_map epochs [453,453], i have 452, src has [1,453]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 453 ms_handle_reset con 0x55a68b20d800 session 0x55a68c8df340
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 184098816 unmapped: 38510592 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 453 handle_osd_map epochs [454,454], i have 453, src has [1,454]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 454 ms_handle_reset con 0x55a68a7c9800 session 0x55a68cdfea80
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 454 heartbeat osd_stat(store_statfs(0x4f706c000/0x0/0x4ffc00000, data 0x286efad/0x2ade000, compress 0x0/0x0/0x0, omap 0x4d37a, meta 0x6062c86), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 184098816 unmapped: 38510592 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 454 heartbeat osd_stat(store_statfs(0x4f7066000/0x0/0x4ffc00000, data 0x2870c02/0x2ae2000, compress 0x0/0x0/0x0, omap 0x38f7a, meta 0x6077086), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 454 handle_osd_map epochs [454,455], i have 454, src has [1,455]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 455 ms_handle_reset con 0x55a68b620800 session 0x55a68f93ca80
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 455 ms_handle_reset con 0x55a68caa8800 session 0x55a68b532a80
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 184123392 unmapped: 38486016 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3105143 data_alloc: 234881024 data_used: 26899892
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 187367424 unmapped: 35241984 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.852461815s of 10.308054924s, submitted: 120
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 455 heartbeat osd_stat(store_statfs(0x4f6761000/0x0/0x4ffc00000, data 0x31777e7/0x33e9000, compress 0x0/0x0/0x0, omap 0x3899c, meta 0x6077664), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 455 ms_handle_reset con 0x55a68a7c8800 session 0x55a68b1888c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 188973056 unmapped: 33636352 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 455 handle_osd_map epochs [456,456], i have 455, src has [1,456]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 455 handle_osd_map epochs [455,456], i have 456, src has [1,456]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 456 ms_handle_reset con 0x55a68b20d800 session 0x55a68c320fc0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 188907520 unmapped: 33701888 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 456 ms_handle_reset con 0x55a68b620800 session 0x55a68c321880
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 456 ms_handle_reset con 0x55a68a7c9800 session 0x55a68b26cc40
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 188907520 unmapped: 33701888 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 456 handle_osd_map epochs [457,457], i have 456, src has [1,457]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 457 handle_osd_map epochs [457,458], i have 457, src has [1,458]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 458 ms_handle_reset con 0x55a68ca93000 session 0x55a68c28c540
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 458 ms_handle_reset con 0x55a68d32c000 session 0x55a689e05500
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 189169664 unmapped: 33439744 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3145969 data_alloc: 251658240 data_used: 27110836
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 458 handle_osd_map epochs [458,459], i have 458, src has [1,459]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 189169664 unmapped: 33439744 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 189169664 unmapped: 33439744 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 459 heartbeat osd_stat(store_statfs(0x4f66d2000/0x0/0x4ffc00000, data 0x3201903/0x3478000, compress 0x0/0x0/0x0, omap 0x38c94, meta 0x607736c), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 189169664 unmapped: 33439744 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 459 ms_handle_reset con 0x55a68a7c8800 session 0x55a68cdff500
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 189169664 unmapped: 33439744 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 459 handle_osd_map epochs [460,460], i have 459, src has [1,460]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 460 ms_handle_reset con 0x55a68b20d800 session 0x55a688dd3180
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 460 handle_osd_map epochs [460,461], i have 460, src has [1,461]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 461 ms_handle_reset con 0x55a68a7c9800 session 0x55a68d0ed180
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 189177856 unmapped: 33431552 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3153789 data_alloc: 251658240 data_used: 27112090
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 461 ms_handle_reset con 0x55a68b620800 session 0x55a68b532fc0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 189177856 unmapped: 33431552 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 461 ms_handle_reset con 0x55a68a7c8800 session 0x55a68a800fc0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 461 heartbeat osd_stat(store_statfs(0x4f66aa000/0x0/0x4ffc00000, data 0x3225159/0x349e000, compress 0x0/0x0/0x0, omap 0x39259, meta 0x6076da7), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 189177856 unmapped: 33431552 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 189177856 unmapped: 33431552 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 461 handle_osd_map epochs [462,462], i have 461, src has [1,462]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 461 handle_osd_map epochs [461,462], i have 462, src has [1,462]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.642642021s of 11.877299309s, submitted: 105
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 189177856 unmapped: 33431552 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 189177856 unmapped: 33431552 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3155907 data_alloc: 251658240 data_used: 27112675
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 462 ms_handle_reset con 0x55a68a7c9800 session 0x55a68fdf6380
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 189177856 unmapped: 33431552 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 462 heartbeat osd_stat(store_statfs(0x4f66a0000/0x0/0x4ffc00000, data 0x3230e1e/0x34ac000, compress 0x0/0x0/0x0, omap 0x39259, meta 0x6076da7), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 462 handle_osd_map epochs [463,463], i have 462, src has [1,463]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 189186048 unmapped: 33423360 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 463 ms_handle_reset con 0x55a68b20d800 session 0x55a6905041c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 189194240 unmapped: 33415168 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 463 handle_osd_map epochs [464,464], i have 463, src has [1,464]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 464 ms_handle_reset con 0x55a68b621c00 session 0x55a68b26ca80
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 189472768 unmapped: 33136640 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 464 handle_osd_map epochs [465,465], i have 464, src has [1,465]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 465 ms_handle_reset con 0x55a68b5e8400 session 0x55a68a933c00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 465 ms_handle_reset con 0x55a68d32c000 session 0x55a68c7d76c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 189497344 unmapped: 33112064 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 465 heartbeat osd_stat(store_statfs(0x4f6692000/0x0/0x4ffc00000, data 0x32361c8/0x34b6000, compress 0x0/0x0/0x0, omap 0x4d831, meta 0x60627cf), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3168773 data_alloc: 251658240 data_used: 27112773
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 465 heartbeat osd_stat(store_statfs(0x4f6693000/0x0/0x4ffc00000, data 0x32391c8/0x34b9000, compress 0x0/0x0/0x0, omap 0x4d831, meta 0x60627cf), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 465 handle_osd_map epochs [465,466], i have 465, src has [1,466]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 466 ms_handle_reset con 0x55a68a7c8800 session 0x55a68b63d500
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 466 ms_handle_reset con 0x55a68a7c9800 session 0x55a68b532fc0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 190545920 unmapped: 32063488 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 190545920 unmapped: 32063488 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 466 ms_handle_reset con 0x55a68b20d800 session 0x55a68b532a80
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 466 heartbeat osd_stat(store_statfs(0x4f668e000/0x0/0x4ffc00000, data 0x323adf5/0x34bc000, compress 0x0/0x0/0x0, omap 0x4da35, meta 0x60625cb), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 190545920 unmapped: 32063488 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.506417274s of 10.689734459s, submitted: 91
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 466 ms_handle_reset con 0x55a68ca92800 session 0x55a68c8dfdc0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 466 ms_handle_reset con 0x55a68b621c00 session 0x55a68edd7c00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 466 ms_handle_reset con 0x55a68a7c8800 session 0x55a68cdf3880
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 190676992 unmapped: 31932416 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 466 handle_osd_map epochs [467,467], i have 466, src has [1,467]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 466 handle_osd_map epochs [466,467], i have 467, src has [1,467]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 467 ms_handle_reset con 0x55a68a7c9800 session 0x55a68ce981c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 467 ms_handle_reset con 0x55a68b20d800 session 0x55a690092e00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 467 ms_handle_reset con 0x55a68ca92800 session 0x55a68b26ddc0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 467 ms_handle_reset con 0x55a68b5e8400 session 0x55a68f8ae380
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 190685184 unmapped: 31924224 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3176917 data_alloc: 251658240 data_used: 27114617
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 467 handle_osd_map epochs [468,468], i have 467, src has [1,468]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 468 ms_handle_reset con 0x55a68a7c8800 session 0x55a68ce98c40
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 190717952 unmapped: 31891456 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 468 ms_handle_reset con 0x55a68a7c9800 session 0x55a6905041c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 468 handle_osd_map epochs [469,469], i have 468, src has [1,469]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 190783488 unmapped: 31825920 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 469 ms_handle_reset con 0x55a68b20d800 session 0x55a68fcfbc00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 469 heartbeat osd_stat(store_statfs(0x4f6687000/0x0/0x4ffc00000, data 0x324023c/0x34c3000, compress 0x0/0x0/0x0, omap 0x4e23c, meta 0x6061dc4), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 190783488 unmapped: 31825920 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 469 heartbeat osd_stat(store_statfs(0x4f6687000/0x0/0x4ffc00000, data 0x324023c/0x34c3000, compress 0x0/0x0/0x0, omap 0x4e23c, meta 0x6061dc4), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 469 handle_osd_map epochs [469,470], i have 469, src has [1,470]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 190783488 unmapped: 31825920 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 470 ms_handle_reset con 0x55a68d32c000 session 0x55a68f8af6c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 190783488 unmapped: 31825920 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3183049 data_alloc: 251658240 data_used: 27115312
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 470 ms_handle_reset con 0x55a68b621c00 session 0x55a68cdfe000
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 470 handle_osd_map epochs [470,471], i have 470, src has [1,471]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 471 ms_handle_reset con 0x55a68d32c000 session 0x55a68a3faa80
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 471 ms_handle_reset con 0x55a68a7c8800 session 0x55a68fcfb180
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 190799872 unmapped: 31809536 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 471 ms_handle_reset con 0x55a68b5e8800 session 0x55a68b5328c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 471 ms_handle_reset con 0x55a68b618400 session 0x55a68edd7340
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 471 handle_osd_map epochs [471,472], i have 471, src has [1,472]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 190799872 unmapped: 31809536 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 472 ms_handle_reset con 0x55a68a7c8800 session 0x55a68c8df340
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 472 handle_osd_map epochs [473,473], i have 472, src has [1,473]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 184188928 unmapped: 38420480 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f7680000/0x0/0x4ffc00000, data 0x22481f4/0x24ca000, compress 0x0/0x0/0x0, omap 0x4e66c, meta 0x6061994), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 184188928 unmapped: 38420480 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f76a6000/0x0/0x4ffc00000, data 0x221fe77/0x24a3000, compress 0x0/0x0/0x0, omap 0x4e66c, meta 0x6061994), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 184188928 unmapped: 38420480 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3053258 data_alloc: 234881024 data_used: 20494894
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f76a6000/0x0/0x4ffc00000, data 0x221fe77/0x24a3000, compress 0x0/0x0/0x0, omap 0x4e66c, meta 0x6061994), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 473 handle_osd_map epochs [474,474], i have 473, src has [1,474]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.631689072s of 12.161313057s, submitted: 152
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 184188928 unmapped: 38420480 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 184188928 unmapped: 38420480 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 474 ms_handle_reset con 0x55a68b5e8800 session 0x55a68f93c700
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 184188928 unmapped: 38420480 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 474 ms_handle_reset con 0x55a68b621c00 session 0x55a68c321c00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 474 ms_handle_reset con 0x55a68d32c000 session 0x55a68cdf3180
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 474 heartbeat osd_stat(store_statfs(0x4f76a4000/0x0/0x4ffc00000, data 0x22219bc/0x24a6000, compress 0x0/0x0/0x0, omap 0x4e944, meta 0x60616bc), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 184188928 unmapped: 38420480 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 474 handle_osd_map epochs [474,475], i have 474, src has [1,475]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 475 ms_handle_reset con 0x55a68a7c9800 session 0x55a68d0ecc40
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 184197120 unmapped: 38412288 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3061285 data_alloc: 234881024 data_used: 20498892
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 475 ms_handle_reset con 0x55a68a7c8800 session 0x55a68cdfe540
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 475 handle_osd_map epochs [475,476], i have 475, src has [1,476]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 476 ms_handle_reset con 0x55a68b5e8800 session 0x55a68c8df180
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 184180736 unmapped: 38428672 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 476 ms_handle_reset con 0x55a68b621c00 session 0x55a68a933c00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 184123392 unmapped: 38486016 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 476 ms_handle_reset con 0x55a68d32c000 session 0x55a68f93c1c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 184123392 unmapped: 38486016 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 476 heartbeat osd_stat(store_statfs(0x4f76a1000/0x0/0x4ffc00000, data 0x222528f/0x24ab000, compress 0x0/0x0/0x0, omap 0x4f384, meta 0x6060c7c), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 184123392 unmapped: 38486016 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 476 heartbeat osd_stat(store_statfs(0x4f76a1000/0x0/0x4ffc00000, data 0x222528f/0x24ab000, compress 0x0/0x0/0x0, omap 0x4f384, meta 0x6060c7c), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 184123392 unmapped: 38486016 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3062380 data_alloc: 234881024 data_used: 20500146
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 184123392 unmapped: 38486016 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 476 handle_osd_map epochs [477,477], i have 476, src has [1,477]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 476 handle_osd_map epochs [476,477], i have 477, src has [1,477]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.085775375s of 10.239592552s, submitted: 86
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 477 ms_handle_reset con 0x55a68b20d800 session 0x55a68f8afc00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 477 ms_handle_reset con 0x55a68a7c8800 session 0x55a68f8af880
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 477 ms_handle_reset con 0x55a68b5e8800 session 0x55a68f8ae8c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183525376 unmapped: 39084032 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 477 handle_osd_map epochs [478,478], i have 477, src has [1,478]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 478 ms_handle_reset con 0x55a68b621c00 session 0x55a68cdffc00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183525376 unmapped: 39084032 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 478 heartbeat osd_stat(store_statfs(0x4f769c000/0x0/0x4ffc00000, data 0x2226e9e/0x24ae000, compress 0x0/0x0/0x0, omap 0x4f384, meta 0x6060c7c), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183525376 unmapped: 39084032 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183525376 unmapped: 39084032 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3066600 data_alloc: 234881024 data_used: 19451570
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 478 handle_osd_map epochs [478,479], i have 478, src has [1,479]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183525376 unmapped: 39084032 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 479 ms_handle_reset con 0x55a68d32c000 session 0x55a68f8ae8c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183525376 unmapped: 39084032 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 479 handle_osd_map epochs [480,480], i have 479, src has [1,480]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 480 heartbeat osd_stat(store_statfs(0x4f7696000/0x0/0x4ffc00000, data 0x222a72c/0x24b4000, compress 0x0/0x0/0x0, omap 0x4f692, meta 0x606096e), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183525376 unmapped: 39084032 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183525376 unmapped: 39084032 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183525376 unmapped: 39084032 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3072612 data_alloc: 234881024 data_used: 19451570
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183525376 unmapped: 39084032 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 480 heartbeat osd_stat(store_statfs(0x4f7691000/0x0/0x4ffc00000, data 0x222c373/0x24b7000, compress 0x0/0x0/0x0, omap 0x4f692, meta 0x606096e), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 480 handle_osd_map epochs [481,481], i have 480, src has [1,481]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 480 handle_osd_map epochs [481,481], i have 481, src has [1,481]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.352125168s of 10.416952133s, submitted: 34
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183533568 unmapped: 39075840 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 481 heartbeat osd_stat(store_statfs(0x4f7690000/0x0/0x4ffc00000, data 0x222de65/0x24ba000, compress 0x0/0x0/0x0, omap 0x4f9a0, meta 0x6060660), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 481 handle_osd_map epochs [482,482], i have 481, src has [1,482]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 481 handle_osd_map epochs [482,482], i have 482, src has [1,482]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183533568 unmapped: 39075840 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 482 ms_handle_reset con 0x55a68b5e8400 session 0x55a68b183180
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 482 handle_osd_map epochs [482,483], i have 482, src has [1,483]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183533568 unmapped: 39075840 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183582720 unmapped: 39026688 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3081398 data_alloc: 234881024 data_used: 19451570
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183590912 unmapped: 39018496 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 483 handle_osd_map epochs [484,484], i have 483, src has [1,484]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 484 heartbeat osd_stat(store_statfs(0x4f7687000/0x0/0x4ffc00000, data 0x22331ad/0x24c3000, compress 0x0/0x0/0x0, omap 0x4fcae, meta 0x6060352), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183427072 unmapped: 39182336 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183427072 unmapped: 39182336 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 484 handle_osd_map epochs [485,485], i have 484, src has [1,485]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 485 ms_handle_reset con 0x55a68a7c8800 session 0x55a68cdfe540
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183443456 unmapped: 39165952 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 485 handle_osd_map epochs [485,486], i have 485, src has [1,486]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183443456 unmapped: 39165952 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3089384 data_alloc: 234881024 data_used: 19451842
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 486 ms_handle_reset con 0x55a68b5e8800 session 0x55a68c321c00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183443456 unmapped: 39165952 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 486 handle_osd_map epochs [487,487], i have 486, src has [1,487]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.154710770s of 10.247894287s, submitted: 43
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 487 heartbeat osd_stat(store_statfs(0x4f767f000/0x0/0x4ffc00000, data 0x2236a03/0x24c9000, compress 0x0/0x0/0x0, omap 0x4fcae, meta 0x6060352), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: mgrc ms_handle_reset ms_handle_reset con 0x55a68a7d0000
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2608678704
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2608678704,v1:192.168.122.100:6801/2608678704]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: mgrc handle_mgr_configure stats_period=5
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 187891712 unmapped: 34717696 heap: 222609408 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 487 ms_handle_reset con 0x55a68b621c00 session 0x55a68edd7340
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183713792 unmapped: 43098112 heap: 226811904 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 487 ms_handle_reset con 0x55a68d32c000 session 0x55a68fcfbc00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183713792 unmapped: 43098112 heap: 226811904 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 487 heartbeat osd_stat(store_statfs(0x4f6085000/0x0/0x4ffc00000, data 0x38314f5/0x3ac5000, compress 0x0/0x0/0x0, omap 0x4f7e8, meta 0x6060818), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183713792 unmapped: 43098112 heap: 226811904 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 487 handle_osd_map epochs [487,488], i have 487, src has [1,488]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 488 ms_handle_reset con 0x55a68b5e8c00 session 0x55a6905041c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3216345 data_alloc: 234881024 data_used: 19451842
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183730176 unmapped: 43081728 heap: 226811904 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 488 handle_osd_map epochs [489,489], i have 488, src has [1,489]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183746560 unmapped: 43065344 heap: 226811904 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 489 ms_handle_reset con 0x55a68a7c8800 session 0x55a68b599a40
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 489 handle_osd_map epochs [490,490], i have 489, src has [1,490]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 490 ms_handle_reset con 0x55a68b5e8800 session 0x55a68ce981c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183754752 unmapped: 43057152 heap: 226811904 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 490 ms_handle_reset con 0x55a68b5e8c00 session 0x55a68b660000
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 490 ms_handle_reset con 0x55a68b621c00 session 0x55a68cdfea80
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f607a000/0x0/0x4ffc00000, data 0x3836821/0x3ace000, compress 0x0/0x0/0x0, omap 0x4f5be, meta 0x6060a42), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183754752 unmapped: 43057152 heap: 226811904 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183754752 unmapped: 43057152 heap: 226811904 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 490 ms_handle_reset con 0x55a68d32c000 session 0x55a68edd6540
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3223405 data_alloc: 234881024 data_used: 19452427
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183754752 unmapped: 43057152 heap: 226811904 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 490 handle_osd_map epochs [491,491], i have 490, src has [1,491]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 491 ms_handle_reset con 0x55a68a7c8800 session 0x55a68b63dc00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183754752 unmapped: 43057152 heap: 226811904 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 491 ms_handle_reset con 0x55a68b5e8800 session 0x55a68fcfae00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.995810509s of 10.892910004s, submitted: 83
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 491 heartbeat osd_stat(store_statfs(0x4f6079000/0x0/0x4ffc00000, data 0x38384a2/0x3ad3000, compress 0x0/0x0/0x0, omap 0x4f086, meta 0x6060f7a), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183754752 unmapped: 43057152 heap: 226811904 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 491 handle_osd_map epochs [492,492], i have 491, src has [1,492]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 492 ms_handle_reset con 0x55a68b5e8c00 session 0x55a68cdf2380
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183754752 unmapped: 43057152 heap: 226811904 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 492 ms_handle_reset con 0x55a68b621c00 session 0x55a68c8dfa40
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 492 ms_handle_reset con 0x55a68b19fc00 session 0x55a68c320a80
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 492 ms_handle_reset con 0x55a68a7c8800 session 0x55a68cdf3880
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183910400 unmapped: 42901504 heap: 226811904 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3232070 data_alloc: 234881024 data_used: 19453012
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 492 handle_osd_map epochs [492,493], i have 492, src has [1,493]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183918592 unmapped: 42893312 heap: 226811904 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 493 ms_handle_reset con 0x55a68b5e8c00 session 0x55a68c28c540
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183926784 unmapped: 42885120 heap: 226811904 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 493 handle_osd_map epochs [494,494], i have 493, src has [1,494]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 493 handle_osd_map epochs [493,494], i have 494, src has [1,494]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183934976 unmapped: 42876928 heap: 226811904 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 494 heartbeat osd_stat(store_statfs(0x4f604f000/0x0/0x4ffc00000, data 0x385fb69/0x3afb000, compress 0x0/0x0/0x0, omap 0x4f0f8, meta 0x6060f08), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 494 ms_handle_reset con 0x55a68ca94000 session 0x55a689e048c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183934976 unmapped: 42876928 heap: 226811904 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 183934976 unmapped: 42876928 heap: 226811904 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3244610 data_alloc: 234881024 data_used: 20551764
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 494 handle_osd_map epochs [495,495], i have 494, src has [1,495]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 495 handle_osd_map epochs [495,496], i have 495, src has [1,496]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 185008128 unmapped: 41803776 heap: 226811904 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 185008128 unmapped: 41803776 heap: 226811904 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 496 heartbeat osd_stat(store_statfs(0x4f6044000/0x0/0x4ffc00000, data 0x3864eb1/0x3b04000, compress 0x0/0x0/0x0, omap 0x4f406, meta 0x6060bfa), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 185008128 unmapped: 41803776 heap: 226811904 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.563153267s of 10.653317451s, submitted: 56
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 496 ms_handle_reset con 0x55a68b5e9400 session 0x55a68f93c1c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 185008128 unmapped: 41803776 heap: 226811904 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 496 handle_osd_map epochs [497,497], i have 496, src has [1,497]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 185008128 unmapped: 41803776 heap: 226811904 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 497 ms_handle_reset con 0x55a68cf04000 session 0x55a68f8af180
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3254025 data_alloc: 234881024 data_used: 20551764
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 497 ms_handle_reset con 0x55a68cf04000 session 0x55a68c28d180
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 185057280 unmapped: 41754624 heap: 226811904 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 497 handle_osd_map epochs [498,498], i have 497, src has [1,498]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 498 ms_handle_reset con 0x55a68a7c8800 session 0x55a68b532c40
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 498 heartbeat osd_stat(store_statfs(0x4f6043000/0x0/0x4ffc00000, data 0x3866ac0/0x3b07000, compress 0x0/0x0/0x0, omap 0x4e996, meta 0x606166a), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 185057280 unmapped: 41754624 heap: 226811904 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 189538304 unmapped: 37273600 heap: 226811904 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 498 ms_handle_reset con 0x55a68b5e8c00 session 0x55a689e05500
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 498 ms_handle_reset con 0x55a68b5e9400 session 0x55a68a3faa80
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 192954368 unmapped: 33857536 heap: 226811904 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193404928 unmapped: 33406976 heap: 226811904 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3321707 data_alloc: 234881024 data_used: 21168386
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 498 heartbeat osd_stat(store_statfs(0x4f5862000/0x0/0x4ffc00000, data 0x3c68707/0x3f0a000, compress 0x0/0x0/0x0, omap 0x4e45e, meta 0x6061ba2), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 498 handle_osd_map epochs [499,499], i have 498, src has [1,499]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 498 handle_osd_map epochs [498,499], i have 499, src has [1,499]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 499 heartbeat osd_stat(store_statfs(0x4f55f8000/0x0/0x4ffc00000, data 0x42b2707/0x4554000, compress 0x0/0x0/0x0, omap 0x4e45e, meta 0x6061ba2), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 188497920 unmapped: 38313984 heap: 226811904 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 188497920 unmapped: 38313984 heap: 226811904 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 188497920 unmapped: 38313984 heap: 226811904 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 188497920 unmapped: 38313984 heap: 226811904 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.662978172s of 11.240156174s, submitted: 128
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 499 ms_handle_reset con 0x55a68ca94000 session 0x55a68b188000
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 188497920 unmapped: 38313984 heap: 226811904 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327651 data_alloc: 234881024 data_used: 21168386
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 499 heartbeat osd_stat(store_statfs(0x4f55f3000/0x0/0x4ffc00000, data 0x42b426b/0x4559000, compress 0x0/0x0/0x0, omap 0x4e4d0, meta 0x6061b30), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 499 handle_osd_map epochs [499,500], i have 499, src has [1,500]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 500 ms_handle_reset con 0x55a68ca94000 session 0x55a68b63ca80
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 188497920 unmapped: 38313984 heap: 226811904 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 500 ms_handle_reset con 0x55a68a7c8800 session 0x55a68a822c40
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 188506112 unmapped: 38305792 heap: 226811904 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 500 heartbeat osd_stat(store_statfs(0x4f55ee000/0x0/0x4ffc00000, data 0x42b5e96/0x455c000, compress 0x0/0x0/0x0, omap 0x4e542, meta 0x6061abe), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 500 ms_handle_reset con 0x55a68b5e8800 session 0x55a68a8016c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 500 ms_handle_reset con 0x55a68b621c00 session 0x55a68c321880
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 500 handle_osd_map epochs [501,501], i have 500, src has [1,501]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 501 ms_handle_reset con 0x55a68b5e8c00 session 0x55a68a3fa8c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 187490304 unmapped: 39321600 heap: 226811904 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 501 ms_handle_reset con 0x55a68b5e8c00 session 0x55a689e04fc0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 501 ms_handle_reset con 0x55a68a7c8800 session 0x55a68b26c380
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 501 ms_handle_reset con 0x55a68b5e8800 session 0x55a68b533500
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 187490304 unmapped: 39321600 heap: 226811904 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 187490304 unmapped: 39321600 heap: 226811904 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 501 heartbeat osd_stat(store_statfs(0x4f5611000/0x0/0x4ffc00000, data 0x4293a6b/0x4539000, compress 0x0/0x0/0x0, omap 0x4e7de, meta 0x6061822), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3325606 data_alloc: 234881024 data_used: 21089127
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 187490304 unmapped: 39321600 heap: 226811904 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 501 handle_osd_map epochs [502,502], i have 501, src has [1,502]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 502 ms_handle_reset con 0x55a68b621c00 session 0x55a68b188000
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 502 ms_handle_reset con 0x55a68ca94000 session 0x55a68b63d500
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 187490304 unmapped: 39321600 heap: 226811904 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 502 ms_handle_reset con 0x55a68a7c8800 session 0x55a689e05500
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 502 ms_handle_reset con 0x55a68b5e8800 session 0x55a689e048c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 187490304 unmapped: 39321600 heap: 226811904 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 187490304 unmapped: 39321600 heap: 226811904 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 187490304 unmapped: 39321600 heap: 226811904 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3329378 data_alloc: 234881024 data_used: 21089127
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 502 heartbeat osd_stat(store_statfs(0x4f560d000/0x0/0x4ffc00000, data 0x429567a/0x453c000, compress 0x0/0x0/0x0, omap 0x4e7de, meta 0x6061822), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 502 handle_osd_map epochs [503,503], i have 502, src has [1,503]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 502 handle_osd_map epochs [503,503], i have 503, src has [1,503]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.155028343s of 11.272736549s, submitted: 55
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 187498496 unmapped: 39313408 heap: 226811904 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f560b000/0x0/0x4ffc00000, data 0x42972c1/0x453f000, compress 0x0/0x0/0x0, omap 0x4e7de, meta 0x6061822), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 503 handle_osd_map epochs [504,504], i have 503, src has [1,504]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 187506688 unmapped: 39305216 heap: 226811904 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 504 ms_handle_reset con 0x55a68b5e8c00 session 0x55a68c7d76c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 504 handle_osd_map epochs [504,505], i have 504, src has [1,505]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 187523072 unmapped: 39288832 heap: 226811904 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 187523072 unmapped: 39288832 heap: 226811904 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 187523072 unmapped: 39288832 heap: 226811904 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3346222 data_alloc: 234881024 data_used: 21090325
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 505 ms_handle_reset con 0x55a68b621c00 session 0x55a68fcfae00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 187523072 unmapped: 39288832 heap: 226811904 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 505 handle_osd_map epochs [506,506], i have 505, src has [1,506]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 506 handle_osd_map epochs [507,507], i have 506, src has [1,507]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 188555264 unmapped: 38256640 heap: 226811904 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 507 heartbeat osd_stat(store_statfs(0x4f5602000/0x0/0x4ffc00000, data 0x429c619/0x4548000, compress 0x0/0x0/0x0, omap 0x4eaec, meta 0x6061514), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 507 ms_handle_reset con 0x55a68b5e9400 session 0x55a68edd6540
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 188555264 unmapped: 38256640 heap: 226811904 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 507 ms_handle_reset con 0x55a68a7c8800 session 0x55a68ce981c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 507 handle_osd_map epochs [507,508], i have 507, src has [1,508]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 188620800 unmapped: 38191104 heap: 226811904 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 188620800 unmapped: 38191104 heap: 226811904 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3357945 data_alloc: 234881024 data_used: 21090325
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 188620800 unmapped: 38191104 heap: 226811904 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 188620800 unmapped: 38191104 heap: 226811904 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 508 heartbeat osd_stat(store_statfs(0x4f55f7000/0x0/0x4ffc00000, data 0x429fe92/0x454f000, compress 0x0/0x0/0x0, omap 0x4ed88, meta 0x6061278), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 188620800 unmapped: 38191104 heap: 226811904 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 508 handle_osd_map epochs [509,509], i have 508, src has [1,509]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 508 handle_osd_map epochs [508,509], i have 509, src has [1,509]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.269433975s of 12.749100685s, submitted: 92
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 509 ms_handle_reset con 0x55a68b621c00 session 0x55a68cdfe540
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 189669376 unmapped: 37142528 heap: 226811904 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 509 ms_handle_reset con 0x55a68cf04000 session 0x55a689e04540
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 189669376 unmapped: 37142528 heap: 226811904 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 509 handle_osd_map epochs [509,510], i have 509, src has [1,510]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 510 ms_handle_reset con 0x55a68c372000 session 0x55a68fcfbc00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3362773 data_alloc: 234881024 data_used: 21142549
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 189685760 unmapped: 37126144 heap: 226811904 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 510 heartbeat osd_stat(store_statfs(0x4f55f5000/0x0/0x4ffc00000, data 0x42a36e8/0x4555000, compress 0x0/0x0/0x0, omap 0x4ed88, meta 0x6061278), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 510 handle_osd_map epochs [511,511], i have 510, src has [1,511]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 510 handle_osd_map epochs [511,511], i have 511, src has [1,511]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 189693952 unmapped: 37117952 heap: 226811904 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 189693952 unmapped: 37117952 heap: 226811904 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 189693952 unmapped: 37117952 heap: 226811904 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 189693952 unmapped: 37117952 heap: 226811904 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3366011 data_alloc: 234881024 data_used: 21142549
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 511 heartbeat osd_stat(store_statfs(0x4f55f0000/0x0/0x4ffc00000, data 0x42a51da/0x4558000, compress 0x0/0x0/0x0, omap 0x4f05d, meta 0x6060fa3), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 189693952 unmapped: 37117952 heap: 226811904 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 511 handle_osd_map epochs [512,512], i have 511, src has [1,512]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 512 heartbeat osd_stat(store_statfs(0x4f55eb000/0x0/0x4ffc00000, data 0x42a6cb0/0x455b000, compress 0x0/0x0/0x0, omap 0x4f0cf, meta 0x6060f31), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 189710336 unmapped: 37101568 heap: 226811904 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 201326592 unmapped: 29687808 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.188182831s of 10.259253502s, submitted: 38
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 189890560 unmapped: 41123840 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 189890560 unmapped: 41123840 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3436089 data_alloc: 234881024 data_used: 23952405
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 512 heartbeat osd_stat(store_statfs(0x4f4def000/0x0/0x4ffc00000, data 0x4aa6cb0/0x4d5b000, compress 0x0/0x0/0x0, omap 0x4f0cf, meta 0x6060f31), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 512 heartbeat osd_stat(store_statfs(0x4f4def000/0x0/0x4ffc00000, data 0x4aa6cb0/0x4d5b000, compress 0x0/0x0/0x0, omap 0x4f0cf, meta 0x6060f31), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 189890560 unmapped: 41123840 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 512 heartbeat osd_stat(store_statfs(0x4f4def000/0x0/0x4ffc00000, data 0x4aa6cb0/0x4d5b000, compress 0x0/0x0/0x0, omap 0x4f0cf, meta 0x6060f31), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 189890560 unmapped: 41123840 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 189890560 unmapped: 41123840 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 189890560 unmapped: 41123840 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 189890560 unmapped: 41123840 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3436089 data_alloc: 234881024 data_used: 23952405
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 512 heartbeat osd_stat(store_statfs(0x4f4def000/0x0/0x4ffc00000, data 0x4aa6cb0/0x4d5b000, compress 0x0/0x0/0x0, omap 0x4f0cf, meta 0x6060f31), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 189890560 unmapped: 41123840 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 189898752 unmapped: 41115648 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 189898752 unmapped: 41115648 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 189898752 unmapped: 41115648 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 189898752 unmapped: 41115648 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3436089 data_alloc: 234881024 data_used: 23952405
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 512 heartbeat osd_stat(store_statfs(0x4f4def000/0x0/0x4ffc00000, data 0x4aa6cb0/0x4d5b000, compress 0x0/0x0/0x0, omap 0x4f0cf, meta 0x6060f31), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 189898752 unmapped: 41115648 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 189898752 unmapped: 41115648 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 189898752 unmapped: 41115648 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 189898752 unmapped: 41115648 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 512 heartbeat osd_stat(store_statfs(0x4f4def000/0x0/0x4ffc00000, data 0x4aa6cb0/0x4d5b000, compress 0x0/0x0/0x0, omap 0x4f0cf, meta 0x6060f31), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 189898752 unmapped: 41115648 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3436089 data_alloc: 234881024 data_used: 23952405
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 189898752 unmapped: 41115648 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 189898752 unmapped: 41115648 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.152452469s of 19.352069855s, submitted: 2
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 189923328 unmapped: 41091072 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 189923328 unmapped: 41091072 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 189923328 unmapped: 41091072 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 512 heartbeat osd_stat(store_statfs(0x4f4df1000/0x0/0x4ffc00000, data 0x4aa6cb0/0x4d5b000, compress 0x0/0x0/0x0, omap 0x4f0cf, meta 0x6060f31), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3436297 data_alloc: 234881024 data_used: 23931925
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 189923328 unmapped: 41091072 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 189923328 unmapped: 41091072 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 189923328 unmapped: 41091072 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 189980672 unmapped: 41033728 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 512 ms_handle_reset con 0x55a68b5e8800 session 0x55a68cdffc00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 512 ms_handle_reset con 0x55a68b5e8c00 session 0x55a68a822a80
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 512 ms_handle_reset con 0x55a68a7c8800 session 0x55a689e05a40
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 189988864 unmapped: 41025536 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3435557 data_alloc: 234881024 data_used: 24083477
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 189988864 unmapped: 41025536 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 512 heartbeat osd_stat(store_statfs(0x4f4df1000/0x0/0x4ffc00000, data 0x4aa6c8d/0x4d5a000, compress 0x0/0x0/0x0, omap 0x4f0cf, meta 0x6060f31), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 189988864 unmapped: 41025536 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 189988864 unmapped: 41025536 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.728612900s of 10.772289276s, submitted: 21
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 512 ms_handle_reset con 0x55a68b621c00 session 0x55a6900928c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 189997056 unmapped: 41017344 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 512 ms_handle_reset con 0x55a68c372000 session 0x55a68b26c8c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 512 ms_handle_reset con 0x55a68cf04000 session 0x55a68cdf2000
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 512 ms_handle_reset con 0x55a68a7c8800 session 0x55a68cdfe8c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 512 ms_handle_reset con 0x55a68b5e8c00 session 0x55a68b26d500
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 512 ms_handle_reset con 0x55a68b621c00 session 0x55a68cdfea80
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 512 ms_handle_reset con 0x55a68c372000 session 0x55a68b188700
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 512 ms_handle_reset con 0x55a68c2d2400 session 0x55a690093180
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 512 ms_handle_reset con 0x55a68a7c8800 session 0x55a68b26cc40
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 190521344 unmapped: 40493056 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 512 ms_handle_reset con 0x55a68b5e8c00 session 0x55a68cdff500
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 512 heartbeat osd_stat(store_statfs(0x4f55f2000/0x0/0x4ffc00000, data 0x42a6c8d/0x455a000, compress 0x0/0x0/0x0, omap 0x4f0cf, meta 0x6060f31), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3419791 data_alloc: 234881024 data_used: 22719474
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 190521344 unmapped: 40493056 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 512 ms_handle_reset con 0x55a68b621c00 session 0x55a68b63d180
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 512 heartbeat osd_stat(store_statfs(0x4f5187000/0x0/0x4ffc00000, data 0x470fcac/0x49c5000, compress 0x0/0x0/0x0, omap 0x4f0cf, meta 0x6060f31), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 190529536 unmapped: 40484864 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 190529536 unmapped: 40484864 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 512 ms_handle_reset con 0x55a68c372000 session 0x55a689e041c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 190529536 unmapped: 40484864 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 512 ms_handle_reset con 0x55a68b296400 session 0x55a689e04700
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 190529536 unmapped: 40484864 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3421559 data_alloc: 234881024 data_used: 22719474
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 190529536 unmapped: 40484864 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 512 ms_handle_reset con 0x55a68a7c8800 session 0x55a68a822fc0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 512 ms_handle_reset con 0x55a68b5e8c00 session 0x55a68a9328c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 190545920 unmapped: 40468480 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 512 heartbeat osd_stat(store_statfs(0x4f5186000/0x0/0x4ffc00000, data 0x470fccf/0x49c6000, compress 0x0/0x0/0x0, omap 0x4eb97, meta 0x6061469), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 190750720 unmapped: 40263680 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 512 heartbeat osd_stat(store_statfs(0x4f5186000/0x0/0x4ffc00000, data 0x470fccf/0x49c6000, compress 0x0/0x0/0x0, omap 0x4eb97, meta 0x6061469), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 192413696 unmapped: 38600704 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 512 heartbeat osd_stat(store_statfs(0x4f5186000/0x0/0x4ffc00000, data 0x470fccf/0x49c6000, compress 0x0/0x0/0x0, omap 0x4eb97, meta 0x6061469), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 192413696 unmapped: 38600704 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3449849 data_alloc: 234881024 data_used: 27038210
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 192413696 unmapped: 38600704 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 192413696 unmapped: 38600704 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 192413696 unmapped: 38600704 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 192413696 unmapped: 38600704 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 192413696 unmapped: 38600704 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3449849 data_alloc: 234881024 data_used: 27038210
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 512 heartbeat osd_stat(store_statfs(0x4f5186000/0x0/0x4ffc00000, data 0x470fccf/0x49c6000, compress 0x0/0x0/0x0, omap 0x4eb97, meta 0x6061469), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 192413696 unmapped: 38600704 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 192413696 unmapped: 38600704 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.960254669s of 19.161136627s, submitted: 26
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 195231744 unmapped: 35782656 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 195698688 unmapped: 35315712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 195698688 unmapped: 35315712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3494683 data_alloc: 234881024 data_used: 27076098
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 195698688 unmapped: 35315712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 512 heartbeat osd_stat(store_statfs(0x4f4c1a000/0x0/0x4ffc00000, data 0x4c73ccf/0x4f2a000, compress 0x0/0x0/0x0, omap 0x4eb97, meta 0x6061469), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 195698688 unmapped: 35315712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 195698688 unmapped: 35315712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 195698688 unmapped: 35315712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 195166208 unmapped: 35848192 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3490667 data_alloc: 234881024 data_used: 27076098
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 195166208 unmapped: 35848192 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 512 heartbeat osd_stat(store_statfs(0x4f4c00000/0x0/0x4ffc00000, data 0x4c95ccf/0x4f4c000, compress 0x0/0x0/0x0, omap 0x4eb97, meta 0x6061469), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 195166208 unmapped: 35848192 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.871579170s of 10.280735016s, submitted: 60
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 512 ms_handle_reset con 0x55a68f7a8800 session 0x55a68c321180
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 195166208 unmapped: 35848192 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 195166208 unmapped: 35848192 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 512 heartbeat osd_stat(store_statfs(0x4f4bff000/0x0/0x4ffc00000, data 0x4c95d31/0x4f4d000, compress 0x0/0x0/0x0, omap 0x4eb97, meta 0x6061469), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 195166208 unmapped: 35848192 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 512 heartbeat osd_stat(store_statfs(0x4f4bff000/0x0/0x4ffc00000, data 0x4c95d31/0x4f4d000, compress 0x0/0x0/0x0, omap 0x4eb97, meta 0x6061469), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3492371 data_alloc: 234881024 data_used: 27076098
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 195166208 unmapped: 35848192 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 195166208 unmapped: 35848192 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 195166208 unmapped: 35848192 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 512 heartbeat osd_stat(store_statfs(0x4f4bfa000/0x0/0x4ffc00000, data 0x4c9ad31/0x4f52000, compress 0x0/0x0/0x0, omap 0x4eb97, meta 0x6061469), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 195166208 unmapped: 35848192 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 512 heartbeat osd_stat(store_statfs(0x4f4bfa000/0x0/0x4ffc00000, data 0x4c9ad31/0x4f52000, compress 0x0/0x0/0x0, omap 0x4eb97, meta 0x6061469), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 195166208 unmapped: 35848192 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 512 handle_osd_map epochs [513,513], i have 512, src has [1,513]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3497590 data_alloc: 234881024 data_used: 27084290
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 195174400 unmapped: 35840000 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 195174400 unmapped: 35840000 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 513 handle_osd_map epochs [514,514], i have 513, src has [1,514]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.136790276s of 10.218562126s, submitted: 21
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193224704 unmapped: 37789696 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193224704 unmapped: 37789696 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 514 handle_osd_map epochs [514,515], i have 514, src has [1,515]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193224704 unmapped: 37789696 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 515 heartbeat osd_stat(store_statfs(0x4f3a3f000/0x0/0x4ffc00000, data 0x4cb010a/0x4f6b000, compress 0x0/0x0/0x0, omap 0x4ee33, meta 0x72011cd), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3503738 data_alloc: 234881024 data_used: 27084875
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 515 ms_handle_reset con 0x55a68b61f000 session 0x55a689e04000
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193224704 unmapped: 37789696 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 515 ms_handle_reset con 0x55a68d328000 session 0x55a68fcfbdc0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193224704 unmapped: 37789696 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193224704 unmapped: 37789696 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 515 heartbeat osd_stat(store_statfs(0x4f3a3a000/0x0/0x4ffc00000, data 0x4cb517c/0x4f72000, compress 0x0/0x0/0x0, omap 0x4ee33, meta 0x72011cd), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193224704 unmapped: 37789696 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193224704 unmapped: 37789696 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3506524 data_alloc: 234881024 data_used: 27084875
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 515 heartbeat osd_stat(store_statfs(0x4f3a3a000/0x0/0x4ffc00000, data 0x4cb517c/0x4f72000, compress 0x0/0x0/0x0, omap 0x4ee33, meta 0x72011cd), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193224704 unmapped: 37789696 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 515 heartbeat osd_stat(store_statfs(0x4f3a3a000/0x0/0x4ffc00000, data 0x4cb517c/0x4f72000, compress 0x0/0x0/0x0, omap 0x4ee33, meta 0x72011cd), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 194330624 unmapped: 36683776 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 194330624 unmapped: 36683776 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.101689339s of 10.226959229s, submitted: 28
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 515 ms_handle_reset con 0x55a68a7c8800 session 0x55a68b183180
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 194330624 unmapped: 36683776 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 515 ms_handle_reset con 0x55a68b5e8c00 session 0x55a68d0eda40
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 194338816 unmapped: 36675584 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3510849 data_alloc: 234881024 data_used: 27088971
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 515 heartbeat osd_stat(store_statfs(0x4f3a27000/0x0/0x4ffc00000, data 0x4cc817c/0x4f85000, compress 0x0/0x0/0x0, omap 0x4e65f, meta 0x72019a1), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 194338816 unmapped: 36675584 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 194338816 unmapped: 36675584 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 515 heartbeat osd_stat(store_statfs(0x4f3a27000/0x0/0x4ffc00000, data 0x4cc817c/0x4f85000, compress 0x0/0x0/0x0, omap 0x4e65f, meta 0x72019a1), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 194347008 unmapped: 36667392 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 194347008 unmapped: 36667392 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 194347008 unmapped: 36667392 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3509912 data_alloc: 234881024 data_used: 27088971
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 515 ms_handle_reset con 0x55a68b61f000 session 0x55a68b63d880
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 194355200 unmapped: 36659200 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 515 heartbeat osd_stat(store_statfs(0x4f3a27000/0x0/0x4ffc00000, data 0x4cc817c/0x4f85000, compress 0x0/0x0/0x0, omap 0x4e65f, meta 0x72019a1), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 194355200 unmapped: 36659200 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 515 ms_handle_reset con 0x55a68f7a8800 session 0x55a68cdfe8c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 194355200 unmapped: 36659200 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 194355200 unmapped: 36659200 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 515 heartbeat osd_stat(store_statfs(0x4f3a21000/0x0/0x4ffc00000, data 0x4ccd18c/0x4f8b000, compress 0x0/0x0/0x0, omap 0x4e65f, meta 0x72019a1), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 194355200 unmapped: 36659200 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3511800 data_alloc: 234881024 data_used: 27088971
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 194355200 unmapped: 36659200 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 194355200 unmapped: 36659200 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 515 ms_handle_reset con 0x55a6916e5800 session 0x55a68cdfe1c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 194355200 unmapped: 36659200 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.674974442s of 15.721634865s, submitted: 17
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 194355200 unmapped: 36659200 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 515 ms_handle_reset con 0x55a6916e5800 session 0x55a688dd3180
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 515 heartbeat osd_stat(store_statfs(0x4f3a1c000/0x0/0x4ffc00000, data 0x4cd218c/0x4f90000, compress 0x0/0x0/0x0, omap 0x4e65f, meta 0x72019a1), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 515 ms_handle_reset con 0x55a68a7c8800 session 0x55a68c8dfa40
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 194355200 unmapped: 36659200 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3511140 data_alloc: 234881024 data_used: 27088971
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 515 ms_handle_reset con 0x55a68b5e8c00 session 0x55a68f8af180
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 194371584 unmapped: 36642816 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 515 ms_handle_reset con 0x55a68b61f000 session 0x55a68a3fa8c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 194379776 unmapped: 36634624 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 515 ms_handle_reset con 0x55a68f7a8800 session 0x55a68b63d340
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 515 ms_handle_reset con 0x55a68a7c8800 session 0x55a68c8df500
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 194379776 unmapped: 36634624 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 515 ms_handle_reset con 0x55a68b5e8c00 session 0x55a68b63cc40
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 194387968 unmapped: 36626432 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 515 heartbeat osd_stat(store_statfs(0x4f3a1e000/0x0/0x4ffc00000, data 0x4cc610a/0x4f81000, compress 0x0/0x0/0x0, omap 0x4de8b, meta 0x7202175), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 515 handle_osd_map epochs [516,516], i have 515, src has [1,516]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 516 ms_handle_reset con 0x55a68b61f000 session 0x55a6900928c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 516 ms_handle_reset con 0x55a6916e5800 session 0x55a68fcfae00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 194387968 unmapped: 36626432 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 516 handle_osd_map epochs [517,517], i have 516, src has [1,517]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 517 ms_handle_reset con 0x55a68b5de000 session 0x55a68edd7c00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3515172 data_alloc: 234881024 data_used: 27088873
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 194412544 unmapped: 36601856 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 517 ms_handle_reset con 0x55a68a7c8800 session 0x55a68cdff500
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 194420736 unmapped: 36593664 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 517 handle_osd_map epochs [518,518], i have 517, src has [1,518]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 518 ms_handle_reset con 0x55a68b5e8c00 session 0x55a68c321c00
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 195477504 unmapped: 35536896 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.786722183s of 10.023719788s, submitted: 81
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 518 ms_handle_reset con 0x55a68b61f000 session 0x55a689e05880
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 195502080 unmapped: 35512320 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 518 ms_handle_reset con 0x55a68b621c00 session 0x55a689e05500
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 518 ms_handle_reset con 0x55a68c372000 session 0x55a68f93c8c0
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 195502080 unmapped: 35512320 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 518 heartbeat osd_stat(store_statfs(0x4f3a22000/0x0/0x4ffc00000, data 0x4cc957d/0x4f87000, compress 0x0/0x0/0x0, omap 0x4e127, meta 0x7201ed9), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3516292 data_alloc: 234881024 data_used: 27089486
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 518 ms_handle_reset con 0x55a68a7c8800 session 0x55a68b660000
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 518 handle_osd_map epochs [519,519], i have 518, src has [1,519]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 519 heartbeat osd_stat(store_statfs(0x4f4439000/0x0/0x4ffc00000, data 0x42b303c/0x4570000, compress 0x0/0x0/0x0, omap 0x4dc28, meta 0x72023d8), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3426960 data_alloc: 234881024 data_used: 22728668
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 519 handle_osd_map epochs [519,520], i have 519, src has [1,520]
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 520 heartbeat osd_stat(store_statfs(0x4f4437000/0x0/0x4ffc00000, data 0x42b4b12/0x4573000, compress 0x0/0x0/0x0, omap 0x4dc9a, meta 0x7202366), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 520 heartbeat osd_stat(store_statfs(0x4f4437000/0x0/0x4ffc00000, data 0x42b4b12/0x4573000, compress 0x0/0x0/0x0, omap 0x4dc9a, meta 0x7202366), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3429670 data_alloc: 234881024 data_used: 22732729
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 520 heartbeat osd_stat(store_statfs(0x4f4437000/0x0/0x4ffc00000, data 0x42b4b12/0x4573000, compress 0x0/0x0/0x0, omap 0x4dc9a, meta 0x7202366), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3429670 data_alloc: 234881024 data_used: 22732729
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 520 heartbeat osd_stat(store_statfs(0x4f4437000/0x0/0x4ffc00000, data 0x42b4b12/0x4573000, compress 0x0/0x0/0x0, omap 0x4dc9a, meta 0x7202366), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3429670 data_alloc: 234881024 data_used: 22732729
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 520 heartbeat osd_stat(store_statfs(0x4f4437000/0x0/0x4ffc00000, data 0x42b4b12/0x4573000, compress 0x0/0x0/0x0, omap 0x4dc9a, meta 0x7202366), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3429670 data_alloc: 234881024 data_used: 22732729
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 520 heartbeat osd_stat(store_statfs(0x4f4437000/0x0/0x4ffc00000, data 0x42b4b12/0x4573000, compress 0x0/0x0/0x0, omap 0x4dc9a, meta 0x7202366), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3429670 data_alloc: 234881024 data_used: 22732729
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 520 heartbeat osd_stat(store_statfs(0x4f4437000/0x0/0x4ffc00000, data 0x42b4b12/0x4573000, compress 0x0/0x0/0x0, omap 0x4dc9a, meta 0x7202366), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 520 heartbeat osd_stat(store_statfs(0x4f4437000/0x0/0x4ffc00000, data 0x42b4b12/0x4573000, compress 0x0/0x0/0x0, omap 0x4dc9a, meta 0x7202366), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 520 heartbeat osd_stat(store_statfs(0x4f4437000/0x0/0x4ffc00000, data 0x42b4b12/0x4573000, compress 0x0/0x0/0x0, omap 0x4dc9a, meta 0x7202366), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3429670 data_alloc: 234881024 data_used: 22732729
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 520 heartbeat osd_stat(store_statfs(0x4f4437000/0x0/0x4ffc00000, data 0x42b4b12/0x4573000, compress 0x0/0x0/0x0, omap 0x4dc9a, meta 0x7202366), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3429670 data_alloc: 234881024 data_used: 22732729
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 520 heartbeat osd_stat(store_statfs(0x4f4437000/0x0/0x4ffc00000, data 0x42b4b12/0x4573000, compress 0x0/0x0/0x0, omap 0x4dc9a, meta 0x7202366), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3429670 data_alloc: 234881024 data_used: 22732729
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 520 heartbeat osd_stat(store_statfs(0x4f4437000/0x0/0x4ffc00000, data 0x42b4b12/0x4573000, compress 0x0/0x0/0x0, omap 0x4dc9a, meta 0x7202366), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3429670 data_alloc: 234881024 data_used: 22732729
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 520 heartbeat osd_stat(store_statfs(0x4f4437000/0x0/0x4ffc00000, data 0x42b4b12/0x4573000, compress 0x0/0x0/0x0, omap 0x4dc9a, meta 0x7202366), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 520 heartbeat osd_stat(store_statfs(0x4f4437000/0x0/0x4ffc00000, data 0x42b4b12/0x4573000, compress 0x0/0x0/0x0, omap 0x4dc9a, meta 0x7202366), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3429670 data_alloc: 234881024 data_used: 22732729
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 520 heartbeat osd_stat(store_statfs(0x4f4437000/0x0/0x4ffc00000, data 0x42b4b12/0x4573000, compress 0x0/0x0/0x0, omap 0x4dc9a, meta 0x7202366), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3429670 data_alloc: 234881024 data_used: 22732729
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 520 heartbeat osd_stat(store_statfs(0x4f4437000/0x0/0x4ffc00000, data 0x42b4b12/0x4573000, compress 0x0/0x0/0x0, omap 0x4dc9a, meta 0x7202366), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 520 heartbeat osd_stat(store_statfs(0x4f4437000/0x0/0x4ffc00000, data 0x42b4b12/0x4573000, compress 0x0/0x0/0x0, omap 0x4dc9a, meta 0x7202366), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3429670 data_alloc: 234881024 data_used: 22732729
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3429670 data_alloc: 234881024 data_used: 22732729
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 520 heartbeat osd_stat(store_statfs(0x4f4437000/0x0/0x4ffc00000, data 0x42b4b12/0x4573000, compress 0x0/0x0/0x0, omap 0x4dc9a, meta 0x7202366), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 520 heartbeat osd_stat(store_statfs(0x4f4437000/0x0/0x4ffc00000, data 0x42b4b12/0x4573000, compress 0x0/0x0/0x0, omap 0x4dc9a, meta 0x7202366), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3429670 data_alloc: 234881024 data_used: 22732729
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 520 heartbeat osd_stat(store_statfs(0x4f4437000/0x0/0x4ffc00000, data 0x42b4b12/0x4573000, compress 0x0/0x0/0x0, omap 0x4dc9a, meta 0x7202366), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 520 heartbeat osd_stat(store_statfs(0x4f4437000/0x0/0x4ffc00000, data 0x42b4b12/0x4573000, compress 0x0/0x0/0x0, omap 0x4dc9a, meta 0x7202366), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3429670 data_alloc: 234881024 data_used: 22732729
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 520 heartbeat osd_stat(store_statfs(0x4f4437000/0x0/0x4ffc00000, data 0x42b4b12/0x4573000, compress 0x0/0x0/0x0, omap 0x4dc9a, meta 0x7202366), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 520 heartbeat osd_stat(store_statfs(0x4f4437000/0x0/0x4ffc00000, data 0x42b4b12/0x4573000, compress 0x0/0x0/0x0, omap 0x4dc9a, meta 0x7202366), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 520 heartbeat osd_stat(store_statfs(0x4f4437000/0x0/0x4ffc00000, data 0x42b4b12/0x4573000, compress 0x0/0x0/0x0, omap 0x4dc9a, meta 0x7202366), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 520 heartbeat osd_stat(store_statfs(0x4f4437000/0x0/0x4ffc00000, data 0x42b4b12/0x4573000, compress 0x0/0x0/0x0, omap 0x4dc9a, meta 0x7202366), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 520 heartbeat osd_stat(store_statfs(0x4f4437000/0x0/0x4ffc00000, data 0x42b4b12/0x4573000, compress 0x0/0x0/0x0, omap 0x4dc9a, meta 0x7202366), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3429670 data_alloc: 234881024 data_used: 22732729
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 520 heartbeat osd_stat(store_statfs(0x4f4437000/0x0/0x4ffc00000, data 0x42b4b12/0x4573000, compress 0x0/0x0/0x0, omap 0x4dc9a, meta 0x7202366), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 520 heartbeat osd_stat(store_statfs(0x4f4437000/0x0/0x4ffc00000, data 0x42b4b12/0x4573000, compress 0x0/0x0/0x0, omap 0x4dc9a, meta 0x7202366), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3429670 data_alloc: 234881024 data_used: 22732729
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 520 heartbeat osd_stat(store_statfs(0x4f4437000/0x0/0x4ffc00000, data 0x42b4b12/0x4573000, compress 0x0/0x0/0x0, omap 0x4dc9a, meta 0x7202366), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3429670 data_alloc: 234881024 data_used: 22732729
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 520 heartbeat osd_stat(store_statfs(0x4f4437000/0x0/0x4ffc00000, data 0x42b4b12/0x4573000, compress 0x0/0x0/0x0, omap 0x4dc9a, meta 0x7202366), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 520 heartbeat osd_stat(store_statfs(0x4f4437000/0x0/0x4ffc00000, data 0x42b4b12/0x4573000, compress 0x0/0x0/0x0, omap 0x4dc9a, meta 0x7202366), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3429670 data_alloc: 234881024 data_used: 22732729
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 520 heartbeat osd_stat(store_statfs(0x4f4437000/0x0/0x4ffc00000, data 0x42b4b12/0x4573000, compress 0x0/0x0/0x0, omap 0x4dc9a, meta 0x7202366), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 520 heartbeat osd_stat(store_statfs(0x4f4437000/0x0/0x4ffc00000, data 0x42b4b12/0x4573000, compress 0x0/0x0/0x0, omap 0x4dc9a, meta 0x7202366), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193650688 unmapped: 37363712 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 520 heartbeat osd_stat(store_statfs(0x4f4437000/0x0/0x4ffc00000, data 0x42b4b12/0x4573000, compress 0x0/0x0/0x0, omap 0x4dc9a, meta 0x7202366), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Jan 29 12:40:26 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/444326749' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193658880 unmapped: 37355520 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: osd.2 520 heartbeat osd_stat(store_statfs(0x4f4437000/0x0/0x4ffc00000, data 0x42b4b12/0x4573000, compress 0x0/0x0/0x0, omap 0x4dc9a, meta 0x7202366), peers [0,1] op hist [])
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3429670 data_alloc: 234881024 data_used: 22732729
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193699840 unmapped: 37314560 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: do_command 'config diff' '{prefix=config diff}'
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: do_command 'config show' '{prefix=config show}'
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: do_command 'counter dump' '{prefix=counter dump}'
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 193601536 unmapped: 37412864 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: do_command 'counter schema' '{prefix=counter schema}'
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: prioritycache tune_memory target: 4294967296 mapped: 194084864 unmapped: 36929536 heap: 231014400 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:26 np0005601226 ceph-osd[87958]: do_command 'log dump' '{prefix=log dump}'
Jan 29 12:40:26 np0005601226 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.19188 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 12:40:26 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2034: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 29 12:40:26 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Jan 29 12:40:26 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2610240955' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 29 12:40:26 np0005601226 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.19192 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 12:40:26 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.gtpysq", "name": "rgw_frontends"} v 0)
Jan 29 12:40:26 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.gtpysq", "name": "rgw_frontends"} : dispatch
Jan 29 12:40:26 np0005601226 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 29 12:40:27 np0005601226 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.19196 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 29 12:40:27 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Jan 29 12:40:27 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/479069047' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 29 12:40:27 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.gtpysq", "name": "rgw_frontends"} v 0)
Jan 29 12:40:27 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/202914837' entity='mgr.compute-0.zvopdr' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.gtpysq", "name": "rgw_frontends"} : dispatch
Jan 29 12:40:27 np0005601226 ceph-osd[85858]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 29 12:40:27 np0005601226 ceph-osd[85858]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 3000.1 total, 600.0 interval
    Cumulative writes: 31K writes, 118K keys, 31K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.03 MB/s
    Cumulative WAL: 31K writes, 11K syncs, 2.75 writes per sync, written: 0.08 GB, 0.03 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 6645 writes, 19K keys, 6645 commit groups, 1.0 writes per commit group, ingest: 18.75 MB, 0.03 MB/s
    Interval WAL: 6645 writes, 2906 syncs, 2.29 writes per sync, written: 0.02 GB, 0.03 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 29 12:40:27 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader).osd e520 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 29 12:40:27 np0005601226 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.19198 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 12:40:27 np0005601226 nova_compute[239456]: 2026-01-29 17:40:27.499 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:40:27 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Jan 29 12:40:27 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3197699673' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 29 12:40:27 np0005601226 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.19202 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 29 12:40:28 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 29 12:40:28 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2248549494' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 29 12:40:28 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2035: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 29 12:40:28 np0005601226 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.19206 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 29 12:40:28 np0005601226 nova_compute[239456]: 2026-01-29 17:40:28.584 239460 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 29 12:40:28 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Jan 29 12:40:28 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3926240276' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Jan 29 12:40:28 np0005601226 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.19210 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 29 12:40:29 np0005601226 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.19214 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 29 12:40:29 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0)
Jan 29 12:40:29 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1527197502' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Jan 29 12:40:29 np0005601226 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.19216 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 29 12:40:30 np0005601226 ceph-mgr[75527]: log_channel(audit) log [DBG] : from='client.19220 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 29 12:40:30 np0005601226 ceph-mgr[75527]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 29 12:40:30 np0005601226 ceph-cc5c72e3-31e0-58b9-8731-456117d38f4a-mgr-compute-0-zvopdr[75523]: 2026-01-29T17:40:30.017+0000 7fa574c41640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1642398 data_alloc: 234881024 data_used: 10657108
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 188 ms_handle_reset con 0x55f5152ec000 session 0x55f5143c9340
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 115605504 unmapped: 16826368 heap: 132431872 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 188 ms_handle_reset con 0x55f5152ec400 session 0x55f51226e000
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 188 heartbeat osd_stat(store_statfs(0x4f978e000/0x0/0x4ffc00000, data 0x25e49e7/0x26fe000, compress 0x0/0x0/0x0, omap 0x27d4f, meta 0x3d482b1), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 115638272 unmapped: 16793600 heap: 132431872 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 188 heartbeat osd_stat(store_statfs(0x4f978e000/0x0/0x4ffc00000, data 0x25e49e7/0x26fe000, compress 0x0/0x0/0x0, omap 0x27d4f, meta 0x3d482b1), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 115351552 unmapped: 17080320 heap: 132431872 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 188 ms_handle_reset con 0x55f51530e000 session 0x55f5130cd180
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 116465664 unmapped: 15966208 heap: 132431872 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 116621312 unmapped: 15810560 heap: 132431872 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1647481 data_alloc: 234881024 data_used: 10767700
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 120446976 unmapped: 11984896 heap: 132431872 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 188 heartbeat osd_stat(store_statfs(0x4f9768000/0x0/0x4ffc00000, data 0x26099f7/0x2724000, compress 0x0/0x0/0x0, omap 0x27ed5, meta 0x3d4812b), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 120479744 unmapped: 11952128 heap: 132431872 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 188 ms_handle_reset con 0x55f5146acc00 session 0x55f5122dae00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.548659325s of 12.827566147s, submitted: 44
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 188 ms_handle_reset con 0x55f5146ad000 session 0x55f512360a80
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 120479744 unmapped: 11952128 heap: 132431872 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 117825536 unmapped: 14606336 heap: 132431872 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 188 ms_handle_reset con 0x55f512203c00 session 0x55f514d4f340
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 188 heartbeat osd_stat(store_statfs(0x4f9cc9000/0x0/0x4ffc00000, data 0x20aa9d7/0x21c3000, compress 0x0/0x0/0x0, omap 0x283cf, meta 0x3d47c31), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 117768192 unmapped: 14663680 heap: 132431872 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1612738 data_alloc: 234881024 data_used: 10657092
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 117768192 unmapped: 14663680 heap: 132431872 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 117768192 unmapped: 14663680 heap: 132431872 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 188 heartbeat osd_stat(store_statfs(0x4f9cc9000/0x0/0x4ffc00000, data 0x20aa9d7/0x21c3000, compress 0x0/0x0/0x0, omap 0x283cf, meta 0x3d47c31), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 117350400 unmapped: 15081472 heap: 132431872 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 188 handle_osd_map epochs [188,189], i have 188, src has [1,189]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 188 handle_osd_map epochs [189,189], i have 189, src has [1,189]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 15343616 heap: 132431872 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 189 ms_handle_reset con 0x55f5152ec400 session 0x55f5130ccc40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 15343616 heap: 132431872 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 189 handle_osd_map epochs [189,190], i have 189, src has [1,190]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1621589 data_alloc: 234881024 data_used: 10669380
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 190 ms_handle_reset con 0x55f51530e000 session 0x55f5122dbc00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 190 ms_handle_reset con 0x55f5146ad800 session 0x55f5153456c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 190 ms_handle_reset con 0x55f512203c00 session 0x55f5122dafc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 190 ms_handle_reset con 0x55f5146ad000 session 0x55f51318fc00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 190 heartbeat osd_stat(store_statfs(0x4f9cba000/0x0/0x4ffc00000, data 0x20b3281/0x21d0000, compress 0x0/0x0/0x0, omap 0x28d1c, meta 0x3d472e4), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 15319040 heap: 132431872 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 190 ms_handle_reset con 0x55f5152ec400 session 0x55f51318f340
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 190 handle_osd_map epochs [191,191], i have 190, src has [1,191]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 191 heartbeat osd_stat(store_statfs(0x4f9bd8000/0x0/0x4ffc00000, data 0x218fead/0x22b0000, compress 0x0/0x0/0x0, omap 0x28ece, meta 0x3d47132), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 191 ms_handle_reset con 0x55f51530e000 session 0x55f5130cd880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 191 ms_handle_reset con 0x55f5146ac000 session 0x55f515345c00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 191 ms_handle_reset con 0x55f5146ac000 session 0x55f5153448c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 191 ms_handle_reset con 0x55f512203c00 session 0x55f512279dc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 191 ms_handle_reset con 0x55f5146ad000 session 0x55f5143c9a40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 191 ms_handle_reset con 0x55f5152ec400 session 0x55f511f54700
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 117309440 unmapped: 15122432 heap: 132431872 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 191 ms_handle_reset con 0x55f51530e000 session 0x55f511f55180
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 117325824 unmapped: 15106048 heap: 132431872 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 191 handle_osd_map epochs [192,192], i have 191, src has [1,192]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.288534164s of 10.830360413s, submitted: 105
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 192 ms_handle_reset con 0x55f512203c00 session 0x55f512361c00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 117325824 unmapped: 15106048 heap: 132431872 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 192 ms_handle_reset con 0x55f5146ac000 session 0x55f5130e7880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 192 ms_handle_reset con 0x55f5146ad000 session 0x55f511e94540
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 192 heartbeat osd_stat(store_statfs(0x4f974e000/0x0/0x4ffc00000, data 0x261bacb/0x273c000, compress 0x0/0x0/0x0, omap 0x295cd, meta 0x3d46a33), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 192 ms_handle_reset con 0x55f5146ad400 session 0x55f511f548c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 116645888 unmapped: 15785984 heap: 132431872 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1666466 data_alloc: 234881024 data_used: 10669380
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 192 handle_osd_map epochs [192,193], i have 192, src has [1,193]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 192 handle_osd_map epochs [193,193], i have 193, src has [1,193]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 193 ms_handle_reset con 0x55f5152ec400 session 0x55f511f55880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 193 ms_handle_reset con 0x55f5146ac800 session 0x55f5122db340
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 193 ms_handle_reset con 0x55f512203c00 session 0x55f5130cd340
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 116670464 unmapped: 15761408 heap: 132431872 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 193 ms_handle_reset con 0x55f5146ac000 session 0x55f514d4fdc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 193 handle_osd_map epochs [194,194], i have 193, src has [1,194]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 194 ms_handle_reset con 0x55f5146ad000 session 0x55f51318e540
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 116039680 unmapped: 16392192 heap: 132431872 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 118669312 unmapped: 13762560 heap: 132431872 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 194 ms_handle_reset con 0x55f5152ec000 session 0x55f513135a40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 194 ms_handle_reset con 0x55f512202c00 session 0x55f512236540
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 194 ms_handle_reset con 0x55f512259000 session 0x55f514ed5dc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 194 ms_handle_reset con 0x55f5146ad400 session 0x55f512c64380
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 194 ms_handle_reset con 0x55f5146adc00 session 0x55f5131341c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 118382592 unmapped: 14049280 heap: 132431872 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 194 handle_osd_map epochs [195,195], i have 194, src has [1,195]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 195 heartbeat osd_stat(store_statfs(0x4f9746000/0x0/0x4ffc00000, data 0x2620df9/0x2744000, compress 0x0/0x0/0x0, omap 0x2a52d, meta 0x3d45ad3), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 113459200 unmapped: 18972672 heap: 132431872 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1544217 data_alloc: 218103808 data_used: 5604875
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 195 heartbeat osd_stat(store_statfs(0x4f9746000/0x0/0x4ffc00000, data 0x2620df9/0x2744000, compress 0x0/0x0/0x0, omap 0x2a52d, meta 0x3d45ad3), peers [0,2] op hist [0,0,0,2,0,0,1])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 195 ms_handle_reset con 0x55f512203c00 session 0x55f511f54380
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 195 ms_handle_reset con 0x55f5146ac000 session 0x55f51087bc00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 108331008 unmapped: 24100864 heap: 132431872 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 108331008 unmapped: 24100864 heap: 132431872 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 108331008 unmapped: 24100864 heap: 132431872 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 108331008 unmapped: 24100864 heap: 132431872 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 195 ms_handle_reset con 0x55f512202c00 session 0x55f51318e700
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 195 handle_osd_map epochs [196,196], i have 195, src has [1,196]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.758213043s of 11.237121582s, submitted: 164
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 108331008 unmapped: 24100864 heap: 132431872 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481880 data_alloc: 218103808 data_used: 42197
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 196 ms_handle_reset con 0x55f5146ad400 session 0x55f511e8a380
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 196 ms_handle_reset con 0x55f5146adc00 session 0x55f5122da8c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 108363776 unmapped: 24068096 heap: 132431872 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 196 heartbeat osd_stat(store_statfs(0x4faf19000/0x0/0x4ffc00000, data 0xe4f80e/0xf71000, compress 0x0/0x0/0x0, omap 0x2b114, meta 0x3d44eec), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 108371968 unmapped: 32456704 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 196 handle_osd_map epochs [196,197], i have 196, src has [1,197]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 197 ms_handle_reset con 0x55f5146ac800 session 0x55f514a988c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 108380160 unmapped: 32448512 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 197 heartbeat osd_stat(store_statfs(0x4fa3ba000/0x0/0x4ffc00000, data 0x19ad408/0x1ad0000, compress 0x0/0x0/0x0, omap 0x2b647, meta 0x3d449b9), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 108388352 unmapped: 32440320 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 197 ms_handle_reset con 0x55f512202c00 session 0x55f5152c56c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 197 handle_osd_map epochs [198,198], i have 197, src has [1,198]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 198 ms_handle_reset con 0x55f5146ac000 session 0x55f515353a40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 117833728 unmapped: 22994944 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1831836 data_alloc: 218103808 data_used: 29909
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 198 handle_osd_map epochs [198,199], i have 198, src has [1,199]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 199 ms_handle_reset con 0x55f5146ad400 session 0x55f511f45340
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 109469696 unmapped: 31358976 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 109469696 unmapped: 31358976 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 109494272 unmapped: 31334400 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 109494272 unmapped: 31334400 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 199 heartbeat osd_stat(store_statfs(0x4f43b6000/0x0/0x4ffc00000, data 0x79b0c66/0x7ad6000, compress 0x0/0x0/0x0, omap 0x2bc6f, meta 0x3d44391), peers [0,2] op hist [0,0,0,0,0,0,1])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 199 handle_osd_map epochs [200,200], i have 199, src has [1,200]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.483727455s of 10.013979912s, submitted: 144
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 117891072 unmapped: 22937600 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2169576 data_alloc: 218103808 data_used: 30522
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 31326208 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 31326208 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 31326208 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 109510656 unmapped: 31318016 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 200 heartbeat osd_stat(store_statfs(0x4f03b3000/0x0/0x4ffc00000, data 0xb9b274c/0xbad9000, compress 0x0/0x0/0x0, omap 0x2be1d, meta 0x3d441e3), peers [0,2] op hist [0,0,0,0,1])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 109527040 unmapped: 31301632 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2419344 data_alloc: 218103808 data_used: 30794
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 109568000 unmapped: 31260672 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 200 heartbeat osd_stat(store_statfs(0x4ee3b3000/0x0/0x4ffc00000, data 0xd9b274c/0xdad9000, compress 0x0/0x0/0x0, omap 0x2be1d, meta 0x3d441e3), peers [0,2] op hist [0,0,0,0,0,0,0,1,8])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 200 ms_handle_reset con 0x55f5146adc00 session 0x55f515033180
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 200 ms_handle_reset con 0x55f5146ad000 session 0x55f511f44e00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 109772800 unmapped: 31055872 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 200 ms_handle_reset con 0x55f512202c00 session 0x55f512237880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 200 ms_handle_reset con 0x55f5146ac000 session 0x55f512c648c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 200 ms_handle_reset con 0x55f5146ad400 session 0x55f5130cc380
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 200 heartbeat osd_stat(store_statfs(0x4ee3b3000/0x0/0x4ffc00000, data 0xd9b274c/0xdad9000, compress 0x0/0x0/0x0, omap 0x2be1d, meta 0x3d441e3), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 117309440 unmapped: 23519232 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 117334016 unmapped: 23494656 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.797119141s of 10.048560143s, submitted: 84
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 109019136 unmapped: 31809536 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2947251 data_alloc: 218103808 data_used: 30794
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 200 heartbeat osd_stat(store_statfs(0x4e982e000/0x0/0x4ffc00000, data 0x1253774c/0x1265e000, compress 0x0/0x0/0x0, omap 0x2c121, meta 0x3d43edf), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 109092864 unmapped: 31735808 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 200 ms_handle_reset con 0x55f5146adc00 session 0x55f512360700
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 200 ms_handle_reset con 0x55f514e3b000 session 0x55f511f601c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 108920832 unmapped: 31907840 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 200 ms_handle_reset con 0x55f512202c00 session 0x55f5123601c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 109035520 unmapped: 31793152 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 200 ms_handle_reset con 0x55f5146ac000 session 0x55f5123608c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 117555200 unmapped: 23273472 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 200 heartbeat osd_stat(store_statfs(0x4e402d000/0x0/0x4ffc00000, data 0x17d3776f/0x17e5f000, compress 0x0/0x0/0x0, omap 0x2c121, meta 0x3d43edf), peers [0,2] op hist [0,1])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 117833728 unmapped: 22994944 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511052 data_alloc: 218103808 data_used: 3548234
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 109740032 unmapped: 31088640 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 200 handle_osd_map epochs [200,201], i have 200, src has [1,201]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 200 handle_osd_map epochs [201,201], i have 201, src has [1,201]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 201 ms_handle_reset con 0x55f512259000 session 0x55f511f45880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 109756416 unmapped: 31072256 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 109764608 unmapped: 31064064 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 201 handle_osd_map epochs [202,202], i have 201, src has [1,202]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 202 heartbeat osd_stat(store_statfs(0x4e1028000/0x0/0x4ffc00000, data 0x1ad39362/0x1ae62000, compress 0x0/0x0/0x0, omap 0x2c82b, meta 0x3d437d5), peers [0,2] op hist [0,0,0,1])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 202 ms_handle_reset con 0x55f5146adc00 session 0x55f512278e00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 109731840 unmapped: 31096832 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 202 ms_handle_reset con 0x55f515859c00 session 0x55f5130e7340
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 202 ms_handle_reset con 0x55f512202c00 session 0x55f512360e00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 202 heartbeat osd_stat(store_statfs(0x4fb828000/0x0/0x4ffc00000, data 0x53af67/0x664000, compress 0x0/0x0/0x0, omap 0x2cd32, meta 0x3d432ce), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 202 heartbeat osd_stat(store_statfs(0x4fb828000/0x0/0x4ffc00000, data 0x53af67/0x664000, compress 0x0/0x0/0x0, omap 0x2cd32, meta 0x3d432ce), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 109723648 unmapped: 31105024 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1546255 data_alloc: 218103808 data_used: 3552264
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 109723648 unmapped: 31105024 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 109723648 unmapped: 31105024 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 202 heartbeat osd_stat(store_statfs(0x4fb828000/0x0/0x4ffc00000, data 0x53af67/0x664000, compress 0x0/0x0/0x0, omap 0x2cd32, meta 0x3d432ce), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 109723648 unmapped: 31105024 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 109723648 unmapped: 31105024 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 202 handle_osd_map epochs [202,203], i have 202, src has [1,203]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.586087227s of 14.680438042s, submitted: 189
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 203 heartbeat osd_stat(store_statfs(0x4fb823000/0x0/0x4ffc00000, data 0x53ca3d/0x667000, compress 0x0/0x0/0x0, omap 0x2d09c, meta 0x3d42f64), peers [0,2] op hist [0,0,0,0,0,0,0,7])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 117604352 unmapped: 23224320 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1654377 data_alloc: 218103808 data_used: 4773896
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 117989376 unmapped: 22839296 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 118235136 unmapped: 22593536 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 118235136 unmapped: 22593536 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 118235136 unmapped: 22593536 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 203 heartbeat osd_stat(store_statfs(0x4fa61e000/0x0/0x4ffc00000, data 0x1743a3d/0x186e000, compress 0x0/0x0/0x0, omap 0x2d09c, meta 0x3d42f64), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 118235136 unmapped: 22593536 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1668483 data_alloc: 218103808 data_used: 5068808
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 203 heartbeat osd_stat(store_statfs(0x4fa61e000/0x0/0x4ffc00000, data 0x1743a3d/0x186e000, compress 0x0/0x0/0x0, omap 0x2d09c, meta 0x3d42f64), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 118235136 unmapped: 22593536 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 118235136 unmapped: 22593536 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 23748608 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 23748608 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 203 ms_handle_reset con 0x55f512259000 session 0x55f512279c00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 23617536 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1665097 data_alloc: 218103808 data_used: 5068808
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 23617536 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 203 heartbeat osd_stat(store_statfs(0x4fa61b000/0x0/0x4ffc00000, data 0x1746a3d/0x1871000, compress 0x0/0x0/0x0, omap 0x2d2f0, meta 0x3d42d10), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 23617536 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 23617536 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 203 heartbeat osd_stat(store_statfs(0x4fa61b000/0x0/0x4ffc00000, data 0x1746a3d/0x1871000, compress 0x0/0x0/0x0, omap 0x2d2f0, meta 0x3d42d10), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 23617536 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 203 ms_handle_reset con 0x55f5146ac000 session 0x55f514351500
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 203 ms_handle_reset con 0x55f5146adc00 session 0x55f512333dc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 203 ms_handle_reset con 0x55f515620400 session 0x55f515353dc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 203 ms_handle_reset con 0x55f512202c00 session 0x55f511e95880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.154392242s of 15.438932419s, submitted: 148
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 117612544 unmapped: 23216128 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1700327 data_alloc: 218103808 data_used: 5068808
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 117309440 unmapped: 23519232 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 203 ms_handle_reset con 0x55f512259000 session 0x55f514a99180
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 203 ms_handle_reset con 0x55f5146ac000 session 0x55f512278380
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 203 ms_handle_reset con 0x55f5146adc00 session 0x55f511e94c40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 203 ms_handle_reset con 0x55f5122b7400 session 0x55f511e94700
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 203 ms_handle_reset con 0x55f512202c00 session 0x55f512278fc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 117309440 unmapped: 23519232 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 117309440 unmapped: 23519232 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 117309440 unmapped: 23519232 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 203 heartbeat osd_stat(store_statfs(0x4fa2c0000/0x0/0x4ffc00000, data 0x1aa0a4d/0x1bcc000, compress 0x0/0x0/0x0, omap 0x2d2f0, meta 0x3d42d10), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 117309440 unmapped: 23519232 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1693527 data_alloc: 218103808 data_used: 5068824
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 117309440 unmapped: 23519232 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 203 heartbeat osd_stat(store_statfs(0x4fa2c0000/0x0/0x4ffc00000, data 0x1aa0a4d/0x1bcc000, compress 0x0/0x0/0x0, omap 0x2d2f0, meta 0x3d42d10), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 203 ms_handle_reset con 0x55f512259000 session 0x55f51487ce00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 117317632 unmapped: 23511040 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 117317632 unmapped: 23511040 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 203 heartbeat osd_stat(store_statfs(0x4fa2bf000/0x0/0x4ffc00000, data 0x1aa0a70/0x1bcd000, compress 0x0/0x0/0x0, omap 0x2d2f0, meta 0x3d42d10), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 118882304 unmapped: 21946368 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 203 heartbeat osd_stat(store_statfs(0x4fa2bf000/0x0/0x4ffc00000, data 0x1aa0a70/0x1bcd000, compress 0x0/0x0/0x0, omap 0x2d2f0, meta 0x3d42d10), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 118882304 unmapped: 21946368 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1714672 data_alloc: 218103808 data_used: 8380952
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 118882304 unmapped: 21946368 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 118882304 unmapped: 21946368 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 118882304 unmapped: 21946368 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 203 handle_osd_map epochs [204,204], i have 203, src has [1,204]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.998425484s of 13.556021690s, submitted: 13
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 204 ms_handle_reset con 0x55f5152edc00 session 0x55f511f60700
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 118439936 unmapped: 22388736 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 204 heartbeat osd_stat(store_statfs(0x4fa2ba000/0x0/0x4ffc00000, data 0x1aa2663/0x1bd0000, compress 0x0/0x0/0x0, omap 0x2d613, meta 0x3d429ed), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 204 ms_handle_reset con 0x55f512c3c000 session 0x55f51226e8c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 204 handle_osd_map epochs [205,205], i have 204, src has [1,205]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 205 ms_handle_reset con 0x55f5152ed800 session 0x55f511f60380
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 118571008 unmapped: 22257664 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1726336 data_alloc: 218103808 data_used: 8385048
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 205 ms_handle_reset con 0x55f512202c00 session 0x55f511f44380
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 205 handle_osd_map epochs [206,206], i have 205, src has [1,206]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 206 ms_handle_reset con 0x55f512c3c000 session 0x55f515352c40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 206 ms_handle_reset con 0x55f512259000 session 0x55f511e94000
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 118587392 unmapped: 22241280 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 206 ms_handle_reset con 0x55f5152edc00 session 0x55f5123328c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 206 ms_handle_reset con 0x55f512c3c800 session 0x55f5130e7880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 118751232 unmapped: 22077440 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 206 handle_osd_map epochs [207,207], i have 206, src has [1,207]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 207 ms_handle_reset con 0x55f512202c00 session 0x55f5130e6a80
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 207 ms_handle_reset con 0x55f512259000 session 0x55f511f456c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 118784000 unmapped: 22044672 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 124878848 unmapped: 15949824 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 207 ms_handle_reset con 0x55f512c3c000 session 0x55f513134380
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 207 heartbeat osd_stat(store_statfs(0x4f9e8a000/0x0/0x4ffc00000, data 0x1ecbaf4/0x1ffd000, compress 0x0/0x0/0x0, omap 0x2e36d, meta 0x3d41c93), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 121061376 unmapped: 19767296 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1767874 data_alloc: 218103808 data_used: 8590433
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 207 ms_handle_reset con 0x55f512c3cc00 session 0x55f511eaa700
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 207 heartbeat osd_stat(store_statfs(0x4f9be6000/0x0/0x4ffc00000, data 0x2174af4/0x22a6000, compress 0x0/0x0/0x0, omap 0x2e36d, meta 0x3d41c93), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 207 handle_osd_map epochs [208,208], i have 207, src has [1,208]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 122175488 unmapped: 18653184 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 208 ms_handle_reset con 0x55f5152edc00 session 0x55f512c1ae00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 122183680 unmapped: 18644992 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 208 heartbeat osd_stat(store_statfs(0x4f9be1000/0x0/0x4ffc00000, data 0x217673b/0x22a9000, compress 0x0/0x0/0x0, omap 0x2e738, meta 0x3d418c8), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 208 ms_handle_reset con 0x55f512c3d400 session 0x55f5130cdc00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 208 ms_handle_reset con 0x55f5146adc00 session 0x55f515345340
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 122388480 unmapped: 18440192 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.395555019s of 10.211887360s, submitted: 159
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 208 heartbeat osd_stat(store_statfs(0x4f9b60000/0x0/0x4ffc00000, data 0x21f773b/0x232a000, compress 0x0/0x0/0x0, omap 0x2e816, meta 0x3d417ea), peers [0,2] op hist [0,0,0,0,1])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 208 ms_handle_reset con 0x55f5146ad400 session 0x55f5130cc1c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 122388480 unmapped: 18440192 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 208 handle_osd_map epochs [209,209], i have 208, src has [1,209]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 117342208 unmapped: 23486464 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1629545 data_alloc: 218103808 data_used: 3863649
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 209 ms_handle_reset con 0x55f512202c00 session 0x55f51318f340
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 117415936 unmapped: 23412736 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 117415936 unmapped: 23412736 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 117415936 unmapped: 23412736 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.2 total, 600.0 interval#012Cumulative writes: 11K writes, 51K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s#012Cumulative WAL: 11K writes, 3273 syncs, 3.64 writes per sync, written: 0.03 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4895 writes, 21K keys, 4895 commit groups, 1.0 writes per commit group, ingest: 11.75 MB, 0.02 MB/s#012Interval WAL: 4895 writes, 1988 syncs, 2.46 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 117424128 unmapped: 23404544 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 209 heartbeat osd_stat(store_statfs(0x4fb0ce000/0x0/0x4ffc00000, data 0xc8b22a/0xdbe000, compress 0x0/0x0/0x0, omap 0x2f09e, meta 0x3d40f62), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 209 handle_osd_map epochs [210,210], i have 209, src has [1,210]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 209 handle_osd_map epochs [210,210], i have 210, src has [1,210]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 209 handle_osd_map epochs [210,210], i have 210, src has [1,210]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 210 ms_handle_reset con 0x55f512259000 session 0x55f5142c0380
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 118063104 unmapped: 22765568 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1632323 data_alloc: 218103808 data_used: 3867710
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 118063104 unmapped: 22765568 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 210 handle_osd_map epochs [210,211], i have 210, src has [1,211]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 118063104 unmapped: 22765568 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 211 ms_handle_reset con 0x55f512259000 session 0x55f5130cd6c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 118063104 unmapped: 22765568 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.622654915s of 10.005058289s, submitted: 80
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 211 ms_handle_reset con 0x55f512202c00 session 0x55f514a99a40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 211 heartbeat osd_stat(store_statfs(0x4fb0c4000/0x0/0x4ffc00000, data 0xc8e8f3/0xdc4000, compress 0x0/0x0/0x0, omap 0x2f91b, meta 0x3d406e5), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 118071296 unmapped: 22757376 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 211 ms_handle_reset con 0x55f5171f9c00 session 0x55f5147181c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 211 ms_handle_reset con 0x55f5171f8800 session 0x55f5142c16c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 211 ms_handle_reset con 0x55f512c3d400 session 0x55f5123336c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 211 ms_handle_reset con 0x55f511a05c00 session 0x55f51087a000
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 211 ms_handle_reset con 0x55f512c3d400 session 0x55f5130e6700
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 211 ms_handle_reset con 0x55f5171f9800 session 0x55f511f61a40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 211 ms_handle_reset con 0x55f512259000 session 0x55f5130e6c40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 118128640 unmapped: 22700032 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1638523 data_alloc: 218103808 data_used: 3867710
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 211 ms_handle_reset con 0x55f5171f8800 session 0x55f512279dc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 211 ms_handle_reset con 0x55f5124c2c00 session 0x55f51087afc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 211 ms_handle_reset con 0x55f5124c2400 session 0x55f5150336c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 118104064 unmapped: 22724608 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 211 ms_handle_reset con 0x55f5171f9800 session 0x55f5130cda40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 211 heartbeat osd_stat(store_statfs(0x4fb0c8000/0x0/0x4ffc00000, data 0xc8e8f3/0xdc4000, compress 0x0/0x0/0x0, omap 0x2fab4, meta 0x3d4054c), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 118104064 unmapped: 22724608 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 211 handle_osd_map epochs [211,212], i have 211, src has [1,212]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 212 ms_handle_reset con 0x55f5122b6c00 session 0x55f511f44700
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 212 ms_handle_reset con 0x55f5124c2800 session 0x55f515344540
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 212 ms_handle_reset con 0x55f5171f9c00 session 0x55f51318f6c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 118161408 unmapped: 22667264 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 118161408 unmapped: 22667264 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 212 ms_handle_reset con 0x55f514ab1800 session 0x55f512c1b180
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 118169600 unmapped: 22659072 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1642243 data_alloc: 218103808 data_used: 3868295
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 212 heartbeat osd_stat(store_statfs(0x4fb0c1000/0x0/0x4ffc00000, data 0xc9359c/0xdcb000, compress 0x0/0x0/0x0, omap 0x2ff33, meta 0x3d400cd), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 118169600 unmapped: 22659072 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 118177792 unmapped: 22650880 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 212 handle_osd_map epochs [212,213], i have 212, src has [1,213]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 213 ms_handle_reset con 0x55f5122b6c00 session 0x55f512279880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 118177792 unmapped: 22650880 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 213 heartbeat osd_stat(store_statfs(0x4fb0bc000/0x0/0x4ffc00000, data 0xc951ab/0xdce000, compress 0x0/0x0/0x0, omap 0x30290, meta 0x3d3fd70), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.509253502s of 10.401103020s, submitted: 93
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 213 ms_handle_reset con 0x55f5171f9800 session 0x55f513135180
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 213 ms_handle_reset con 0x55f5124c2800 session 0x55f515344fc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 118218752 unmapped: 22609920 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 213 handle_osd_map epochs [214,214], i have 213, src has [1,214]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 214 ms_handle_reset con 0x55f514ab1000 session 0x55f51318fc00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 118226944 unmapped: 22601728 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1650171 data_alloc: 218103808 data_used: 3869006
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 214 ms_handle_reset con 0x55f515621400 session 0x55f511f54540
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 214 ms_handle_reset con 0x55f5124c2800 session 0x55f515345880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 214 heartbeat osd_stat(store_statfs(0x4fb0b9000/0x0/0x4ffc00000, data 0xc96c81/0xdd1000, compress 0x0/0x0/0x0, omap 0x3075d, meta 0x3d3f8a3), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 118030336 unmapped: 22798336 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 214 handle_osd_map epochs [215,215], i have 214, src has [1,215]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 215 ms_handle_reset con 0x55f514ab1000 session 0x55f5130cc700
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 215 ms_handle_reset con 0x55f5171f9800 session 0x55f51087b180
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 118046720 unmapped: 22781952 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 215 heartbeat osd_stat(store_statfs(0x4fb0b2000/0x0/0x4ffc00000, data 0xc98938/0xdd6000, compress 0x0/0x0/0x0, omap 0x30f05, meta 0x3d3f0fb), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 215 handle_osd_map epochs [216,216], i have 215, src has [1,216]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 216 ms_handle_reset con 0x55f5142c4c00 session 0x55f51318f880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 216 ms_handle_reset con 0x55f5142c5000 session 0x55f5142c0a80
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 216 ms_handle_reset con 0x55f5142c5000 session 0x55f5130e6c40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 216 ms_handle_reset con 0x55f5122b6c00 session 0x55f511f448c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 216 ms_handle_reset con 0x55f5124c2800 session 0x55f51318e700
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 118087680 unmapped: 22740992 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 216 ms_handle_reset con 0x55f5142c4c00 session 0x55f511e8b6c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 118087680 unmapped: 22740992 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 216 ms_handle_reset con 0x55f5171f9800 session 0x55f5142c0380
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 216 ms_handle_reset con 0x55f5122b6c00 session 0x55f5130cdc00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 216 handle_osd_map epochs [216,217], i have 216, src has [1,217]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 217 ms_handle_reset con 0x55f514ab1000 session 0x55f5142c0c40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 217 ms_handle_reset con 0x55f5124c2800 session 0x55f5130e7dc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 118120448 unmapped: 22708224 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1662556 data_alloc: 218103808 data_used: 3869493
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 217 ms_handle_reset con 0x55f5171f9c00 session 0x55f51087bc00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 118136832 unmapped: 22691840 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 217 heartbeat osd_stat(store_statfs(0x4fb0a8000/0x0/0x4ffc00000, data 0xca1273/0xddf000, compress 0x0/0x0/0x0, omap 0x31892, meta 0x3d3e76e), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 217 ms_handle_reset con 0x55f5146ac000 session 0x55f511f54a80
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 118136832 unmapped: 22691840 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 217 ms_handle_reset con 0x55f5122b6c00 session 0x55f512361340
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 116269056 unmapped: 24559616 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 116269056 unmapped: 24559616 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 217 handle_osd_map epochs [217,218], i have 217, src has [1,218]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.275165558s of 11.132117271s, submitted: 152
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 23879680 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1576986 data_alloc: 218103808 data_used: 45314
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 23879680 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 23879680 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 218 heartbeat osd_stat(store_statfs(0x4fbb7b000/0x0/0x4ffc00000, data 0x1d1d4e/0x30f000, compress 0x0/0x0/0x0, omap 0x31fce, meta 0x3d3e032), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 218 ms_handle_reset con 0x55f5124c2800 session 0x55f5123336c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 218 ms_handle_reset con 0x55f514ab1000 session 0x55f512c1a700
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 116965376 unmapped: 23863296 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 218 handle_osd_map epochs [218,219], i have 218, src has [1,219]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 218 handle_osd_map epochs [219,219], i have 219, src has [1,219]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 219 heartbeat osd_stat(store_statfs(0x4fbb7c000/0x0/0x4ffc00000, data 0x1d1d5e/0x310000, compress 0x0/0x0/0x0, omap 0x323e1, meta 0x3d3dc1f), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 219 ms_handle_reset con 0x55f5171f9c00 session 0x55f512c1ba40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 116981760 unmapped: 23846912 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 219 ms_handle_reset con 0x55f5142c5000 session 0x55f5153528c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 219 ms_handle_reset con 0x55f5142c4400 session 0x55f511f61c00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 219 handle_osd_map epochs [220,220], i have 219, src has [1,220]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 220 ms_handle_reset con 0x55f5142c4c00 session 0x55f5130cce00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 220 ms_handle_reset con 0x55f5142c5000 session 0x55f512c64540
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 117006336 unmapped: 23822336 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1596602 data_alloc: 218103808 data_used: 45428
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 220 handle_osd_map epochs [221,221], i have 220, src has [1,221]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 221 ms_handle_reset con 0x55f5122b6c00 session 0x55f511f61a40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 221 ms_handle_reset con 0x55f5124c2800 session 0x55f5122376c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 221 ms_handle_reset con 0x55f5122b6c00 session 0x55f511e94540
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 221 ms_handle_reset con 0x55f5142c4800 session 0x55f51318fdc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 117030912 unmapped: 23797760 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 221 handle_osd_map epochs [222,222], i have 221, src has [1,222]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 222 ms_handle_reset con 0x55f5124c2800 session 0x55f5130e6540
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 222 ms_handle_reset con 0x55f5142c4400 session 0x55f511e8bc00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 23773184 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 222 ms_handle_reset con 0x55f5142c4c00 session 0x55f511f55500
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 222 ms_handle_reset con 0x55f5122b6c00 session 0x55f515033880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 116400128 unmapped: 24428544 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 222 heartbeat osd_stat(store_statfs(0x4fbb69000/0x0/0x4ffc00000, data 0x1d966a/0x323000, compress 0x0/0x0/0x0, omap 0x33921, meta 0x3d3c6df), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 222 ms_handle_reset con 0x55f5142c4400 session 0x55f5122dac40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 222 ms_handle_reset con 0x55f5142c4800 session 0x55f5142c08c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 222 heartbeat osd_stat(store_statfs(0x4fbb69000/0x0/0x4ffc00000, data 0x1d966a/0x323000, compress 0x0/0x0/0x0, omap 0x33921, meta 0x3d3c6df), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 116441088 unmapped: 24387584 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 222 ms_handle_reset con 0x55f514ab1000 session 0x55f515345340
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 222 ms_handle_reset con 0x55f5171f9c00 session 0x55f511f44a80
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 222 handle_osd_map epochs [223,223], i have 222, src has [1,223]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 223 ms_handle_reset con 0x55f5142c4000 session 0x55f515345a40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 223 ms_handle_reset con 0x55f5142c5000 session 0x55f512c64700
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 223 heartbeat osd_stat(store_statfs(0x4fbb69000/0x0/0x4ffc00000, data 0x1d966a/0x323000, compress 0x0/0x0/0x0, omap 0x33921, meta 0x3d3c6df), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 223 ms_handle_reset con 0x55f5122b6c00 session 0x55f51318e540
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 223 ms_handle_reset con 0x55f5142c4400 session 0x55f511f44c40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 223 ms_handle_reset con 0x55f5124c2800 session 0x55f512360000
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 116613120 unmapped: 24215552 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.191418648s of 10.484974861s, submitted: 143
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1614451 data_alloc: 218103808 data_used: 46895
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 223 ms_handle_reset con 0x55f5142c4000 session 0x55f5142c0fc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 223 ms_handle_reset con 0x55f5142c4400 session 0x55f5122da540
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 223 ms_handle_reset con 0x55f5142c5000 session 0x55f5142c0000
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 223 handle_osd_map epochs [224,224], i have 223, src has [1,224]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 116596736 unmapped: 24231936 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 224 ms_handle_reset con 0x55f5122b6c00 session 0x55f515033c00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 224 ms_handle_reset con 0x55f5165bc400 session 0x55f512332a80
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 116596736 unmapped: 24231936 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 224 handle_osd_map epochs [225,225], i have 224, src has [1,225]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 225 ms_handle_reset con 0x55f5142c4800 session 0x55f5142c1c00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 115539968 unmapped: 25288704 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 225 handle_osd_map epochs [226,226], i have 225, src has [1,226]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 226 ms_handle_reset con 0x55f514ab1000 session 0x55f512278380
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 226 ms_handle_reset con 0x55f5142c4000 session 0x55f515345180
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 115605504 unmapped: 25223168 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 226 ms_handle_reset con 0x55f5122b6c00 session 0x55f511e95340
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 226 heartbeat osd_stat(store_statfs(0x4fbb5f000/0x0/0x4ffc00000, data 0x1e0652/0x32b000, compress 0x0/0x0/0x0, omap 0x34b69, meta 0x3d3b497), peers [0,2] op hist [0,0,0,0,0,0,1,0,0,2])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 226 handle_osd_map epochs [227,227], i have 226, src has [1,227]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 226 handle_osd_map epochs [227,227], i have 227, src has [1,227]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 227 handle_osd_map epochs [227,227], i have 227, src has [1,227]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 115630080 unmapped: 25198592 heap: 140828672 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1677585 data_alloc: 218103808 data_used: 47382
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 227 ms_handle_reset con 0x55f5142c5000 session 0x55f511e94e00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 227 ms_handle_reset con 0x55f5165bc000 session 0x55f511e948c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 227 handle_osd_map epochs [228,228], i have 227, src has [1,228]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 115818496 unmapped: 28688384 heap: 144506880 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 228 ms_handle_reset con 0x55f5122b6c00 session 0x55f512279dc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 228 ms_handle_reset con 0x55f5142c4000 session 0x55f511f61dc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 228 ms_handle_reset con 0x55f5142c4400 session 0x55f511e8b180
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 115818496 unmapped: 28688384 heap: 144506880 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 228 heartbeat osd_stat(store_statfs(0x4fb37a000/0x0/0x4ffc00000, data 0x9c7838/0xb12000, compress 0x0/0x0/0x0, omap 0x353cf, meta 0x3d3ac31), peers [0,2] op hist [0,0,0,0,0,0,0,2])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 228 ms_handle_reset con 0x55f5165bc800 session 0x55f514d4fc00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 228 ms_handle_reset con 0x55f5165bcc00 session 0x55f514d4f340
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 115818496 unmapped: 28688384 heap: 144506880 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 228 handle_osd_map epochs [229,229], i have 228, src has [1,229]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 229 ms_handle_reset con 0x55f5142c4800 session 0x55f512c1ae00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 115867648 unmapped: 28639232 heap: 144506880 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 229 ms_handle_reset con 0x55f5122b6c00 session 0x55f5122da380
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 229 handle_osd_map epochs [231,231], i have 229, src has [1,231]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 229 handle_osd_map epochs [230,231], i have 229, src has [1,231]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 231 ms_handle_reset con 0x55f5142c4000 session 0x55f512c59500
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 117022720 unmapped: 27484160 heap: 144506880 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 3.141198874s of 10.072068214s, submitted: 228
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1709777 data_alloc: 218103808 data_used: 46512
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 231 ms_handle_reset con 0x55f5142c4400 session 0x55f514d77880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 231 ms_handle_reset con 0x55f5122b6c00 session 0x55f515344000
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 231 ms_handle_reset con 0x55f512202000 session 0x55f513134700
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 117366784 unmapped: 27140096 heap: 144506880 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 231 heartbeat osd_stat(store_statfs(0x4fafff000/0x0/0x4ffc00000, data 0xd3ebbe/0xe8b000, compress 0x0/0x0/0x0, omap 0x35cda, meta 0x3d3a326), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 231 handle_osd_map epochs [231,232], i have 231, src has [1,232]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 232 ms_handle_reset con 0x55f5165bc000 session 0x55f513134fc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 232 ms_handle_reset con 0x55f514ab1000 session 0x55f5130e7880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 117366784 unmapped: 27140096 heap: 144506880 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 232 ms_handle_reset con 0x55f512202000 session 0x55f5123328c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 232 ms_handle_reset con 0x55f5165bd000 session 0x55f512332000
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 232 ms_handle_reset con 0x55f5165bcc00 session 0x55f512361500
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 117497856 unmapped: 27009024 heap: 144506880 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 232 handle_osd_map epochs [233,233], i have 232, src has [1,233]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 233 ms_handle_reset con 0x55f512202000 session 0x55f512332e00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 117514240 unmapped: 26992640 heap: 144506880 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 233 handle_osd_map epochs [233,234], i have 233, src has [1,234]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 234 handle_osd_map epochs [234,234], i have 234, src has [1,234]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 117719040 unmapped: 26787840 heap: 144506880 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1740753 data_alloc: 218103808 data_used: 46708
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 234 ms_handle_reset con 0x55f5165bd800 session 0x55f511e95dc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 117719040 unmapped: 26787840 heap: 144506880 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 234 ms_handle_reset con 0x55f5122b6c00 session 0x55f513134380
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 234 heartbeat osd_stat(store_statfs(0x4fa7c1000/0x0/0x4ffc00000, data 0x1578282/0x16cb000, compress 0x0/0x0/0x0, omap 0x36c66, meta 0x3d3939a), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 234 ms_handle_reset con 0x55f5165bc000 session 0x55f51487da40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 234 ms_handle_reset con 0x55f5142c4800 session 0x55f511e8a1c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 117800960 unmapped: 26705920 heap: 144506880 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 234 handle_osd_map epochs [235,235], i have 234, src has [1,235]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 117809152 unmapped: 26697728 heap: 144506880 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 235 ms_handle_reset con 0x55f512202000 session 0x55f514d76380
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 235 ms_handle_reset con 0x55f5165bd400 session 0x55f5130cc540
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 118865920 unmapped: 25640960 heap: 144506880 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 235 handle_osd_map epochs [236,236], i have 235, src has [1,236]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 236 ms_handle_reset con 0x55f514ab1000 session 0x55f511f45dc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 236 ms_handle_reset con 0x55f5122b6c00 session 0x55f513135dc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 118874112 unmapped: 25632768 heap: 144506880 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1771804 data_alloc: 218103808 data_used: 48561
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 5.707260132s of 10.113376617s, submitted: 100
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 118890496 unmapped: 25616384 heap: 144506880 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 236 ms_handle_reset con 0x55f5122b6c00 session 0x55f513134a80
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 236 heartbeat osd_stat(store_statfs(0x4fa7bb000/0x0/0x4ffc00000, data 0x157bc65/0x16d1000, compress 0x0/0x0/0x0, omap 0x372de, meta 0x3d38d22), peers [0,2] op hist [0,0,0,0,0,0,0,1,1])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 236 handle_osd_map epochs [237,237], i have 236, src has [1,237]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 118890496 unmapped: 25616384 heap: 144506880 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 237 ms_handle_reset con 0x55f512202000 session 0x55f511f45500
Jan 29 12:40:30 np0005601226 ceph-mon[75233]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0)
Jan 29 12:40:30 np0005601226 ceph-mon[75233]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3842652727' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 118923264 unmapped: 25583616 heap: 144506880 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 237 ms_handle_reset con 0x55f5165bc000 session 0x55f5147c7a40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 237 ms_handle_reset con 0x55f5142c4800 session 0x55f512333340
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 118931456 unmapped: 25575424 heap: 144506880 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 237 handle_osd_map epochs [238,238], i have 237, src has [1,238]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 238 ms_handle_reset con 0x55f514ab1000 session 0x55f512278000
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 118317056 unmapped: 26189824 heap: 144506880 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1750226 data_alloc: 218103808 data_used: 48463
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 238 handle_osd_map epochs [238,239], i have 238, src has [1,239]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 239 ms_handle_reset con 0x55f5165bd400 session 0x55f515033dc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 119455744 unmapped: 25051136 heap: 144506880 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 239 heartbeat osd_stat(store_statfs(0x4fac1e000/0x0/0x4ffc00000, data 0x1115374/0x126c000, compress 0x0/0x0/0x0, omap 0x3798d, meta 0x3d38673), peers [0,2] op hist [0,0,1])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 239 ms_handle_reset con 0x55f5122b6c00 session 0x55f512c1aa80
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 239 ms_handle_reset con 0x55f512202000 session 0x55f51318fa40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 119521280 unmapped: 24985600 heap: 144506880 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 239 handle_osd_map epochs [240,240], i have 239, src has [1,240]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 119521280 unmapped: 24985600 heap: 144506880 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 240 ms_handle_reset con 0x55f5142c4800 session 0x55f511e8a380
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 240 ms_handle_reset con 0x55f5165bc000 session 0x55f5142c1a40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 240 ms_handle_reset con 0x55f512202000 session 0x55f511e956c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 119537664 unmapped: 24969216 heap: 144506880 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 240 heartbeat osd_stat(store_statfs(0x4fbb3a000/0x0/0x4ffc00000, data 0x1f8d23/0x350000, compress 0x0/0x0/0x0, omap 0x37f7e, meta 0x3d38082), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 119537664 unmapped: 24969216 heap: 144506880 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1673024 data_alloc: 218103808 data_used: 48267
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 119537664 unmapped: 24969216 heap: 144506880 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 240 handle_osd_map epochs [241,241], i have 240, src has [1,241]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.362075806s of 11.269544601s, submitted: 222
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 241 ms_handle_reset con 0x55f5142c4800 session 0x55f5122da000
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 120586240 unmapped: 23920640 heap: 144506880 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 241 handle_osd_map epochs [242,242], i have 241, src has [1,242]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 120594432 unmapped: 23912448 heap: 144506880 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 242 ms_handle_reset con 0x55f5165bcc00 session 0x55f511f45a40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 242 ms_handle_reset con 0x55f5165bc000 session 0x55f515269180
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 242 heartbeat osd_stat(store_statfs(0x4fbb31000/0x0/0x4ffc00000, data 0x1fc568/0x359000, compress 0x0/0x0/0x0, omap 0x38843, meta 0x3d377bd), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 120602624 unmapped: 23904256 heap: 144506880 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 242 handle_osd_map epochs [243,243], i have 242, src has [1,243]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 243 ms_handle_reset con 0x55f5165bd400 session 0x55f5123321c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 243 handle_osd_map epochs [244,244], i have 243, src has [1,244]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 243 handle_osd_map epochs [243,244], i have 244, src has [1,244]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 120561664 unmapped: 23945216 heap: 144506880 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 244 ms_handle_reset con 0x55f5122b6c00 session 0x55f511f61500
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 244 ms_handle_reset con 0x55f512202000 session 0x55f5142c0700
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1695412 data_alloc: 218103808 data_used: 48381
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 120561664 unmapped: 23945216 heap: 144506880 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 244 ms_handle_reset con 0x55f5142c4800 session 0x55f511f55180
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 244 ms_handle_reset con 0x55f5165bc000 session 0x55f512c1ac40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 120569856 unmapped: 23937024 heap: 144506880 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 244 ms_handle_reset con 0x55f5165bcc00 session 0x55f511e941c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 120569856 unmapped: 23937024 heap: 144506880 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 244 ms_handle_reset con 0x55f5122b6c00 session 0x55f511e8ae00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 244 ms_handle_reset con 0x55f5142c4800 session 0x55f5142c01c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 120594432 unmapped: 23912448 heap: 144506880 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 244 handle_osd_map epochs [245,245], i have 244, src has [1,245]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 245 ms_handle_reset con 0x55f5165bd800 session 0x55f511f55dc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 245 ms_handle_reset con 0x55f5165bdc00 session 0x55f5130cc540
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 245 heartbeat osd_stat(store_statfs(0x4fbb22000/0x0/0x4ffc00000, data 0x201998/0x368000, compress 0x0/0x0/0x0, omap 0x397b9, meta 0x3d36847), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 245 ms_handle_reset con 0x55f5165bc000 session 0x55f514d4f180
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 245 ms_handle_reset con 0x55f514e3b000 session 0x55f512361880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 120725504 unmapped: 23781376 heap: 144506880 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1702022 data_alloc: 218103808 data_used: 48495
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 245 ms_handle_reset con 0x55f512202000 session 0x55f512c588c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 120725504 unmapped: 23781376 heap: 144506880 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.753005981s of 10.048194885s, submitted: 87
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 120733696 unmapped: 23773184 heap: 144506880 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 245 ms_handle_reset con 0x55f512203c00 session 0x55f5130e6fc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 245 ms_handle_reset con 0x55f5152ec400 session 0x55f513134380
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 120733696 unmapped: 23773184 heap: 144506880 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 245 ms_handle_reset con 0x55f5165bc400 session 0x55f5130e7180
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 245 handle_osd_map epochs [246,246], i have 245, src has [1,246]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 120733696 unmapped: 23773184 heap: 144506880 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 246 ms_handle_reset con 0x55f5122b6c00 session 0x55f512c65340
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 246 heartbeat osd_stat(store_statfs(0x4fbb21000/0x0/0x4ffc00000, data 0x20356d/0x369000, compress 0x0/0x0/0x0, omap 0x39a94, meta 0x3d3656c), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 120733696 unmapped: 23773184 heap: 144506880 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1702564 data_alloc: 218103808 data_used: 48511
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 120733696 unmapped: 23773184 heap: 144506880 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 246 ms_handle_reset con 0x55f512203c00 session 0x55f512279340
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 120733696 unmapped: 23773184 heap: 144506880 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 246 ms_handle_reset con 0x55f512202000 session 0x55f511f54380
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 246 heartbeat osd_stat(store_statfs(0x4fbb25000/0x0/0x4ffc00000, data 0x20354d/0x367000, compress 0x0/0x0/0x0, omap 0x39bb4, meta 0x3d3644c), peers [0,2] op hist [0,0,0,0,0,0,1])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 120733696 unmapped: 23773184 heap: 144506880 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 246 ms_handle_reset con 0x55f5152ec000 session 0x55f512236a80
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 246 ms_handle_reset con 0x55f5152ec400 session 0x55f5130cd500
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 246 ms_handle_reset con 0x55f512202000 session 0x55f515345dc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 246 ms_handle_reset con 0x55f512203c00 session 0x55f514718e00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 246 ms_handle_reset con 0x55f5152ec000 session 0x55f5122dae00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 246 handle_osd_map epochs [247,247], i have 246, src has [1,247]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 246 handle_osd_map epochs [246,247], i have 247, src has [1,247]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 121405440 unmapped: 23101440 heap: 144506880 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 247 ms_handle_reset con 0x55f5122b6c00 session 0x55f515345dc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 247 ms_handle_reset con 0x55f5165bdc00 session 0x55f5142c01c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 247 ms_handle_reset con 0x55f5165bdc00 session 0x55f514d4f180
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 247 ms_handle_reset con 0x55f512202000 session 0x55f5130e6fc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 247 ms_handle_reset con 0x55f512203c00 session 0x55f512c65340
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 247 ms_handle_reset con 0x55f5165bd800 session 0x55f511e8afc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 247 ms_handle_reset con 0x55f514e3b000 session 0x55f5142c16c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 247 handle_osd_map epochs [248,248], i have 247, src has [1,248]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 120782848 unmapped: 23724032 heap: 144506880 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1726657 data_alloc: 218103808 data_used: 48365
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 248 handle_osd_map epochs [249,249], i have 248, src has [1,249]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 249 ms_handle_reset con 0x55f512202000 session 0x55f511f54380
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 249 ms_handle_reset con 0x55f5122b6c00 session 0x55f514718380
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 23732224 heap: 144506880 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 249 ms_handle_reset con 0x55f512203c00 session 0x55f511e94700
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.745348930s of 10.464756966s, submitted: 123
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 249 ms_handle_reset con 0x55f5165bd800 session 0x55f512278000
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 23732224 heap: 144506880 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 249 ms_handle_reset con 0x55f5165bdc00 session 0x55f5130e68c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 249 ms_handle_reset con 0x55f512202000 session 0x55f515032e00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 23732224 heap: 144506880 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 249 heartbeat osd_stat(store_statfs(0x4fb984000/0x0/0x4ffc00000, data 0x3a3831/0x508000, compress 0x0/0x0/0x0, omap 0x3a84b, meta 0x3d357b5), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 120774656 unmapped: 23732224 heap: 144506880 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 249 ms_handle_reset con 0x55f512203c00 session 0x55f514d77880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 249 ms_handle_reset con 0x55f5122b6c00 session 0x55f511e8a000
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 249 handle_osd_map epochs [250,250], i have 249, src has [1,250]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 250 ms_handle_reset con 0x55f5165bd800 session 0x55f512c596c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 121143296 unmapped: 23363584 heap: 144506880 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1739288 data_alloc: 218103808 data_used: 48283
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 121118720 unmapped: 23388160 heap: 144506880 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 250 handle_osd_map epochs [251,251], i have 250, src has [1,251]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 121249792 unmapped: 23257088 heap: 144506880 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 251 ms_handle_reset con 0x55f51530f000 session 0x55f5150336c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 121249792 unmapped: 23257088 heap: 144506880 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 251 ms_handle_reset con 0x55f512202000 session 0x55f515269340
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 121249792 unmapped: 23257088 heap: 144506880 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 251 handle_osd_map epochs [252,252], i have 251, src has [1,252]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 252 ms_handle_reset con 0x55f512203c00 session 0x55f511f60540
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 252 heartbeat osd_stat(store_statfs(0x4fb94a000/0x0/0x4ffc00000, data 0x3d2bfa/0x53d000, compress 0x0/0x0/0x0, omap 0x3b1d1, meta 0x3d34e2f), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 252 ms_handle_reset con 0x55f5122b6c00 session 0x55f512360c40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 252 handle_osd_map epochs [253,253], i have 252, src has [1,253]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 253 ms_handle_reset con 0x55f5165bd800 session 0x55f5130e6e00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 121044992 unmapped: 26615808 heap: 147660800 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1802038 data_alloc: 218103808 data_used: 1385725
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 253 ms_handle_reset con 0x55f515857400 session 0x55f514d4fdc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 253 ms_handle_reset con 0x55f51530e000 session 0x55f511e94fc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 120586240 unmapped: 27074560 heap: 147660800 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 253 heartbeat osd_stat(store_statfs(0x4fb25e000/0x0/0x4ffc00000, data 0xac26ec/0xc2e000, compress 0x0/0x0/0x0, omap 0x3b6ef, meta 0x3d34911), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 120586240 unmapped: 27074560 heap: 147660800 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 120586240 unmapped: 27074560 heap: 147660800 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.234869957s of 12.034988403s, submitted: 122
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 253 ms_handle_reset con 0x55f512202000 session 0x55f515032540
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 120586240 unmapped: 27074560 heap: 147660800 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 253 handle_osd_map epochs [254,254], i have 253, src has [1,254]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 253 handle_osd_map epochs [253,254], i have 254, src has [1,254]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 120586240 unmapped: 27074560 heap: 147660800 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1835750 data_alloc: 218103808 data_used: 6246043
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 124452864 unmapped: 23207936 heap: 147660800 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 254 ms_handle_reset con 0x55f5122b6c00 session 0x55f511f54e00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 254 heartbeat osd_stat(store_statfs(0x4fac04000/0x0/0x4ffc00000, data 0x111b1c2/0x1288000, compress 0x0/0x0/0x0, omap 0x3bb8b, meta 0x3d34475), peers [0,2] op hist [0,0,0,0,0,0,12])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 127254528 unmapped: 20406272 heap: 147660800 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 254 ms_handle_reset con 0x55f5165bd800 session 0x55f512333500
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 128557056 unmapped: 19103744 heap: 147660800 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 128557056 unmapped: 19103744 heap: 147660800 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 128655360 unmapped: 19005440 heap: 147660800 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 254 heartbeat osd_stat(store_statfs(0x4fab0a000/0x0/0x4ffc00000, data 0x12071c2/0x1374000, compress 0x0/0x0/0x0, omap 0x3bc31, meta 0x3d343cf), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1907891 data_alloc: 218103808 data_used: 8605339
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 254 heartbeat osd_stat(store_statfs(0x4fab0a000/0x0/0x4ffc00000, data 0x12071c2/0x1374000, compress 0x0/0x0/0x0, omap 0x3bc31, meta 0x3d343cf), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 128688128 unmapped: 18972672 heap: 147660800 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 254 ms_handle_reset con 0x55f511a04400 session 0x55f512279a40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 127401984 unmapped: 20258816 heap: 147660800 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 127401984 unmapped: 20258816 heap: 147660800 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 254 ms_handle_reset con 0x55f512202000 session 0x55f511f60c40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.703695297s of 10.005170822s, submitted: 139
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 127410176 unmapped: 20250624 heap: 147660800 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 254 handle_osd_map epochs [255,255], i have 254, src has [1,255]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 255 ms_handle_reset con 0x55f51530e000 session 0x55f511f45180
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 127410176 unmapped: 20250624 heap: 147660800 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 255 handle_osd_map epochs [256,256], i have 255, src has [1,256]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 255 handle_osd_map epochs [255,256], i have 256, src has [1,256]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1912189 data_alloc: 218103808 data_used: 8609533
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 256 ms_handle_reset con 0x55f5122b6c00 session 0x55f511f55dc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 256 ms_handle_reset con 0x55f511a04400 session 0x55f511e8ae00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 256 heartbeat osd_stat(store_statfs(0x4fab0b000/0x0/0x4ffc00000, data 0x120da6c/0x137f000, compress 0x0/0x0/0x0, omap 0x3c4c3, meta 0x3d33b3d), peers [0,2] op hist [0,0,0,0,0,0,0,0,16,26])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 132923392 unmapped: 14737408 heap: 147660800 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 134127616 unmapped: 13533184 heap: 147660800 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 256 handle_osd_map epochs [256,257], i have 256, src has [1,257]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 256 handle_osd_map epochs [257,257], i have 257, src has [1,257]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 257 ms_handle_reset con 0x55f51588c400 session 0x55f515033dc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 134807552 unmapped: 12853248 heap: 147660800 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 134815744 unmapped: 12845056 heap: 147660800 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 257 handle_osd_map epochs [258,258], i have 257, src has [1,258]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 258 ms_handle_reset con 0x55f511a04400 session 0x55f515032700
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 258 ms_handle_reset con 0x55f5165bd800 session 0x55f512360540
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 134168576 unmapped: 13492224 heap: 147660800 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1976734 data_alloc: 234881024 data_used: 9337165
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 258 ms_handle_reset con 0x55f512202000 session 0x55f511f60c40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 134168576 unmapped: 13492224 heap: 147660800 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 258 ms_handle_reset con 0x55f5152ec000 session 0x55f5142c1500
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 258 ms_handle_reset con 0x55f5165bc400 session 0x55f51475aa80
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 258 heartbeat osd_stat(store_statfs(0x4fa191000/0x0/0x4ffc00000, data 0x1b87252/0x1cf9000, compress 0x0/0x0/0x0, omap 0x3caef, meta 0x3d33511), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 134176768 unmapped: 13484032 heap: 147660800 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 134316032 unmapped: 13344768 heap: 147660800 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.794489861s of 10.005667686s, submitted: 198
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 258 heartbeat osd_stat(store_statfs(0x4fa173000/0x0/0x4ffc00000, data 0x1ba8242/0x1d19000, compress 0x0/0x0/0x0, omap 0x3c8d5, meta 0x3d3372b), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 131883008 unmapped: 15777792 heap: 147660800 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 258 handle_osd_map epochs [259,259], i have 258, src has [1,259]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 258 handle_osd_map epochs [258,259], i have 259, src has [1,259]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 130736128 unmapped: 16924672 heap: 147660800 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 259 heartbeat osd_stat(store_statfs(0x4faa11000/0x0/0x4ffc00000, data 0x1308242/0x1479000, compress 0x0/0x0/0x0, omap 0x3cac1, meta 0x3d3353f), peers [0,2] op hist [0,0,0,0,0,1])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1911391 data_alloc: 218103808 data_used: 6846585
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 259 ms_handle_reset con 0x55f5165bc400 session 0x55f512c596c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 130736128 unmapped: 16924672 heap: 147660800 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 130736128 unmapped: 16924672 heap: 147660800 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 259 ms_handle_reset con 0x55f5152ec000 session 0x55f515269180
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 130736128 unmapped: 16924672 heap: 147660800 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 130736128 unmapped: 16924672 heap: 147660800 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 259 heartbeat osd_stat(store_statfs(0x4faa37000/0x0/0x4ffc00000, data 0x12e4cc3/0x1455000, compress 0x0/0x0/0x0, omap 0x3d053, meta 0x3d32fad), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 259 handle_osd_map epochs [260,260], i have 259, src has [1,260]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 260 handle_osd_map epochs [260,261], i have 260, src has [1,261]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 260 ms_handle_reset con 0x55f5165bd800 session 0x55f51318e380
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 130777088 unmapped: 16883712 heap: 147660800 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1908593 data_alloc: 218103808 data_used: 6717833
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 261 ms_handle_reset con 0x55f5122b6c00 session 0x55f511f61dc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 261 handle_osd_map epochs [261,262], i have 261, src has [1,262]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 130777088 unmapped: 16883712 heap: 147660800 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 130777088 unmapped: 16883712 heap: 147660800 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 262 ms_handle_reset con 0x55f512202000 session 0x55f5130e7180
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 130793472 unmapped: 16867328 heap: 147660800 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.809925079s of 10.022929192s, submitted: 75
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 130793472 unmapped: 16867328 heap: 147660800 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 262 ms_handle_reset con 0x55f51530e000 session 0x55f511f61a40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 130793472 unmapped: 16867328 heap: 147660800 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 262 handle_osd_map epochs [263,263], i have 262, src has [1,263]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 263 heartbeat osd_stat(store_statfs(0x4faa6e000/0x0/0x4ffc00000, data 0x12a9f9b/0x141e000, compress 0x0/0x0/0x0, omap 0x3da92, meta 0x3d3256e), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1910467 data_alloc: 218103808 data_used: 6461241
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 263 ms_handle_reset con 0x55f5152ec000 session 0x55f5122da380
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 130834432 unmapped: 16826368 heap: 147660800 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 263 handle_osd_map epochs [263,264], i have 263, src has [1,264]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 264 ms_handle_reset con 0x55f5165bc400 session 0x55f5131348c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 264 ms_handle_reset con 0x55f5122b6c00 session 0x55f5122dae00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 130867200 unmapped: 16793600 heap: 147660800 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 130867200 unmapped: 16793600 heap: 147660800 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 130867200 unmapped: 16793600 heap: 147660800 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 264 heartbeat osd_stat(store_statfs(0x4faa60000/0x0/0x4ffc00000, data 0x12b0c6f/0x1428000, compress 0x0/0x0/0x0, omap 0x3e045, meta 0x3d31fbb), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 264 handle_osd_map epochs [265,265], i have 264, src has [1,265]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 264 handle_osd_map epochs [265,265], i have 265, src has [1,265]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 265 ms_handle_reset con 0x55f5165bd800 session 0x55f511e95500
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 131907584 unmapped: 15753216 heap: 147660800 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 265 ms_handle_reset con 0x55f5122b6c00 session 0x55f512237180
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1918572 data_alloc: 218103808 data_used: 6461241
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 131907584 unmapped: 15753216 heap: 147660800 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 265 heartbeat osd_stat(store_statfs(0x4faa5f000/0x0/0x4ffc00000, data 0x12b242a/0x142b000, compress 0x0/0x0/0x0, omap 0x3e1c9, meta 0x3d31e37), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 265 heartbeat osd_stat(store_statfs(0x4faa5f000/0x0/0x4ffc00000, data 0x12b242a/0x142b000, compress 0x0/0x0/0x0, omap 0x3e1c9, meta 0x3d31e37), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 131473408 unmapped: 16187392 heap: 147660800 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 265 handle_osd_map epochs [266,266], i have 265, src has [1,266]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 266 ms_handle_reset con 0x55f51530e000 session 0x55f511f61340
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 131473408 unmapped: 16187392 heap: 147660800 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 131473408 unmapped: 16187392 heap: 147660800 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 266 handle_osd_map epochs [267,267], i have 266, src has [1,267]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.808337212s of 10.567838669s, submitted: 32
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 267 ms_handle_reset con 0x55f5152ec000 session 0x55f511e8ac40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 131473408 unmapped: 16187392 heap: 147660800 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1925572 data_alloc: 218103808 data_used: 6461241
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 267 handle_osd_map epochs [268,268], i have 267, src has [1,268]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 267 handle_osd_map epochs [267,268], i have 268, src has [1,268]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 267 heartbeat osd_stat(store_statfs(0x4faa58000/0x0/0x4ffc00000, data 0x12b5caa/0x1432000, compress 0x0/0x0/0x0, omap 0x3e6df, meta 0x3d31921), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 131473408 unmapped: 16187392 heap: 147660800 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 131473408 unmapped: 16187392 heap: 147660800 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 268 handle_osd_map epochs [269,269], i have 268, src has [1,269]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 268 heartbeat osd_stat(store_statfs(0x4faa53000/0x0/0x4ffc00000, data 0x12b7780/0x1435000, compress 0x0/0x0/0x0, omap 0x3eb48, meta 0x3d314b8), peers [0,2] op hist [0,0,0,0,0,1])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 268 handle_osd_map epochs [269,269], i have 269, src has [1,269]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 268 handle_osd_map epochs [269,269], i have 269, src has [1,269]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 131497984 unmapped: 16162816 heap: 147660800 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 136814592 unmapped: 18194432 heap: 155009024 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 269 ms_handle_reset con 0x55f5165bc400 session 0x55f511f55880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 269 ms_handle_reset con 0x55f51588dc00 session 0x55f512361880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 269 ms_handle_reset con 0x55f5122b6c00 session 0x55f511e8ae00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 269 ms_handle_reset con 0x55f5152ec000 session 0x55f5123616c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 269 ms_handle_reset con 0x55f51530e000 session 0x55f511f45500
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 131932160 unmapped: 23076864 heap: 155009024 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1988664 data_alloc: 218103808 data_used: 6461241
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 269 handle_osd_map epochs [270,270], i have 269, src has [1,270]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 131932160 unmapped: 23076864 heap: 155009024 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 270 heartbeat osd_stat(store_statfs(0x4fa153000/0x0/0x4ffc00000, data 0x1bb6080/0x1d37000, compress 0x0/0x0/0x0, omap 0x3f065, meta 0x3d30f9b), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 270 ms_handle_reset con 0x55f5165bc400 session 0x55f512278fc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 131932160 unmapped: 23076864 heap: 155009024 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 270 ms_handle_reset con 0x55f51528dc00 session 0x55f5130e6700
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 270 heartbeat osd_stat(store_statfs(0x4fa153000/0x0/0x4ffc00000, data 0x1bb6080/0x1d37000, compress 0x0/0x0/0x0, omap 0x3f065, meta 0x3d30f9b), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 270 ms_handle_reset con 0x55f5122b6c00 session 0x55f511f44fc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 270 ms_handle_reset con 0x55f5152ec000 session 0x55f511e94c40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 131932160 unmapped: 23076864 heap: 155009024 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 270 heartbeat osd_stat(store_statfs(0x4fa153000/0x0/0x4ffc00000, data 0x1bb6080/0x1d37000, compress 0x0/0x0/0x0, omap 0x3f065, meta 0x3d30f9b), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 131940352 unmapped: 23068672 heap: 155009024 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.131361961s of 10.226754189s, submitted: 92
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 270 ms_handle_reset con 0x55f511a04400 session 0x55f5153448c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 270 ms_handle_reset con 0x55f51530e000 session 0x55f511f44a80
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 131940352 unmapped: 23068672 heap: 155009024 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 270 heartbeat osd_stat(store_statfs(0x4fa154000/0x0/0x4ffc00000, data 0x1bb608f/0x1d38000, compress 0x0/0x0/0x0, omap 0x3f2ba, meta 0x3d30d46), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1994611 data_alloc: 218103808 data_used: 6461241
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 131932160 unmapped: 23076864 heap: 155009024 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 270 handle_osd_map epochs [271,271], i have 270, src has [1,271]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 270 heartbeat osd_stat(store_statfs(0x4fa154000/0x0/0x4ffc00000, data 0x1bb608f/0x1d38000, compress 0x0/0x0/0x0, omap 0x3f2ba, meta 0x3d30d46), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 271 ms_handle_reset con 0x55f512203c00 session 0x55f512237dc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 131940352 unmapped: 23068672 heap: 155009024 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 131940352 unmapped: 23068672 heap: 155009024 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 131940352 unmapped: 23068672 heap: 155009024 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 131956736 unmapped: 23052288 heap: 155009024 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1997813 data_alloc: 218103808 data_used: 6465609
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 134881280 unmapped: 20127744 heap: 155009024 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 134897664 unmapped: 20111360 heap: 155009024 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 271 heartbeat osd_stat(store_statfs(0x4fb1d9000/0x0/0x4ffc00000, data 0xb29b81/0xcad000, compress 0x0/0x0/0x0, omap 0x3f724, meta 0x3d308dc), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 271 handle_osd_map epochs [272,272], i have 271, src has [1,272]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 271 handle_osd_map epochs [272,272], i have 272, src has [1,272]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 272 handle_osd_map epochs [272,272], i have 272, src has [1,272]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 134914048 unmapped: 20094976 heap: 155009024 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 134914048 unmapped: 20094976 heap: 155009024 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 134922240 unmapped: 20086784 heap: 155009024 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1918031 data_alloc: 218103808 data_used: 8330825
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 5.711619377s of 11.126598358s, submitted: 58
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 134922240 unmapped: 20086784 heap: 155009024 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 272 heartbeat osd_stat(store_statfs(0x4fb1dd000/0x0/0x4ffc00000, data 0xb2b5f5/0xcaf000, compress 0x0/0x0/0x0, omap 0x3fa97, meta 0x3d30569), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 134922240 unmapped: 20086784 heap: 155009024 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 272 heartbeat osd_stat(store_statfs(0x4fb1dd000/0x0/0x4ffc00000, data 0xb2b5f5/0xcaf000, compress 0x0/0x0/0x0, omap 0x3f946, meta 0x3d306ba), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 134922240 unmapped: 20086784 heap: 155009024 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 272 ms_handle_reset con 0x55f511a04400 session 0x55f511f54e00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 134938624 unmapped: 20070400 heap: 155009024 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 134938624 unmapped: 20070400 heap: 155009024 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 272 heartbeat osd_stat(store_statfs(0x4fb1dd000/0x0/0x4ffc00000, data 0xb2b5f5/0xcaf000, compress 0x0/0x0/0x0, omap 0x3fa3c, meta 0x3d305c4), peers [0,2] op hist [0,0,0,0,0,0,0,1,1])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1918907 data_alloc: 218103808 data_used: 8383975
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 136724480 unmapped: 18284544 heap: 155009024 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 272 heartbeat osd_stat(store_statfs(0x4fb09d000/0x0/0x4ffc00000, data 0xc6b5f5/0xdef000, compress 0x0/0x0/0x0, omap 0x3fa3c, meta 0x3d305c4), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,9])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 138936320 unmapped: 16072704 heap: 155009024 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 137879552 unmapped: 17129472 heap: 155009024 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 137248768 unmapped: 17760256 heap: 155009024 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 272 heartbeat osd_stat(store_statfs(0x4fab3a000/0x0/0x4ffc00000, data 0x11ce5f5/0x1352000, compress 0x0/0x0/0x0, omap 0x3fa3c, meta 0x3d305c4), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 137248768 unmapped: 17760256 heap: 155009024 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1956520 data_alloc: 218103808 data_used: 8383975
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 4.584517479s of 10.064326286s, submitted: 63
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 272 heartbeat osd_stat(store_statfs(0x4fab39000/0x0/0x4ffc00000, data 0x11ce605/0x1353000, compress 0x0/0x0/0x0, omap 0x3fb6c, meta 0x3d30494), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 137248768 unmapped: 17760256 heap: 155009024 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 272 ms_handle_reset con 0x55f5152ec000 session 0x55f512279500
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 137265152 unmapped: 17743872 heap: 155009024 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 272 handle_osd_map epochs [273,273], i have 272, src has [1,273]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 272 handle_osd_map epochs [272,273], i have 273, src has [1,273]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 137322496 unmapped: 17686528 heap: 155009024 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 137322496 unmapped: 17686528 heap: 155009024 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 273 handle_osd_map epochs [274,274], i have 273, src has [1,274]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 273 heartbeat osd_stat(store_statfs(0x4fab1f000/0x0/0x4ffc00000, data 0x11d424c/0x135a000, compress 0x0/0x0/0x0, omap 0x40d63, meta 0x3d3f29d), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,1])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 273 handle_osd_map epochs [274,274], i have 274, src has [1,274]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 273 handle_osd_map epochs [274,274], i have 274, src has [1,274]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 137379840 unmapped: 17629184 heap: 155009024 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1972026 data_alloc: 234881024 data_used: 9170407
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 274 handle_osd_map epochs [275,275], i have 274, src has [1,275]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 137428992 unmapped: 17580032 heap: 155009024 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 275 heartbeat osd_stat(store_statfs(0x4f998d000/0x0/0x4ffc00000, data 0x11d5e5a/0x135d000, compress 0x0/0x0/0x0, omap 0x40f3c, meta 0x4ecf0c4), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,26])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 147529728 unmapped: 11149312 heap: 158679040 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 275 ms_handle_reset con 0x55f5122b6c00 session 0x55f511f44fc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 275 ms_handle_reset con 0x55f51530e000 session 0x55f511f45880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 137986048 unmapped: 20692992 heap: 158679040 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 275 heartbeat osd_stat(store_statfs(0x4f8f94000/0x0/0x4ffc00000, data 0x1bcbaa1/0x1d54000, compress 0x0/0x0/0x0, omap 0x41376, meta 0x4ecec8a), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 275 ms_handle_reset con 0x55f513044400 session 0x55f5131341c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 138133504 unmapped: 20545536 heap: 158679040 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 275 ms_handle_reset con 0x55f511a04400 session 0x55f5122db340
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 138133504 unmapped: 20545536 heap: 158679040 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2047148 data_alloc: 234881024 data_used: 9371724
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 275 ms_handle_reset con 0x55f5122b6c00 session 0x55f512237180
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 275 ms_handle_reset con 0x55f515336c00 session 0x55f512c1ae00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 275 ms_handle_reset con 0x55f5152ec000 session 0x55f512361340
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 5.044039726s of 10.509450912s, submitted: 116
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 136716288 unmapped: 21962752 heap: 158679040 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 136716288 unmapped: 21962752 heap: 158679040 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 275 heartbeat osd_stat(store_statfs(0x4f8f96000/0x0/0x4ffc00000, data 0x1bcbb0d/0x1d56000, compress 0x0/0x0/0x0, omap 0x41322, meta 0x4ececde), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 275 handle_osd_map epochs [276,276], i have 275, src has [1,276]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 276 ms_handle_reset con 0x55f51530e000 session 0x55f511f44000
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 276 heartbeat osd_stat(store_statfs(0x4f8f6d000/0x0/0x4ffc00000, data 0x1bf15ff/0x1d7d000, compress 0x0/0x0/0x0, omap 0x414a8, meta 0x4eceb58), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 137019392 unmapped: 21659648 heap: 158679040 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 276 heartbeat osd_stat(store_statfs(0x4f8f6d000/0x0/0x4ffc00000, data 0x1bf15ff/0x1d7d000, compress 0x0/0x0/0x0, omap 0x414a8, meta 0x4eceb58), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 137019392 unmapped: 21659648 heap: 158679040 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 143884288 unmapped: 14794752 heap: 158679040 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2116501 data_alloc: 234881024 data_used: 18846174
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 143884288 unmapped: 14794752 heap: 158679040 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 143884288 unmapped: 14794752 heap: 158679040 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 276 handle_osd_map epochs [277,277], i have 276, src has [1,277]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 143892480 unmapped: 14786560 heap: 158679040 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 277 heartbeat osd_stat(store_statfs(0x4f8f6a000/0x0/0x4ffc00000, data 0x1bf30d5/0x1d80000, compress 0x0/0x0/0x0, omap 0x41973, meta 0x4ece68d), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 143925248 unmapped: 14753792 heap: 158679040 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 277 ms_handle_reset con 0x55f5152ec000 session 0x55f512333880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 143925248 unmapped: 14753792 heap: 158679040 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2118495 data_alloc: 234881024 data_used: 18850270
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 277 ms_handle_reset con 0x55f515336c00 session 0x55f5130cd180
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 277 heartbeat osd_stat(store_statfs(0x4f8f6c000/0x0/0x4ffc00000, data 0x1bf30d5/0x1d80000, compress 0x0/0x0/0x0, omap 0x419ba, meta 0x4ece646), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 143925248 unmapped: 14753792 heap: 158679040 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.454031944s of 10.749198914s, submitted: 37
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 277 ms_handle_reset con 0x55f514a2b000 session 0x55f5130e7340
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 143925248 unmapped: 14753792 heap: 158679040 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 143925248 unmapped: 14753792 heap: 158679040 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 277 ms_handle_reset con 0x55f5124c2400 session 0x55f5130e7c00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 277 heartbeat osd_stat(store_statfs(0x4f8f6b000/0x0/0x4ffc00000, data 0x1bf3177/0x1d81000, compress 0x0/0x0/0x0, omap 0x4191f, meta 0x4ece6e1), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 144056320 unmapped: 14622720 heap: 158679040 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 277 ms_handle_reset con 0x55f514a89000 session 0x55f5142c0000
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 277 ms_handle_reset con 0x55f5124c2400 session 0x55f511f60c40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 277 ms_handle_reset con 0x55f514a2b000 session 0x55f512c588c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 277 ms_handle_reset con 0x55f5152ec000 session 0x55f5142c0fc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 149733376 unmapped: 8945664 heap: 158679040 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2188295 data_alloc: 234881024 data_used: 20083438
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 277 ms_handle_reset con 0x55f5124c3800 session 0x55f5130cce00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 147939328 unmapped: 10739712 heap: 158679040 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 277 handle_osd_map epochs [278,278], i have 277, src has [1,278]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 278 ms_handle_reset con 0x55f515336c00 session 0x55f514d4f180
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 278 ms_handle_reset con 0x55f5165bc400 session 0x55f5130cd500
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 278 ms_handle_reset con 0x55f514a30000 session 0x55f512c65180
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 278 heartbeat osd_stat(store_statfs(0x4f856b000/0x0/0x4ffc00000, data 0x25efd6a/0x277f000, compress 0x0/0x0/0x0, omap 0x4210d, meta 0x4ecdef3), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 278 ms_handle_reset con 0x55f515336c00 session 0x55f511f54e00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 141869056 unmapped: 16809984 heap: 158679040 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 278 ms_handle_reset con 0x55f5124c2400 session 0x55f5130ccfc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 278 ms_handle_reset con 0x55f5124c3800 session 0x55f515268700
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 141877248 unmapped: 16801792 heap: 158679040 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 278 ms_handle_reset con 0x55f5124c3800 session 0x55f5130e68c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 278 ms_handle_reset con 0x55f5124c2400 session 0x55f511f61340
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 141877248 unmapped: 16801792 heap: 158679040 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 278 ms_handle_reset con 0x55f514a30000 session 0x55f5130cd880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 141959168 unmapped: 16719872 heap: 158679040 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2048103 data_alloc: 234881024 data_used: 11539084
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 278 heartbeat osd_stat(store_statfs(0x4f9518000/0x0/0x4ffc00000, data 0x1645d08/0x17d4000, compress 0x0/0x0/0x0, omap 0x42558, meta 0x4ecdaa8), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 278 ms_handle_reset con 0x55f515336c00 session 0x55f5142c1c00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 278 ms_handle_reset con 0x55f5165bc400 session 0x55f5142c16c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 141967360 unmapped: 16711680 heap: 158679040 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 278 heartbeat osd_stat(store_statfs(0x4f9518000/0x0/0x4ffc00000, data 0x1645c76/0x17d4000, compress 0x0/0x0/0x0, omap 0x42674, meta 0x4ecd98c), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 278 handle_osd_map epochs [279,279], i have 278, src has [1,279]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 279 ms_handle_reset con 0x55f5124c3800 session 0x55f5130cdc00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 279 ms_handle_reset con 0x55f5124c2400 session 0x55f5142c0c40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 142114816 unmapped: 16564224 heap: 158679040 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.130739212s of 10.906652451s, submitted: 247
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 279 ms_handle_reset con 0x55f514a30000 session 0x55f512278fc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 279 ms_handle_reset con 0x55f515336c00 session 0x55f514d4f6c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 142123008 unmapped: 16556032 heap: 158679040 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 279 ms_handle_reset con 0x55f514a2b000 session 0x55f5142c16c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 279 ms_handle_reset con 0x55f514a2b000 session 0x55f5130e68c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 142123008 unmapped: 16556032 heap: 158679040 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 142123008 unmapped: 16556032 heap: 158679040 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2052022 data_alloc: 234881024 data_used: 11539100
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 279 heartbeat osd_stat(store_statfs(0x4f94f3000/0x0/0x4ffc00000, data 0x1668859/0x17f7000, compress 0x0/0x0/0x0, omap 0x433af, meta 0x4eccc51), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 142123008 unmapped: 16556032 heap: 158679040 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 142655488 unmapped: 16023552 heap: 158679040 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 142655488 unmapped: 16023552 heap: 158679040 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 279 ms_handle_reset con 0x55f5124c2400 session 0x55f5142c0fc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 142655488 unmapped: 16023552 heap: 158679040 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 279 handle_osd_map epochs [280,280], i have 279, src has [1,280]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 280 ms_handle_reset con 0x55f5124c3800 session 0x55f516add500
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 142655488 unmapped: 16023552 heap: 158679040 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2068996 data_alloc: 234881024 data_used: 11551388
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 142655488 unmapped: 16023552 heap: 158679040 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 280 heartbeat osd_stat(store_statfs(0x4f92d5000/0x0/0x4ffc00000, data 0x18836ae/0x1a15000, compress 0x0/0x0/0x0, omap 0x434a1, meta 0x4eccb5f), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 280 ms_handle_reset con 0x55f514a30000 session 0x55f516adc8c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 280 ms_handle_reset con 0x55f515336c00 session 0x55f512435180
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 280 handle_osd_map epochs [281,281], i have 280, src has [1,281]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 142663680 unmapped: 16015360 heap: 158679040 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 281 ms_handle_reset con 0x55f515336c00 session 0x55f514a70000
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.795026779s of 10.238007545s, submitted: 64
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 160071680 unmapped: 11845632 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 281 ms_handle_reset con 0x55f5124c3800 session 0x55f5123616c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 281 ms_handle_reset con 0x55f514a2b000 session 0x55f516adc1c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 281 ms_handle_reset con 0x55f5124c2400 session 0x55f514a71c00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 153698304 unmapped: 18219008 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 281 ms_handle_reset con 0x55f514a30000 session 0x55f514a71340
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 281 handle_osd_map epochs [282,282], i have 281, src has [1,282]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 282 ms_handle_reset con 0x55f5124c3800 session 0x55f514d4fa40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 152715264 unmapped: 19202048 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 282 handle_osd_map epochs [283,283], i have 282, src has [1,283]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 283 ms_handle_reset con 0x55f5124c2400 session 0x55f511f55500
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2192985 data_alloc: 234881024 data_used: 15553196
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 283 heartbeat osd_stat(store_statfs(0x4f8506000/0x0/0x4ffc00000, data 0x264c01e/0x27e4000, compress 0x0/0x0/0x0, omap 0x43c7e, meta 0x4ecc382), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 283 ms_handle_reset con 0x55f514a2b000 session 0x55f516adddc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 283 ms_handle_reset con 0x55f515336c00 session 0x55f512236a80
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 283 ms_handle_reset con 0x55f5169ab000 session 0x55f512333dc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 283 ms_handle_reset con 0x55f5169aac00 session 0x55f512c7c700
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 283 ms_handle_reset con 0x55f5152ec000 session 0x55f512c7d500
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 283 ms_handle_reset con 0x55f5124c3800 session 0x55f512c7cfc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 283 ms_handle_reset con 0x55f514a2b000 session 0x55f516add880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 152838144 unmapped: 19079168 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 283 ms_handle_reset con 0x55f515336c00 session 0x55f512c7d880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 283 ms_handle_reset con 0x55f5124c2400 session 0x55f512237dc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 149372928 unmapped: 22544384 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 283 ms_handle_reset con 0x55f5124c3800 session 0x55f516addc00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 283 ms_handle_reset con 0x55f515336c00 session 0x55f519482000
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 283 ms_handle_reset con 0x55f514a2b000 session 0x55f519482fc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 283 ms_handle_reset con 0x55f5152ec000 session 0x55f512435dc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 283 ms_handle_reset con 0x55f5169aac00 session 0x55f516adcc40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 149413888 unmapped: 22503424 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 283 ms_handle_reset con 0x55f5124c2400 session 0x55f512c58000
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 283 ms_handle_reset con 0x55f514a2b000 session 0x55f516adc700
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 283 ms_handle_reset con 0x55f5124c3800 session 0x55f512c1a700
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 149413888 unmapped: 22503424 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 149413888 unmapped: 22503424 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 283 heartbeat osd_stat(store_statfs(0x4f83f1000/0x0/0x4ffc00000, data 0x264dbdb/0x27e7000, compress 0x0/0x0/0x0, omap 0x443df, meta 0x4ecbc21), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2190328 data_alloc: 234881024 data_used: 15553294
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 283 ms_handle_reset con 0x55f5169aa400 session 0x55f514a70a80
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 283 ms_handle_reset con 0x55f5124c2400 session 0x55f512c1a700
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 283 ms_handle_reset con 0x55f5124c3800 session 0x55f516adcc40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 149413888 unmapped: 22503424 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 283 ms_handle_reset con 0x55f51530f800 session 0x55f519482fc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 283 handle_osd_map epochs [284,284], i have 283, src has [1,284]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 284 ms_handle_reset con 0x55f515336c00 session 0x55f514a708c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 284 ms_handle_reset con 0x55f5169ab000 session 0x55f512434000
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 284 ms_handle_reset con 0x55f514a2b000 session 0x55f516add880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 284 ms_handle_reset con 0x55f5124c2400 session 0x55f512c7d500
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 149413888 unmapped: 22503424 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 284 ms_handle_reset con 0x55f5124c3800 session 0x55f516adddc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 284 ms_handle_reset con 0x55f51530f800 session 0x55f514a71c00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 284 heartbeat osd_stat(store_statfs(0x4f84ff000/0x0/0x4ffc00000, data 0x264f7fa/0x27eb000, compress 0x0/0x0/0x0, omap 0x44561, meta 0x4ecba9f), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 284 ms_handle_reset con 0x55f5169aa800 session 0x55f512c7c540
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 149422080 unmapped: 22495232 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 284 handle_osd_map epochs [284,285], i have 284, src has [1,285]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.588574409s of 10.848504066s, submitted: 133
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 285 ms_handle_reset con 0x55f51530f800 session 0x55f514a70000
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 149430272 unmapped: 22487040 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 285 ms_handle_reset con 0x55f5124c2800 session 0x55f516add500
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 285 ms_handle_reset con 0x55f5124c2000 session 0x55f5130e68c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 149430272 unmapped: 22487040 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2202986 data_alloc: 234881024 data_used: 15553310
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 149430272 unmapped: 22487040 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 285 ms_handle_reset con 0x55f5124c3800 session 0x55f512435c00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 285 ms_handle_reset con 0x55f5124c2400 session 0x55f512278380
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 285 heartbeat osd_stat(store_statfs(0x4f84fa000/0x0/0x4ffc00000, data 0x26514cd/0x27f0000, compress 0x0/0x0/0x0, omap 0x458a1, meta 0x4eca75f), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 285 ms_handle_reset con 0x55f5124c2000 session 0x55f519482e00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 285 ms_handle_reset con 0x55f5124c2400 session 0x55f512435dc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 147619840 unmapped: 24297472 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 285 handle_osd_map epochs [286,286], i have 285, src has [1,286]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 286 ms_handle_reset con 0x55f514a2b000 session 0x55f5123616c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 286 ms_handle_reset con 0x55f5124c2800 session 0x55f519468fc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 286 ms_handle_reset con 0x55f5124c3800 session 0x55f519469500
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 147644416 unmapped: 24272896 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 286 ms_handle_reset con 0x55f5124c2400 session 0x55f5130e6700
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 286 ms_handle_reset con 0x55f5124c2800 session 0x55f519469c00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 286 ms_handle_reset con 0x55f514a2b000 session 0x55f511f54fc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 286 ms_handle_reset con 0x55f51530f800 session 0x55f514a71500
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 150233088 unmapped: 21684224 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 151445504 unmapped: 20471808 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2253310 data_alloc: 234881024 data_used: 23418240
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 151445504 unmapped: 20471808 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 286 heartbeat osd_stat(store_statfs(0x4f84d8000/0x0/0x4ffc00000, data 0x267705c/0x2814000, compress 0x0/0x0/0x0, omap 0x45e05, meta 0x4eca1fb), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 286 handle_osd_map epochs [287,287], i have 286, src has [1,287]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 151453696 unmapped: 20463616 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 287 ms_handle_reset con 0x55f515336c00 session 0x55f5152c4fc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 287 ms_handle_reset con 0x55f5124c2400 session 0x55f514350c40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 151601152 unmapped: 20316160 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.847242355s of 10.311746597s, submitted: 82
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 287 ms_handle_reset con 0x55f5124c2800 session 0x55f5147181c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 287 heartbeat osd_stat(store_statfs(0x4f84d3000/0x0/0x4ffc00000, data 0x2678b4e/0x2817000, compress 0x0/0x0/0x0, omap 0x46301, meta 0x4ec9cff), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 151617536 unmapped: 20299776 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 151617536 unmapped: 20299776 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2256932 data_alloc: 234881024 data_used: 23418853
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 151617536 unmapped: 20299776 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 287 ms_handle_reset con 0x55f514a2b000 session 0x55f5147196c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 287 ms_handle_reset con 0x55f51530f800 session 0x55f519468540
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 287 ms_handle_reset con 0x55f5169aac00 session 0x55f514718000
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 287 handle_osd_map epochs [288,288], i have 287, src has [1,288]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 155009024 unmapped: 16908288 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 288 ms_handle_reset con 0x55f5124c2400 session 0x55f5152c41c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 288 ms_handle_reset con 0x55f5124c2800 session 0x55f516adc8c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 288 heartbeat osd_stat(store_statfs(0x4f84d4000/0x0/0x4ffc00000, data 0x2678b77/0x2818000, compress 0x0/0x0/0x0, omap 0x467b8, meta 0x4ec9848), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 151863296 unmapped: 20054016 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 288 ms_handle_reset con 0x55f514a2b000 session 0x55f514719180
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 155066368 unmapped: 16850944 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 288 heartbeat osd_stat(store_statfs(0x4f7dd0000/0x0/0x4ffc00000, data 0x2d7a696/0x2f1c000, compress 0x0/0x0/0x0, omap 0x469c8, meta 0x4ec9638), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 288 ms_handle_reset con 0x55f51530f800 session 0x55f5152c56c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 155484160 unmapped: 16433152 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 288 ms_handle_reset con 0x55f5169ab400 session 0x55f514bfb180
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2342438 data_alloc: 234881024 data_used: 24053493
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 288 ms_handle_reset con 0x55f5124c2400 session 0x55f5194681c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 288 ms_handle_reset con 0x55f5124c2800 session 0x55f5194688c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 288 ms_handle_reset con 0x55f514a2b000 session 0x55f519483a40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 288 ms_handle_reset con 0x55f51530f800 session 0x55f5152c4700
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 159506432 unmapped: 12410880 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 288 ms_handle_reset con 0x55f515293400 session 0x55f519483500
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 288 ms_handle_reset con 0x55f514a8ec00 session 0x55f5152c5c00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 288 ms_handle_reset con 0x55f5124c2800 session 0x55f515345180
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 288 ms_handle_reset con 0x55f514a2b000 session 0x55f512c1b180
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 288 ms_handle_reset con 0x55f51530f800 session 0x55f511f55dc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 288 heartbeat osd_stat(store_statfs(0x4f7a0e000/0x0/0x4ffc00000, data 0x313c686/0x32dd000, compress 0x0/0x0/0x0, omap 0x46a56, meta 0x4ec95aa), peers [0,2] op hist [1])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 288 ms_handle_reset con 0x55f515293c00 session 0x55f511eaba40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 288 ms_handle_reset con 0x55f5124c2400 session 0x55f51226f880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 288 heartbeat osd_stat(store_statfs(0x4f77fc000/0x0/0x4ffc00000, data 0x334e634/0x34ef000, compress 0x0/0x0/0x0, omap 0x46fdf, meta 0x4ec9021), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 153657344 unmapped: 18259968 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 153657344 unmapped: 18259968 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 288 heartbeat osd_stat(store_statfs(0x4f77fc000/0x0/0x4ffc00000, data 0x334e634/0x34ef000, compress 0x0/0x0/0x0, omap 0x46fdf, meta 0x4ec9021), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 288 ms_handle_reset con 0x55f5124c2800 session 0x55f515353880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 288 heartbeat osd_stat(store_statfs(0x4f77fc000/0x0/0x4ffc00000, data 0x334e634/0x34ef000, compress 0x0/0x0/0x0, omap 0x46fdf, meta 0x4ec9021), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 153223168 unmapped: 18694144 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.814215660s of 10.594959259s, submitted: 127
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 163708928 unmapped: 8208384 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2426014 data_alloc: 251658240 data_used: 32621731
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 163708928 unmapped: 8208384 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 288 ms_handle_reset con 0x55f514a2b000 session 0x55f51475b6c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 288 ms_handle_reset con 0x55f514a8ec00 session 0x55f5143c9a40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 160415744 unmapped: 11501568 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 288 ms_handle_reset con 0x55f51530f800 session 0x55f514a99dc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 160022528 unmapped: 11894784 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 160022528 unmapped: 11894784 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 288 heartbeat osd_stat(store_statfs(0x4f80a3000/0x0/0x4ffc00000, data 0x2aa9624/0x2c49000, compress 0x0/0x0/0x0, omap 0x47712, meta 0x4ec88ee), peers [0,2] op hist [0,1])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 288 ms_handle_reset con 0x55f5124c2400 session 0x55f515269dc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 160022528 unmapped: 11894784 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 288 handle_osd_map epochs [289,289], i have 288, src has [1,289]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2322572 data_alloc: 234881024 data_used: 24426643
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 289 ms_handle_reset con 0x55f51530f800 session 0x55f51475b880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 160022528 unmapped: 11894784 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 289 ms_handle_reset con 0x55f5124c2800 session 0x55f514a70000
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 160022528 unmapped: 11894784 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 289 handle_osd_map epochs [290,290], i have 289, src has [1,290]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 290 ms_handle_reset con 0x55f51515cc00 session 0x55f511eab6c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 290 ms_handle_reset con 0x55f514a2b000 session 0x55f515352000
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 161218560 unmapped: 10698752 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 290 handle_osd_map epochs [290,291], i have 290, src has [1,291]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 291 ms_handle_reset con 0x55f514a2b000 session 0x55f511eaa540
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 161218560 unmapped: 10698752 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.613375664s of 10.003494263s, submitted: 98
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 291 heartbeat osd_stat(store_statfs(0x4f8093000/0x0/0x4ffc00000, data 0x2ab0a6d/0x2c54000, compress 0x0/0x0/0x0, omap 0x4838d, meta 0x4ec7c73), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 160890880 unmapped: 11026432 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2332260 data_alloc: 234881024 data_used: 24426643
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 291 handle_osd_map epochs [291,292], i have 291, src has [1,292]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 291 handle_osd_map epochs [292,292], i have 292, src has [1,292]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 292 ms_handle_reset con 0x55f5124c2400 session 0x55f5123616c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 160956416 unmapped: 10960896 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 292 heartbeat osd_stat(store_statfs(0x4f8093000/0x0/0x4ffc00000, data 0x2ab26b4/0x2c57000, compress 0x0/0x0/0x0, omap 0x48815, meta 0x4ec77eb), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 292 ms_handle_reset con 0x55f5124c2800 session 0x55f5152c5dc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 160989184 unmapped: 10928128 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 292 handle_osd_map epochs [293,293], i have 292, src has [1,293]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 293 ms_handle_reset con 0x55f51530f800 session 0x55f515269500
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 161128448 unmapped: 10788864 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 293 heartbeat osd_stat(store_statfs(0x4f808e000/0x0/0x4ffc00000, data 0x2ab41a6/0x2c5a000, compress 0x0/0x0/0x0, omap 0x48d8c, meta 0x4ec7274), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 161161216 unmapped: 10756096 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 293 ms_handle_reset con 0x55f51515cc00 session 0x55f514d4f6c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 293 ms_handle_reset con 0x55f515159400 session 0x55f515268c40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 293 ms_handle_reset con 0x55f51515cc00 session 0x55f515345880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 161169408 unmapped: 10747904 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2335117 data_alloc: 234881024 data_used: 24426643
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 293 ms_handle_reset con 0x55f5124c2400 session 0x55f511f55340
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 293 heartbeat osd_stat(store_statfs(0x4f8092000/0x0/0x4ffc00000, data 0x2ab41a6/0x2c5a000, compress 0x0/0x0/0x0, omap 0x4911a, meta 0x4ec6ee6), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 161185792 unmapped: 10731520 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 293 ms_handle_reset con 0x55f5124c2800 session 0x55f51475aa80
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 293 ms_handle_reset con 0x55f514a2b000 session 0x55f511e948c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 293 handle_osd_map epochs [294,294], i have 293, src has [1,294]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 293 handle_osd_map epochs [293,294], i have 294, src has [1,294]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 160456704 unmapped: 11460608 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 160456704 unmapped: 11460608 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 160456704 unmapped: 11460608 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 294 ms_handle_reset con 0x55f514a2b000 session 0x55f515345880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 160456704 unmapped: 11460608 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 294 heartbeat osd_stat(store_statfs(0x4f808d000/0x0/0x4ffc00000, data 0x2ab5cee/0x2c5f000, compress 0x0/0x0/0x0, omap 0x492d9, meta 0x4ec6d27), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2341426 data_alloc: 234881024 data_used: 24426643
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.639679909s of 11.488092422s, submitted: 123
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 160456704 unmapped: 11460608 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 294 ms_handle_reset con 0x55f5124c2400 session 0x55f514719180
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 160481280 unmapped: 11436032 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 294 ms_handle_reset con 0x55f5152ec400 session 0x55f511eaae00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 294 handle_osd_map epochs [295,295], i have 294, src has [1,295]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 295 ms_handle_reset con 0x55f5152ec000 session 0x55f51487c8c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 162004992 unmapped: 9912320 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 295 handle_osd_map epochs [295,296], i have 295, src has [1,296]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 296 ms_handle_reset con 0x55f514a31c00 session 0x55f5194688c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 296 ms_handle_reset con 0x55f5152ec800 session 0x55f5152c4700
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 296 ms_handle_reset con 0x55f514a31c00 session 0x55f519483880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 296 ms_handle_reset con 0x55f515158800 session 0x55f5152c4540
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 162013184 unmapped: 9904128 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 296 heartbeat osd_stat(store_statfs(0x4f8080000/0x0/0x4ffc00000, data 0x2ab96fd/0x2c68000, compress 0x0/0x0/0x0, omap 0x49ccb, meta 0x4ec6335), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 296 handle_osd_map epochs [297,297], i have 296, src has [1,297]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 297 ms_handle_reset con 0x55f514a2b000 session 0x55f5152c5a40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 297 ms_handle_reset con 0x55f5152ec000 session 0x55f5147c7a40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 297 ms_handle_reset con 0x55f514a30400 session 0x55f511e95dc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 297 ms_handle_reset con 0x55f5124c2400 session 0x55f51487da40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 297 heartbeat osd_stat(store_statfs(0x4f8079000/0x0/0x4ffc00000, data 0x2abc890/0x2c6d000, compress 0x0/0x0/0x0, omap 0x4a176, meta 0x4ec5e8a), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 162136064 unmapped: 9781248 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2366090 data_alloc: 234881024 data_used: 26576743
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 297 ms_handle_reset con 0x55f514a2b000 session 0x55f515352380
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 297 ms_handle_reset con 0x55f5124c3800 session 0x55f516adc1c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 297 ms_handle_reset con 0x55f5124c2000 session 0x55f512279880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 297 ms_handle_reset con 0x55f514a31c00 session 0x55f51318e700
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 297 ms_handle_reset con 0x55f5124c2400 session 0x55f515268a80
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 297 ms_handle_reset con 0x55f514a30400 session 0x55f512c58a80
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 162078720 unmapped: 9838592 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 297 ms_handle_reset con 0x55f515158800 session 0x55f511f44700
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 297 ms_handle_reset con 0x55f514a2b000 session 0x55f514ed5500
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 297 heartbeat osd_stat(store_statfs(0x4f807f000/0x0/0x4ffc00000, data 0x2abc83e/0x2c6d000, compress 0x0/0x0/0x0, omap 0x4a202, meta 0x4ec5dfe), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 297 ms_handle_reset con 0x55f514a2b000 session 0x55f5147c6a80
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 162103296 unmapped: 9814016 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 297 handle_osd_map epochs [298,298], i have 297, src has [1,298]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 298 ms_handle_reset con 0x55f514a31c00 session 0x55f5143516c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 298 ms_handle_reset con 0x55f5124c2400 session 0x55f512435dc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 298 ms_handle_reset con 0x55f515158800 session 0x55f5147c68c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 162127872 unmapped: 9789440 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 298 ms_handle_reset con 0x55f5152ec800 session 0x55f512332a80
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 298 ms_handle_reset con 0x55f5124c2400 session 0x55f515352540
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 298 handle_osd_map epochs [298,299], i have 298, src has [1,299]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 299 ms_handle_reset con 0x55f514a31c00 session 0x55f519483500
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 299 ms_handle_reset con 0x55f515158800 session 0x55f515268fc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 299 ms_handle_reset con 0x55f5152ec400 session 0x55f5122376c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 162209792 unmapped: 9707520 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 299 ms_handle_reset con 0x55f514a30000 session 0x55f512434000
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 299 ms_handle_reset con 0x55f514a30000 session 0x55f51475bc00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 299 handle_osd_map epochs [300,300], i have 299, src has [1,300]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 300 ms_handle_reset con 0x55f514a2b000 session 0x55f512c1b180
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 300 ms_handle_reset con 0x55f514a30400 session 0x55f5130e68c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 300 heartbeat osd_stat(store_statfs(0x4f84c7000/0x0/0x4ffc00000, data 0x266b80b/0x2820000, compress 0x0/0x0/0x0, omap 0x4b400, meta 0x4ec4c00), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 162234368 unmapped: 9682944 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2345512 data_alloc: 234881024 data_used: 26197977
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 300 handle_osd_map epochs [301,301], i have 300, src has [1,301]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 301 ms_handle_reset con 0x55f515158800 session 0x55f51487c540
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.864103317s of 10.000426292s, submitted: 232
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 301 ms_handle_reset con 0x55f5152ec400 session 0x55f516adddc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 301 ms_handle_reset con 0x55f514a31c00 session 0x55f5147c68c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 301 ms_handle_reset con 0x55f5124c2400 session 0x55f51487c380
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 156344320 unmapped: 15572992 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 301 ms_handle_reset con 0x55f5152ec400 session 0x55f519482fc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 301 handle_osd_map epochs [301,302], i have 301, src has [1,302]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 302 ms_handle_reset con 0x55f514a2b000 session 0x55f5153448c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 302 handle_osd_map epochs [303,303], i have 302, src has [1,303]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 303 ms_handle_reset con 0x55f514a30000 session 0x55f515352540
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 303 ms_handle_reset con 0x55f514a30400 session 0x55f5152c5c00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 156409856 unmapped: 15507456 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 303 ms_handle_reset con 0x55f5124c2400 session 0x55f51487ddc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 156409856 unmapped: 15507456 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 303 ms_handle_reset con 0x55f514a2b000 session 0x55f512435180
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 303 ms_handle_reset con 0x55f514a31c00 session 0x55f515353880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 303 heartbeat osd_stat(store_statfs(0x4f9286000/0x0/0x4ffc00000, data 0x18aec5e/0x1a62000, compress 0x0/0x0/0x0, omap 0x4c287, meta 0x4ec3d79), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 303 ms_handle_reset con 0x55f5122b6c00 session 0x55f511f54c40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 303 ms_handle_reset con 0x55f511a04400 session 0x55f514718380
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 303 ms_handle_reset con 0x55f514a2b000 session 0x55f512c59880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 156409856 unmapped: 15507456 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 303 ms_handle_reset con 0x55f514a30400 session 0x55f5147c6700
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 303 handle_osd_map epochs [304,304], i have 303, src has [1,304]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 143679488 unmapped: 28237824 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 304 ms_handle_reset con 0x55f5152ec400 session 0x55f51226e700
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2042137 data_alloc: 218103808 data_used: 2173176
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 304 handle_osd_map epochs [304,305], i have 304, src has [1,305]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 305 ms_handle_reset con 0x55f515158800 session 0x55f51487d6c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 305 ms_handle_reset con 0x55f514a31c00 session 0x55f511eaac40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 143736832 unmapped: 28180480 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 305 ms_handle_reset con 0x55f514a2b000 session 0x55f515268c40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 305 ms_handle_reset con 0x55f5124c2400 session 0x55f511eab500
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 305 ms_handle_reset con 0x55f511a04400 session 0x55f512c59340
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 305 heartbeat osd_stat(store_statfs(0x4fa2fc000/0x0/0x4ffc00000, data 0x835b47/0x9ea000, compress 0x0/0x0/0x0, omap 0x4cc87, meta 0x4ec3379), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 305 ms_handle_reset con 0x55f514a30400 session 0x55f514a98fc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 143761408 unmapped: 28155904 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 305 ms_handle_reset con 0x55f511a04400 session 0x55f511eab880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 305 ms_handle_reset con 0x55f514a2b000 session 0x55f516add880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 141680640 unmapped: 30236672 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 305 handle_osd_map epochs [306,306], i have 305, src has [1,306]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 306 ms_handle_reset con 0x55f5152ec400 session 0x55f5152696c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 306 handle_osd_map epochs [307,307], i have 306, src has [1,307]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 307 ms_handle_reset con 0x55f514a31c00 session 0x55f515353dc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 141688832 unmapped: 30228480 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 307 ms_handle_reset con 0x55f5124c2400 session 0x55f5130e6700
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 141688832 unmapped: 30228480 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2036985 data_alloc: 218103808 data_used: 80344
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 307 ms_handle_reset con 0x55f511a04400 session 0x55f5147c6380
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 141688832 unmapped: 30228480 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.193322182s of 10.835261345s, submitted: 136
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 307 ms_handle_reset con 0x55f514a31c00 session 0x55f519469500
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 307 ms_handle_reset con 0x55f5152ec400 session 0x55f51487d880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 307 handle_osd_map epochs [308,308], i have 307, src has [1,308]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 308 ms_handle_reset con 0x55f5165bd800 session 0x55f512236540
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 308 ms_handle_reset con 0x55f5165bc400 session 0x55f515353a40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 141778944 unmapped: 30138368 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 308 heartbeat osd_stat(store_statfs(0x4fa4fe000/0x0/0x4ffc00000, data 0x6381f5/0x7ee000, compress 0x0/0x0/0x0, omap 0x4d5a6, meta 0x4ec2a5a), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 308 ms_handle_reset con 0x55f511a04400 session 0x55f511eaa8c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 308 ms_handle_reset con 0x55f514a31c00 session 0x55f515352a80
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 308 ms_handle_reset con 0x55f5152ec400 session 0x55f515353c00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 308 handle_osd_map epochs [309,309], i have 308, src has [1,309]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 309 ms_handle_reset con 0x55f512259800 session 0x55f515032700
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 309 ms_handle_reset con 0x55f515850800 session 0x55f5152696c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 309 ms_handle_reset con 0x55f5165bd800 session 0x55f512237880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 309 ms_handle_reset con 0x55f514a2b000 session 0x55f511e948c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 140722176 unmapped: 31195136 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 309 ms_handle_reset con 0x55f515850800 session 0x55f514719180
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 140730368 unmapped: 31186944 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 309 ms_handle_reset con 0x55f511a04400 session 0x55f5147c7180
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 309 ms_handle_reset con 0x55f514a31c00 session 0x55f5147c6380
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 309 handle_osd_map epochs [310,310], i have 309, src has [1,310]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 309 handle_osd_map epochs [309,310], i have 310, src has [1,310]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 310 ms_handle_reset con 0x55f514a2b000 session 0x55f516d29880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 140746752 unmapped: 31170560 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2055903 data_alloc: 218103808 data_used: 80344
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 310 ms_handle_reset con 0x55f515850800 session 0x55f511f55dc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 310 ms_handle_reset con 0x55f5165bd800 session 0x55f5143c9a40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 310 handle_osd_map epochs [311,311], i have 310, src has [1,311]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 311 ms_handle_reset con 0x55f511a04400 session 0x55f515268a80
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 140754944 unmapped: 31162368 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 311 ms_handle_reset con 0x55f5152ec400 session 0x55f5130e7dc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 311 ms_handle_reset con 0x55f512259800 session 0x55f51487ce00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 311 ms_handle_reset con 0x55f511a04400 session 0x55f514a99a40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 311 ms_handle_reset con 0x55f514a2b000 session 0x55f5130e6fc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 311 handle_osd_map epochs [312,312], i have 311, src has [1,312]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 312 ms_handle_reset con 0x55f5152ec400 session 0x55f51475b880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 312 ms_handle_reset con 0x55f515850800 session 0x55f51487d500
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 140804096 unmapped: 31113216 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 312 handle_osd_map epochs [313,313], i have 312, src has [1,313]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 313 ms_handle_reset con 0x55f511a04400 session 0x55f514a996c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 313 ms_handle_reset con 0x55f512259800 session 0x55f519468e00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 313 ms_handle_reset con 0x55f5152ec400 session 0x55f515353500
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 313 heartbeat osd_stat(store_statfs(0x4fa4ea000/0x0/0x4ffc00000, data 0x641423/0x7ff000, compress 0x0/0x0/0x0, omap 0x4f065, meta 0x4ec0f9b), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 140820480 unmapped: 31096832 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 313 ms_handle_reset con 0x55f514a2b000 session 0x55f515344540
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 313 heartbeat osd_stat(store_statfs(0x4fa4ea000/0x0/0x4ffc00000, data 0x641423/0x7ff000, compress 0x0/0x0/0x0, omap 0x4f065, meta 0x4ec0f9b), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 140820480 unmapped: 31096832 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 313 ms_handle_reset con 0x55f5165bd800 session 0x55f512c58380
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 313 ms_handle_reset con 0x55f511a04400 session 0x55f515268000
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 313 handle_osd_map epochs [314,314], i have 313, src has [1,314]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 140828672 unmapped: 31088640 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 314 ms_handle_reset con 0x55f515850800 session 0x55f511f44a80
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2064373 data_alloc: 218103808 data_used: 81029
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 140828672 unmapped: 31088640 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 314 heartbeat osd_stat(store_statfs(0x4fa4e8000/0x0/0x4ffc00000, data 0x64474e/0x802000, compress 0x0/0x0/0x0, omap 0x4fc66, meta 0x4ec039a), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 314 handle_osd_map epochs [314,315], i have 314, src has [1,315]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.748184204s of 10.167839050s, submitted: 206
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 315 heartbeat osd_stat(store_statfs(0x4fa4e8000/0x0/0x4ffc00000, data 0x64474e/0x802000, compress 0x0/0x0/0x0, omap 0x4fc66, meta 0x4ec039a), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 140828672 unmapped: 31088640 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 315 ms_handle_reset con 0x55f512259800 session 0x55f514bfa540
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 315 ms_handle_reset con 0x55f514a2b000 session 0x55f515353dc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 140828672 unmapped: 31088640 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 315 handle_osd_map epochs [316,316], i have 315, src has [1,316]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 140845056 unmapped: 31072256 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 316 heartbeat osd_stat(store_statfs(0x4fa4e5000/0x0/0x4ffc00000, data 0x647a7b/0x805000, compress 0x0/0x0/0x0, omap 0x4fe71, meta 0x4ec018f), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 316 handle_osd_map epochs [317,317], i have 316, src has [1,317]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 140853248 unmapped: 31064064 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2071443 data_alloc: 218103808 data_used: 82129
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 317 handle_osd_map epochs [317,318], i have 317, src has [1,318]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 140861440 unmapped: 31055872 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 318 handle_osd_map epochs [319,319], i have 318, src has [1,319]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 319 ms_handle_reset con 0x55f5152ec400 session 0x55f514a98a80
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 140869632 unmapped: 31047680 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 319 ms_handle_reset con 0x55f511a04400 session 0x55f512361880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 319 handle_osd_map epochs [320,320], i have 319, src has [1,320]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 320 ms_handle_reset con 0x55f512259800 session 0x55f5153521c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 141942784 unmapped: 29974528 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 320 ms_handle_reset con 0x55f514a2b000 session 0x55f515269180
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 141959168 unmapped: 29958144 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 320 heartbeat osd_stat(store_statfs(0x4fa4de000/0x0/0x4ffc00000, data 0x64eb33/0x80e000, compress 0x0/0x0/0x0, omap 0x50c5a, meta 0x4ebf3a6), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 320 heartbeat osd_stat(store_statfs(0x4fa4de000/0x0/0x4ffc00000, data 0x64eb33/0x80e000, compress 0x0/0x0/0x0, omap 0x50c5a, meta 0x4ebf3a6), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 320 handle_osd_map epochs [321,321], i have 320, src has [1,321]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 141967360 unmapped: 29949952 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 321 ms_handle_reset con 0x55f515850800 session 0x55f5147196c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2078161 data_alloc: 218103808 data_used: 82640
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 321 ms_handle_reset con 0x55f51584f400 session 0x55f512435c00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 141967360 unmapped: 29949952 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 321 handle_osd_map epochs [321,322], i have 321, src has [1,322]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.564617157s of 10.004003525s, submitted: 185
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 141975552 unmapped: 29941760 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 322 ms_handle_reset con 0x55f511a04400 session 0x55f514a988c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 141975552 unmapped: 29941760 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 141975552 unmapped: 29941760 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 322 heartbeat osd_stat(store_statfs(0x4fa4d7000/0x0/0x4ffc00000, data 0x65226e/0x813000, compress 0x0/0x0/0x0, omap 0x517c7, meta 0x4ebe839), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 322 ms_handle_reset con 0x55f514a2b000 session 0x55f512c59180
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 141975552 unmapped: 29941760 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 322 handle_osd_map epochs [323,323], i have 322, src has [1,323]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 322 handle_osd_map epochs [322,323], i have 323, src has [1,323]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 323 heartbeat osd_stat(store_statfs(0x4fa4d7000/0x0/0x4ffc00000, data 0x65226e/0x813000, compress 0x0/0x0/0x0, omap 0x517c7, meta 0x4ebe839), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2088023 data_alloc: 218103808 data_used: 82542
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 323 ms_handle_reset con 0x55f515850800 session 0x55f514718380
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 323 ms_handle_reset con 0x55f515853c00 session 0x55f511f44700
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 323 ms_handle_reset con 0x55f515163c00 session 0x55f514a99180
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 142024704 unmapped: 29892608 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 323 handle_osd_map epochs [324,324], i have 323, src has [1,324]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 324 ms_handle_reset con 0x55f514a8f400 session 0x55f514ed5340
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 324 ms_handle_reset con 0x55f511a04400 session 0x55f514bfb340
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 324 ms_handle_reset con 0x55f512259800 session 0x55f519483500
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 324 handle_osd_map epochs [325,325], i have 324, src has [1,325]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 325 ms_handle_reset con 0x55f514a2b000 session 0x55f51487da40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 325 ms_handle_reset con 0x55f515850800 session 0x55f515352e00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 142049280 unmapped: 29868032 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 142049280 unmapped: 29868032 heap: 171917312 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 325 heartbeat osd_stat(store_statfs(0x4fa4c4000/0x0/0x4ffc00000, data 0x6580a0/0x822000, compress 0x0/0x0/0x0, omap 0x52014, meta 0x4ebdfec), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 325 heartbeat osd_stat(store_statfs(0x4f80ca000/0x0/0x4ffc00000, data 0x2a580a0/0x2c22000, compress 0x0/0x0/0x0, omap 0x52014, meta 0x4ebdfec), peers [0,2] op hist [0,0,0,0,0,0,0,1,5,3])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 325 ms_handle_reset con 0x55f512259800 session 0x55f512435dc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 142196736 unmapped: 67534848 heap: 209731584 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 325 ms_handle_reset con 0x55f514a2b000 session 0x55f511f54c40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 325 ms_handle_reset con 0x55f515853c00 session 0x55f519482e00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 325 ms_handle_reset con 0x55f514a8f400 session 0x55f5131348c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 325 ms_handle_reset con 0x55f515852c00 session 0x55f5152c4380
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 150667264 unmapped: 59064320 heap: 209731584 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 325 ms_handle_reset con 0x55f514e3bc00 session 0x55f516add880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2661209 data_alloc: 218103808 data_used: 82858
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 325 ms_handle_reset con 0x55f514a2b000 session 0x55f511eabdc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 325 ms_handle_reset con 0x55f512259800 session 0x55f512434000
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 146694144 unmapped: 63037440 heap: 209731584 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 325 ms_handle_reset con 0x55f515853c00 session 0x55f511eaa540
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 325 ms_handle_reset con 0x55f514a8f400 session 0x55f512236000
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.546932220s of 10.004483223s, submitted: 168
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 151019520 unmapped: 58712064 heap: 209731584 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 155303936 unmapped: 54427648 heap: 209731584 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 325 heartbeat osd_stat(store_statfs(0x4ef8fe000/0x0/0x4ffc00000, data 0xb222102/0xb3ed000, compress 0x0/0x0/0x0, omap 0x52f7c, meta 0x4ebd084), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,2])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 325 handle_osd_map epochs [326,326], i have 325, src has [1,326]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 142802944 unmapped: 66928640 heap: 209731584 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 326 ms_handle_reset con 0x55f512259800 session 0x55f5194688c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 147087360 unmapped: 62644224 heap: 209731584 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3551312 data_alloc: 218103808 data_used: 84056
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 147185664 unmapped: 62545920 heap: 209731584 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 326 ms_handle_reset con 0x55f514e3bc00 session 0x55f512236fc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 326 handle_osd_map epochs [327,327], i have 326, src has [1,327]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 143327232 unmapped: 66404352 heap: 209731584 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 327 heartbeat osd_stat(store_statfs(0x4e38f7000/0x0/0x4ffc00000, data 0x172255f9/0x173f3000, compress 0x0/0x0/0x0, omap 0x5351c, meta 0x4ebcae4), peers [0,2] op hist [0,0,0,0,1,1])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 327 ms_handle_reset con 0x55f515853c00 session 0x55f5152c4a80
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 327 ms_handle_reset con 0x55f515852800 session 0x55f511f54a80
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 327 ms_handle_reset con 0x55f511a04400 session 0x55f51957b340
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 327 handle_osd_map epochs [328,328], i have 327, src has [1,328]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 328 ms_handle_reset con 0x55f51515ec00 session 0x55f519469a40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 328 ms_handle_reset con 0x55f512259800 session 0x55f519469880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 328 ms_handle_reset con 0x55f514a2b000 session 0x55f515345180
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 143671296 unmapped: 66060288 heap: 209731584 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 328 handle_osd_map epochs [329,329], i have 328, src has [1,329]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 329 ms_handle_reset con 0x55f514e3bc00 session 0x55f516adc8c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 329 ms_handle_reset con 0x55f515852800 session 0x55f512236000
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 143679488 unmapped: 66052096 heap: 209731584 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 329 ms_handle_reset con 0x55f512259800 session 0x55f512434000
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 329 heartbeat osd_stat(store_statfs(0x4e2dec000/0x0/0x4ffc00000, data 0x17d28fbf/0x17ef9000, compress 0x0/0x0/0x0, omap 0x53d05, meta 0x4ebc2fb), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 329 ms_handle_reset con 0x55f514a2b000 session 0x55f511f54c40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 143679488 unmapped: 66052096 heap: 209731584 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 329 ms_handle_reset con 0x55f514e3bc00 session 0x55f514bfb340
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4137870 data_alloc: 218103808 data_used: 88746
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 329 ms_handle_reset con 0x55f51515ec00 session 0x55f5147196c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 329 ms_handle_reset con 0x55f515853c00 session 0x55f51957b340
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 329 ms_handle_reset con 0x55f512259800 session 0x55f5147c7880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 329 ms_handle_reset con 0x55f514a2b000 session 0x55f51487c8c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 143761408 unmapped: 65970176 heap: 209731584 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 329 ms_handle_reset con 0x55f51515ec00 session 0x55f512c65340
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 329 ms_handle_reset con 0x55f515165000 session 0x55f515268380
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 329 ms_handle_reset con 0x55f514a2a000 session 0x55f511f54a80
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.065978050s of 10.001436234s, submitted: 177
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 329 handle_osd_map epochs [330,330], i have 329, src has [1,330]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 330 ms_handle_reset con 0x55f514a2a000 session 0x55f515268540
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 144433152 unmapped: 65298432 heap: 209731584 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 330 ms_handle_reset con 0x55f51528fc00 session 0x55f5130e6fc0
Jan 29 12:40:30 np0005601226 ceph-mgr[75527]: log_channel(cluster) log [DBG] : pgmap v2036: 305 pgs: 305 active+clean; 271 MiB data, 633 MiB used, 59 GiB / 60 GiB avail
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 330 handle_osd_map epochs [331,331], i have 330, src has [1,331]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 331 ms_handle_reset con 0x55f512259800 session 0x55f5143c81c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 331 ms_handle_reset con 0x55f512247400 session 0x55f512c59880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 331 ms_handle_reset con 0x55f514c92800 session 0x55f514ed5dc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 331 ms_handle_reset con 0x55f512259800 session 0x55f514bfae00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 331 ms_handle_reset con 0x55f512247400 session 0x55f51487c380
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 331 ms_handle_reset con 0x55f514e3bc00 session 0x55f5147c7500
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 145055744 unmapped: 64675840 heap: 209731584 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 331 handle_osd_map epochs [332,332], i have 331, src has [1,332]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 331 handle_osd_map epochs [331,332], i have 332, src has [1,332]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 332 ms_handle_reset con 0x55f514a2a000 session 0x55f519468a80
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 145072128 unmapped: 64659456 heap: 209731584 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 332 ms_handle_reset con 0x55f515156400 session 0x55f515269500
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 332 ms_handle_reset con 0x55f514c92800 session 0x55f514bfa540
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 145096704 unmapped: 64634880 heap: 209731584 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 332 ms_handle_reset con 0x55f512247400 session 0x55f515352380
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4193866 data_alloc: 218103808 data_used: 90738
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 332 ms_handle_reset con 0x55f512259800 session 0x55f512236700
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e14f6000/0x0/0x4ffc00000, data 0x1847a3b3/0x18652000, compress 0x0/0x0/0x0, omap 0x5573c, meta 0x605a8c4), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 332 handle_osd_map epochs [333,333], i have 332, src has [1,333]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 333 heartbeat osd_stat(store_statfs(0x4e14f6000/0x0/0x4ffc00000, data 0x1847a3b3/0x18652000, compress 0x0/0x0/0x0, omap 0x5573c, meta 0x605a8c4), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 333 ms_handle_reset con 0x55f514e3bc00 session 0x55f516c3fa40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 145113088 unmapped: 64618496 heap: 209731584 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 333 ms_handle_reset con 0x55f515853400 session 0x55f514d4f6c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 333 ms_handle_reset con 0x55f512247400 session 0x55f519468700
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 333 handle_osd_map epochs [333,334], i have 333, src has [1,334]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 334 ms_handle_reset con 0x55f514a2a000 session 0x55f5123601c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 145137664 unmapped: 64593920 heap: 209731584 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 145137664 unmapped: 64593920 heap: 209731584 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 145178624 unmapped: 64552960 heap: 209731584 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 178946048 unmapped: 39190528 heap: 218136576 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4492149 data_alloc: 218103808 data_used: 91920
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 153911296 unmapped: 68427776 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.844234943s of 10.027378082s, submitted: 99
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 334 heartbeat osd_stat(store_statfs(0x4ddcf5000/0x0/0x4ffc00000, data 0x1bc7dbdd/0x1be57000, compress 0x0/0x0/0x0, omap 0x55f00, meta 0x605a100), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 149970944 unmapped: 72368128 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 334 handle_osd_map epochs [334,335], i have 334, src has [1,335]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 335 handle_osd_map epochs [335,335], i have 335, src has [1,335]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 335 ms_handle_reset con 0x55f514c92800 session 0x55f514719dc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 146161664 unmapped: 76177408 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 147685376 unmapped: 74653696 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 335 ms_handle_reset con 0x55f514e3bc00 session 0x55f5152c4e00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 335 ms_handle_reset con 0x55f5122b7800 session 0x55f514719dc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 160464896 unmapped: 61874176 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5387263 data_alloc: 218103808 data_used: 92505
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 335 ms_handle_reset con 0x55f514c92800 session 0x55f515352700
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 335 ms_handle_reset con 0x55f514a2a000 session 0x55f5122dafc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 335 ms_handle_reset con 0x55f512247400 session 0x55f519468700
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 156532736 unmapped: 65806336 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 335 ms_handle_reset con 0x55f514e3bc00 session 0x55f515352380
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 335 ms_handle_reset con 0x55f51584f400 session 0x55f51957a000
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 335 ms_handle_reset con 0x55f512247400 session 0x55f514d4f6c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 335 ms_handle_reset con 0x55f514a2a000 session 0x55f51957b500
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 158048256 unmapped: 64290816 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 335 heartbeat osd_stat(store_statfs(0x4cecef000/0x0/0x4ffc00000, data 0x2ac7fc03/0x2ae5d000, compress 0x0/0x0/0x0, omap 0x561de, meta 0x6059e22), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 154034176 unmapped: 68304896 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 335 ms_handle_reset con 0x55f514c92800 session 0x55f51487c8c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 158949376 unmapped: 63389696 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 154853376 unmapped: 67485696 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 335 heartbeat osd_stat(store_statfs(0x4cb8f0000/0x0/0x4ffc00000, data 0x2e07fbf3/0x2e25c000, compress 0x0/0x0/0x0, omap 0x56377, meta 0x6059c89), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,1])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6194414 data_alloc: 218103808 data_used: 7747945
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 150659072 unmapped: 71680000 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 335 handle_osd_map epochs [336,336], i have 335, src has [1,336]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 2.889702559s of 10.005309105s, submitted: 139
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 154935296 unmapped: 67403776 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 150831104 unmapped: 71507968 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 336 heartbeat osd_stat(store_statfs(0x4c98eb000/0x0/0x4ffc00000, data 0x3008183a/0x3025f000, compress 0x0/0x0/0x0, omap 0x5645c, meta 0x6059ba4), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 336 ms_handle_reset con 0x55f512259800 session 0x55f5130e7dc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 336 ms_handle_reset con 0x55f5124c3800 session 0x55f5123601c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 150962176 unmapped: 71376896 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 150962176 unmapped: 71376896 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 336 handle_osd_map epochs [337,337], i have 336, src has [1,337]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 336 handle_osd_map epochs [336,337], i have 337, src has [1,337]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6367136 data_alloc: 218103808 data_used: 7747945
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 337 heartbeat osd_stat(store_statfs(0x4c8ce8000/0x0/0x4ffc00000, data 0x30c8306c/0x30e62000, compress 0x0/0x0/0x0, omap 0x569b2, meta 0x605964e), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 337 ms_handle_reset con 0x55f5124c3800 session 0x55f515268c40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 151019520 unmapped: 71319552 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 151019520 unmapped: 71319552 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 337 handle_osd_map epochs [338,338], i have 337, src has [1,338]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 338 ms_handle_reset con 0x55f512247400 session 0x55f511eab6c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 338 ms_handle_reset con 0x55f514a2a000 session 0x55f514bfae00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 338 ms_handle_reset con 0x55f512259800 session 0x55f512236fc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 338 handle_osd_map epochs [339,339], i have 338, src has [1,339]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 339 ms_handle_reset con 0x55f514c92800 session 0x55f51475b6c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 339 ms_handle_reset con 0x55f514c92800 session 0x55f519468540
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 153354240 unmapped: 68984832 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 339 handle_osd_map epochs [339,340], i have 339, src has [1,340]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 340 ms_handle_reset con 0x55f512247400 session 0x55f515032700
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 155426816 unmapped: 66912256 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 340 ms_handle_reset con 0x55f512259800 session 0x55f51318e700
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 340 ms_handle_reset con 0x55f5124c3800 session 0x55f511eaac40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 155451392 unmapped: 66887680 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6449732 data_alloc: 218103808 data_used: 7946364
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 340 heartbeat osd_stat(store_statfs(0x4c8696000/0x0/0x4ffc00000, data 0x3165459f/0x314b6000, compress 0x0/0x0/0x0, omap 0x57644, meta 0x60589bc), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 155451392 unmapped: 66887680 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 340 ms_handle_reset con 0x55f514a2a000 session 0x55f5147c7180
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 340 ms_handle_reset con 0x55f514a2a000 session 0x55f516c3fa40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 340 ms_handle_reset con 0x55f512247400 session 0x55f51475aa80
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 340 handle_osd_map epochs [340,341], i have 340, src has [1,341]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.768630981s of 10.004258156s, submitted: 165
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 155475968 unmapped: 66863104 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 341 heartbeat osd_stat(store_statfs(0x4c868f000/0x0/0x4ffc00000, data 0x316564a1/0x314bb000, compress 0x0/0x0/0x0, omap 0x57957, meta 0x60586a9), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 341 handle_osd_map epochs [342,342], i have 341, src has [1,342]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 342 ms_handle_reset con 0x55f512259800 session 0x55f51226f880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 342 ms_handle_reset con 0x55f5124c3800 session 0x55f519483dc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 155484160 unmapped: 66854912 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 342 heartbeat osd_stat(store_statfs(0x4c868c000/0x0/0x4ffc00000, data 0x316580b0/0x314be000, compress 0x0/0x0/0x0, omap 0x57f3e, meta 0x60580c2), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 155484160 unmapped: 66854912 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 342 ms_handle_reset con 0x55f514c92800 session 0x55f512c59880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 155287552 unmapped: 67051520 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 342 ms_handle_reset con 0x55f515164400 session 0x55f516d28fc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 342 ms_handle_reset con 0x55f512247400 session 0x55f515353c00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 342 ms_handle_reset con 0x55f514c92800 session 0x55f5143508c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6458490 data_alloc: 218103808 data_used: 7950511
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 342 ms_handle_reset con 0x55f513044800 session 0x55f516c3f880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 342 ms_handle_reset con 0x55f513044000 session 0x55f51957b880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 342 ms_handle_reset con 0x55f512247400 session 0x55f51226e700
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 342 ms_handle_reset con 0x55f513044000 session 0x55f514bfb6c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 155394048 unmapped: 66945024 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 342 handle_osd_map epochs [343,343], i have 342, src has [1,343]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 155410432 unmapped: 66928640 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 343 ms_handle_reset con 0x55f514e3bc00 session 0x55f51957a1c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 343 ms_handle_reset con 0x55f515290000 session 0x55f512c64fc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 343 ms_handle_reset con 0x55f514c92800 session 0x55f515352fc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 343 ms_handle_reset con 0x55f515856800 session 0x55f511f55dc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 343 handle_osd_map epochs [344,344], i have 343, src has [1,344]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 344 ms_handle_reset con 0x55f513044800 session 0x55f5152c4e00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 155344896 unmapped: 66994176 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 344 ms_handle_reset con 0x55f515292c00 session 0x55f519482fc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 344 heartbeat osd_stat(store_statfs(0x4c78a5000/0x0/0x4ffc00000, data 0x3243c85c/0x322a7000, compress 0x0/0x0/0x0, omap 0x5882b, meta 0x60577d5), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 344 handle_osd_map epochs [345,345], i have 344, src has [1,345]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 345 ms_handle_reset con 0x55f511a04400 session 0x55f511e948c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 345 ms_handle_reset con 0x55f512247400 session 0x55f512279dc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 155336704 unmapped: 67002368 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 345 ms_handle_reset con 0x55f513044800 session 0x55f514ed5500
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 345 ms_handle_reset con 0x55f511a04400 session 0x55f515344000
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 155344896 unmapped: 66994176 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6595361 data_alloc: 218103808 data_used: 7951127
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 345 ms_handle_reset con 0x55f515292c00 session 0x55f51957a000
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 345 ms_handle_reset con 0x55f515161000 session 0x55f515352000
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 345 handle_osd_map epochs [346,346], i have 345, src has [1,346]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 346 ms_handle_reset con 0x55f51515e000 session 0x55f514bfb500
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 346 ms_handle_reset con 0x55f5125d7000 session 0x55f511f54fc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 155574272 unmapped: 66764800 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 346 ms_handle_reset con 0x55f511a04400 session 0x55f511eaa700
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 346 ms_handle_reset con 0x55f515856800 session 0x55f5152c4fc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 346 ms_handle_reset con 0x55f513044800 session 0x55f512c64e00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 346 ms_handle_reset con 0x55f515161000 session 0x55f515268c40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.857038498s of 10.003337860s, submitted: 253
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 346 ms_handle_reset con 0x55f511a04400 session 0x55f516d28c40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 151789568 unmapped: 70549504 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 346 heartbeat osd_stat(store_statfs(0x4c87ac000/0x0/0x4ffc00000, data 0x311b69ad/0x3139d000, compress 0x0/0x0/0x0, omap 0x590a1, meta 0x6056f5f), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 346 ms_handle_reset con 0x55f5125d7000 session 0x55f515268fc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 151789568 unmapped: 70549504 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 346 handle_osd_map epochs [347,347], i have 346, src has [1,347]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 151789568 unmapped: 70549504 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 347 ms_handle_reset con 0x55f513044800 session 0x55f514718c40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 347 ms_handle_reset con 0x55f515856800 session 0x55f515353a40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 347 handle_osd_map epochs [348,348], i have 347, src has [1,348]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 348 ms_handle_reset con 0x55f515292c00 session 0x55f5194836c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 151158784 unmapped: 71180288 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6389448 data_alloc: 218103808 data_used: 109884
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 151158784 unmapped: 71180288 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 348 ms_handle_reset con 0x55f511a04400 session 0x55f511eaafc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 348 ms_handle_reset con 0x55f5125d7000 session 0x55f511f55340
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 151166976 unmapped: 71172096 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 348 heartbeat osd_stat(store_statfs(0x4c8b2b000/0x0/0x4ffc00000, data 0x30e3825d/0x3101f000, compress 0x0/0x0/0x0, omap 0x5a628, meta 0x60559d8), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 151166976 unmapped: 71172096 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 153370624 unmapped: 68968448 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 348 ms_handle_reset con 0x55f513044800 session 0x55f516d28a80
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 348 ms_handle_reset con 0x55f515856800 session 0x55f512c64380
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 348 ms_handle_reset con 0x55f51530f000 session 0x55f51475bc00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 348 ms_handle_reset con 0x55f511a04400 session 0x55f514bfa8c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 348 ms_handle_reset con 0x55f5125d7000 session 0x55f516c3fa40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 151240704 unmapped: 71098368 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6411841 data_alloc: 218103808 data_used: 109884
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 151240704 unmapped: 71098368 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 348 ms_handle_reset con 0x55f513044800 session 0x55f5147188c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 348 handle_osd_map epochs [348,349], i have 348, src has [1,349]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.687541008s of 10.007529259s, submitted: 90
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 349 ms_handle_reset con 0x55f515856800 session 0x55f515268e00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 151265280 unmapped: 71073792 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 151396352 unmapped: 70942720 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 349 ms_handle_reset con 0x55f515336400 session 0x55f5147196c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 349 heartbeat osd_stat(store_statfs(0x4c8899000/0x0/0x4ffc00000, data 0x310c5eaa/0x312b1000, compress 0x0/0x0/0x0, omap 0x5ab18, meta 0x60554e8), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 151527424 unmapped: 70811648 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 349 heartbeat osd_stat(store_statfs(0x4c8899000/0x0/0x4ffc00000, data 0x310c5eaa/0x312b1000, compress 0x0/0x0/0x0, omap 0x5aba2, meta 0x605545e), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 349 handle_osd_map epochs [350,350], i have 349, src has [1,350]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 350 ms_handle_reset con 0x55f511a04400 session 0x55f512c65340
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 151543808 unmapped: 70795264 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6436888 data_alloc: 218103808 data_used: 2784572
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 350 ms_handle_reset con 0x55f5125d7000 session 0x55f5152c5880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 151543808 unmapped: 70795264 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 350 handle_osd_map epochs [350,351], i have 350, src has [1,351]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 351 ms_handle_reset con 0x55f513044800 session 0x55f51487c540
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 151568384 unmapped: 70770688 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 351 ms_handle_reset con 0x55f515856800 session 0x55f515353880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 351 ms_handle_reset con 0x55f5122b6800 session 0x55f514a996c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 151707648 unmapped: 70631424 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 351 ms_handle_reset con 0x55f511a04400 session 0x55f514bfa8c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 351 ms_handle_reset con 0x55f5125d7000 session 0x55f519468a80
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 151732224 unmapped: 70606848 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 151740416 unmapped: 70598656 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6439065 data_alloc: 218103808 data_used: 2784572
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 351 heartbeat osd_stat(store_statfs(0x4c8894000/0x0/0x4ffc00000, data 0x310c9682/0x312b6000, compress 0x0/0x0/0x0, omap 0x5b316, meta 0x6054cea), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 151740416 unmapped: 70598656 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 351 heartbeat osd_stat(store_statfs(0x4c8894000/0x0/0x4ffc00000, data 0x310c9682/0x312b6000, compress 0x0/0x0/0x0, omap 0x5b316, meta 0x6054cea), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 151748608 unmapped: 70590464 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.227433205s of 11.295578957s, submitted: 65
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 152707072 unmapped: 69632000 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 351 ms_handle_reset con 0x55f513044800 session 0x55f5130e6fc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 153878528 unmapped: 68460544 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 155361280 unmapped: 66977792 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6512315 data_alloc: 218103808 data_used: 2837820
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 351 heartbeat osd_stat(store_statfs(0x4c85a7000/0x0/0x4ffc00000, data 0x31781682/0x3159d000, compress 0x0/0x0/0x0, omap 0x5b934, meta 0x60546cc), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 155361280 unmapped: 66977792 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 351 ms_handle_reset con 0x55f513045000 session 0x55f5152688c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 351 handle_osd_map epochs [352,352], i have 351, src has [1,352]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 351 handle_osd_map epochs [351,352], i have 352, src has [1,352]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 352 ms_handle_reset con 0x55f514a2e400 session 0x55f51318e700
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 155131904 unmapped: 67207168 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 352 ms_handle_reset con 0x55f515161000 session 0x55f511e948c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 352 handle_osd_map epochs [353,353], i have 352, src has [1,353]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 353 ms_handle_reset con 0x55f514a2e400 session 0x55f515352fc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 353 ms_handle_reset con 0x55f515856800 session 0x55f514719a40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 155148288 unmapped: 67190784 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 155148288 unmapped: 67190784 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 353 heartbeat osd_stat(store_statfs(0x4c85a3000/0x0/0x4ffc00000, data 0x317853a4/0x315a5000, compress 0x0/0x0/0x0, omap 0x5bfdd, meta 0x6054023), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 353 handle_osd_map epochs [354,354], i have 353, src has [1,354]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 354 ms_handle_reset con 0x55f511a04400 session 0x55f5152696c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 354 ms_handle_reset con 0x55f5125d7000 session 0x55f5122376c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 155213824 unmapped: 67125248 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6519031 data_alloc: 218103808 data_used: 2842501
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 354 heartbeat osd_stat(store_statfs(0x4c8581000/0x0/0x4ffc00000, data 0x317a8ae7/0x315c8000, compress 0x0/0x0/0x0, omap 0x5c0c6, meta 0x6053f3a), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 155500544 unmapped: 66838528 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 354 handle_osd_map epochs [354,355], i have 354, src has [1,355]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 355 ms_handle_reset con 0x55f511a04400 session 0x55f514bfbc00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 355 ms_handle_reset con 0x55f514a2e400 session 0x55f5152c4e00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 155516928 unmapped: 66822144 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 155525120 unmapped: 66813952 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 355 heartbeat osd_stat(store_statfs(0x4c857f000/0x0/0x4ffc00000, data 0x317aa74a/0x315cb000, compress 0x0/0x0/0x0, omap 0x5c239, meta 0x6053dc7), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.331295013s of 10.714052200s, submitted: 131
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 355 ms_handle_reset con 0x55f51528c400 session 0x55f515268a80
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 355 ms_handle_reset con 0x55f514a31800 session 0x55f5143516c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 355 ms_handle_reset con 0x55f515161000 session 0x55f515352540
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 152682496 unmapped: 69656576 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 152682496 unmapped: 69656576 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6486837 data_alloc: 218103808 data_used: 167813
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 152682496 unmapped: 69656576 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 355 heartbeat osd_stat(store_statfs(0x4c880b000/0x0/0x4ffc00000, data 0x3151e6c5/0x3133d000, compress 0x0/0x0/0x0, omap 0x5cb82, meta 0x605347e), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 152682496 unmapped: 69656576 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 152682496 unmapped: 69656576 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 355 ms_handle_reset con 0x55f511a04400 session 0x55f511eaafc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 355 handle_osd_map epochs [356,356], i have 355, src has [1,356]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 153141248 unmapped: 69197824 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 356 ms_handle_reset con 0x55f514a31800 session 0x55f512c64380
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 356 handle_osd_map epochs [357,357], i have 356, src has [1,357]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 357 ms_handle_reset con 0x55f51528c400 session 0x55f519483a40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 158883840 unmapped: 63455232 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6587650 data_alloc: 218103808 data_used: 171776
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 357 ms_handle_reset con 0x55f514a2e400 session 0x55f515353dc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 357 ms_handle_reset con 0x55f513044800 session 0x55f512279880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 357 heartbeat osd_stat(store_statfs(0x4c80ad000/0x0/0x4ffc00000, data 0x3209cf1b/0x31a9b000, compress 0x0/0x0/0x0, omap 0x5da77, meta 0x6052589), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 357 handle_osd_map epochs [357,358], i have 357, src has [1,358]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 358 heartbeat osd_stat(store_statfs(0x4c80ad000/0x0/0x4ffc00000, data 0x3209cf1b/0x31a9b000, compress 0x0/0x0/0x0, omap 0x5da77, meta 0x6052589), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 158130176 unmapped: 64208896 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 358 ms_handle_reset con 0x55f514a2e400 session 0x55f516adddc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 358 handle_osd_map epochs [359,359], i have 358, src has [1,359]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 359 ms_handle_reset con 0x55f51528c400 session 0x55f5143508c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 359 ms_handle_reset con 0x55f514a31800 session 0x55f5143c9a40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 359 ms_handle_reset con 0x55f515856800 session 0x55f519483c00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 157089792 unmapped: 65249280 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 359 ms_handle_reset con 0x55f511a04400 session 0x55f5147c7180
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 359 heartbeat osd_stat(store_statfs(0x4c80a4000/0x0/0x4ffc00000, data 0x320a0880/0x31aa4000, compress 0x0/0x0/0x0, omap 0x5e373, meta 0x6051c8d), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 157097984 unmapped: 65241088 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 359 ms_handle_reset con 0x55f511a04400 session 0x55f515269dc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 157097984 unmapped: 65241088 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.441680908s of 10.970262527s, submitted: 207
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 359 handle_osd_map epochs [360,360], i have 359, src has [1,360]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 360 ms_handle_reset con 0x55f514a2e400 session 0x55f5131348c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 157138944 unmapped: 65200128 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6451540 data_alloc: 218103808 data_used: 175872
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 360 heartbeat osd_stat(store_statfs(0x4c9400000/0x0/0x4ffc00000, data 0x30922475/0x30749000, compress 0x0/0x0/0x0, omap 0x5ed7f, meta 0x6051281), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 360 ms_handle_reset con 0x55f51528c400 session 0x55f516d28a80
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 360 ms_handle_reset con 0x55f515856800 session 0x55f511eab500
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 158187520 unmapped: 64151552 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 360 handle_osd_map epochs [361,361], i have 360, src has [1,361]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 361 ms_handle_reset con 0x55f513045000 session 0x55f515344540
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 158195712 unmapped: 64143360 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 361 handle_osd_map epochs [361,362], i have 361, src has [1,362]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 362 ms_handle_reset con 0x55f515162800 session 0x55f519483c00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 362 ms_handle_reset con 0x55f511a04400 session 0x55f514718380
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 362 ms_handle_reset con 0x55f514a2e400 session 0x55f51475b6c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 362 ms_handle_reset con 0x55f514a31800 session 0x55f512236c40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 362 heartbeat osd_stat(store_statfs(0x4c93fc000/0x0/0x4ffc00000, data 0x30924106/0x3074e000, compress 0x0/0x0/0x0, omap 0x5eef4, meta 0x605110c), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 158228480 unmapped: 64110592 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 362 ms_handle_reset con 0x55f513045800 session 0x55f512279880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 362 ms_handle_reset con 0x55f515856800 session 0x55f516adddc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 158253056 unmapped: 64086016 heap: 222339072 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 362 handle_osd_map epochs [363,363], i have 362, src has [1,363]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 363 ms_handle_reset con 0x55f51528c400 session 0x55f519483a40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 363 ms_handle_reset con 0x55f514a2e400 session 0x55f5152c4380
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 208691200 unmapped: 34643968 heap: 243335168 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6749119 data_alloc: 218103808 data_used: 181495
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 363 ms_handle_reset con 0x55f514a31800 session 0x55f51957b340
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 171098112 unmapped: 76439552 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 363 ms_handle_reset con 0x55f515162800 session 0x55f519468e00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 363 handle_osd_map epochs [364,364], i have 363, src has [1,364]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 364 ms_handle_reset con 0x55f514a2e400 session 0x55f519482e00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 158752768 unmapped: 88784896 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 364 ms_handle_reset con 0x55f514a31800 session 0x55f512236540
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 175915008 unmapped: 71622656 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 364 heartbeat osd_stat(store_statfs(0x4bf7f0000/0x0/0x4ffc00000, data 0x3a529b83/0x3a35c000, compress 0x0/0x0/0x0, omap 0x5fb93, meta 0x605046d), peers [0,2] op hist [0,0,0,0,0,0,0,0,2])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 364 ms_handle_reset con 0x55f51530f400 session 0x55f514ed5340
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 364 ms_handle_reset con 0x55f51528c400 session 0x55f512c58380
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 171999232 unmapped: 75538432 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.033923149s of 10.071982384s, submitted: 139
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 364 handle_osd_map epochs [365,365], i have 364, src has [1,365]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 364 handle_osd_map epochs [364,365], i have 365, src has [1,365]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 176570368 unmapped: 70967296 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7639625 data_alloc: 218103808 data_used: 181526
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 365 ms_handle_reset con 0x55f51584ec00 session 0x55f512435dc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 164364288 unmapped: 83173376 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 365 handle_osd_map epochs [365,366], i have 365, src has [1,366]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 173162496 unmapped: 74375168 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 366 ms_handle_reset con 0x55f5124c2400 session 0x55f516d29180
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 366 ms_handle_reset con 0x55f515856800 session 0x55f512c64e00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 160718848 unmapped: 86818816 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 366 handle_osd_map epochs [367,367], i have 366, src has [1,367]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 367 heartbeat osd_stat(store_statfs(0x4b63e8000/0x0/0x4ffc00000, data 0x4392d413/0x43764000, compress 0x0/0x0/0x0, omap 0x6021b, meta 0x604fde5), peers [0,2] op hist [0,0,0,0,1,0,2])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 173645824 unmapped: 73891840 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 367 ms_handle_reset con 0x55f514a2e400 session 0x55f5153521c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 367 ms_handle_reset con 0x55f514a31800 session 0x55f514d4f340
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 161308672 unmapped: 86228992 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8469204 data_alloc: 218103808 data_used: 181527
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 367 ms_handle_reset con 0x55f514a32800 session 0x55f512237a40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 367 heartbeat osd_stat(store_statfs(0x4b23e5000/0x0/0x4ffc00000, data 0x4792eff9/0x47765000, compress 0x0/0x0/0x0, omap 0x60392, meta 0x604fc6e), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,3])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 169926656 unmapped: 77611008 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 367 ms_handle_reset con 0x55f513045800 session 0x55f512c1b180
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 367 handle_osd_map epochs [368,368], i have 367, src has [1,368]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 368 ms_handle_reset con 0x55f514a2e400 session 0x55f5152c4e00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 368 ms_handle_reset con 0x55f514a31800 session 0x55f516d29880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 368 ms_handle_reset con 0x55f511a04400 session 0x55f512279dc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 368 ms_handle_reset con 0x55f5124c2400 session 0x55f516d28000
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 368 ms_handle_reset con 0x55f511a04400 session 0x55f516add880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 368 ms_handle_reset con 0x55f513045800 session 0x55f514a70000
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 161562624 unmapped: 85975040 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 368 handle_osd_map epochs [369,369], i have 368, src has [1,369]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 369 ms_handle_reset con 0x55f514a2e400 session 0x55f515345180
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 369 ms_handle_reset con 0x55f514a32800 session 0x55f514bfbc00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 369 ms_handle_reset con 0x55f514a31800 session 0x55f5152696c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 369 ms_handle_reset con 0x55f514a31800 session 0x55f519468fc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 161095680 unmapped: 86441984 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 369 handle_osd_map epochs [370,370], i have 369, src has [1,370]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 370 ms_handle_reset con 0x55f513045800 session 0x55f519468000
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 370 heartbeat osd_stat(store_statfs(0x4c63e0000/0x0/0x4ffc00000, data 0x30932362/0x30765000, compress 0x0/0x0/0x0, omap 0x607fc, meta 0x604f804), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 370 ms_handle_reset con 0x55f511a04400 session 0x55f51957b340
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 370 ms_handle_reset con 0x55f514a2e400 session 0x55f516d28a80
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 370 ms_handle_reset con 0x55f514a32800 session 0x55f514bfa000
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 161153024 unmapped: 86384640 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 4.807096004s of 10.053786278s, submitted: 296
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 370 handle_osd_map epochs [371,371], i have 370, src has [1,371]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 371 ms_handle_reset con 0x55f511a04400 session 0x55f51487c380
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 161153024 unmapped: 86384640 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 371 ms_handle_reset con 0x55f514a2e400 session 0x55f515352fc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6604804 data_alloc: 218103808 data_used: 182536
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 371 ms_handle_reset con 0x55f514a31800 session 0x55f515353c00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 371 handle_osd_map epochs [372,372], i have 371, src has [1,372]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 372 ms_handle_reset con 0x55f513045800 session 0x55f5130e68c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 372 ms_handle_reset con 0x55f514a32800 session 0x55f514a98a80
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 372 heartbeat osd_stat(store_statfs(0x4c93e3000/0x0/0x4ffc00000, data 0x30935b43/0x30767000, compress 0x0/0x0/0x0, omap 0x61409, meta 0x604ebf7), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 161177600 unmapped: 86360064 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 372 handle_osd_map epochs [373,373], i have 372, src has [1,373]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 373 ms_handle_reset con 0x55f513045800 session 0x55f51957b880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 161193984 unmapped: 86343680 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 373 ms_handle_reset con 0x55f514a2e400 session 0x55f5147c68c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 373 handle_osd_map epochs [374,374], i have 373, src has [1,374]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 374 ms_handle_reset con 0x55f511a04400 session 0x55f51957a1c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 374 ms_handle_reset con 0x55f514a31800 session 0x55f5153528c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 161234944 unmapped: 86302720 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 374 ms_handle_reset con 0x55f515856800 session 0x55f512c64c40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 161243136 unmapped: 86294528 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 374 handle_osd_map epochs [375,375], i have 374, src has [1,375]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 161243136 unmapped: 86294528 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6509709 data_alloc: 218103808 data_used: 129901
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 375 handle_osd_map epochs [376,376], i have 375, src has [1,376]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 376 ms_handle_reset con 0x55f511a04400 session 0x55f515352e00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 376 ms_handle_reset con 0x55f514a2e400 session 0x55f515268fc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 376 ms_handle_reset con 0x55f513045800 session 0x55f514ed5500
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 160022528 unmapped: 87515136 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 376 heartbeat osd_stat(store_statfs(0x4c9e90000/0x0/0x4ffc00000, data 0x2faafc8b/0x2fcb5000, compress 0x0/0x0/0x0, omap 0x62b70, meta 0x604d490), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 376 ms_handle_reset con 0x55f514a31800 session 0x55f519468540
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 376 handle_osd_map epochs [376,377], i have 376, src has [1,377]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 377 ms_handle_reset con 0x55f51528c400 session 0x55f515269a40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 160047104 unmapped: 87490560 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 377 ms_handle_reset con 0x55f511a04400 session 0x55f512c1b180
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 377 handle_osd_map epochs [378,378], i have 377, src has [1,378]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 378 ms_handle_reset con 0x55f513045800 session 0x55f51487c540
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 378 heartbeat osd_stat(store_statfs(0x4e207e000/0x0/0x4ffc00000, data 0x178c2478/0x17aca000, compress 0x0/0x0/0x0, omap 0x63405, meta 0x604cbfb), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 160088064 unmapped: 87449600 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 378 ms_handle_reset con 0x55f514a2e400 session 0x55f511e948c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 378 handle_osd_map epochs [379,379], i have 378, src has [1,379]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 378 handle_osd_map epochs [378,379], i have 379, src has [1,379]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 379 ms_handle_reset con 0x55f514a31800 session 0x55f516adcc40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 159006720 unmapped: 88530944 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 159006720 unmapped: 88530944 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2730746 data_alloc: 218103808 data_used: 133883
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 379 heartbeat osd_stat(store_statfs(0x4f8c7c000/0x0/0x4ffc00000, data 0xcc5cdc/0xece000, compress 0x0/0x0/0x0, omap 0x63e02, meta 0x604c1fe), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 159006720 unmapped: 88530944 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 379 handle_osd_map epochs [380,380], i have 379, src has [1,380]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.948608398s of 12.349917412s, submitted: 443
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 159006720 unmapped: 88530944 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 159006720 unmapped: 88530944 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 159006720 unmapped: 88530944 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 380 ms_handle_reset con 0x55f51528c400 session 0x55f516d29180
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 380 heartbeat osd_stat(store_statfs(0x4f8c79000/0x0/0x4ffc00000, data 0xcc7832/0xed1000, compress 0x0/0x0/0x0, omap 0x6419f, meta 0x604be61), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 380 ms_handle_reset con 0x55f514e3a000 session 0x55f512c64e00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 380 ms_handle_reset con 0x55f51530f400 session 0x55f5143516c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 158613504 unmapped: 88924160 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732928 data_alloc: 218103808 data_used: 134219
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 158613504 unmapped: 88924160 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 380 ms_handle_reset con 0x55f513045800 session 0x55f5131348c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 380 handle_osd_map epochs [380,381], i have 380, src has [1,381]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 381 ms_handle_reset con 0x55f514a31800 session 0x55f512236540
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 158613504 unmapped: 88924160 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 381 handle_osd_map epochs [382,382], i have 381, src has [1,382]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 382 ms_handle_reset con 0x55f514a2e400 session 0x55f516d28000
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 382 ms_handle_reset con 0x55f514a2e400 session 0x55f514a98a80
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 158621696 unmapped: 88915968 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 382 ms_handle_reset con 0x55f511a04400 session 0x55f514ed5340
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 158621696 unmapped: 88915968 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 382 handle_osd_map epochs [383,383], i have 382, src has [1,383]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 157106176 unmapped: 90431488 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 383 ms_handle_reset con 0x55f513045800 session 0x55f514718c40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2744530 data_alloc: 218103808 data_used: 134804
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 383 ms_handle_reset con 0x55f514a31800 session 0x55f5152c4fc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 383 heartbeat osd_stat(store_statfs(0x4f8c6c000/0x0/0x4ffc00000, data 0xccd1b4/0xedc000, compress 0x0/0x0/0x0, omap 0x65216, meta 0x604adea), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 383 ms_handle_reset con 0x55f51530f400 session 0x55f51487c380
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 383 handle_osd_map epochs [384,384], i have 383, src has [1,384]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 384 ms_handle_reset con 0x55f514e3a000 session 0x55f511eaae00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 157130752 unmapped: 90406912 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 384 ms_handle_reset con 0x55f51530f400 session 0x55f511eaafc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 156590080 unmapped: 90947584 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 157990912 unmapped: 89546752 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 157990912 unmapped: 89546752 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.407316208s of 12.830599785s, submitted: 97
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 384 ms_handle_reset con 0x55f511a04400 session 0x55f519468540
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 384 ms_handle_reset con 0x55f513045800 session 0x55f512360700
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 157999104 unmapped: 89538560 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2780429 data_alloc: 218103808 data_used: 5184761
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 384 ms_handle_reset con 0x55f514a2e400 session 0x55f519468fc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 157704192 unmapped: 89833472 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 384 heartbeat osd_stat(store_statfs(0x4f8c6f000/0x0/0x4ffc00000, data 0xcce8c2/0xedd000, compress 0x0/0x0/0x0, omap 0x65af9, meta 0x604a507), peers [0,2] op hist [0,0,0,0,0,0,1])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 157704192 unmapped: 89833472 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 384 handle_osd_map epochs [385,385], i have 384, src has [1,385]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 385 ms_handle_reset con 0x55f511a04400 session 0x55f516c3efc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 385 ms_handle_reset con 0x55f514a31c00 session 0x55f5143c9a40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 157712384 unmapped: 89825280 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 157712384 unmapped: 89825280 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 385 ms_handle_reset con 0x55f51515ec00 session 0x55f5152c5500
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 385 heartbeat osd_stat(store_statfs(0x4f8c68000/0x0/0x4ffc00000, data 0xcd0426/0xee2000, compress 0x0/0x0/0x0, omap 0x6601d, meta 0x6049fe3), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 385 ms_handle_reset con 0x55f515621c00 session 0x55f519483340
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 157745152 unmapped: 89792512 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2787308 data_alloc: 218103808 data_used: 5185147
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 385 ms_handle_reset con 0x55f5165bd800 session 0x55f515033180
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 385 ms_handle_reset con 0x55f514a31c00 session 0x55f5153456c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 157769728 unmapped: 89767936 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 385 handle_osd_map epochs [386,386], i have 385, src has [1,386]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 386 ms_handle_reset con 0x55f511a04400 session 0x55f51475a380
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 386 ms_handle_reset con 0x55f515621c00 session 0x55f512c59180
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 386 handle_osd_map epochs [386,387], i have 386, src has [1,387]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 157671424 unmapped: 89866240 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 387 ms_handle_reset con 0x55f51515ec00 session 0x55f51475b6c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 387 handle_osd_map epochs [387,388], i have 387, src has [1,388]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 388 ms_handle_reset con 0x55f5142c2400 session 0x55f5143c9880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 388 ms_handle_reset con 0x55f515158400 session 0x55f511eabc00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 388 ms_handle_reset con 0x55f514a2a800 session 0x55f514bfb880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 157671424 unmapped: 89866240 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 388 ms_handle_reset con 0x55f514a31c00 session 0x55f5194681c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 156426240 unmapped: 91111424 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 388 heartbeat osd_stat(store_statfs(0x4f8df7000/0x0/0x4ffc00000, data 0x6c68a4/0x8e0000, compress 0x0/0x0/0x0, omap 0x66d61, meta 0x604929f), peers [0,2] op hist [0,0,0,0,0,0,1])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 388 ms_handle_reset con 0x55f515621c00 session 0x55f519469180
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.716032982s of 10.167787552s, submitted: 118
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 388 ms_handle_reset con 0x55f5169ab400 session 0x55f516d28a80
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 388 ms_handle_reset con 0x55f511a04400 session 0x55f51487c000
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 156442624 unmapped: 91095040 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 388 ms_handle_reset con 0x55f5125d7800 session 0x55f51487c8c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2731569 data_alloc: 218103808 data_used: 135803
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 388 handle_osd_map epochs [388,389], i have 388, src has [1,389]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 389 handle_osd_map epochs [389,389], i have 389, src has [1,389]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 389 ms_handle_reset con 0x55f514a2a800 session 0x55f5152696c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 156467200 unmapped: 91070464 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 389 handle_osd_map epochs [390,390], i have 389, src has [1,390]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 390 ms_handle_reset con 0x55f514a31c00 session 0x55f51475b340
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 390 ms_handle_reset con 0x55f515158400 session 0x55f516c3e1c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 390 ms_handle_reset con 0x55f51515ec00 session 0x55f5122dbc00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 156499968 unmapped: 91037696 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 390 ms_handle_reset con 0x55f511a04400 session 0x55f512361880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 390 heartbeat osd_stat(store_statfs(0x4f9261000/0x0/0x4ffc00000, data 0x6ca609/0x8e7000, compress 0x0/0x0/0x0, omap 0x67c9d, meta 0x6048363), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 390 ms_handle_reset con 0x55f5125d7800 session 0x55f5194681c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 156499968 unmapped: 91037696 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 390 ms_handle_reset con 0x55f514a31c00 session 0x55f5147c7500
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 390 handle_osd_map epochs [391,391], i have 390, src has [1,391]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 390 handle_osd_map epochs [390,391], i have 391, src has [1,391]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 391 ms_handle_reset con 0x55f514a2a800 session 0x55f516c3efc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 391 ms_handle_reset con 0x55f511a04400 session 0x55f5152c4000
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 157573120 unmapped: 89964544 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 391 ms_handle_reset con 0x55f5125d7800 session 0x55f514a988c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 157581312 unmapped: 89956352 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2745096 data_alloc: 218103808 data_used: 136388
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 391 ms_handle_reset con 0x55f51515ec00 session 0x55f516add500
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 157597696 unmapped: 89939968 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 391 heartbeat osd_stat(store_statfs(0x4f925f000/0x0/0x4ffc00000, data 0x6cbde9/0x8eb000, compress 0x0/0x0/0x0, omap 0x68510, meta 0x6047af0), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 391 ms_handle_reset con 0x55f515621c00 session 0x55f514bfbdc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 391 handle_osd_map epochs [391,392], i have 391, src has [1,392]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 392 ms_handle_reset con 0x55f514a8f000 session 0x55f51475a380
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 392 ms_handle_reset con 0x55f515290c00 session 0x55f512c59dc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 157761536 unmapped: 89776128 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 392 ms_handle_reset con 0x55f511a04400 session 0x55f5152c4e00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 392 handle_osd_map epochs [392,393], i have 392, src has [1,393]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 393 ms_handle_reset con 0x55f515163000 session 0x55f51226e700
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 393 ms_handle_reset con 0x55f5125d7800 session 0x55f515268fc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 393 ms_handle_reset con 0x55f514a31c00 session 0x55f519469180
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 158359552 unmapped: 89178112 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 158367744 unmapped: 89169920 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 393 handle_osd_map epochs [394,394], i have 393, src has [1,394]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.063652039s of 10.146533012s, submitted: 149
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 394 ms_handle_reset con 0x55f5125d7800 session 0x55f51957ba40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 394 ms_handle_reset con 0x55f511a04400 session 0x55f51487ddc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 394 ms_handle_reset con 0x55f514a31c00 session 0x55f516d28a80
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 159473664 unmapped: 88064000 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 394 heartbeat osd_stat(store_statfs(0x4f9187000/0x0/0x4ffc00000, data 0x7a141f/0x9c3000, compress 0x0/0x0/0x0, omap 0x690fc, meta 0x6046f04), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2765583 data_alloc: 218103808 data_used: 136388
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 394 ms_handle_reset con 0x55f515163000 session 0x55f5130e6c40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 159473664 unmapped: 88064000 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 394 handle_osd_map epochs [394,395], i have 394, src has [1,395]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 159473664 unmapped: 88064000 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 395 heartbeat osd_stat(store_statfs(0x4f9182000/0x0/0x4ffc00000, data 0x7a2edb/0x9c6000, compress 0x0/0x0/0x0, omap 0x6933d, meta 0x6046cc3), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 395 ms_handle_reset con 0x55f514a8f000 session 0x55f514a996c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 158654464 unmapped: 88883200 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 395 handle_osd_map epochs [396,396], i have 395, src has [1,396]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 396 ms_handle_reset con 0x55f514a8f000 session 0x55f516d28fc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 158654464 unmapped: 88883200 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 396 handle_osd_map epochs [396,397], i have 396, src has [1,397]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 397 ms_handle_reset con 0x55f511a04400 session 0x55f51087bc00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 397 ms_handle_reset con 0x55f515290c00 session 0x55f519469c00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 158679040 unmapped: 88858624 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2781107 data_alloc: 218103808 data_used: 136502
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 158679040 unmapped: 88858624 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 397 handle_osd_map epochs [397,398], i have 397, src has [1,398]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 398 ms_handle_reset con 0x55f5125d7800 session 0x55f5142c1500
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 398 ms_handle_reset con 0x55f514a31c00 session 0x55f511e94a80
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 158687232 unmapped: 88850432 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 158687232 unmapped: 88850432 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 398 heartbeat osd_stat(store_statfs(0x4f917d000/0x0/0x4ffc00000, data 0x7a8376/0x9cd000, compress 0x0/0x0/0x0, omap 0x69f1d, meta 0x60460e3), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 398 ms_handle_reset con 0x55f511a04400 session 0x55f514bfa380
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 398 ms_handle_reset con 0x55f5125d7800 session 0x55f513134380
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 158687232 unmapped: 88850432 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 398 ms_handle_reset con 0x55f514a8f000 session 0x55f511e8aa80
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 398 heartbeat osd_stat(store_statfs(0x4f917e000/0x0/0x4ffc00000, data 0x7a8386/0x9ce000, compress 0x0/0x0/0x0, omap 0x69fa7, meta 0x6046059), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 398 heartbeat osd_stat(store_statfs(0x4f917e000/0x0/0x4ffc00000, data 0x7a8386/0x9ce000, compress 0x0/0x0/0x0, omap 0x69fa7, meta 0x6046059), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 398 ms_handle_reset con 0x55f515290c00 session 0x55f5130e68c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 158687232 unmapped: 88850432 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2782203 data_alloc: 218103808 data_used: 136388
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.238775253s of 10.485261917s, submitted: 128
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 398 heartbeat osd_stat(store_statfs(0x4f917e000/0x0/0x4ffc00000, data 0x7a8386/0x9ce000, compress 0x0/0x0/0x0, omap 0x69fa7, meta 0x6046059), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 398 ms_handle_reset con 0x55f515163000 session 0x55f516d29c00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 398 ms_handle_reset con 0x55f5125d7800 session 0x55f51487d500
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 158752768 unmapped: 88784896 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 398 handle_osd_map epochs [399,399], i have 398, src has [1,399]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 399 ms_handle_reset con 0x55f51515ec00 session 0x55f519483500
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 158760960 unmapped: 88776704 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 399 handle_osd_map epochs [400,400], i have 399, src has [1,400]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 400 ms_handle_reset con 0x55f515621c00 session 0x55f512c1b180
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 400 ms_handle_reset con 0x55f511a04400 session 0x55f5130cddc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 158760960 unmapped: 88776704 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 400 handle_osd_map epochs [401,401], i have 400, src has [1,401]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 401 ms_handle_reset con 0x55f515293c00 session 0x55f516c3f340
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 401 ms_handle_reset con 0x55f511a04400 session 0x55f516d28c40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 158892032 unmapped: 88645632 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 401 ms_handle_reset con 0x55f5125d7800 session 0x55f512c58fc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 158892032 unmapped: 88645632 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2802701 data_alloc: 218103808 data_used: 775478
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 401 heartbeat osd_stat(store_statfs(0x4f9172000/0x0/0x4ffc00000, data 0x7ad86d/0x9da000, compress 0x0/0x0/0x0, omap 0x6ab16, meta 0x60454ea), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 158892032 unmapped: 88645632 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 401 heartbeat osd_stat(store_statfs(0x4f9172000/0x0/0x4ffc00000, data 0x7ad86d/0x9da000, compress 0x0/0x0/0x0, omap 0x6ab16, meta 0x60454ea), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 401 handle_osd_map epochs [401,402], i have 401, src has [1,402]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 158900224 unmapped: 88637440 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 402 ms_handle_reset con 0x55f515621c00 session 0x55f516c3e1c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 402 handle_osd_map epochs [402,403], i have 402, src has [1,403]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 403 ms_handle_reset con 0x55f51515b800 session 0x55f51487cc40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 403 ms_handle_reset con 0x55f51515ec00 session 0x55f51487c000
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 158900224 unmapped: 88637440 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 403 handle_osd_map epochs [403,404], i have 403, src has [1,404]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 404 ms_handle_reset con 0x55f511a04400 session 0x55f514ed5880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 404 ms_handle_reset con 0x55f5125d7800 session 0x55f5123616c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 158900224 unmapped: 88637440 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 404 heartbeat osd_stat(store_statfs(0x4f9166000/0x0/0x4ffc00000, data 0x7b2ba5/0x9e2000, compress 0x0/0x0/0x0, omap 0x6b41e, meta 0x6044be2), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 404 heartbeat osd_stat(store_statfs(0x4f9166000/0x0/0x4ffc00000, data 0x7b2ba5/0x9e2000, compress 0x0/0x0/0x0, omap 0x6b41e, meta 0x6044be2), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 404 handle_osd_map epochs [405,405], i have 404, src has [1,405]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 405 ms_handle_reset con 0x55f51515b800 session 0x55f515268a80
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 158908416 unmapped: 88629248 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 405 heartbeat osd_stat(store_statfs(0x4f9163000/0x0/0x4ffc00000, data 0x7b4796/0x9e3000, compress 0x0/0x0/0x0, omap 0x6b509, meta 0x6044af7), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2813533 data_alloc: 218103808 data_used: 786486
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.080980301s of 10.461904526s, submitted: 98
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 88694784 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 405 handle_osd_map epochs [406,406], i have 405, src has [1,406]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 406 ms_handle_reset con 0x55f515621c00 session 0x55f51087bc00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 158851072 unmapped: 88686592 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 406 heartbeat osd_stat(store_statfs(0x4f9164000/0x0/0x4ffc00000, data 0x7b6431/0x9e6000, compress 0x0/0x0/0x0, omap 0x6bd24, meta 0x60442dc), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 88662016 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 406 handle_osd_map epochs [407,407], i have 406, src has [1,407]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 406 handle_osd_map epochs [406,407], i have 407, src has [1,407]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 407 ms_handle_reset con 0x55f514a2a400 session 0x55f5131348c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 158883840 unmapped: 88653824 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 407 ms_handle_reset con 0x55f511a04400 session 0x55f514ed5180
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 407 ms_handle_reset con 0x55f5125d7800 session 0x55f5147c7500
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 163381248 unmapped: 84156416 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2856593 data_alloc: 218103808 data_used: 1217722
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 407 heartbeat osd_stat(store_statfs(0x4f8be5000/0x0/0x4ffc00000, data 0xd35042/0xf65000, compress 0x0/0x0/0x0, omap 0x6c07a, meta 0x6043f86), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 407 handle_osd_map epochs [408,408], i have 407, src has [1,408]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 164610048 unmapped: 82927616 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 408 handle_osd_map epochs [409,409], i have 408, src has [1,409]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 409 ms_handle_reset con 0x55f514a2a400 session 0x55f516d28000
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 409 ms_handle_reset con 0x55f51515b800 session 0x55f5122376c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 164691968 unmapped: 82845696 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 164691968 unmapped: 82845696 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 164634624 unmapped: 82903040 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 409 heartbeat osd_stat(store_statfs(0x4f8a16000/0x0/0x4ffc00000, data 0xef3862/0x1122000, compress 0x0/0x0/0x0, omap 0x6ccac, meta 0x6043354), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 164634624 unmapped: 82903040 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2872193 data_alloc: 218103808 data_used: 1254472
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 164552704 unmapped: 82984960 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.234982491s of 10.858542442s, submitted: 233
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 164691968 unmapped: 82845696 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 409 ms_handle_reset con 0x55f515621c00 session 0x55f512361180
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 409 handle_osd_map epochs [410,410], i have 409, src has [1,410]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 410 ms_handle_reset con 0x55f511a04400 session 0x55f5130e7a40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 164691968 unmapped: 82845696 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 164691968 unmapped: 82845696 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 410 ms_handle_reset con 0x55f514a2a400 session 0x55f5152c5500
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 164700160 unmapped: 82837504 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2874243 data_alloc: 218103808 data_used: 1254472
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 410 heartbeat osd_stat(store_statfs(0x4f8a03000/0x0/0x4ffc00000, data 0xf17373/0x1149000, compress 0x0/0x0/0x0, omap 0x6cef3, meta 0x604310d), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 410 heartbeat osd_stat(store_statfs(0x4f8a03000/0x0/0x4ffc00000, data 0xf17373/0x1149000, compress 0x0/0x0/0x0, omap 0x6cef3, meta 0x604310d), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 410 handle_osd_map epochs [411,411], i have 410, src has [1,411]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 410 handle_osd_map epochs [410,411], i have 411, src has [1,411]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 411 ms_handle_reset con 0x55f51515b800 session 0x55f516c3e000
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 411 ms_handle_reset con 0x55f515851c00 session 0x55f516adcc40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 164708352 unmapped: 82829312 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 411 handle_osd_map epochs [411,412], i have 411, src has [1,412]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 412 ms_handle_reset con 0x55f51515a800 session 0x55f511eaba40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 412 ms_handle_reset con 0x55f513044000 session 0x55f515353180
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 412 ms_handle_reset con 0x55f5125d7800 session 0x55f51487c380
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 164880384 unmapped: 82657280 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 412 handle_osd_map epochs [413,413], i have 412, src has [1,413]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 164904960 unmapped: 82632704 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 413 ms_handle_reset con 0x55f511a04400 session 0x55f511f44700
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 413 handle_osd_map epochs [414,414], i have 413, src has [1,414]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 414 ms_handle_reset con 0x55f514a2a400 session 0x55f5153521c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 414 ms_handle_reset con 0x55f51515a800 session 0x55f5130e6c40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 164937728 unmapped: 82599936 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 164937728 unmapped: 82599936 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2902988 data_alloc: 218103808 data_used: 1255057
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 414 heartbeat osd_stat(store_statfs(0x4f89ee000/0x0/0x4ffc00000, data 0x10d85fe/0x115a000, compress 0x0/0x0/0x0, omap 0x6dff1, meta 0x604200f), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 164937728 unmapped: 82599936 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 164937728 unmapped: 82599936 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 164937728 unmapped: 82599936 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 414 ms_handle_reset con 0x55f511a04400 session 0x55f519483880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 164937728 unmapped: 82599936 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.489964485s of 12.699949265s, submitted: 88
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 414 ms_handle_reset con 0x55f5125d7800 session 0x55f519483c00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 414 ms_handle_reset con 0x55f51515b800 session 0x55f51226e700
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 414 ms_handle_reset con 0x55f514a2a400 session 0x55f512236fc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 164954112 unmapped: 82583552 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 414 heartbeat osd_stat(store_statfs(0x4f7df1000/0x0/0x4ffc00000, data 0x1cd8661/0x1d5b000, compress 0x0/0x0/0x0, omap 0x6e18f, meta 0x6041e71), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3018946 data_alloc: 218103808 data_used: 1255155
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 414 ms_handle_reset con 0x55f513044000 session 0x55f511eab6c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 414 ms_handle_reset con 0x55f511a04400 session 0x55f5194681c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 164954112 unmapped: 82583552 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 414 handle_osd_map epochs [415,415], i have 414, src has [1,415]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 164954112 unmapped: 82583552 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 164954112 unmapped: 82583552 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 164954112 unmapped: 82583552 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 415 heartbeat osd_stat(store_statfs(0x4f75ec000/0x0/0x4ffc00000, data 0x24da16f/0x255e000, compress 0x0/0x0/0x0, omap 0x6e8e4, meta 0x604171c), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 164954112 unmapped: 82583552 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013960 data_alloc: 218103808 data_used: 1255155
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 415 ms_handle_reset con 0x55f5125d7800 session 0x55f512c1b180
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 165314560 unmapped: 82223104 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 415 heartbeat osd_stat(store_statfs(0x4f75ec000/0x0/0x4ffc00000, data 0x24da16f/0x255e000, compress 0x0/0x0/0x0, omap 0x6e8e4, meta 0x604171c), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 165314560 unmapped: 82223104 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 165142528 unmapped: 82395136 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 165142528 unmapped: 82395136 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 415 heartbeat osd_stat(store_statfs(0x4f75c4000/0x0/0x4ffc00000, data 0x250416f/0x2588000, compress 0x0/0x0/0x0, omap 0x6eb51, meta 0x60414af), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 165142528 unmapped: 82395136 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3019335 data_alloc: 218103808 data_used: 1457907
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 165142528 unmapped: 82395136 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.894188881s of 12.311196327s, submitted: 41
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 415 ms_handle_reset con 0x55f51515b800 session 0x55f5130e68c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 415 heartbeat osd_stat(store_statfs(0x4f75c4000/0x0/0x4ffc00000, data 0x250416f/0x2588000, compress 0x0/0x0/0x0, omap 0x6eb51, meta 0x60414af), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 165445632 unmapped: 82092032 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 165445632 unmapped: 82092032 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 165445632 unmapped: 82092032 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 415 heartbeat osd_stat(store_statfs(0x4f759c000/0x0/0x4ffc00000, data 0x252d16f/0x25b0000, compress 0x0/0x0/0x0, omap 0x6ebcb, meta 0x6041435), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 165445632 unmapped: 82092032 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3026329 data_alloc: 218103808 data_used: 1457923
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 165445632 unmapped: 82092032 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 167297024 unmapped: 80240640 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 167297024 unmapped: 80240640 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 415 heartbeat osd_stat(store_statfs(0x4f759c000/0x0/0x4ffc00000, data 0x252d16f/0x25b0000, compress 0x0/0x0/0x0, omap 0x6ebcb, meta 0x6041435), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 167297024 unmapped: 80240640 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 415 heartbeat osd_stat(store_statfs(0x4f759c000/0x0/0x4ffc00000, data 0x252d16f/0x25b0000, compress 0x0/0x0/0x0, omap 0x6ebcb, meta 0x6041435), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 167297024 unmapped: 80240640 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3058201 data_alloc: 218103808 data_used: 6813955
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 172851200 unmapped: 74686464 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 172883968 unmapped: 74653696 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 172982272 unmapped: 74555392 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 415 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x2aa316f/0x2b26000, compress 0x0/0x0/0x0, omap 0x6ebcb, meta 0x6041435), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 173015040 unmapped: 74522624 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.614696503s of 12.781006813s, submitted: 15
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 415 ms_handle_reset con 0x55f515163000 session 0x55f515268a80
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 169058304 unmapped: 78479360 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3112932 data_alloc: 218103808 data_used: 8993027
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 415 ms_handle_reset con 0x55f515339400 session 0x55f511eabdc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 415 ms_handle_reset con 0x55f511a04400 session 0x55f516c3e000
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 169205760 unmapped: 78331904 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 169598976 unmapped: 77938688 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 415 heartbeat osd_stat(store_statfs(0x4f7026000/0x0/0x4ffc00000, data 0x2aa316f/0x2b26000, compress 0x0/0x0/0x0, omap 0x6ec7e, meta 0x6041382), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 415 ms_handle_reset con 0x55f5125d7800 session 0x55f516adc1c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 184082432 unmapped: 63455232 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 415 ms_handle_reset con 0x55f51515b800 session 0x55f512236fc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 415 ms_handle_reset con 0x55f515163000 session 0x55f519483880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 178405376 unmapped: 69132288 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 415 ms_handle_reset con 0x55f514a89400 session 0x55f512360540
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 178872320 unmapped: 68665344 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3253730 data_alloc: 234881024 data_used: 9950979
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 178872320 unmapped: 68665344 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 415 ms_handle_reset con 0x55f511a04400 session 0x55f512237880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 177750016 unmapped: 69787648 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 415 ms_handle_reset con 0x55f5125d7800 session 0x55f5147c68c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 415 heartbeat osd_stat(store_statfs(0x4f5b0e000/0x0/0x4ffc00000, data 0x3fbb16f/0x403e000, compress 0x0/0x0/0x0, omap 0x6f0c3, meta 0x6040f3d), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 177750016 unmapped: 69787648 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 415 ms_handle_reset con 0x55f514a89400 session 0x55f511eaa700
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 177790976 unmapped: 69746688 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.364914894s of 10.626168251s, submitted: 162
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 415 ms_handle_reset con 0x55f51515b800 session 0x55f512c596c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 177807360 unmapped: 69730304 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3248930 data_alloc: 234881024 data_used: 9950979
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 415 ms_handle_reset con 0x55f515163000 session 0x55f51957a1c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 415 heartbeat osd_stat(store_statfs(0x4f5b0d000/0x0/0x4ffc00000, data 0x3fbb17f/0x403f000, compress 0x0/0x0/0x0, omap 0x6f595, meta 0x6040a6b), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 177807360 unmapped: 69730304 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 177807360 unmapped: 69730304 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 177807360 unmapped: 69730304 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 177807360 unmapped: 69730304 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 415 heartbeat osd_stat(store_statfs(0x4f5b0e000/0x0/0x4ffc00000, data 0x3fbb16f/0x403e000, compress 0x0/0x0/0x0, omap 0x6f5a3, meta 0x6040a5d), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 177807360 unmapped: 69730304 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3248002 data_alloc: 234881024 data_used: 9950979
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 177807360 unmapped: 69730304 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 177807360 unmapped: 69730304 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 415 ms_handle_reset con 0x55f513044000 session 0x55f512361180
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 415 ms_handle_reset con 0x55f514a2a400 session 0x55f51487c000
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 415 ms_handle_reset con 0x55f511a04400 session 0x55f519483c00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 177889280 unmapped: 69648384 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 177897472 unmapped: 69640192 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 415 ms_handle_reset con 0x55f5125d7800 session 0x55f511eaafc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 415 ms_handle_reset con 0x55f514a89400 session 0x55f516d28fc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 415 heartbeat osd_stat(store_statfs(0x4f6094000/0x0/0x4ffc00000, data 0x3a6210d/0x3ab7000, compress 0x0/0x0/0x0, omap 0x6f9fe, meta 0x6040602), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 177913856 unmapped: 69623808 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3221170 data_alloc: 234881024 data_used: 9645827
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 415 ms_handle_reset con 0x55f511a04400 session 0x55f516c3f880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.208503723s of 10.681672096s, submitted: 69
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 177913856 unmapped: 69623808 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 415 handle_osd_map epochs [416,416], i have 415, src has [1,416]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 416 ms_handle_reset con 0x55f5125d7800 session 0x55f512c64e00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 416 ms_handle_reset con 0x55f514a8f000 session 0x55f5147c6fc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 416 ms_handle_reset con 0x55f515290c00 session 0x55f51957b500
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 416 ms_handle_reset con 0x55f513044000 session 0x55f512279dc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 176709632 unmapped: 70828032 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 416 heartbeat osd_stat(store_statfs(0x4f60ae000/0x0/0x4ffc00000, data 0x3863cf2/0x3a9c000, compress 0x0/0x0/0x0, omap 0x701b2, meta 0x603fe4e), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 176709632 unmapped: 70828032 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 416 ms_handle_reset con 0x55f511a04400 session 0x55f511eaac40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 416 ms_handle_reset con 0x55f5125d7800 session 0x55f514bfae00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 176496640 unmapped: 71041024 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 176496640 unmapped: 71041024 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3121725 data_alloc: 218103808 data_used: 6054975
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 176496640 unmapped: 71041024 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 416 heartbeat osd_stat(store_statfs(0x4f68df000/0x0/0x4ffc00000, data 0x3033ce2/0x326b000, compress 0x0/0x0/0x0, omap 0x702ef, meta 0x603fd11), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 416 handle_osd_map epochs [417,417], i have 416, src has [1,417]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 416 ms_handle_reset con 0x55f515851c00 session 0x55f514a70000
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 417 ms_handle_reset con 0x55f511a05c00 session 0x55f511e948c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 176496640 unmapped: 71041024 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 417 heartbeat osd_stat(store_statfs(0x4f68dc000/0x0/0x4ffc00000, data 0x30357b8/0x326e000, compress 0x0/0x0/0x0, omap 0x70462, meta 0x603fb9e), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 417 handle_osd_map epochs [417,418], i have 417, src has [1,418]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 176496640 unmapped: 71041024 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 418 ms_handle_reset con 0x55f514a8f000 session 0x55f514a71c00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 418 ms_handle_reset con 0x55f513044000 session 0x55f514718700
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 418 heartbeat osd_stat(store_statfs(0x4f68fd000/0x0/0x4ffc00000, data 0x30133ff/0x324d000, compress 0x0/0x0/0x0, omap 0x708f9, meta 0x603f707), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 176504832 unmapped: 71032832 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 176504832 unmapped: 71032832 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3120081 data_alloc: 218103808 data_used: 5962815
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 176504832 unmapped: 71032832 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 418 heartbeat osd_stat(store_statfs(0x4f68fd000/0x0/0x4ffc00000, data 0x30133ff/0x324d000, compress 0x0/0x0/0x0, omap 0x708f9, meta 0x603f707), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 176504832 unmapped: 71032832 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.102822304s of 11.950208664s, submitted: 101
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 418 ms_handle_reset con 0x55f511a04400 session 0x55f514a99180
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 176504832 unmapped: 71032832 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 418 ms_handle_reset con 0x55f5125d7800 session 0x55f512c58380
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 176521216 unmapped: 71016448 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 418 heartbeat osd_stat(store_statfs(0x4f68ff000/0x0/0x4ffc00000, data 0x30133ff/0x324d000, compress 0x0/0x0/0x0, omap 0x70c56, meta 0x603f3aa), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 418 handle_osd_map epochs [419,419], i have 418, src has [1,419]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 419 ms_handle_reset con 0x55f511a05c00 session 0x55f5130e6c40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 419 ms_handle_reset con 0x55f515851c00 session 0x55f512236700
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 176521216 unmapped: 71016448 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3124412 data_alloc: 218103808 data_used: 5962815
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 419 ms_handle_reset con 0x55f511a04400 session 0x55f519483c00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 176529408 unmapped: 71008256 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 419 ms_handle_reset con 0x55f5125d7800 session 0x55f5153521c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 419 ms_handle_reset con 0x55f511a05c00 session 0x55f511eaa700
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 176545792 unmapped: 70991872 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 419 handle_osd_map epochs [420,420], i have 419, src has [1,420]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 420 ms_handle_reset con 0x55f513044000 session 0x55f516c3f880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 420 ms_handle_reset con 0x55f515290c00 session 0x55f51318f880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 176545792 unmapped: 70991872 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 420 ms_handle_reset con 0x55f511a04400 session 0x55f511e8b880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 420 heartbeat osd_stat(store_statfs(0x4f68f5000/0x0/0x4ffc00000, data 0x3016b56/0x3255000, compress 0x0/0x0/0x0, omap 0x71a14, meta 0x603e5ec), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 176545792 unmapped: 70991872 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 420 ms_handle_reset con 0x55f5125d7800 session 0x55f515353c00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 420 ms_handle_reset con 0x55f511a05c00 session 0x55f516adddc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 420 ms_handle_reset con 0x55f514a2a400 session 0x55f516d28000
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 420 ms_handle_reset con 0x55f51515b800 session 0x55f5143c81c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 176676864 unmapped: 70860800 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3197292 data_alloc: 218103808 data_used: 5962929
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 420 ms_handle_reset con 0x55f511a04400 session 0x55f514bfa8c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 420 handle_osd_map epochs [420,421], i have 420, src has [1,421]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 421 ms_handle_reset con 0x55f5122b6000 session 0x55f5147c6fc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 421 ms_handle_reset con 0x55f513044000 session 0x55f516d28fc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 176685056 unmapped: 70852608 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 421 ms_handle_reset con 0x55f511a05c00 session 0x55f512361180
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 421 ms_handle_reset con 0x55f5125d7800 session 0x55f5147c68c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 176685056 unmapped: 70852608 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 176685056 unmapped: 70852608 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 421 heartbeat osd_stat(store_statfs(0x4f5dbc000/0x0/0x4ffc00000, data 0x3b5072b/0x3d8e000, compress 0x0/0x0/0x0, omap 0x71fbe, meta 0x603e042), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 176685056 unmapped: 70852608 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 176685056 unmapped: 70852608 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3198407 data_alloc: 218103808 data_used: 5962815
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 421 ms_handle_reset con 0x55f511a04400 session 0x55f5152c4e00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 176685056 unmapped: 70852608 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 421 handle_osd_map epochs [422,422], i have 421, src has [1,422]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.722827911s of 14.249926567s, submitted: 109
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 421 handle_osd_map epochs [421,422], i have 422, src has [1,422]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 422 ms_handle_reset con 0x55f511a05c00 session 0x55f511eaae00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 176685056 unmapped: 70852608 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 422 ms_handle_reset con 0x55f5122b6000 session 0x55f519469880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 422 ms_handle_reset con 0x55f513044000 session 0x55f51475b340
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 176840704 unmapped: 70696960 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 422 heartbeat osd_stat(store_statfs(0x4f5d94000/0x0/0x4ffc00000, data 0x3b76210/0x3db6000, compress 0x0/0x0/0x0, omap 0x726ba, meta 0x603d946), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 176840704 unmapped: 70696960 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 422 ms_handle_reset con 0x55f512247c00 session 0x55f5142c1500
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 176865280 unmapped: 70672384 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3206030 data_alloc: 218103808 data_used: 5963327
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 2400.2 total, 600.0 interval
Cumulative writes: 24K writes, 101K keys, 24K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.03 MB/s
Cumulative WAL: 24K writes, 8538 syncs, 2.89 writes per sync, written: 0.06 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 12K writes, 50K keys, 12K commit groups, 1.0 writes per commit group, ingest: 30.11 MB, 0.05 MB/s
Interval WAL: 12K writes, 5265 syncs, 2.43 writes per sync, written: 0.03 GB, 0.05 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 179126272 unmapped: 68411392 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 185245696 unmapped: 62291968 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 422 heartbeat osd_stat(store_statfs(0x4f5d95000/0x0/0x4ffc00000, data 0x3b76233/0x3db7000, compress 0x0/0x0/0x0, omap 0x726ba, meta 0x603d946), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 185245696 unmapped: 62291968 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 185245696 unmapped: 62291968 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 185245696 unmapped: 62291968 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3267598 data_alloc: 234881024 data_used: 14899718
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 185245696 unmapped: 62291968 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 422 heartbeat osd_stat(store_statfs(0x4f5d95000/0x0/0x4ffc00000, data 0x3b76233/0x3db7000, compress 0x0/0x0/0x0, omap 0x726ba, meta 0x603d946), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 185278464 unmapped: 62259200 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 185278464 unmapped: 62259200 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 185278464 unmapped: 62259200 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 422 heartbeat osd_stat(store_statfs(0x4f5d95000/0x0/0x4ffc00000, data 0x3b76233/0x3db7000, compress 0x0/0x0/0x0, omap 0x726ba, meta 0x603d946), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 185278464 unmapped: 62259200 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3267598 data_alloc: 234881024 data_used: 14899718
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.254010201s of 14.274919510s, submitted: 18
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 189120512 unmapped: 58417152 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 189243392 unmapped: 58294272 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 190398464 unmapped: 57139200 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 190398464 unmapped: 57139200 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 422 heartbeat osd_stat(store_statfs(0x4f58ba000/0x0/0x4ffc00000, data 0x403a233/0x427b000, compress 0x0/0x0/0x0, omap 0x726ba, meta 0x603d946), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 190398464 unmapped: 57139200 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3336662 data_alloc: 234881024 data_used: 16004102
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 190398464 unmapped: 57139200 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 422 heartbeat osd_stat(store_statfs(0x4f58ba000/0x0/0x4ffc00000, data 0x403a233/0x427b000, compress 0x0/0x0/0x0, omap 0x726ba, meta 0x603d946), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 190398464 unmapped: 57139200 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 189726720 unmapped: 57810944 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 189726720 unmapped: 57810944 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 189726720 unmapped: 57810944 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328166 data_alloc: 234881024 data_used: 16004102
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 189726720 unmapped: 57810944 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 422 heartbeat osd_stat(store_statfs(0x4f58cf000/0x0/0x4ffc00000, data 0x403c233/0x427d000, compress 0x0/0x0/0x0, omap 0x726ba, meta 0x603d946), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.694367409s of 11.147062302s, submitted: 88
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 189726720 unmapped: 57810944 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 422 ms_handle_reset con 0x55f511a04400 session 0x55f516c3e000
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 189751296 unmapped: 57786368 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 422 ms_handle_reset con 0x55f511a05c00 session 0x55f516adc8c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 189759488 unmapped: 57778176 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 189759488 unmapped: 57778176 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327322 data_alloc: 234881024 data_used: 15983622
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 189759488 unmapped: 57778176 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 422 heartbeat osd_stat(store_statfs(0x4f58cc000/0x0/0x4ffc00000, data 0x403f210/0x427f000, compress 0x0/0x0/0x0, omap 0x724ce, meta 0x603db32), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 189759488 unmapped: 57778176 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 189759488 unmapped: 57778176 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 189759488 unmapped: 57778176 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 189759488 unmapped: 57778176 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327322 data_alloc: 234881024 data_used: 15983622
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 189759488 unmapped: 57778176 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 422 heartbeat osd_stat(store_statfs(0x4f58cc000/0x0/0x4ffc00000, data 0x403f210/0x427f000, compress 0x0/0x0/0x0, omap 0x724ce, meta 0x603db32), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 189759488 unmapped: 57778176 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 189759488 unmapped: 57778176 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 189759488 unmapped: 57778176 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 189759488 unmapped: 57778176 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327322 data_alloc: 234881024 data_used: 15983622
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.398818970s of 14.036539078s, submitted: 33
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 422 ms_handle_reset con 0x55f5122b6000 session 0x55f5194681c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 189915136 unmapped: 57622528 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 422 heartbeat osd_stat(store_statfs(0x4f58a9000/0x0/0x4ffc00000, data 0x4063210/0x42a3000, compress 0x0/0x0/0x0, omap 0x724ce, meta 0x603db32), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 190201856 unmapped: 57335808 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 190439424 unmapped: 57098240 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 422 ms_handle_reset con 0x55f514a2a400 session 0x55f512279880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 422 ms_handle_reset con 0x55f514a2f000 session 0x55f51226e700
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 190439424 unmapped: 57098240 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 422 ms_handle_reset con 0x55f514a2f000 session 0x55f511eaafc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 188792832 unmapped: 58744832 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3318522 data_alloc: 234881024 data_used: 16190947
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 422 heartbeat osd_stat(store_statfs(0x4f58ce000/0x0/0x4ffc00000, data 0x403f201/0x427e000, compress 0x0/0x0/0x0, omap 0x7286e, meta 0x603d792), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 188792832 unmapped: 58744832 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 188792832 unmapped: 58744832 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 188792832 unmapped: 58744832 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 188792832 unmapped: 58744832 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 422 ms_handle_reset con 0x55f511a04400 session 0x55f5124356c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 188792832 unmapped: 58744832 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 422 ms_handle_reset con 0x55f511a05c00 session 0x55f5130e7180
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3060750 data_alloc: 234881024 data_used: 9894355
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 185114624 unmapped: 62423040 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 422 heartbeat osd_stat(store_statfs(0x4f81e7000/0x0/0x4ffc00000, data 0x17271f1/0x1965000, compress 0x0/0x0/0x0, omap 0x728f6, meta 0x603d70a), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.767030716s of 10.863199234s, submitted: 26
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 422 heartbeat osd_stat(store_statfs(0x4f81e7000/0x0/0x4ffc00000, data 0x17271f1/0x1965000, compress 0x0/0x0/0x0, omap 0x728f6, meta 0x603d70a), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 186195968 unmapped: 61341696 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 186195968 unmapped: 61341696 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 186195968 unmapped: 61341696 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 186195968 unmapped: 61341696 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3076030 data_alloc: 234881024 data_used: 11053011
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 186195968 unmapped: 61341696 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 422 ms_handle_reset con 0x55f5122b6000 session 0x55f516d28c40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 422 heartbeat osd_stat(store_statfs(0x4f81e5000/0x0/0x4ffc00000, data 0x17271f1/0x1965000, compress 0x0/0x0/0x0, omap 0x728f6, meta 0x603d70a), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 186195968 unmapped: 61341696 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 422 ms_handle_reset con 0x55f514a2a400 session 0x55f519482a80
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 186195968 unmapped: 61341696 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 186195968 unmapped: 61341696 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 422 heartbeat osd_stat(store_statfs(0x4f81db000/0x0/0x4ffc00000, data 0x17331f1/0x1971000, compress 0x0/0x0/0x0, omap 0x7293a, meta 0x603d6c6), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 186195968 unmapped: 61341696 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3076584 data_alloc: 234881024 data_used: 11049939
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 186195968 unmapped: 61341696 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.454865456s of 10.597265244s, submitted: 33
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 186204160 unmapped: 61333504 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 186220544 unmapped: 61317120 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 186253312 unmapped: 61284352 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 422 heartbeat osd_stat(store_statfs(0x4f819a000/0x0/0x4ffc00000, data 0x1773254/0x19b2000, compress 0x0/0x0/0x0, omap 0x729c2, meta 0x603d63e), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 422 ms_handle_reset con 0x55f511a04400 session 0x55f5124356c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 187318272 unmapped: 60219392 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3079867 data_alloc: 234881024 data_used: 11049939
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 187318272 unmapped: 60219392 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 422 heartbeat osd_stat(store_statfs(0x4f819a000/0x0/0x4ffc00000, data 0x1773254/0x19b2000, compress 0x0/0x0/0x0, omap 0x729c2, meta 0x603d63e), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 187318272 unmapped: 60219392 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 422 heartbeat osd_stat(store_statfs(0x4f819a000/0x0/0x4ffc00000, data 0x1773254/0x19b2000, compress 0x0/0x0/0x0, omap 0x729c2, meta 0x603d63e), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 187326464 unmapped: 60211200 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 422 heartbeat osd_stat(store_statfs(0x4f8199000/0x0/0x4ffc00000, data 0x1774254/0x19b3000, compress 0x0/0x0/0x0, omap 0x729c2, meta 0x603d63e), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 187326464 unmapped: 60211200 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 187326464 unmapped: 60211200 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3080371 data_alloc: 234881024 data_used: 11054035
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 422 heartbeat osd_stat(store_statfs(0x4f8199000/0x0/0x4ffc00000, data 0x1774254/0x19b3000, compress 0x0/0x0/0x0, omap 0x729c2, meta 0x603d63e), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 187326464 unmapped: 60211200 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 187326464 unmapped: 60211200 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 187326464 unmapped: 60211200 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 187326464 unmapped: 60211200 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 187326464 unmapped: 60211200 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3080371 data_alloc: 234881024 data_used: 11054035
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 422 heartbeat osd_stat(store_statfs(0x4f8199000/0x0/0x4ffc00000, data 0x1774254/0x19b3000, compress 0x0/0x0/0x0, omap 0x729c2, meta 0x603d63e), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 187326464 unmapped: 60211200 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 422 ms_handle_reset con 0x55f511a05c00 session 0x55f519482700
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.673460007s of 14.215552330s, submitted: 101
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 422 ms_handle_reset con 0x55f5122b6000 session 0x55f5152c4e00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 195747840 unmapped: 51789824 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 422 ms_handle_reset con 0x55f514a2f000 session 0x55f511e8b880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 422 heartbeat osd_stat(store_statfs(0x4f57e0000/0x0/0x4ffc00000, data 0x412d1f1/0x436b000, compress 0x0/0x0/0x0, omap 0x72d5d, meta 0x603d2a3), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 422 ms_handle_reset con 0x55f515160400 session 0x55f512c59340
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 187375616 unmapped: 60162048 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 422 handle_osd_map epochs [423,423], i have 422, src has [1,423]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 423 ms_handle_reset con 0x55f511a04400 session 0x55f519483c00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 187383808 unmapped: 60153856 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 423 ms_handle_reset con 0x55f5122b6000 session 0x55f5152696c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 423 ms_handle_reset con 0x55f511a05c00 session 0x55f516c3e1c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 187392000 unmapped: 60145664 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3313398 data_alloc: 234881024 data_used: 11889520
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 423 heartbeat osd_stat(store_statfs(0x4f57db000/0x0/0x4ffc00000, data 0x412ee46/0x436f000, compress 0x0/0x0/0x0, omap 0x72fb3, meta 0x603d04d), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 187392000 unmapped: 60145664 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 187392000 unmapped: 60145664 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 187424768 unmapped: 60112896 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 187424768 unmapped: 60112896 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 187424768 unmapped: 60112896 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3356590 data_alloc: 234881024 data_used: 11885424
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 423 heartbeat osd_stat(store_statfs(0x4f57d1000/0x0/0x4ffc00000, data 0x4489e46/0x437b000, compress 0x0/0x0/0x0, omap 0x736f9, meta 0x603c907), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 187424768 unmapped: 60112896 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 423 ms_handle_reset con 0x55f514a2f000 session 0x55f5167061c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 423 heartbeat osd_stat(store_statfs(0x4f57d1000/0x0/0x4ffc00000, data 0x4489e46/0x437b000, compress 0x0/0x0/0x0, omap 0x736f9, meta 0x603c907), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 187424768 unmapped: 60112896 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 423 ms_handle_reset con 0x55f515621000 session 0x55f51318f880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 187424768 unmapped: 60112896 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 423 ms_handle_reset con 0x55f515621000 session 0x55f516d28000
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 187424768 unmapped: 60112896 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 423 ms_handle_reset con 0x55f511a05c00 session 0x55f514a70540
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 423 ms_handle_reset con 0x55f511a04400 session 0x55f514bfa8c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.765850067s of 13.469795227s, submitted: 44
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 423 ms_handle_reset con 0x55f514a2f000 session 0x55f51957b880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 187424768 unmapped: 60112896 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3356722 data_alloc: 234881024 data_used: 11885424
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 423 ms_handle_reset con 0x55f516532000 session 0x55f519468000
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 423 ms_handle_reset con 0x55f511a04400 session 0x55f516c3efc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 187367424 unmapped: 60170240 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 187367424 unmapped: 60170240 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 423 heartbeat osd_stat(store_statfs(0x4f57ad000/0x0/0x4ffc00000, data 0x44ade46/0x439f000, compress 0x0/0x0/0x0, omap 0x736f9, meta 0x603c907), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 188039168 unmapped: 59498496 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 188039168 unmapped: 59498496 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 188039168 unmapped: 59498496 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3390538 data_alloc: 234881024 data_used: 17121136
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 188039168 unmapped: 59498496 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 423 heartbeat osd_stat(store_statfs(0x4f57ad000/0x0/0x4ffc00000, data 0x44ade46/0x439f000, compress 0x0/0x0/0x0, omap 0x736f9, meta 0x603c907), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 423 heartbeat osd_stat(store_statfs(0x4f57ad000/0x0/0x4ffc00000, data 0x44ade46/0x439f000, compress 0x0/0x0/0x0, omap 0x736f9, meta 0x603c907), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 188039168 unmapped: 59498496 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 188039168 unmapped: 59498496 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 188039168 unmapped: 59498496 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 188039168 unmapped: 59498496 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3390538 data_alloc: 234881024 data_used: 17121136
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 188039168 unmapped: 59498496 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 423 heartbeat osd_stat(store_statfs(0x4f57ad000/0x0/0x4ffc00000, data 0x44ade46/0x439f000, compress 0x0/0x0/0x0, omap 0x736f9, meta 0x603c907), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.482620239s of 12.495110512s, submitted: 2
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 188129280 unmapped: 59408384 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 198688768 unmapped: 48848896 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 199532544 unmapped: 48005120 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 423 heartbeat osd_stat(store_statfs(0x4f4252000/0x0/0x4ffc00000, data 0x4728e46/0x461a000, compress 0x0/0x0/0x0, omap 0x736f9, meta 0x71dc907), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 199278592 unmapped: 48259072 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3514635 data_alloc: 234881024 data_used: 20024688
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 198082560 unmapped: 49455104 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 198082560 unmapped: 49455104 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 198082560 unmapped: 49455104 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 423 heartbeat osd_stat(store_statfs(0x4f3841000/0x0/0x4ffc00000, data 0x5279e46/0x516b000, compress 0x0/0x0/0x0, omap 0x736f9, meta 0x71dc907), peers [0,2] op hist [0,0,0,0,1])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 198959104 unmapped: 48578560 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 198959104 unmapped: 48578560 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511767 data_alloc: 234881024 data_used: 19983728
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 198959104 unmapped: 48578560 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 198959104 unmapped: 48578560 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.392421722s of 10.792387962s, submitted: 161
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 49381376 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 423 heartbeat osd_stat(store_statfs(0x4f376e000/0x0/0x4ffc00000, data 0x534ce46/0x523e000, compress 0x0/0x0/0x0, omap 0x736f9, meta 0x71dc907), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 423 ms_handle_reset con 0x55f5122b6000 session 0x55f519468c40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 198336512 unmapped: 49201152 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 423 ms_handle_reset con 0x55f515621000 session 0x55f5147c6fc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 423 ms_handle_reset con 0x55f511a05c00 session 0x55f512333dc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 423 ms_handle_reset con 0x55f514a2f000 session 0x55f5142c1500
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 423 ms_handle_reset con 0x55f514a2f000 session 0x55f51226e540
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 198344704 unmapped: 49192960 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3516115 data_alloc: 234881024 data_used: 20800880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 198344704 unmapped: 49192960 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 198344704 unmapped: 49192960 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 423 heartbeat osd_stat(store_statfs(0x4f3784000/0x0/0x4ffc00000, data 0x536ee46/0x5228000, compress 0x0/0x0/0x0, omap 0x73bb9, meta 0x71dc447), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 198344704 unmapped: 49192960 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 423 ms_handle_reset con 0x55f511a04400 session 0x55f516707340
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 423 ms_handle_reset con 0x55f511a05c00 session 0x55f515352e00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 198377472 unmapped: 49160192 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 423 ms_handle_reset con 0x55f5122b6000 session 0x55f515344fc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 423 handle_osd_map epochs [424,424], i have 423, src has [1,424]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 424 ms_handle_reset con 0x55f515621000 session 0x55f511f61500
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 198377472 unmapped: 49160192 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3480053 data_alloc: 234881024 data_used: 20309360
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 424 heartbeat osd_stat(store_statfs(0x4f3ace000/0x0/0x4ffc00000, data 0x4fb3a2b/0x4edb000, compress 0x0/0x0/0x0, omap 0x73da7, meta 0x71dc259), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 198377472 unmapped: 49160192 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 424 ms_handle_reset con 0x55f513044000 session 0x55f515353880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 424 ms_handle_reset con 0x55f5122b7400 session 0x55f512c1a700
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 424 ms_handle_reset con 0x55f511a04400 session 0x55f5122361c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 198385664 unmapped: 49152000 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 198385664 unmapped: 49152000 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 198385664 unmapped: 49152000 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 424 ms_handle_reset con 0x55f511a05c00 session 0x55f51487ddc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 198385664 unmapped: 49152000 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.723073959s of 12.381030083s, submitted: 74
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3254167 data_alloc: 218103808 data_used: 6154510
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 424 ms_handle_reset con 0x55f5122b6000 session 0x55f511e8b340
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 424 heartbeat osd_stat(store_statfs(0x4f4b1c000/0x0/0x4ffc00000, data 0x3c4fa2b/0x3e90000, compress 0x0/0x0/0x0, omap 0x743c7, meta 0x71dbc39), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 191160320 unmapped: 56377344 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 424 handle_osd_map epochs [425,425], i have 424, src has [1,425]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 191160320 unmapped: 56377344 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 425 ms_handle_reset con 0x55f511a04400 session 0x55f5152c5340
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 191160320 unmapped: 56377344 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 425 heartbeat osd_stat(store_statfs(0x4f4b17000/0x0/0x4ffc00000, data 0x3c51501/0x3e93000, compress 0x0/0x0/0x0, omap 0x74535, meta 0x71dbacb), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 425 ms_handle_reset con 0x55f511a05c00 session 0x55f516adc540
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 191160320 unmapped: 56377344 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 425 ms_handle_reset con 0x55f5122b7400 session 0x55f512361500
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 425 ms_handle_reset con 0x55f513044000 session 0x55f5147c68c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 191160320 unmapped: 56377344 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3259423 data_alloc: 218103808 data_used: 6154510
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 191160320 unmapped: 56377344 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 191160320 unmapped: 56377344 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 191160320 unmapped: 56377344 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 425 heartbeat osd_stat(store_statfs(0x4f4b18000/0x0/0x4ffc00000, data 0x3c51511/0x3e94000, compress 0x0/0x0/0x0, omap 0x74535, meta 0x71dbacb), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 191160320 unmapped: 56377344 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 191160320 unmapped: 56377344 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3260371 data_alloc: 218103808 data_used: 6282510
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 191160320 unmapped: 56377344 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 425 heartbeat osd_stat(store_statfs(0x4f4b18000/0x0/0x4ffc00000, data 0x3c51511/0x3e94000, compress 0x0/0x0/0x0, omap 0x74535, meta 0x71dbacb), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 191160320 unmapped: 56377344 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 425 heartbeat osd_stat(store_statfs(0x4f4b18000/0x0/0x4ffc00000, data 0x3c51511/0x3e94000, compress 0x0/0x0/0x0, omap 0x74535, meta 0x71dbacb), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 191160320 unmapped: 56377344 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 191160320 unmapped: 56377344 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.187745094s of 14.226877213s, submitted: 34
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 425 ms_handle_reset con 0x55f515160c00 session 0x55f512435500
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 191291392 unmapped: 56246272 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3261979 data_alloc: 218103808 data_used: 6282510
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 191291392 unmapped: 56246272 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 425 handle_osd_map epochs [426,426], i have 425, src has [1,426]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 425 handle_osd_map epochs [425,426], i have 426, src has [1,426]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 426 ms_handle_reset con 0x55f511a04400 session 0x55f515345a40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 426 heartbeat osd_stat(store_statfs(0x4f4b17000/0x0/0x4ffc00000, data 0x3c51521/0x3e95000, compress 0x0/0x0/0x0, omap 0x745bd, meta 0x71dba43), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 191291392 unmapped: 56246272 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 426 ms_handle_reset con 0x55f511a05c00 session 0x55f516c3fdc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 426 ms_handle_reset con 0x55f5122b7400 session 0x55f512c58380
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 426 ms_handle_reset con 0x55f513044000 session 0x55f516c3f880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 191299584 unmapped: 56238080 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 426 ms_handle_reset con 0x55f515164000 session 0x55f516706380
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 426 heartbeat osd_stat(store_statfs(0x4f4b13000/0x0/0x4ffc00000, data 0x3c53124/0x3e99000, compress 0x0/0x0/0x0, omap 0x74d61, meta 0x71db29f), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 426 ms_handle_reset con 0x55f511a04400 session 0x55f516adda40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 426 ms_handle_reset con 0x55f511a05c00 session 0x55f512434540
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 426 ms_handle_reset con 0x55f5122b7400 session 0x55f511f61dc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 190283776 unmapped: 57253888 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 426 heartbeat osd_stat(store_statfs(0x4f4b12000/0x0/0x4ffc00000, data 0x3c53186/0x3e9a000, compress 0x0/0x0/0x0, omap 0x74b9f, meta 0x71db461), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 190283776 unmapped: 57253888 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 426 ms_handle_reset con 0x55f513044000 session 0x55f512c7da40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3287343 data_alloc: 218103808 data_used: 8129294
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 426 ms_handle_reset con 0x55f514a88800 session 0x55f511f55500
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 190283776 unmapped: 57253888 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 426 ms_handle_reset con 0x55f511a04400 session 0x55f516add180
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 426 ms_handle_reset con 0x55f511a05c00 session 0x55f519469dc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 190218240 unmapped: 57319424 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 426 heartbeat osd_stat(store_statfs(0x4f4b13000/0x0/0x4ffc00000, data 0x3c53124/0x3e99000, compress 0x0/0x0/0x0, omap 0x74caf, meta 0x71db351), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 426 ms_handle_reset con 0x55f5122b7400 session 0x55f514718c40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 190218240 unmapped: 57319424 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 426 ms_handle_reset con 0x55f513044000 session 0x55f516adc8c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 190308352 unmapped: 57229312 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 426 ms_handle_reset con 0x55f515161800 session 0x55f5130e7180
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.333797455s of 10.416969299s, submitted: 49
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 426 ms_handle_reset con 0x55f511a04400 session 0x55f516707340
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 190308352 unmapped: 57229312 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3285534 data_alloc: 218103808 data_used: 8126222
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 426 ms_handle_reset con 0x55f511a05c00 session 0x55f512c7cc40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 190308352 unmapped: 57229312 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 426 heartbeat osd_stat(store_statfs(0x4f4b14000/0x0/0x4ffc00000, data 0x3c53114/0x3e98000, compress 0x0/0x0/0x0, omap 0x74dbf, meta 0x71db241), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 426 handle_osd_map epochs [427,427], i have 426, src has [1,427]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 427 ms_handle_reset con 0x55f5122b7400 session 0x55f516707dc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 190177280 unmapped: 57360384 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 427 ms_handle_reset con 0x55f513044000 session 0x55f515344540
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 190177280 unmapped: 57360384 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 427 ms_handle_reset con 0x55f515337400 session 0x55f5167076c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 190177280 unmapped: 57360384 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 190259200 unmapped: 57278464 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3286248 data_alloc: 218103808 data_used: 8126222
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 190259200 unmapped: 57278464 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 190259200 unmapped: 57278464 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 427 heartbeat osd_stat(store_statfs(0x4f4b12000/0x0/0x4ffc00000, data 0x3c54d4b/0x3e9a000, compress 0x0/0x0/0x0, omap 0x754de, meta 0x71dab22), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 190259200 unmapped: 57278464 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 427 ms_handle_reset con 0x55f511a04400 session 0x55f514351180
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 191315968 unmapped: 56221696 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 427 handle_osd_map epochs [428,428], i have 427, src has [1,428]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.440915108s of 10.357586861s, submitted: 35
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 427 heartbeat osd_stat(store_statfs(0x4f4b12000/0x0/0x4ffc00000, data 0x3c54d4b/0x3e9a000, compress 0x0/0x0/0x0, omap 0x757a6, meta 0x71da85a), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 427 handle_osd_map epochs [428,428], i have 428, src has [1,428]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 427 handle_osd_map epochs [428,428], i have 428, src has [1,428]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 191315968 unmapped: 56221696 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3292450 data_alloc: 218103808 data_used: 8097550
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 191324160 unmapped: 56213504 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 191324160 unmapped: 56213504 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 191324160 unmapped: 56213504 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 428 heartbeat osd_stat(store_statfs(0x4f4b0d000/0x0/0x4ffc00000, data 0x3c5695a/0x3e9d000, compress 0x0/0x0/0x0, omap 0x7594d, meta 0x71da6b3), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,2])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 191324160 unmapped: 56213504 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 428 heartbeat osd_stat(store_statfs(0x4f4b0d000/0x0/0x4ffc00000, data 0x3c5695a/0x3e9d000, compress 0x0/0x0/0x0, omap 0x7594d, meta 0x71da6b3), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,1,0,0,1,1])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 191324160 unmapped: 56213504 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3292594 data_alloc: 218103808 data_used: 8097550
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 191324160 unmapped: 56213504 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 428 heartbeat osd_stat(store_statfs(0x4f4b0d000/0x0/0x4ffc00000, data 0x3c5695a/0x3e9d000, compress 0x0/0x0/0x0, omap 0x7594d, meta 0x71da6b3), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,1,2])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 191324160 unmapped: 56213504 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 191324160 unmapped: 56213504 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 191324160 unmapped: 56213504 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 428 heartbeat osd_stat(store_statfs(0x4f4b0d000/0x0/0x4ffc00000, data 0x3c5695a/0x3e9d000, compress 0x0/0x0/0x0, omap 0x7594d, meta 0x71da6b3), peers [0,2] op hist [0,0,0,0,0,0,1,0,0,0,0,0,0,1,2])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_commit, latency = 9.491985321s
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_sync, latency = 9.491985321s
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.492147446s, txc = 0x55f5170e0f00, txc bytes = 1501, txc ios = 1, txc cost = 671501, txc onodes = 1, DB updates = 4, DB bytes = 1337, cost max = 110262598 on 2026-01-29T17:26:12.361871+0000, txc max = 104 on 2026-01-29T16:51:50.108713+0000
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 9.135297775s, txc = 0x55f519465500, txc bytes = 34895, txc ios = 1, txc cost = 704895, txc onodes = 1, DB updates = 6, DB bytes = 35233, cost max = 110262598 on 2026-01-29T17:26:12.361871+0000, txc max = 104 on 2026-01-29T16:51:50.108713+0000
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 428 handle_osd_map epochs [429,429], i have 428, src has [1,429]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 429 ms_handle_reset con 0x55f511a05c00 session 0x55f514a701c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 191283200 unmapped: 56254464 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3299449 data_alloc: 234881024 data_used: 9309966
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 429 ms_handle_reset con 0x55f514a2f000 session 0x55f515345500
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 0.540656626s of 10.367045403s, submitted: 19
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 429 ms_handle_reset con 0x55f514a34400 session 0x55f512434fc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 429 ms_handle_reset con 0x55f515163400 session 0x55f516adc000
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 429 ms_handle_reset con 0x55f5122b7400 session 0x55f51226e540
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 429 ms_handle_reset con 0x55f513044000 session 0x55f516707880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 191430656 unmapped: 56107008 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 191430656 unmapped: 56107008 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 429 handle_osd_map epochs [430,430], i have 429, src has [1,430]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 430 ms_handle_reset con 0x55f511a04400 session 0x55f5152696c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 430 heartbeat osd_stat(store_statfs(0x4f4b06000/0x0/0x4ffc00000, data 0x3c5a013/0x3ea2000, compress 0x0/0x0/0x0, omap 0x760c7, meta 0x71d9f39), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 191455232 unmapped: 56082432 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 430 ms_handle_reset con 0x55f514a2f000 session 0x55f5130e6700
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 430 ms_handle_reset con 0x55f511a05c00 session 0x55f512435a40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 191455232 unmapped: 56082432 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 191455232 unmapped: 56082432 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3301086 data_alloc: 234881024 data_used: 9309966
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 430 handle_osd_map epochs [431,431], i have 430, src has [1,431]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 431 ms_handle_reset con 0x55f511a05c00 session 0x55f519468000
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 431 ms_handle_reset con 0x55f511a04400 session 0x55f5130e7dc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 186474496 unmapped: 61063168 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 431 ms_handle_reset con 0x55f5122b7400 session 0x55f514a70fc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 431 ms_handle_reset con 0x55f513044000 session 0x55f51226ee00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 185745408 unmapped: 61792256 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 431 ms_handle_reset con 0x55f514a2f000 session 0x55f514a70e00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 431 heartbeat osd_stat(store_statfs(0x4f784e000/0x0/0x4ffc00000, data 0x711c68/0x95c000, compress 0x0/0x0/0x0, omap 0x76e6c, meta 0x71d9194), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 185753600 unmapped: 61784064 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 431 ms_handle_reset con 0x55f511a04400 session 0x55f516707a40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 431 ms_handle_reset con 0x55f511a05c00 session 0x55f516addc00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 185753600 unmapped: 61784064 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 431 ms_handle_reset con 0x55f5122b7400 session 0x55f51226e8c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 431 ms_handle_reset con 0x55f513044000 session 0x55f514ed5340
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 185753600 unmapped: 61784064 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2970960 data_alloc: 218103808 data_used: 103182
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 185753600 unmapped: 61784064 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.771727562s of 10.946453094s, submitted: 63
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 431 ms_handle_reset con 0x55f514a34400 session 0x55f515353500
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 431 ms_handle_reset con 0x55f511a05c00 session 0x55f514a71a40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 431 handle_osd_map epochs [432,432], i have 431, src has [1,432]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 432 ms_handle_reset con 0x55f5122b7400 session 0x55f5152c5340
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 432 ms_handle_reset con 0x55f511a04400 session 0x55f519468700
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 185794560 unmapped: 61743104 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 432 ms_handle_reset con 0x55f513044000 session 0x55f514a701c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 432 ms_handle_reset con 0x55f514503400 session 0x55f512c7ca80
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 185810944 unmapped: 61726720 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 432 ms_handle_reset con 0x55f511a04400 session 0x55f511eaa8c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 432 handle_osd_map epochs [433,433], i have 432, src has [1,433]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 433 ms_handle_reset con 0x55f515164800 session 0x55f5167076c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 433 heartbeat osd_stat(store_statfs(0x4f804b000/0x0/0x4ffc00000, data 0x7138af/0x95f000, compress 0x0/0x0/0x0, omap 0x774b8, meta 0x71d8b48), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 185835520 unmapped: 61702144 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 433 ms_handle_reset con 0x55f511a05c00 session 0x55f516c3f880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 433 heartbeat osd_stat(store_statfs(0x4f8046000/0x0/0x4ffc00000, data 0x7154f6/0x962000, compress 0x0/0x0/0x0, omap 0x77625, meta 0x71d89db), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 433 ms_handle_reset con 0x55f5122b7400 session 0x55f511f44700
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 185868288 unmapped: 61669376 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2973878 data_alloc: 218103808 data_used: 103182
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 433 heartbeat osd_stat(store_statfs(0x4f804b000/0x0/0x4ffc00000, data 0x715494/0x961000, compress 0x0/0x0/0x0, omap 0x77aa9, meta 0x71d8557), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 185868288 unmapped: 61669376 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 433 ms_handle_reset con 0x55f513044000 session 0x55f5152c5a40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 185868288 unmapped: 61669376 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 185868288 unmapped: 61669376 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 433 handle_osd_map epochs [434,434], i have 433, src has [1,434]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 434 ms_handle_reset con 0x55f511a04400 session 0x55f514351180
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 185884672 unmapped: 61652992 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 434 ms_handle_reset con 0x55f5122b7400 session 0x55f516c3efc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 434 ms_handle_reset con 0x55f511a05c00 session 0x55f512c65a40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 434 handle_osd_map epochs [434,435], i have 434, src has [1,435]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 185884672 unmapped: 61652992 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2987364 data_alloc: 218103808 data_used: 103280
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 185884672 unmapped: 61652992 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 435 heartbeat osd_stat(store_statfs(0x4f803e000/0x0/0x4ffc00000, data 0x718c4d/0x96a000, compress 0x0/0x0/0x0, omap 0x78dad, meta 0x71d7253), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 435 handle_osd_map epochs [435,436], i have 435, src has [1,436]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.383441925s of 10.627645493s, submitted: 110
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 436 ms_handle_reset con 0x55f515164800 session 0x55f5130e6a80
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 185892864 unmapped: 61644800 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 436 heartbeat osd_stat(store_statfs(0x4f803e000/0x0/0x4ffc00000, data 0x718c4d/0x96a000, compress 0x0/0x0/0x0, omap 0x78dad, meta 0x71d7253), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 436 ms_handle_reset con 0x55f515337000 session 0x55f5130e7a40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 436 ms_handle_reset con 0x55f516533c00 session 0x55f512435340
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 185901056 unmapped: 61636608 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 436 ms_handle_reset con 0x55f511a04400 session 0x55f515345500
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 185917440 unmapped: 61620224 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 436 ms_handle_reset con 0x55f511a05c00 session 0x55f519469a40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 185933824 unmapped: 61603840 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2992092 data_alloc: 218103808 data_used: 103394
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 436 ms_handle_reset con 0x55f5122b7400 session 0x55f515344540
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 436 ms_handle_reset con 0x55f515164800 session 0x55f516707500
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 185835520 unmapped: 61702144 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 436 ms_handle_reset con 0x55f511a04400 session 0x55f5153521c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 436 ms_handle_reset con 0x55f5122b7400 session 0x55f511e8b340
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 185835520 unmapped: 61702144 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 436 handle_osd_map epochs [437,437], i have 436, src has [1,437]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 436 handle_osd_map epochs [436,437], i have 437, src has [1,437]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 437 ms_handle_reset con 0x55f516533c00 session 0x55f512435c00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 437 ms_handle_reset con 0x55f511a05c00 session 0x55f5143508c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 185860096 unmapped: 61677568 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 437 heartbeat osd_stat(store_statfs(0x4f803e000/0x0/0x4ffc00000, data 0x71a850/0x96e000, compress 0x0/0x0/0x0, omap 0x790e3, meta 0x71d6f1d), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 437 ms_handle_reset con 0x55f51584e000 session 0x55f5167076c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 185860096 unmapped: 61677568 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 437 ms_handle_reset con 0x55f511a05c00 session 0x55f514a71880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 437 handle_osd_map epochs [438,438], i have 437, src has [1,438]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 438 ms_handle_reset con 0x55f5122b7400 session 0x55f5124356c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 438 ms_handle_reset con 0x55f511a04400 session 0x55f5152c5a40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 185876480 unmapped: 61661184 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2995805 data_alloc: 218103808 data_used: 103296
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 438 ms_handle_reset con 0x55f516533c00 session 0x55f512c65a40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 438 ms_handle_reset con 0x55f514ab0800 session 0x55f515345500
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 185876480 unmapped: 61661184 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 438 heartbeat osd_stat(store_statfs(0x4f803a000/0x0/0x4ffc00000, data 0x71dffa/0x970000, compress 0x0/0x0/0x0, omap 0x79757, meta 0x71d68a9), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.981169701s of 10.204960823s, submitted: 101
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 438 ms_handle_reset con 0x55f511a04400 session 0x55f516d28fc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 185884672 unmapped: 61652992 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 185884672 unmapped: 61652992 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 438 handle_osd_map epochs [439,439], i have 438, src has [1,439]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 439 ms_handle_reset con 0x55f511a05c00 session 0x55f5199e8540
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 185892864 unmapped: 61644800 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 439 handle_osd_map epochs [440,440], i have 439, src has [1,440]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 185892864 unmapped: 61644800 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3005362 data_alloc: 218103808 data_used: 103198
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 440 ms_handle_reset con 0x55f516533c00 session 0x55f516adce00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 440 ms_handle_reset con 0x55f5122b7400 session 0x55f51957b880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 185892864 unmapped: 61644800 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 440 ms_handle_reset con 0x55f514a2ac00 session 0x55f516707500
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 440 ms_handle_reset con 0x55f511a04400 session 0x55f511eaaa80
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 440 heartbeat osd_stat(store_statfs(0x4f8030000/0x0/0x4ffc00000, data 0x72176d/0x978000, compress 0x0/0x0/0x0, omap 0x7a2d8, meta 0x71d5d28), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 185892864 unmapped: 61644800 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 440 ms_handle_reset con 0x55f511a05c00 session 0x55f512435340
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 440 ms_handle_reset con 0x55f516533c00 session 0x55f51318e380
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 440 handle_osd_map epochs [441,441], i have 440, src has [1,441]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 440 handle_osd_map epochs [440,441], i have 441, src has [1,441]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 441 ms_handle_reset con 0x55f514ab0400 session 0x55f512c7ca80
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 185909248 unmapped: 61628416 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 441 ms_handle_reset con 0x55f5122b7400 session 0x55f51226fdc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 441 ms_handle_reset con 0x55f511a04400 session 0x55f514a98fc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 185925632 unmapped: 61612032 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 185925632 unmapped: 61612032 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3006812 data_alloc: 218103808 data_used: 104039
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 441 handle_osd_map epochs [442,442], i have 441, src has [1,442]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 442 heartbeat osd_stat(store_statfs(0x4f802d000/0x0/0x4ffc00000, data 0x724e96/0x97d000, compress 0x0/0x0/0x0, omap 0x7b123, meta 0x71d4edd), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 442 handle_osd_map epochs [443,443], i have 442, src has [1,443]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 443 ms_handle_reset con 0x55f511a05c00 session 0x55f5122da540
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 185974784 unmapped: 61562880 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 443 ms_handle_reset con 0x55f5122b7400 session 0x55f511f44a80
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 443 ms_handle_reset con 0x55f514ab0400 session 0x55f51226f880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 185974784 unmapped: 61562880 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 443 ms_handle_reset con 0x55f516533c00 session 0x55f51475bc00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 185974784 unmapped: 61562880 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.823822975s of 11.233555794s, submitted: 106
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 443 ms_handle_reset con 0x55f511a04400 session 0x55f5167068c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 185982976 unmapped: 61554688 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 443 handle_osd_map epochs [444,444], i have 443, src has [1,444]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 444 ms_handle_reset con 0x55f511a05c00 session 0x55f514351dc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 185991168 unmapped: 61546496 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 444 ms_handle_reset con 0x55f5122b7400 session 0x55f5167061c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3019658 data_alloc: 218103808 data_used: 104994
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 185991168 unmapped: 61546496 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 444 heartbeat osd_stat(store_statfs(0x4f8026000/0x0/0x4ffc00000, data 0x7285dd/0x984000, compress 0x0/0x0/0x0, omap 0x7bc2e, meta 0x71d43d2), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 185991168 unmapped: 61546496 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 185991168 unmapped: 61546496 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 444 ms_handle_reset con 0x55f514ab0400 session 0x55f512236380
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 185991168 unmapped: 61546496 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 185991168 unmapped: 61546496 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3022376 data_alloc: 218103808 data_used: 105266
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 444 handle_osd_map epochs [445,445], i have 444, src has [1,445]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 444 handle_osd_map epochs [444,445], i have 445, src has [1,445]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 445 ms_handle_reset con 0x55f516533c00 session 0x55f513135a40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 185991168 unmapped: 61546496 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 185991168 unmapped: 61546496 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 445 heartbeat osd_stat(store_statfs(0x4f8020000/0x0/0x4ffc00000, data 0x72a2f9/0x98a000, compress 0x0/0x0/0x0, omap 0x7c276, meta 0x71d3d8a), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 445 ms_handle_reset con 0x55f511a04400 session 0x55f516c3efc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 445 ms_handle_reset con 0x55f511a05c00 session 0x55f514a71340
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 186007552 unmapped: 61530112 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.473680496s of 10.673299789s, submitted: 53
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 445 handle_osd_map epochs [445,446], i have 445, src has [1,446]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 186007552 unmapped: 61530112 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 446 ms_handle_reset con 0x55f5122b7400 session 0x55f512c581c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 186007552 unmapped: 61530112 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 446 ms_handle_reset con 0x55f514ab0400 session 0x55f513135a40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3035098 data_alloc: 218103808 data_used: 105282
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 186007552 unmapped: 61530112 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 446 heartbeat osd_stat(store_statfs(0x4f801b000/0x0/0x4ffc00000, data 0x72bf5e/0x98f000, compress 0x0/0x0/0x0, omap 0x7c3e4, meta 0x71d3c1c), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 446 ms_handle_reset con 0x55f515156800 session 0x55f51226fdc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 186007552 unmapped: 61530112 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 446 heartbeat osd_stat(store_statfs(0x4f801b000/0x0/0x4ffc00000, data 0x72bf5e/0x98f000, compress 0x0/0x0/0x0, omap 0x7c46e, meta 0x71d3b92), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 186007552 unmapped: 61530112 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 446 handle_osd_map epochs [447,447], i have 446, src has [1,447]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 447 ms_handle_reset con 0x55f511a04400 session 0x55f511eaaa80
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 186015744 unmapped: 61521920 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 447 ms_handle_reset con 0x55f511a05c00 session 0x55f516d28fc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 186023936 unmapped: 61513728 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3039468 data_alloc: 218103808 data_used: 105282
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 447 ms_handle_reset con 0x55f5122b7400 session 0x55f512c65a40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 186023936 unmapped: 61513728 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 447 ms_handle_reset con 0x55f514ab0400 session 0x55f511e8b340
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 447 handle_osd_map epochs [448,448], i have 447, src has [1,448]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 448 ms_handle_reset con 0x55f515854c00 session 0x55f516c3fdc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 448 ms_handle_reset con 0x55f511a05c00 session 0x55f5142c1340
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 448 ms_handle_reset con 0x55f5122b7400 session 0x55f514bfa8c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 448 ms_handle_reset con 0x55f514ab0400 session 0x55f51475b340
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 448 ms_handle_reset con 0x55f511a04400 session 0x55f516adce00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 185360384 unmapped: 62177280 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 448 heartbeat osd_stat(store_statfs(0x4f8017000/0x0/0x4ffc00000, data 0x72db51/0x992000, compress 0x0/0x0/0x0, omap 0x7c552, meta 0x71d3aae), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 448 ms_handle_reset con 0x55f51515c800 session 0x55f511f44a80
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 448 ms_handle_reset con 0x55f511a04400 session 0x55f51226f880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 448 ms_handle_reset con 0x55f511a05c00 session 0x55f512360380
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 448 ms_handle_reset con 0x55f5122b7400 session 0x55f516adc000
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 448 ms_handle_reset con 0x55f514ab0400 session 0x55f5194688c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 448 ms_handle_reset con 0x55f514a32400 session 0x55f51318f6c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 185327616 unmapped: 62210048 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 448 ms_handle_reset con 0x55f511a04400 session 0x55f511e8b880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 448 handle_osd_map epochs [449,449], i have 448, src has [1,449]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 449 ms_handle_reset con 0x55f511a05c00 session 0x55f516c3efc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 185327616 unmapped: 62210048 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 449 ms_handle_reset con 0x55f5122b7400 session 0x55f517420000
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.499388695s of 10.716286659s, submitted: 61
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 449 ms_handle_reset con 0x55f514ab0400 session 0x55f51087b180
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 185327616 unmapped: 62210048 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3087107 data_alloc: 218103808 data_used: 105879
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 185327616 unmapped: 62210048 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 449 heartbeat osd_stat(store_statfs(0x4f78f6000/0x0/0x4ffc00000, data 0xe4f36d/0x10b4000, compress 0x0/0x0/0x0, omap 0x7d0ee, meta 0x71d2f12), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 185327616 unmapped: 62210048 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 449 ms_handle_reset con 0x55f515292c00 session 0x55f512237500
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 185335808 unmapped: 62201856 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 185335808 unmapped: 62201856 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 449 handle_osd_map epochs [450,450], i have 449, src has [1,450]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 449 handle_osd_map epochs [449,450], i have 450, src has [1,450]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 450 ms_handle_reset con 0x55f5122b7400 session 0x55f512435a40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 185352192 unmapped: 62185472 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3130564 data_alloc: 218103808 data_used: 6442391
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 450 heartbeat osd_stat(store_statfs(0x4f78f3000/0x0/0x4ffc00000, data 0xe50e43/0x10b7000, compress 0x0/0x0/0x0, omap 0x7d16b, meta 0x71d2e95), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 185352192 unmapped: 62185472 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 450 heartbeat osd_stat(store_statfs(0x4f78f3000/0x0/0x4ffc00000, data 0xe50e43/0x10b7000, compress 0x0/0x0/0x0, omap 0x7d16b, meta 0x71d2e95), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 185352192 unmapped: 62185472 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 450 ms_handle_reset con 0x55f5169aac00 session 0x55f519468c40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 450 handle_osd_map epochs [451,451], i have 450, src has [1,451]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 185360384 unmapped: 62177280 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 451 ms_handle_reset con 0x55f514a8f400 session 0x55f51226f6c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 451 handle_osd_map epochs [452,452], i have 451, src has [1,452]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 452 ms_handle_reset con 0x55f514a2cc00 session 0x55f5199e8000
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 185384960 unmapped: 62152704 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 452 ms_handle_reset con 0x55f514ab0400 session 0x55f51487d500
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 185384960 unmapped: 62152704 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3140164 data_alloc: 218103808 data_used: 6442391
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.805171013s of 10.906763077s, submitted: 46
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 452 ms_handle_reset con 0x55f514a2cc00 session 0x55f514a70fc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 452 handle_osd_map epochs [453,453], i have 452, src has [1,453]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 453 heartbeat osd_stat(store_statfs(0x4f78ec000/0x0/0x4ffc00000, data 0xe5474f/0x10c0000, compress 0x0/0x0/0x0, omap 0x7dcba, meta 0x71d2346), peers [0,2] op hist [0,0,0,0,1])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 185360384 unmapped: 62177280 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 453 ms_handle_reset con 0x55f514a8f400 session 0x55f51226ee00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 453 handle_osd_map epochs [454,454], i have 453, src has [1,454]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 185360384 unmapped: 62177280 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 454 ms_handle_reset con 0x55f5169aac00 session 0x55f5199e8540
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 454 ms_handle_reset con 0x55f5122b7400 session 0x55f512c65dc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 185360384 unmapped: 62177280 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 454 handle_osd_map epochs [455,455], i have 454, src has [1,455]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 455 ms_handle_reset con 0x55f515165800 session 0x55f514a981c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 455 ms_handle_reset con 0x55f512c3d800 session 0x55f512434fc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 455 heartbeat osd_stat(store_statfs(0x4f78e1000/0x0/0x4ffc00000, data 0xe59b7c/0x10c9000, compress 0x0/0x0/0x0, omap 0x7e8f4, meta 0x71d170c), peers [0,2] op hist [0,0,0,0,0,3])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 190693376 unmapped: 56844288 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 192544768 unmapped: 54992896 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3210698 data_alloc: 218103808 data_used: 7032215
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 455 ms_handle_reset con 0x55f514a2cc00 session 0x55f519469340
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 192782336 unmapped: 54755328 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 455 handle_osd_map epochs [456,456], i have 455, src has [1,456]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 456 ms_handle_reset con 0x55f514a8f400 session 0x55f516adce00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 192782336 unmapped: 54755328 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 456 handle_osd_map epochs [456,457], i have 456, src has [1,457]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 457 ms_handle_reset con 0x55f5169aac00 session 0x55f513135a40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 457 ms_handle_reset con 0x55f5152ec800 session 0x55f511f44a80
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 457 ms_handle_reset con 0x55f5122b7400 session 0x55f515352fc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 457 heartbeat osd_stat(store_statfs(0x4f5ecb000/0x0/0x4ffc00000, data 0x16c8953/0x193d000, compress 0x0/0x0/0x0, omap 0x7f177, meta 0x8370e89), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 192815104 unmapped: 54722560 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 457 handle_osd_map epochs [458,458], i have 457, src has [1,458]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 458 ms_handle_reset con 0x55f514a2cc00 session 0x55f512360380
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 458 ms_handle_reset con 0x55f514a8f400 session 0x55f514a98fc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 458 ms_handle_reset con 0x55f512c3d800 session 0x55f512c7da40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 458 ms_handle_reset con 0x55f5169aac00 session 0x55f5199e8c40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 192847872 unmapped: 54689792 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 458 handle_osd_map epochs [459,459], i have 458, src has [1,459]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 459 ms_handle_reset con 0x55f5122b7400 session 0x55f512c7ce00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 192847872 unmapped: 54689792 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3228631 data_alloc: 218103808 data_used: 7179159
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 192847872 unmapped: 54689792 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.350404739s of 11.127349854s, submitted: 202
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 459 ms_handle_reset con 0x55f514a2cc00 session 0x55f514351180
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 192864256 unmapped: 54673408 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 459 heartbeat osd_stat(store_statfs(0x4f5ec9000/0x0/0x4ffc00000, data 0x16cdcfa/0x1943000, compress 0x0/0x0/0x0, omap 0x7fa87, meta 0x8370579), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 459 handle_osd_map epochs [460,460], i have 459, src has [1,460]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 459 handle_osd_map epochs [459,460], i have 460, src has [1,460]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 460 ms_handle_reset con 0x55f515855000 session 0x55f5130e7a40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 192872448 unmapped: 54665216 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 460 handle_osd_map epochs [461,461], i have 460, src has [1,461]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 461 ms_handle_reset con 0x55f514a8f400 session 0x55f5124341c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 461 ms_handle_reset con 0x55f512c3d800 session 0x55f516706700
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 192880640 unmapped: 54657024 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 461 ms_handle_reset con 0x55f5122b7400 session 0x55f5142c1500
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 192880640 unmapped: 54657024 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3238310 data_alloc: 218103808 data_used: 7179529
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 192880640 unmapped: 54657024 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 461 handle_osd_map epochs [462,462], i have 461, src has [1,462]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 462 ms_handle_reset con 0x55f512c3d800 session 0x55f51226e380
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 462 ms_handle_reset con 0x55f514a2cc00 session 0x55f5123616c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 192913408 unmapped: 54624256 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 462 heartbeat osd_stat(store_statfs(0x4f5ec2000/0x0/0x4ffc00000, data 0x16d1560/0x194a000, compress 0x0/0x0/0x0, omap 0x803a3, meta 0x836fc5d), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 192913408 unmapped: 54624256 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 192913408 unmapped: 54624256 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 462 ms_handle_reset con 0x55f514a8f400 session 0x55f5167061c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 192913408 unmapped: 54624256 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3242592 data_alloc: 218103808 data_used: 7179431
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 462 heartbeat osd_stat(store_statfs(0x4f5ebe000/0x0/0x4ffc00000, data 0x16d4171/0x194e000, compress 0x0/0x0/0x0, omap 0x80c82, meta 0x836f37e), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 462 handle_osd_map epochs [463,463], i have 462, src has [1,463]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 462 handle_osd_map epochs [463,463], i have 463, src has [1,463]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 193978368 unmapped: 53559296 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 463 ms_handle_reset con 0x55f5152ed400 session 0x55f5130cddc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 463 handle_osd_map epochs [464,464], i have 463, src has [1,464]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.297039032s of 10.676193237s, submitted: 121
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 463 handle_osd_map epochs [463,464], i have 464, src has [1,464]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 193978368 unmapped: 53559296 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 464 ms_handle_reset con 0x55f5122b7400 session 0x55f515352e00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 464 heartbeat osd_stat(store_statfs(0x4f5eb4000/0x0/0x4ffc00000, data 0x16d78aa/0x1954000, compress 0x0/0x0/0x0, omap 0x811e5, meta 0x836ee1b), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 464 handle_osd_map epochs [464,465], i have 464, src has [1,465]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 465 ms_handle_reset con 0x55f512c3d800 session 0x55f514bfaa80
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 194043904 unmapped: 53493760 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 465 ms_handle_reset con 0x55f515855000 session 0x55f512435c00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 194052096 unmapped: 53485568 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 465 handle_osd_map epochs [466,466], i have 465, src has [1,466]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 466 ms_handle_reset con 0x55f514a2cc00 session 0x55f516d28000
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 466 ms_handle_reset con 0x55f514a8f400 session 0x55f514bfafc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 193355776 unmapped: 54181888 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3256076 data_alloc: 218103808 data_used: 7179431
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 466 heartbeat osd_stat(store_statfs(0x4f5eaf000/0x0/0x4ffc00000, data 0x16db19a/0x195b000, compress 0x0/0x0/0x0, omap 0x81c7a, meta 0x836e386), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 466 ms_handle_reset con 0x55f5122b7400 session 0x55f519468000
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 193355776 unmapped: 54181888 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 193355776 unmapped: 54181888 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 466 ms_handle_reset con 0x55f514a2cc00 session 0x55f514a71340
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 466 ms_handle_reset con 0x55f515855000 session 0x55f516d28fc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 193363968 unmapped: 54173696 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 466 heartbeat osd_stat(store_statfs(0x4f5eb0000/0x0/0x4ffc00000, data 0x16db18a/0x195a000, compress 0x0/0x0/0x0, omap 0x81fd3, meta 0x836e02d), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 466 handle_osd_map epochs [467,467], i have 466, src has [1,467]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 467 ms_handle_reset con 0x55f5124c3800 session 0x55f51957b180
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 467 ms_handle_reset con 0x55f515851000 session 0x55f514bfa540
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 467 ms_handle_reset con 0x55f512c3d800 session 0x55f512c65dc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 193363968 unmapped: 54173696 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 467 handle_osd_map epochs [468,468], i have 467, src has [1,468]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 468 ms_handle_reset con 0x55f5122b7400 session 0x55f512c58c40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 468 ms_handle_reset con 0x55f5124c3800 session 0x55f512c65dc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 193363968 unmapped: 54173696 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3262008 data_alloc: 218103808 data_used: 7187721
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 468 handle_osd_map epochs [469,469], i have 468, src has [1,469]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 469 ms_handle_reset con 0x55f514a2cc00 session 0x55f515352e00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 193396736 unmapped: 54140928 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 193396736 unmapped: 54140928 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 469 handle_osd_map epochs [470,470], i have 469, src has [1,470]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.248327255s of 10.565643311s, submitted: 117
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 470 ms_handle_reset con 0x55f514a2b800 session 0x55f512c7ca80
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 193413120 unmapped: 54124544 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 193413120 unmapped: 54124544 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 470 ms_handle_reset con 0x55f515855000 session 0x55f51226f6c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 470 heartbeat osd_stat(store_statfs(0x4f5ea6000/0x0/0x4ffc00000, data 0x16e2234/0x1964000, compress 0x0/0x0/0x0, omap 0x82e83, meta 0x836d17d), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 470 handle_osd_map epochs [471,471], i have 470, src has [1,471]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 471 ms_handle_reset con 0x55f5122b7400 session 0x55f512435c00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 193462272 unmapped: 54075392 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3267109 data_alloc: 218103808 data_used: 7189018
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 471 ms_handle_reset con 0x55f511a04400 session 0x55f512236380
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 471 ms_handle_reset con 0x55f511a05c00 session 0x55f514a70380
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 471 handle_osd_map epochs [472,472], i have 471, src has [1,472]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 190005248 unmapped: 57532416 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 472 ms_handle_reset con 0x55f512c3d800 session 0x55f512237500
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 472 handle_osd_map epochs [473,473], i have 472, src has [1,473]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 473 ms_handle_reset con 0x55f5124c3800 session 0x55f515269880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 473 ms_handle_reset con 0x55f511a04400 session 0x55f51226e000
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 189022208 unmapped: 58515456 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 189022208 unmapped: 58515456 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 189022208 unmapped: 58515456 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 473 handle_osd_map epochs [473,474], i have 473, src has [1,474]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 474 heartbeat osd_stat(store_statfs(0x4f6e2a000/0x0/0x4ffc00000, data 0x75b6bf/0x9de000, compress 0x0/0x0/0x0, omap 0x847e3, meta 0x836b81d), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 189022208 unmapped: 58515456 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3139906 data_alloc: 218103808 data_used: 108089
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 474 ms_handle_reset con 0x55f511a05c00 session 0x55f514a71880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 474 ms_handle_reset con 0x55f5122b7400 session 0x55f51318fc00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 189022208 unmapped: 58515456 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 474 ms_handle_reset con 0x55f515855000 session 0x55f516707500
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 474 heartbeat osd_stat(store_statfs(0x4f6e2a000/0x0/0x4ffc00000, data 0x75d1a2/0x9e0000, compress 0x0/0x0/0x0, omap 0x84b73, meta 0x836b48d), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 474 ms_handle_reset con 0x55f511a04400 session 0x55f516adcfc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 474 ms_handle_reset con 0x55f511a05c00 session 0x55f5153448c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 189038592 unmapped: 58499072 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 474 heartbeat osd_stat(store_statfs(0x4f6e2b000/0x0/0x4ffc00000, data 0x75d204/0x9e1000, compress 0x0/0x0/0x0, omap 0x84d0b, meta 0x836b2f5), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.795515060s of 10.558468819s, submitted: 218
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 189038592 unmapped: 58499072 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 474 handle_osd_map epochs [475,475], i have 474, src has [1,475]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 475 ms_handle_reset con 0x55f5122b7400 session 0x55f512c59500
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 189046784 unmapped: 58490880 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 475 ms_handle_reset con 0x55f5124c3800 session 0x55f5147188c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 475 handle_osd_map epochs [476,476], i have 475, src has [1,476]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 476 ms_handle_reset con 0x55f515855000 session 0x55f51318e1c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 189063168 unmapped: 58474496 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3147526 data_alloc: 218103808 data_used: 108188
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 476 ms_handle_reset con 0x55f511a04400 session 0x55f515352fc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 476 ms_handle_reset con 0x55f511a05c00 session 0x55f512435dc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 189071360 unmapped: 58466304 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 476 heartbeat osd_stat(store_statfs(0x4f6e23000/0x0/0x4ffc00000, data 0x760ae6/0x9e7000, compress 0x0/0x0/0x0, omap 0x856be, meta 0x836a942), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 189071360 unmapped: 58466304 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 476 ms_handle_reset con 0x55f5122b7400 session 0x55f514a70fc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 189087744 unmapped: 58449920 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 476 heartbeat osd_stat(store_statfs(0x4f6e25000/0x0/0x4ffc00000, data 0x760a94/0x9e7000, compress 0x0/0x0/0x0, omap 0x85746, meta 0x836a8ba), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 189087744 unmapped: 58449920 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 476 handle_osd_map epochs [477,477], i have 476, src has [1,477]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 477 ms_handle_reset con 0x55f514a2cc00 session 0x55f511f44a80
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 477 ms_handle_reset con 0x55f5124c3800 session 0x55f514bfa540
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 190144512 unmapped: 57393152 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3151829 data_alloc: 218103808 data_used: 109442
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 477 ms_handle_reset con 0x55f511a04400 session 0x55f5147c6fc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 477 ms_handle_reset con 0x55f511a05c00 session 0x55f516c3e700
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 477 handle_osd_map epochs [477,478], i have 477, src has [1,478]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 478 ms_handle_reset con 0x55f5122b7400 session 0x55f514a701c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 190169088 unmapped: 57368576 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 478 ms_handle_reset con 0x55f514a2cc00 session 0x55f5153521c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 478 ms_handle_reset con 0x55f514a2c400 session 0x55f516adce00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 190169088 unmapped: 57368576 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 478 ms_handle_reset con 0x55f511a04400 session 0x55f516d96fc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 190177280 unmapped: 57360384 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 190177280 unmapped: 57360384 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.974524498s of 11.261938095s, submitted: 83
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 478 handle_osd_map epochs [479,479], i have 478, src has [1,479]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 479 ms_handle_reset con 0x55f511a05c00 session 0x55f51487d500
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 479 heartbeat osd_stat(store_statfs(0x4f6e1e000/0x0/0x4ffc00000, data 0x7642da/0x9ec000, compress 0x0/0x0/0x0, omap 0x85cac, meta 0x836a354), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 190201856 unmapped: 57335808 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3158166 data_alloc: 218103808 data_used: 110027
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 479 ms_handle_reset con 0x55f5122b7400 session 0x55f511e94380
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 190251008 unmapped: 57286656 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 479 handle_osd_map epochs [480,480], i have 479, src has [1,480]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 480 ms_handle_reset con 0x55f514a2c400 session 0x55f5124341c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 480 ms_handle_reset con 0x55f515853400 session 0x55f515352fc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 190259200 unmapped: 57278464 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 480 ms_handle_reset con 0x55f511a04400 session 0x55f512236380
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 480 ms_handle_reset con 0x55f514a2cc00 session 0x55f5143508c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 480 heartbeat osd_stat(store_statfs(0x4f6dd7000/0x0/0x4ffc00000, data 0x7a7bcb/0xa33000, compress 0x0/0x0/0x0, omap 0x86b65, meta 0x836949b), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 190259200 unmapped: 57278464 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 480 ms_handle_reset con 0x55f511a05c00 session 0x55f5174208c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 190259200 unmapped: 57278464 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 480 handle_osd_map epochs [481,481], i have 480, src has [1,481]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 190275584 unmapped: 57262080 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3168081 data_alloc: 218103808 data_used: 114736
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 481 handle_osd_map epochs [482,482], i have 481, src has [1,482]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 482 ms_handle_reset con 0x55f5122b7400 session 0x55f5199e8380
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 190291968 unmapped: 57245696 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 482 ms_handle_reset con 0x55f514a2c400 session 0x55f5194696c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 190300160 unmapped: 57237504 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 482 handle_osd_map epochs [483,483], i have 482, src has [1,483]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 483 ms_handle_reset con 0x55f511a04400 session 0x55f512435dc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 190300160 unmapped: 57237504 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 483 ms_handle_reset con 0x55f511a05c00 session 0x55f516707500
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 483 ms_handle_reset con 0x55f5122b7400 session 0x55f514a71880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 483 heartbeat osd_stat(store_statfs(0x4f6dce000/0x0/0x4ffc00000, data 0x7acf13/0xa3c000, compress 0x0/0x0/0x0, omap 0x86fbb, meta 0x8369045), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 190308352 unmapped: 57229312 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 483 handle_osd_map epochs [483,484], i have 483, src has [1,484]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.278504372s of 10.466550827s, submitted: 96
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 484 ms_handle_reset con 0x55f514a2cc00 session 0x55f5130e6700
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 484 heartbeat osd_stat(store_statfs(0x4f6dc9000/0x0/0x4ffc00000, data 0x7aea05/0xa3f000, compress 0x0/0x0/0x0, omap 0x87780, meta 0x8368880), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 190341120 unmapped: 57196544 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3180107 data_alloc: 218103808 data_used: 114736
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 484 ms_handle_reset con 0x55f512202c00 session 0x55f5122db6c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 189882368 unmapped: 57655296 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: mgrc ms_handle_reset ms_handle_reset con 0x55f51530fc00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2608678704
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2608678704,v1:192.168.122.100:6801/2608678704]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: mgrc handle_mgr_configure stats_period=5
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 484 handle_osd_map epochs [485,485], i have 484, src has [1,485]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 485 ms_handle_reset con 0x55f511a04400 session 0x55f515033c00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 188530688 unmapped: 59006976 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 485 ms_handle_reset con 0x55f5124c2c00 session 0x55f51318e000
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 485 ms_handle_reset con 0x55f511a05c00 session 0x55f512c59500
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 485 ms_handle_reset con 0x55f5171f8800 session 0x55f514350700
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 485 handle_osd_map epochs [486,486], i have 485, src has [1,486]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 486 ms_handle_reset con 0x55f514a2cc00 session 0x55f5153521c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 188555264 unmapped: 58982400 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 486 ms_handle_reset con 0x55f512c3c000 session 0x55f516addc00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 486 ms_handle_reset con 0x55f515620c00 session 0x55f5147188c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 188555264 unmapped: 58982400 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 486 ms_handle_reset con 0x55f511a04400 session 0x55f516adc8c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 486 ms_handle_reset con 0x55f511a05c00 session 0x55f512c58380
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 486 handle_osd_map epochs [486,487], i have 486, src has [1,487]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 193044480 unmapped: 54493184 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3295480 data_alloc: 218103808 data_used: 110640
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 487 heartbeat osd_stat(store_statfs(0x4f6e02000/0x0/0x4ffc00000, data 0x773cea/0xa07000, compress 0x0/0x0/0x0, omap 0x883a1, meta 0x8367c5f), peers [0,2] op hist [0,0,0,0,0,0,1,1,0,2])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 193142784 unmapped: 54394880 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 487 ms_handle_reset con 0x55f512c3c000 session 0x55f514a70540
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 487 ms_handle_reset con 0x55f514a2cc00 session 0x55f5167061c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 188964864 unmapped: 58572800 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 487 ms_handle_reset con 0x55f514a35c00 session 0x55f512360380
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 487 heartbeat osd_stat(store_statfs(0x4f3fbd000/0x0/0x4ffc00000, data 0x35bad4c/0x384f000, compress 0x0/0x0/0x0, omap 0x886f7, meta 0x8367909), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 188964864 unmapped: 58572800 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 487 heartbeat osd_stat(store_statfs(0x4f3fbd000/0x0/0x4ffc00000, data 0x35bad4c/0x384f000, compress 0x0/0x0/0x0, omap 0x8877f, meta 0x8367881), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 487 handle_osd_map epochs [488,488], i have 487, src has [1,488]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 488 ms_handle_reset con 0x55f511a04400 session 0x55f516d28fc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 188907520 unmapped: 58630144 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 488 handle_osd_map epochs [488,489], i have 488, src has [1,489]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 188907520 unmapped: 58630144 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3442035 data_alloc: 218103808 data_used: 114637
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 489 heartbeat osd_stat(store_statfs(0x4f3fb5000/0x0/0x4ffc00000, data 0x35be431/0x3855000, compress 0x0/0x0/0x0, omap 0x8903d, meta 0x8366fc3), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 489 ms_handle_reset con 0x55f511a05c00 session 0x55f512c58c40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 489 ms_handle_reset con 0x55f512c3c000 session 0x55f516706c40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 188907520 unmapped: 58630144 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 489 handle_osd_map epochs [490,490], i have 489, src has [1,490]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.847138405s of 12.422031403s, submitted: 175
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 490 ms_handle_reset con 0x55f514a2cc00 session 0x55f512435a40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 189038592 unmapped: 58499072 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 490 ms_handle_reset con 0x55f514a34000 session 0x55f514a701c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 189046784 unmapped: 58490880 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 490 ms_handle_reset con 0x55f511a04400 session 0x55f51318fc00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 189054976 unmapped: 58482688 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 490 handle_osd_map epochs [490,491], i have 490, src has [1,491]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 491 handle_osd_map epochs [491,491], i have 491, src has [1,491]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 189063168 unmapped: 58474496 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3448676 data_alloc: 218103808 data_used: 115835
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 491 ms_handle_reset con 0x55f511a05c00 session 0x55f514a70380
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 491 ms_handle_reset con 0x55f512c3c000 session 0x55f51087b340
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 491 ms_handle_reset con 0x55f514a2cc00 session 0x55f51226ec40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 189063168 unmapped: 58474496 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 491 heartbeat osd_stat(store_statfs(0x4f3fb1000/0x0/0x4ffc00000, data 0x35c1c87/0x385b000, compress 0x0/0x0/0x0, omap 0x89653, meta 0x83669ad), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 491 handle_osd_map epochs [492,492], i have 491, src has [1,492]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 491 handle_osd_map epochs [491,492], i have 492, src has [1,492]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 190111744 unmapped: 57425920 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 492 ms_handle_reset con 0x55f5152ec000 session 0x55f514a70fc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 492 heartbeat osd_stat(store_statfs(0x4f3fb1000/0x0/0x4ffc00000, data 0x35c1c87/0x385b000, compress 0x0/0x0/0x0, omap 0x897a7, meta 0x8366859), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 492 ms_handle_reset con 0x55f511a04400 session 0x55f516adc8c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 190111744 unmapped: 57425920 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 492 ms_handle_reset con 0x55f511a05c00 session 0x55f512c59500
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 492 ms_handle_reset con 0x55f512c3c000 session 0x55f5122db6c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 492 ms_handle_reset con 0x55f514a2cc00 session 0x55f5199e8380
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 492 ms_handle_reset con 0x55f514e3ac00 session 0x55f515033340
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 190144512 unmapped: 57393152 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 492 handle_osd_map epochs [493,493], i have 492, src has [1,493]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 493 ms_handle_reset con 0x55f512c3c000 session 0x55f5143c81c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 190144512 unmapped: 57393152 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456754 data_alloc: 218103808 data_used: 117012
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f3fad000/0x0/0x4ffc00000, data 0x35c38de/0x385f000, compress 0x0/0x0/0x0, omap 0x8a2af, meta 0x8365d51), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f3fa8000/0x0/0x4ffc00000, data 0x35c53d0/0x3862000, compress 0x0/0x0/0x0, omap 0x8a4ab, meta 0x8365b55), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 190144512 unmapped: 57393152 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f3fa8000/0x0/0x4ffc00000, data 0x35c53d0/0x3862000, compress 0x0/0x0/0x0, omap 0x8a4ab, meta 0x8365b55), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 493 handle_osd_map epochs [494,494], i have 493, src has [1,494]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 494 ms_handle_reset con 0x55f514a2cc00 session 0x55f5130e6a80
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 192577536 unmapped: 54960128 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 494 ms_handle_reset con 0x55f5122b6000 session 0x55f5122da540
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.160481453s of 10.660478592s, submitted: 81
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 494 heartbeat osd_stat(store_statfs(0x4f3fa4000/0x0/0x4ffc00000, data 0x35c6fdf/0x3865000, compress 0x0/0x0/0x0, omap 0x8a597, meta 0x8365a69), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 195567616 unmapped: 51970048 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 494 ms_handle_reset con 0x55f515854000 session 0x55f514a71a40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 494 heartbeat osd_stat(store_statfs(0x4f3fa4000/0x0/0x4ffc00000, data 0x35c6fdf/0x3865000, compress 0x0/0x0/0x0, omap 0x8a597, meta 0x8365a69), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 494 handle_osd_map epochs [495,495], i have 494, src has [1,495]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 195567616 unmapped: 51970048 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 195567616 unmapped: 51970048 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3527998 data_alloc: 234881024 data_used: 11605268
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 495 handle_osd_map epochs [496,496], i have 495, src has [1,496]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 496 ms_handle_reset con 0x55f51528dc00 session 0x55f516706c40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 496 ms_handle_reset con 0x55f5122b6000 session 0x55f5124341c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 195502080 unmapped: 52035584 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 496 ms_handle_reset con 0x55f512c3c000 session 0x55f512c7da40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 195510272 unmapped: 52027392 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 195510272 unmapped: 52027392 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 496 heartbeat osd_stat(store_statfs(0x4f3f9d000/0x0/0x4ffc00000, data 0x35ca78a/0x386d000, compress 0x0/0x0/0x0, omap 0x8af70, meta 0x8365090), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 496 handle_osd_map epochs [497,497], i have 496, src has [1,497]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 496 handle_osd_map epochs [496,497], i have 497, src has [1,497]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 497 ms_handle_reset con 0x55f514a2cc00 session 0x55f5152c4e00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 195510272 unmapped: 52027392 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 497 ms_handle_reset con 0x55f515854000 session 0x55f516d96a80
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 195641344 unmapped: 51896320 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3538905 data_alloc: 234881024 data_used: 11605995
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 497 handle_osd_map epochs [498,498], i have 497, src has [1,498]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 498 ms_handle_reset con 0x55f514a8ec00 session 0x55f511e94000
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 195657728 unmapped: 51879936 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 498 ms_handle_reset con 0x55f5122b6000 session 0x55f5130cc700
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 206004224 unmapped: 41533440 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 498 ms_handle_reset con 0x55f512c3c000 session 0x55f5199e8a80
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 498 heartbeat osd_stat(store_statfs(0x4f3e87000/0x0/0x4ffc00000, data 0x35cdfe0/0x3873000, compress 0x0/0x0/0x0, omap 0x8b14a, meta 0x8364eb6), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.685846329s of 10.188584328s, submitted: 129
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 206004224 unmapped: 41533440 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 498 heartbeat osd_stat(store_statfs(0x4f3a58000/0x0/0x4ffc00000, data 0x3b10f6e/0x3db4000, compress 0x0/0x0/0x0, omap 0x8b1d2, meta 0x8364e2e), peers [0,2] op hist [0,0,0,1])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 205299712 unmapped: 42237952 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 498 handle_osd_map epochs [499,499], i have 498, src has [1,499]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 499 heartbeat osd_stat(store_statfs(0x4f38e0000/0x0/0x4ffc00000, data 0x3c55a60/0x3efa000, compress 0x0/0x0/0x0, omap 0x8b347, meta 0x8364cb9), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 205357056 unmapped: 42180608 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3590521 data_alloc: 234881024 data_used: 12517241
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 205357056 unmapped: 42180608 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 205357056 unmapped: 42180608 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 499 ms_handle_reset con 0x55f514a2cc00 session 0x55f516707500
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 205357056 unmapped: 42180608 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 205357056 unmapped: 42180608 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 499 handle_osd_map epochs [500,500], i have 499, src has [1,500]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 500 ms_handle_reset con 0x55f515854000 session 0x55f512435c00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 205406208 unmapped: 42131456 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3589519 data_alloc: 234881024 data_used: 12521337
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 500 ms_handle_reset con 0x55f5142c3800 session 0x55f515353c00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 500 heartbeat osd_stat(store_statfs(0x4f390d000/0x0/0x4ffc00000, data 0x3c5768b/0x3efd000, compress 0x0/0x0/0x0, omap 0x8bbb1, meta 0x836444f), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 205438976 unmapped: 42098688 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 500 ms_handle_reset con 0x55f511a04400 session 0x55f514a98c40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 500 ms_handle_reset con 0x55f511a05c00 session 0x55f511e8b6c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 500 ms_handle_reset con 0x55f5122b6000 session 0x55f516706700
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 500 handle_osd_map epochs [501,501], i have 500, src has [1,501]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 501 ms_handle_reset con 0x55f512c3c000 session 0x55f512435a40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 501 ms_handle_reset con 0x55f5142c3800 session 0x55f5199e9c00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 205455360 unmapped: 42082304 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 501 ms_handle_reset con 0x55f511a04400 session 0x55f512c7c8c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 501 ms_handle_reset con 0x55f5142c4000 session 0x55f512332c40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 205455360 unmapped: 42082304 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.445058823s of 10.776207924s, submitted: 161
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 501 ms_handle_reset con 0x55f5142c4000 session 0x55f5174208c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 205455360 unmapped: 42082304 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 501 heartbeat osd_stat(store_statfs(0x4f390c000/0x0/0x4ffc00000, data 0x3c592d2/0x3f00000, compress 0x0/0x0/0x0, omap 0x8baf5, meta 0x836450b), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 205455360 unmapped: 42082304 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3592532 data_alloc: 234881024 data_used: 12521950
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 501 handle_osd_map epochs [502,502], i have 501, src has [1,502]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 502 ms_handle_reset con 0x55f5122b6000 session 0x55f514a98fc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 502 ms_handle_reset con 0x55f512c3c000 session 0x55f512c58c40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 205463552 unmapped: 42074112 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 502 ms_handle_reset con 0x55f514a2cc00 session 0x55f512237a40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 502 ms_handle_reset con 0x55f5122b6000 session 0x55f51487c380
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 502 handle_osd_map epochs [503,503], i have 502, src has [1,503]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 502 handle_osd_map epochs [502,503], i have 503, src has [1,503]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 503 ms_handle_reset con 0x55f511a04400 session 0x55f516c3f340
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 205479936 unmapped: 42057728 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 503 ms_handle_reset con 0x55f512c3c000 session 0x55f516adc380
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 503 ms_handle_reset con 0x55f5142c4000 session 0x55f519469dc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 205479936 unmapped: 42057728 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 503 ms_handle_reset con 0x55f515854000 session 0x55f514350700
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 205479936 unmapped: 42057728 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 205488128 unmapped: 42049536 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3612542 data_alloc: 234881024 data_used: 12522041
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 503 heartbeat osd_stat(store_statfs(0x4f3904000/0x0/0x4ffc00000, data 0x3c5cb7a/0x3f06000, compress 0x0/0x0/0x0, omap 0x8c9e3, meta 0x836361d), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 503 handle_osd_map epochs [504,504], i have 503, src has [1,504]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 504 ms_handle_reset con 0x55f511a04400 session 0x55f51318e000
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 206536704 unmapped: 41000960 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 504 ms_handle_reset con 0x55f5122b6000 session 0x55f512236000
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 504 handle_osd_map epochs [505,505], i have 504, src has [1,505]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 505 ms_handle_reset con 0x55f512c3c000 session 0x55f515033c00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 206561280 unmapped: 40976384 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 505 ms_handle_reset con 0x55f5142c4000 session 0x55f514a70e00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 505 ms_handle_reset con 0x55f515854000 session 0x55f517421dc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 207683584 unmapped: 39854080 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 505 heartbeat osd_stat(store_statfs(0x4f38fb000/0x0/0x4ffc00000, data 0x3c603e0/0x3f0c000, compress 0x0/0x0/0x0, omap 0x8d5df, meta 0x8362a21), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 207683584 unmapped: 39854080 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.979054451s of 10.597697258s, submitted: 128
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 505 ms_handle_reset con 0x55f511a04400 session 0x55f511f55500
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 207683584 unmapped: 39854080 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3620173 data_alloc: 234881024 data_used: 12523141
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 505 handle_osd_map epochs [506,506], i have 505, src has [1,506]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 506 ms_handle_reset con 0x55f5122b6000 session 0x55f516addc00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 506 heartbeat osd_stat(store_statfs(0x4f38ff000/0x0/0x4ffc00000, data 0x3c603f0/0x3f0d000, compress 0x0/0x0/0x0, omap 0x8d8a1, meta 0x836275f), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 506 handle_osd_map epochs [507,507], i have 506, src has [1,507]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 507 ms_handle_reset con 0x55f512c3c000 session 0x55f511eaa700
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 507 heartbeat osd_stat(store_statfs(0x4f38f5000/0x0/0x4ffc00000, data 0x3c63af1/0x3f13000, compress 0x0/0x0/0x0, omap 0x8daff, meta 0x8362501), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 207699968 unmapped: 39837696 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 507 ms_handle_reset con 0x55f5142c4000 session 0x55f5122361c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 507 ms_handle_reset con 0x55f51530f800 session 0x55f512237880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 507 heartbeat osd_stat(store_statfs(0x4f38f5000/0x0/0x4ffc00000, data 0x3c63af1/0x3f13000, compress 0x0/0x0/0x0, omap 0x8daff, meta 0x8362501), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 207708160 unmapped: 39829504 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 507 ms_handle_reset con 0x55f5122b6000 session 0x55f5147c6fc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 507 ms_handle_reset con 0x55f512c3c000 session 0x55f516706c40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 507 handle_osd_map epochs [508,508], i have 507, src has [1,508]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 508 ms_handle_reset con 0x55f511a04400 session 0x55f512c58c40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 508 ms_handle_reset con 0x55f5142c4000 session 0x55f5199e8380
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 208109568 unmapped: 39428096 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 508 ms_handle_reset con 0x55f515859000 session 0x55f5122da540
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 208109568 unmapped: 39428096 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 508 ms_handle_reset con 0x55f511a04400 session 0x55f514719180
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 208109568 unmapped: 39428096 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3636379 data_alloc: 234881024 data_used: 12524213
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 208109568 unmapped: 39428096 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 508 handle_osd_map epochs [509,509], i have 508, src has [1,509]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 509 ms_handle_reset con 0x55f5122b6000 session 0x55f5142c0a80
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 208166912 unmapped: 39370752 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 509 heartbeat osd_stat(store_statfs(0x4f38d2000/0x0/0x4ffc00000, data 0x3c896e6/0x3f3a000, compress 0x0/0x0/0x0, omap 0x8e83b, meta 0x83617c5), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 208166912 unmapped: 39370752 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 509 ms_handle_reset con 0x55f512c3c000 session 0x55f514a716c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 509 heartbeat osd_stat(store_statfs(0x4f38cd000/0x0/0x4ffc00000, data 0x3c8b2f5/0x3f3d000, compress 0x0/0x0/0x0, omap 0x8e9f3, meta 0x836160d), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 509 handle_osd_map epochs [510,510], i have 509, src has [1,510]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 510 ms_handle_reset con 0x55f5142c4000 session 0x55f51226f500
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.679837227s of 10.017495155s, submitted: 104
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 208191488 unmapped: 39346176 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 510 handle_osd_map epochs [511,511], i have 510, src has [1,511]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 511 ms_handle_reset con 0x55f5152ecc00 session 0x55f515345c00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 208478208 unmapped: 39059456 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3646220 data_alloc: 234881024 data_used: 12684469
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 511 ms_handle_reset con 0x55f511a04400 session 0x55f512236380
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 208494592 unmapped: 39043072 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 208494592 unmapped: 39043072 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 208494592 unmapped: 39043072 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 511 heartbeat osd_stat(store_statfs(0x4f38c8000/0x0/0x4ffc00000, data 0x3c8ea1e/0x3f42000, compress 0x0/0x0/0x0, omap 0x8f11b, meta 0x8360ee5), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 208494592 unmapped: 39043072 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 511 handle_osd_map epochs [512,512], i have 511, src has [1,512]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 208494592 unmapped: 39043072 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3648394 data_alloc: 234881024 data_used: 12685082
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 512 heartbeat osd_stat(store_statfs(0x4f38c8000/0x0/0x4ffc00000, data 0x3c8ea1e/0x3f42000, compress 0x0/0x0/0x0, omap 0x8f11b, meta 0x8360ee5), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 208510976 unmapped: 39026688 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 210575360 unmapped: 36962304 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 214564864 unmapped: 32972800 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 214597632 unmapped: 32940032 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 214597632 unmapped: 32940032 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3679254 data_alloc: 234881024 data_used: 20562040
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 214597632 unmapped: 32940032 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 512 heartbeat osd_stat(store_statfs(0x4f38c7000/0x0/0x4ffc00000, data 0x3c904f4/0x3f45000, compress 0x0/0x0/0x0, omap 0x8fa1d, meta 0x83605e3), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 512 heartbeat osd_stat(store_statfs(0x4f38c7000/0x0/0x4ffc00000, data 0x3c904f4/0x3f45000, compress 0x0/0x0/0x0, omap 0x8fa1d, meta 0x83605e3), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 214597632 unmapped: 32940032 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 512 heartbeat osd_stat(store_statfs(0x4f38c7000/0x0/0x4ffc00000, data 0x3c904f4/0x3f45000, compress 0x0/0x0/0x0, omap 0x8fa1d, meta 0x83605e3), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 214597632 unmapped: 32940032 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.616815567s of 14.641700745s, submitted: 51
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 219373568 unmapped: 28164096 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 219373568 unmapped: 28164096 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3709474 data_alloc: 234881024 data_used: 24756344
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 219373568 unmapped: 28164096 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 512 heartbeat osd_stat(store_statfs(0x4f34c7000/0x0/0x4ffc00000, data 0x40904f4/0x4345000, compress 0x0/0x0/0x0, omap 0x8fa1d, meta 0x83605e3), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 219373568 unmapped: 28164096 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 219373568 unmapped: 28164096 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 219373568 unmapped: 28164096 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 219373568 unmapped: 28164096 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3709474 data_alloc: 234881024 data_used: 24756344
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 219373568 unmapped: 28164096 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 512 heartbeat osd_stat(store_statfs(0x4f34c7000/0x0/0x4ffc00000, data 0x40904f4/0x4345000, compress 0x0/0x0/0x0, omap 0x8fa1d, meta 0x83605e3), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 219373568 unmapped: 28164096 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 219373568 unmapped: 28164096 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 219373568 unmapped: 28164096 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.696559906s of 10.723273277s, submitted: 5
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 219373568 unmapped: 28164096 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3709618 data_alloc: 234881024 data_used: 24756344
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 219373568 unmapped: 28164096 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 223240192 unmapped: 24297472 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 512 heartbeat osd_stat(store_statfs(0x4f33c0000/0x0/0x4ffc00000, data 0x41974f4/0x444c000, compress 0x0/0x0/0x0, omap 0x8fa1d, meta 0x83605e3), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 223248384 unmapped: 24289280 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 223248384 unmapped: 24289280 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 223248384 unmapped: 24289280 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3724422 data_alloc: 251658240 data_used: 28840056
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 223248384 unmapped: 24289280 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 223248384 unmapped: 24289280 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 512 heartbeat osd_stat(store_statfs(0x4f33c0000/0x0/0x4ffc00000, data 0x41974f4/0x444c000, compress 0x0/0x0/0x0, omap 0x8fa1d, meta 0x83605e3), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 512 heartbeat osd_stat(store_statfs(0x4f33c0000/0x0/0x4ffc00000, data 0x41974f4/0x444c000, compress 0x0/0x0/0x0, omap 0x8fa1d, meta 0x83605e3), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 512 ms_handle_reset con 0x55f514c92c00 session 0x55f51ebe8000
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 512 ms_handle_reset con 0x55f515338c00 session 0x55f519468540
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 223305728 unmapped: 24231936 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 512 ms_handle_reset con 0x55f5122b6000 session 0x55f5174201c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 223363072 unmapped: 24174592 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 512 heartbeat osd_stat(store_statfs(0x4f33e5000/0x0/0x4ffc00000, data 0x41734e4/0x4427000, compress 0x0/0x0/0x0, omap 0x8f96b, meta 0x8360695), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 223363072 unmapped: 24174592 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3718069 data_alloc: 251658240 data_used: 29370488
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 223379456 unmapped: 24158208 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.499095917s of 12.642436028s, submitted: 27
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 512 ms_handle_reset con 0x55f512c3c000 session 0x55f516c3e8c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 223199232 unmapped: 24338432 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 512 ms_handle_reset con 0x55f511a04400 session 0x55f51487ddc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 512 ms_handle_reset con 0x55f5122b6000 session 0x55f512236700
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 512 ms_handle_reset con 0x55f512c3c000 session 0x55f5174208c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 223215616 unmapped: 24322048 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 512 ms_handle_reset con 0x55f514c92c00 session 0x55f512360540
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 512 ms_handle_reset con 0x55f515338c00 session 0x55f5167068c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 512 ms_handle_reset con 0x55f511a04400 session 0x55f511f60700
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 512 ms_handle_reset con 0x55f5122b6000 session 0x55f5199e9180
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 512 ms_handle_reset con 0x55f512c3c000 session 0x55f514d4f340
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 512 ms_handle_reset con 0x55f514c92c00 session 0x55f512c641c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 512 ms_handle_reset con 0x55f5142c4000 session 0x55f5122361c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 222683136 unmapped: 24854528 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 512 heartbeat osd_stat(store_statfs(0x4f3394000/0x0/0x4ffc00000, data 0x41c3546/0x4478000, compress 0x0/0x0/0x0, omap 0x90337, meta 0x835fcc9), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 512 ms_handle_reset con 0x55f511a04400 session 0x55f5142c0a80
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 512 ms_handle_reset con 0x55f5122b6000 session 0x55f512236700
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 222699520 unmapped: 24838144 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3730493 data_alloc: 251658240 data_used: 29370586
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 222699520 unmapped: 24838144 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 222699520 unmapped: 24838144 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 222699520 unmapped: 24838144 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 512 heartbeat osd_stat(store_statfs(0x4f3394000/0x0/0x4ffc00000, data 0x41c3546/0x4478000, compress 0x0/0x0/0x0, omap 0x90447, meta 0x835fbb9), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 222699520 unmapped: 24838144 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 512 heartbeat osd_stat(store_statfs(0x4f3394000/0x0/0x4ffc00000, data 0x41c3546/0x4478000, compress 0x0/0x0/0x0, omap 0x90447, meta 0x835fbb9), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 512 ms_handle_reset con 0x55f512c3c000 session 0x55f516d29500
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 223051776 unmapped: 24485888 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3732739 data_alloc: 251658240 data_used: 29370586
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 223051776 unmapped: 24485888 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 227991552 unmapped: 19546112 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 227991552 unmapped: 19546112 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 227991552 unmapped: 19546112 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 512 heartbeat osd_stat(store_statfs(0x4f336a000/0x0/0x4ffc00000, data 0x41ed546/0x44a2000, compress 0x0/0x0/0x0, omap 0x90447, meta 0x835fbb9), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 228032512 unmapped: 19505152 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3765895 data_alloc: 251658240 data_used: 34885850
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 228032512 unmapped: 19505152 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 228032512 unmapped: 19505152 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 228032512 unmapped: 19505152 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 228032512 unmapped: 19505152 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 228032512 unmapped: 19505152 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3765895 data_alloc: 251658240 data_used: 34885850
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 512 heartbeat osd_stat(store_statfs(0x4f336a000/0x0/0x4ffc00000, data 0x41ed546/0x44a2000, compress 0x0/0x0/0x0, omap 0x90447, meta 0x835fbb9), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 512 heartbeat osd_stat(store_statfs(0x4f336a000/0x0/0x4ffc00000, data 0x41ed546/0x44a2000, compress 0x0/0x0/0x0, omap 0x90447, meta 0x835fbb9), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.807481766s of 19.132957458s, submitted: 69
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 228032512 unmapped: 19505152 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 512 heartbeat osd_stat(store_statfs(0x4f336a000/0x0/0x4ffc00000, data 0x41ed546/0x44a2000, compress 0x0/0x0/0x0, omap 0x90447, meta 0x835fbb9), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 232357888 unmapped: 15179776 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 232488960 unmapped: 15048704 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 232488960 unmapped: 15048704 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 232488960 unmapped: 15048704 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3848039 data_alloc: 251658240 data_used: 36225242
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 512 heartbeat osd_stat(store_statfs(0x4f2777000/0x0/0x4ffc00000, data 0x4de0546/0x5095000, compress 0x0/0x0/0x0, omap 0x90447, meta 0x835fbb9), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 232570880 unmapped: 14966784 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 232570880 unmapped: 14966784 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 232570880 unmapped: 14966784 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 512 heartbeat osd_stat(store_statfs(0x4f2777000/0x0/0x4ffc00000, data 0x4de0546/0x5095000, compress 0x0/0x0/0x0, omap 0x90447, meta 0x835fbb9), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 232570880 unmapped: 14966784 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 232570880 unmapped: 14966784 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3845407 data_alloc: 251658240 data_used: 36229338
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 232570880 unmapped: 14966784 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.894756317s of 10.470714569s, submitted: 117
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 512 ms_handle_reset con 0x55f515161400 session 0x55f512435340
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 232570880 unmapped: 14966784 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 232570880 unmapped: 14966784 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 232570880 unmapped: 14966784 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 512 heartbeat osd_stat(store_statfs(0x4f2774000/0x0/0x4ffc00000, data 0x4de3546/0x5098000, compress 0x0/0x0/0x0, omap 0x904cf, meta 0x835fb31), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 232570880 unmapped: 14966784 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3845407 data_alloc: 251658240 data_used: 36229338
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 232570880 unmapped: 14966784 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 232570880 unmapped: 14966784 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 232595456 unmapped: 14942208 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 512 handle_osd_map epochs [513,513], i have 512, src has [1,513]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 513 ms_handle_reset con 0x55f514e3b800 session 0x55f514a716c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f26ea000/0x0/0x4ffc00000, data 0x4e6c546/0x5121000, compress 0x0/0x0/0x0, omap 0x904cf, meta 0x835fb31), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 232595456 unmapped: 14942208 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 232636416 unmapped: 14901248 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3872613 data_alloc: 251658240 data_used: 36237530
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f26ca000/0x0/0x4ffc00000, data 0x4f0e19b/0x513f000, compress 0x0/0x0/0x0, omap 0x9063b, meta 0x835f9c5), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 513 handle_osd_map epochs [514,514], i have 513, src has [1,514]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 514 ms_handle_reset con 0x55f511a04400 session 0x55f514350380
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 232669184 unmapped: 14868480 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.164289474s of 10.238554955s, submitted: 24
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 232873984 unmapped: 14663680 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 514 heartbeat osd_stat(store_statfs(0x4f26c1000/0x0/0x4ffc00000, data 0x4f9ad8e/0x5149000, compress 0x0/0x0/0x0, omap 0x90b46, meta 0x835f4ba), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 514 handle_osd_map epochs [515,515], i have 514, src has [1,515]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 515 ms_handle_reset con 0x55f5122b6000 session 0x55f516d961c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 232873984 unmapped: 14663680 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 232873984 unmapped: 14663680 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 515 ms_handle_reset con 0x55f5165bdc00 session 0x55f514a70540
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 232873984 unmapped: 14663680 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3889037 data_alloc: 251658240 data_used: 36253914
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 515 ms_handle_reset con 0x55f515161400 session 0x55f514a70e00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 515 ms_handle_reset con 0x55f512c3c000 session 0x55f512c58380
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 232873984 unmapped: 14663680 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 515 heartbeat osd_stat(store_statfs(0x4f269d000/0x0/0x4ffc00000, data 0x4fb89e3/0x516a000, compress 0x0/0x0/0x0, omap 0x9142a, meta 0x835ebd6), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 232873984 unmapped: 14663680 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 232873984 unmapped: 14663680 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 232873984 unmapped: 14663680 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 232873984 unmapped: 14663680 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3887781 data_alloc: 251658240 data_used: 36266300
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 234217472 unmapped: 13320192 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 515 ms_handle_reset con 0x55f511a04400 session 0x55f512434a80
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 234217472 unmapped: 13320192 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 515 heartbeat osd_stat(store_statfs(0x4f2619000/0x0/0x4ffc00000, data 0x50409e3/0x51f2000, compress 0x0/0x0/0x0, omap 0x9142a, meta 0x835ebd6), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.734242439s of 10.841250420s, submitted: 19
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 234266624 unmapped: 13271040 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 515 ms_handle_reset con 0x55f515161400 session 0x55f511eaa700
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 515 ms_handle_reset con 0x55f5122b6000 session 0x55f515352c40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 515 heartbeat osd_stat(store_statfs(0x4f2618000/0x0/0x4ffc00000, data 0x50419e3/0x51f3000, compress 0x0/0x0/0x0, omap 0x9142a, meta 0x835ebd6), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 233725952 unmapped: 13811712 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 233725952 unmapped: 13811712 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3900333 data_alloc: 251658240 data_used: 36806972
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 233725952 unmapped: 13811712 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 233725952 unmapped: 13811712 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 233742336 unmapped: 13795328 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 515 heartbeat osd_stat(store_statfs(0x4f2615000/0x0/0x4ffc00000, data 0x5041a55/0x51f5000, compress 0x0/0x0/0x0, omap 0x914b2, meta 0x835eb4e), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 233750528 unmapped: 13787136 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 515 ms_handle_reset con 0x55f5165bdc00 session 0x55f512c1a700
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 233750528 unmapped: 13787136 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3902532 data_alloc: 251658240 data_used: 37466428
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 515 heartbeat osd_stat(store_statfs(0x4f2616000/0x0/0x4ffc00000, data 0x5042a55/0x51f6000, compress 0x0/0x0/0x0, omap 0x914b2, meta 0x835eb4e), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 515 ms_handle_reset con 0x55f515855800 session 0x55f512237880
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 515 ms_handle_reset con 0x55f5152ec800 session 0x55f51087b340
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 233783296 unmapped: 13754368 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 233783296 unmapped: 13754368 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 515 heartbeat osd_stat(store_statfs(0x4f2616000/0x0/0x4ffc00000, data 0x5042a55/0x51f6000, compress 0x0/0x0/0x0, omap 0x9153a, meta 0x835eac6), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 233783296 unmapped: 13754368 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 233783296 unmapped: 13754368 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 515 heartbeat osd_stat(store_statfs(0x4f2616000/0x0/0x4ffc00000, data 0x5042a55/0x51f6000, compress 0x0/0x0/0x0, omap 0x9153a, meta 0x835eac6), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 233783296 unmapped: 13754368 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3903300 data_alloc: 251658240 data_used: 37478814
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 234127360 unmapped: 13410304 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 515 ms_handle_reset con 0x55f511a04400 session 0x55f519482540
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 234127360 unmapped: 13410304 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.066977501s of 15.104057312s, submitted: 14
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 234127360 unmapped: 13410304 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 515 ms_handle_reset con 0x55f5122b6000 session 0x55f519468540
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 515 heartbeat osd_stat(store_statfs(0x4f2614000/0x0/0x4ffc00000, data 0x5044a55/0x51f8000, compress 0x0/0x0/0x0, omap 0x9157e, meta 0x835ea82), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 515 ms_handle_reset con 0x55f515161400 session 0x55f512434fc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 234127360 unmapped: 13410304 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 515 ms_handle_reset con 0x55f515855800 session 0x55f512c7d6c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 515 ms_handle_reset con 0x55f511a04400 session 0x55f51226e8c0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 234143744 unmapped: 13393920 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3904292 data_alloc: 251658240 data_used: 38109598
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 515 ms_handle_reset con 0x55f5122b6000 session 0x55f516c3fa40
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 234143744 unmapped: 13393920 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 515 ms_handle_reset con 0x55f515161400 session 0x55f51226fdc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 234151936 unmapped: 13385728 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 515 ms_handle_reset con 0x55f5152ec800 session 0x55f514350700
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 515 handle_osd_map epochs [516,516], i have 515, src has [1,516]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 516 ms_handle_reset con 0x55f5165bdc00 session 0x55f5199e8700
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 516 heartbeat osd_stat(store_statfs(0x4f269e000/0x0/0x4ffc00000, data 0x4fbd981/0x516e000, compress 0x0/0x0/0x0, omap 0x91716, meta 0x835e8ea), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 234151936 unmapped: 13385728 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 516 ms_handle_reset con 0x55f511a04400 session 0x55f51226fdc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 516 ms_handle_reset con 0x55f5122b6000 session 0x55f515268e00
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 234160128 unmapped: 13377536 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 516 handle_osd_map epochs [517,517], i have 516, src has [1,517]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 517 ms_handle_reset con 0x55f515161400 session 0x55f512236a80
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 234160128 unmapped: 13377536 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3895253 data_alloc: 251658240 data_used: 37556344
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 517 handle_osd_map epochs [518,518], i have 517, src has [1,518]
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 234168320 unmapped: 13369344 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 518 ms_handle_reset con 0x55f5152ec800 session 0x55f5199e8380
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 518 heartbeat osd_stat(store_statfs(0x4f269b000/0x0/0x4ffc00000, data 0x4f391ad/0x516f000, compress 0x0/0x0/0x0, omap 0x919f4, meta 0x835e60c), peers [0,2] op hist [])
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: prioritycache tune_memory target: 4294967296 mapped: 234168320 unmapped: 13369344 heap: 247537664 old mem: 2845415832 new mem: 2845415832
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 518 ms_handle_reset con 0x55f515857800 session 0x55f512434fc0
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.770185471s of 10.088986397s, submitted: 77
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 518 ms_handle_reset con 0x55f511a04400 session 0x55f516c3e540
Jan 29 12:40:30 np0005601226 ceph-osd[86917]: osd.1 518 heartbeat osd_stat(store_statfs(0x4f269e000/0x0/0x4ffc00000, data 0x4eb1df4/0x516e000, compress 0x0/0x0/0x0, omap 0x91f8f, meta 0x835e071), peers [0,2] op hist [])
